Fishbowl Solutions was recently featured on Oracle’s Blog during WebCenter Partners Week, showcasing our mobile application for iPhone and Android – FishbowlToGo. Mobility product manager Kim Negaard authored a post detailing how our newest mobility venture helps WebCenter customers get the most from their investment.
Access Oracle WebCenter Content on your iPhone or Android with FishbowlToGo
Fishbowl Solutions has been working with Oracle WebCenter customers since 2010 to extend WebCenter Content to mobile devices. We started working with mobile sales force enablement and have since extended our offerings to meet expanding customer needs. We are excited to announce the release of our newest mobile app, FishbowlToGo.
Read the whole blog post here: http://bit.ly/ZHLDxX
According to recent research, cloud deployments continue to rise as enterprises finally grasp how these technologies can offer efficiency, agility and a leaner business model. As more companies embrace the cloud, however, realizing these benefits may depend on support from a third party, such as DBA services, for effective implementation.
SmallBusiness reported that 70 percent of small and medium-sized enterprises (SMEs) in a Fasthosts study said cloud adoption will be a critical factor for growth over the next 12 months. Simon Yeoman, general manager of Fasthosts, commented on the implications of the study's findings.
'"Many large enterprises have firmly established their cloud strategies but SMEs have up until now found the concept of cloud quite alien and therefore haven't integrated it into business operations," he said, according to the news source. "The results of this survey demonstrate that SMEs are starting to think seriously about the cloud and that they are taking important steps to use it to their business advantage."
When asked which aspect of business these companies felt the cloud would be most helpful in, 38 percent cited flexibility and scalability.
A major reason that more firms have turned to a cloud model is that software-as-a-service (SaaS) has enabled companies of all sizes and budgets to quickly integrate the latest technologies at an affordable rate. Business 2 Community contributor Sara Harold revealed that for many SMEs, SaaS has transformed the IT infrastructure, offering dramatic savings as well as more powerful computing. Harold noted that these factors allow firms to experiment with new IT concepts and tools and adapt to a rapidly changing business environment.
Another key driver of cloud initiatives is the shift from capital expenditures to operating expenses. Harold explained that SaaS and the cloud offer low subscription-based payment models, so there are no technological obstacles or need for hefty investments in hardware, maintenance and upgrades. As an example, she pointed out that ten years ago enterprises had to buy multiple copies of virus protection software and constantly invest in new solutions as technologies became more advanced. Now, however, businesses can purchase a single-user license and scale this software up in the cloud as the business expands, addressing new risks and needs.
One of the most important aspects of the cloud is that it is easier and more cost-effective to adjust the technology based on actual company demands, which allows for smarter investments and budgeting as well as boosts the bottom line.
RDX offers a full suite of cloud migration and administrative services that can be tailored to meet any customer's needs. To learn more about our cloud migration and support services, please visit our Cloud DBA Service page or contact us.
When I started my work in IT, I was in a very small shop. Even though we had people in several places in the same state, everything was very centralized and ran from 9 to 5, and because we were basically only two people, our action plan was a talk over the lunch table and that would be it; we would go ahead and execute it after 5 PM, and I won’t lie, sometimes before 5 :) .
Over the years I have learned that whether you are a two-person shop or a team of 15 separated by oceans and miles apart, communication is the most important thing to have on your team, and one means of communication is having an action plan in place for any major or medium change you make in your organization. First, it will generate discussion amongst your teammates, and it will reduce the possibility of errors when you are faced with time and pressure constraints during implementation.
This might sometimes feel like a mundane and boring task, as it takes effort to come up with an action plan and time to verify it, but when game day comes along you will see the great benefit of having one.
Another great benefit of having an action plan is that you also have a road map if you need to roll back your change, and that is critical, because normally any major change or rollback is not done by only one person. Take, for example, a change that takes about 7 or 8 hours to complete, and at the end, after another hour or two of UAT (User Acceptance Testing), the application team decides that a rollback is needed. You are probably not in a good state of mind to do the rollback after 8 hours of continuous work. If you have an action plan, one of your teammates can step in and you can have a rest, even if it is just to go to the kitchen, have a sandwich and a coke, and forget about that pressure for 10 minutes.
As with anything in life, having an action plan doesn’t mean that everything will go smoothly or that there won’t be an error in there, but believe me, it will greatly reduce the possibility of error compared with executing from memory or doing it yourself without review.
I do hope that you already have an action plan as part of your major/medium changes, but if you don’t, it is time to get FIT-ACER. Here is an example of one (kudos to Cesar Sanchez, as it is his action plan template); use it and modify it to your needs, it is a good start.
Welcome back to the WebCenter Blog.
Last week, we presented a number of different partner solutions for WebCenter. This week we will be focusing a bit more intently on the value of Content Management in the enterprise and to start things off, we'll be hearing from our partner, aurionPro SENA about their offerings for WebCenter, including their mobile app and Accounts Payable solutions.
The buzz throughout the halls of recent conferences spotlights “glamorous” technologies: cloud, social, mobile. It’s the mantra of industry analysts and has been adopted by pretty much everyone. Cloud, social, mobile. The ‘triad’ is unavoidable. Many of our customers are asking to implement Facebook or Yammer-style intranet solutions, and everyone’s asking for mobile delivery of more and more content. We’re proud that we've done some of the most innovative work completed to date building mobile apps and social/collaborative platforms leveraging the WebCenter suite (a few examples are included near the end of this post). But WebCenter is not just about cutting edge use cases.
aurionPro SENA has been working with WebCenter and its underlying technologies from the very beginning. In fact, ten of our technical, sales, and executive leaders were long-time Optika, Stellent, and/or Oracle employees (including our newest leadership team member, Ed Jackowiak, who previously was leading Oracle’s efforts to build and grow the North America IDM business). With over a decade of focused experience, and Specialized Status in both WebCenter Content and WebCenter Portal, we’ve seen and solved pretty much every use case possible…both the glamorous and the unglamorous.
You don’t see many keynote speeches these days focused on streamlining accounts payable processes. But WebCenter is an unsung hero for even the most commonplace use case. WebCenter’s Image Processing and Records Management solutions have saved huge amounts of hard dollars for dozens of our clients by automating manually-intensive and error-prone processing tasks. Replacement of legacy and homegrown systems with an Oracle WebCenter solution, along with the ability to integrate WebCenter features with back-end systems of record, are the driving factors for achieving these benefits. One of the true industry experts in this field, Sam Harp, previously a long-time employee of Optika, Stellent, and Oracle, has been leading these types of implementations for more than 20 years.
Somewhere in the middle of the glamour curve falls the topic of web solution and mobile app security. It’s certainly a hot topic, but maybe not as glitzy as ‘the triad’. As employees push the adoption of mobile devices in the workplace for convenience and productivity gains, companies are aggressively implementing information security solutions to ensure that sensitive data is protected through every channel that it is being accessed. The good news for WebCenter customers is that Oracle’s Security Suite, Identity Management (IDM), is second-to-none in the industry. It provides everything from single-sign-on functionality all the way through fine-grained access control, an absolute must for regulated and compliance-heavy industries like Financial Services and Healthcare. Implementing security processes such as employee on-boarding and off-boarding and integrating with multiple directory and user profile repositories can be challenging undertakings. Working hand-in-hand with Oracle, aurionPro SENA’s IDM practice, winner of 2 of the last 4 Oracle Excellence Awards in IDM and led by Oracle Deputy CTO and aurionPro SENA President, Swapnil Mehta, ensures successful and secure mobile, content, portal, records management, and image processing implementations.
From the glamorous to the unglamorous, the dedicated WebCenter team at aurionPro SENA has seen it all. In fact, we were delighted to have been recognized for our depth of expertise as the honorable mention in the 2012 Oracle Excellence Award in the WebCenter Category. Here are a few WebCenter solutions of interest that we've built recently:
- ContentiD: aurionPro SENA’s free WebCenter Content mobile app that allows organizations to securely search for and view documents, as well as review and process workflow assignments. All you need is network access to a WebCenter Content server and you’re up and running…there are no server components to install or customizations to make.
Download the app from Apple’s App Store here
- WebCenter Managed Services: Many customers who ask us to host WebCenter solutions on their behalf mistakenly assume that the “cloud” is the best path for them. Their real business challenge is to be able to focus their resources on core business objectives and less on complicated day-to-day IT tasks. We’ve built a mature, 24x7 system monitoring solution and a world-class service desk to help offload our clients’ costly and time-consuming support tasks. Some of our largest customers now trust us to manage their WebCenter solutions. Learn more here.
- Innovative Intranet: Hampered by an outdated intranet solution implemented on unsupported software, an industrial components manufacturer turned to aurionPro SENA to envision a secure and fresh new experience for their 100,000+ employees through the implementation of an Oracle WebCenter and Oracle IDM Proof of Concept.
- Business Expanding Extranet: An entertainment services company wanted to provide better collaboration with production companies, studios, and employees in order to gain greater market share through improved relationships. aurionPro SENA helped them achieve their goals through the design and development of a WebCenter-based portal that facilitated electronic data input, replacing manually-intensive, paper-based processes and enabling document collaboration across their user communities.
- Physical Records Management Deployment: A home improvement retail chain needed to replace a homegrown records management system that drove barcoding, labeling, and storage management. aurionPro SENA helped them to implement WebCenter Content: Records to manage 65,000 boxes.
- Streamlined Accounts Payable Process: Inundated by a legacy, paper-driven invoicing process, an electrical services company needed to improve their Accounts Payable process. aurionPro SENA implemented WebCenter Imaging to achieve incredible efficiency gains and dramatically improve visibility into process bottlenecks.
If you’d like to learn more about any of our productized, pre-packaged, or consulting offerings, feel free to get in touch with two of our other long-term Stellent and Oracle experts, Mark Tepsic and Steven Sommer, or visit our website at aurionprosena.com.
For my readers who are preparing for the OCM 11g exam, the environment just changed (from 13 May 2013 onwards).
Instead of using OEM 10g, you will be using OEM 11g.
The upgrade exam still uses OEM 10g and DB 11gR1 (!), but I did not bother installing OEM 10g and prepared with OEM 11g instead.
On Thursday I’ll be flying out to Bulgaria for BGOUG Spring 2013. It’s been about 18 months since I’ve visited the people over there, so I’m really looking forward to getting stuck in.
This will be my first conference of the year, so I’m feeling a little nervous at the moment. I’m sure the adrenalin rush will kick in and get me through.
I’m signed up for the southern leg of the OTN Tour of Latin America (Chile, Peru, Uruguay, Argentina, Brazil), but it will be a while before I get any confirmation, so there are no guarantees yet.
Fun, fun, fun…
Tim…
In my previous posting in this series, I looked at the new 11.1.1.7.1 release of the Oracle BI Applications at a high level, and talked about how this new release uses ODI as the embedded ETL tool instead of Informatica PowerCenter. Support for Informatica will come with patch set 2 (PS2) of BI Apps 11.1.1.7.x, giving customers the choice of which ETL tool to use (with the caveat that customers upgrading from 7.9.x will typically have to stick with Informatica unless they want to completely re-implement using ODI), but for this initial release at least, ODI and some new Fusion Middleware tools take over from Informatica and the DAC, giving us what could well be a much simpler architecture for supplying the underlying data for the BI Apps dashboards.
In this posting then, I’m going to take a closer look at this new product architecture, and I’ll follow it with a more detailed look at how the various bits of ODI functionality replace the workflows, mappings, transformation operators and execution plans provided in earlier releases by Informatica and the DAC. For anyone familiar with the previous, 7.9.x versions of the BI Applications, the architecture diagram below shows the five tiers that this product typically implemented; tiers for the source data and data warehouse/repository databases, an ETL tier for Informatica and the DAC server, then two more tiers for the OBIEE application server and the client web browser.
Communication between the tiers was – to put it politely – “loosely coupled”, with DAC task names corresponding with Informatica workflow names, each workflow containing a single mapping, and all of the connections and sources having to be named “just so”, so that every part of the stack could communicate with all the others. It worked, but it was a lot of work to implement and configure, and once it was up and running in most cases customers were scared to then change it, in case a name or a connection got out of sync and everything then stopped working. Plus – Informatica skills are scarce in the Oracle world, and the DAC is an extra piece of technology that few DBAs really understood properly.
The 11.1.1.7.1 release of the BI Apps simplifies this architecture by removing the separate ETL tier, and instead using Oracle Data Integrator as the embedded ETL tool, with its server functions running as JEE applications within the same WebLogic domain as OBIEE 11g, giving us the overall architecture in the diagram below.
Now anyone who read my series of posts back in 2009 on the 7.9.5.2 release of the BI Apps, which also used ODI as the embedded ETL tool, will know that whilst ODI 10g could do the job of loading data into the BI Apps data warehouse, it lacked the load orchestration capabilities of Informatica and the DAC and wasn’t really set up to dynamically generate what have become, in ODI 11g, load plans. BI Apps 7.9.5.2 turned out to be a one-off release, and in the intervening years Oracle have added the aforementioned load plans along with other functionality aimed at better supporting the BI Apps, along with two new JEE applications that run in WebLogic to replace the old DAC. These new applications, along with the ODI JEE agent, ODI Console and the ODI SDK, are shown in the more detailed BI Applications 11.1.1.7.1 logical architecture diagram shown below.
Oracle BI Applications 11.1.1.7.1 has two main product tiers to it, made up of the following components:
- The Middleware (BI and ETL) tier; a WebLogic domain and associated system components, comprising BI components delivered as part of OBIEE 11.1.1.7 (including Essbase and related applications) as one managed server, and another managed server containing ODI Java components, including three new BI Apps-related ones; Configuration Manager, Functional Setup Manager, and ODI Load Plan Generator
- The Database (DW and Repositories) tier; for the time being, Oracle only, and comprising a data warehouse schema (staging + performance layer), and a repository database containing the OBIEE repository schemas plus new ones to hold the ODI repository and other ETL/configuration metadata used for configuring your system.
Essbase at this stage is installed, but not used for the main BI applications, and all of it uses Fusion Middleware security (application roles and policies) along with the WebLogic Embedded LDAP server to hold users and groups. A special version of RCU is used to set up the new BI Apps-related schemas, and import data into them using Oracle database export files, so that the ODI repository, metadata tables and so forth are all populated prior to the first load taking place. Enterprise Manager Fusion Middleware Control is still used to manage and monitor the overall platform, but there’s now an entry for ODI along with Essbase, the latter of course being delivered as part of the 11.1.1.7 OBIEE platform release.
In the next posting in the series we’ll take a closer look at how ODI uses its JEE agent and mappings imported into its repository to load the BI Apps data warehouse, but what about the two new web-based configuration tools, Oracle BI Applications Configuration Manager (BIACM) and Oracle BI Applications Functional Setup Manager (FSM) – what do they do?
After you install OBIEE 11.1.1.7 and then BI Applications 11.1.1.7.1, the BI Apps installer extends the BI domain to include FSM, BIACM and the ODI Load Plan Generator, along with some other supporting applications and libraries required for the full product. Load Plan Generator works behind the scenes to build new load plans in a similar way to the Execution Plan “Build” feature in the DAC, and the two web-based tools perform the following functions:
- Oracle BI Applications Configuration Manager performs system-wide setup tasks such as defining sources, selecting BI Apps modules and performing other, “one-only” tasks similar to the Setup feature in the DAC Console.
- Oracle BI Applications Functional Setup Manager is then used to list out, and track progress against, the various tasks required to configure the BI Applications modules, or “Offerings”, that you selected in the Configuration Manager.
Most importantly though, these tools connect directly through to the ODI repository, so data sources you set up here will get pushed down to ODI as data servers in the ODI master repository; load plans you set up (for example, to load configuration tables, as in the screenshot below) are ODI load plans, and you can track their progress either from within ODI, or from within these applications themselves.
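If you’d rather check on a load plan from SQL*Plus than from either console, the run history also sits in the ODI work repository. The sketch below is only an illustration; the SNP_LP_INST and SNP_LPI_RUN tables exist in an ODI 11g work repository, but the exact column names can vary between repository versions, so treat this as a starting point rather than a recipe:

-- rough sketch only; run against the ODI work repository schema,
-- column names assumed from an ODI 11g repository
select
    i.load_plan_name,
    r.status,            -- e.g. D = Done, E = Error, R = Running (assumed codes)
    r.start_date,
    r.end_date
from
    snp_lp_inst  i,
    snp_lpi_run  r
where
    r.i_lp_inst = i.i_lp_inst
order by
    r.start_date desc;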
I haven’t had a chance to properly “diff” the RPD used in BI Apps 11.1.1.7.1 with the previous 7.9.x ones, or do a similar exercise for the underlying database data model, but on first glance the new RPD is at least recognisable, albeit with new sources and subject areas for the Fusion Apps, Oracle Transactional BI (OTBI), Real-Time Decisions and the like. The web catalog also looks familiar, but also has new content around the new applications along with additional content for the existing ones.
So, we’re at the point now where we can start to think about loading data into the BI Apps data warehouse, and in tomorrow’s post we’ll take a look at what’s involved in a BI Apps 11.1.1.7.1 ETL load, and also look into how GoldenGate can now be used to extract and stage data prior to loading via ODI. Back tomorrow…
The long-awaited and anticipated 11.1.1.7.1 (PS1) release of the Oracle BI Applications became available early last week, with the software and documentation available for download on OTN. Over the next few blog posts, I’ll be taking an in-depth look at this new release, starting today with an overview of what’s new and any limitations in this initial version, and then over the next few posts taking a look at the product architecture, how it uses Oracle Data Integrator instead of Informatica to do the data loads, and what new content the 11g dashboards contain. For a bit of background into this release you’re best off taking a look at a series of posts I put together towards the end of last year on the BI Apps product roadmap, and I’ll recap on those posts a bit in this one as I go through at a high level what’s in this release.
Although the focus in BI Apps 11.1.1.7.1 is on ODI as the ETL tool, this new release actually delivers a whole new product architecture along with new dashboards, new content, and a new security framework. In addition, there’s now an option to use Oracle GoldenGate to create a new layer in the BI Apps data warehouse data architecture that replicates source data into the warehouse environment, giving you the ability to run the more large-scale ETL processes when you like, rather than when there’s an ETL window for the source systems.
Let’s start off though with a summary of what’s new from a functional perspective, and also what limitations there are for this first release in terms of sources, scope and so forth. BI Apps 11.1.1.7.1 delivers the following set of new features and capabilities:
- Oracle Data Integrator as the embedded ETL tool, along with a whole new FMW11g-centric architecture and set of utilities
- Two new analytic applications - Student Information Analytics, and Indirect Spend Planning
- New content for existing analytic applications including Financial Analytics, HR, Projects, CRM and Procurement & Spend
- Dashboards that are now written for OBIEE 11g rather than 10g, including 11.1.1.7 visualisations such as performance tiles
Now although, in general terms, BI Apps 11.1.1.7.1 covers all (or most…) of the existing analytic application modules along with all of the 7.9.x-era sources (EBS, PeopleSoft, JDE and Siebel), there are some important restrictions that you’ll need to be aware of when making any plans to use this new release, starting with upgrade paths (or lack of them):
- There’s no automatic upgrade path from BI Apps 7.9.x, and no automated migration routine to take you from Informatica to ODI; if you want BI Apps 11.1.1.7.1 now, you’ll have to re-implement rather than upgrade, or you can wait for BI Apps 11.1 PS2, which will support upgrades from earlier releases but (important to note) keeps you on Informatica – any move from Informatica to ODI will need to be done yourself, as a re-implementation
- Only Oracle database sources and targets are supported in this initial release, in practice not a real issue for new implementations, but worth bearing in mind if you planned to use Teradata, for example, as your target data warehouse platform
- Oracle Fusion Applications aren’t supported as a source either, yet, so anyone using this will need to stay on BI Apps 11.1.1.6.x until an upgrade version becomes available
- A few edge-case analytic applications and sources aren’t supported in this release yet – Enterprise Asset Management, for example, is not yet supported for any source, whereas some other applications only support more recent PeopleSoft versions and not JDE, for example. As always, get the most up-to-date supported sources and applications list from Oracle before making any major investment in an implementation or upgrade project.
From a technical perspective though the major difference in this release, compared to the 7.9.6.x versions that preceded it, is the use of Oracle Data Integrator 11g as the embedded ETL tool rather than Informatica. To be clear, Informatica will still be supported as an ETL option for the BI Apps going well into the foreseeable future, but Informatica users will need to wait for the PS2 release due in the next twelve months or so before they can upgrade to the new 11g platform.
In addition, and perhaps more importantly, it’s not envisaged that Informatica customers will move over to ODI unless they use the upgrade as an opportunity to re-implement their system on ODI, moving across customisations themselves and essentially starting with a clean sheet of paper (which may not be a bad thing, if you’re thinking of tidying up your system following years of upgrades, customisations and so forth). What this does mean though is no DAC, no Informatica server and client tools, a new (and hopefully simpler) way of setting up and configuring your system, and in theory a more closely-integrated set of tools all based around the modern, standards-based Fusion Middleware 11g architecture.
In this new world of ODI and the BI Apps, ODI load plans replace Informatica workflows, whilst ODI packages and interfaces equate to Informatica mappings and mapplets. The DAC is no more, replaced by metadata within the ODI repository and other supporting schemas, with setup and configuration of the warehouse and ETL processes now carried out by two web-based tools, BI Applications Configuration Manager and Functional Setup Manager. The closer integration between these tools, along with a chance for Oracle to re-think the BI Apps setup process, should lead to easier configuration and customisation, but if you’re an Informatica developer it’s a whole new world, and the 11g platform makes a lot more use of Fusion Middleware platform functionality, particularly around security and user provisioning.
So – all very exciting, but quite daunting in terms of what needs to be learnt and the new processes that need to be thought through and put together before you can start making use of the new 11.1.1.7.1 feature set. We’ll start tomorrow then by taking a closer look at the BI Apps 11.1.1.7.1 technical architecture, including the new configuration tools and where ODI sits in the new product architecture, based on our first impressions of the product.
So MindTap just won a CODiE award for “Best Post-secondary Personalized Learning Solution.” In and of itself, this isn’t a big deal. No offense intended to current or prior winners, but the CODiEs often feel like awards for “Best Instant Coffee” or “Best New Technology Product by an Important Sponsor of Our Awards Program.” They’re not exactly signals of breakthrough educational product design. But I’m glad that the award was given in this case because I think MindTap does represent an important innovation that addresses some of the trends that we’ve been blogging about here at e-Literate (which was one of the reasons that I was enticed to work on MindTap at Cengage for a while).
MindTap is not a “personalized learning solution.” While it does allow students to do things like integrate their Evernote accounts and choose whether they want to read or listen to texts, the level of personalization for the learners is not terribly different from other products on the market. (And it certainly is nowhere near as radical as the vision for a Personalized Learning Environment which came from the UK’s JISC and elsewhere, and from which terms like “personalized learning solution” and “personalized learning experience” have been bastardized). Nor are there adaptive analytics or other sorts of machine-driven personalization in the product at this time. Rather, the key differentiator in the current incarnation of MindTap is the way in which it creates a more refined and complete learning experience out of the box while still enabling faculty to customize those experiences to the needs of their students in pretty significant and, in some cases, new ways. This is exactly where the textbook, LMS, and MOOC markets are all headed, and MindTap got there first.
The Problem to be Solved
In order to understand the value of a product like MindTap, it’s important to understand where textbook publishers do and do not compete. You’re not going to see a lot of MindTap-style products for courses like “Advanced Topics in International Trade Policy,” “Research in Genetics,” “Greek Film,” or “Intermediate Killer Shark Genre.” These smaller courses are relatively uninteresting to textbook publishers because they don’t have the scale necessary to generate significant revenues, and they are also better suited to hand-crafted course designs that are tailored to the strengths of the particular professor doing the teaching and can be highly tailored to the needs and interests of the students in the class. Rather, the courses in question are more like “Introduction to Psychology,” “General Biology I,” “Microeconomics,” or “Survey of Western Civilization.” (English Composition is an anomaly in this categorization because of the way it is taught.) These courses are generally taught in large lecture halls with little or no writing—and when there is writing, it is often graded quickly on a narrow range of criteria by overworked graduate students—and relatively generic syllabi (particularly in non-elite institutions).
A lot of the heated debate over whether college is “broken” revolves around these sorts of classes without ever explicitly defining the scope of the problem. Those who say school is broken and needs to be disrupted tend to argue as if all college courses are giant, boring lecture courses. Those who argue against the “school is broken” meme tend to characterize these large lecture-centric courses as exceptions. Neither characterization is entirely accurate. On one hand, there are huge swaths of courses in just about any college catalog that are not large lecture courses. On the other hand, because the large lecture courses are concentrated in core curriculum and core major classes, most students have to take a handful of these courses in order to graduate.
Regardless of how pervasive or rare you think these courses are, everybody seems to agree that they are not terribly effective. But what should be done about the problem? Shrinking the class size is simply not going to happen, given both budget realities and the moral imperative to increase access to education. And yet, the current situation is bad not only for the students but also for the instructors. Keep in mind that the people teaching these survey courses are disproportionately either junior faculty who are doing all kinds of other duties to earn tenure or adjuncts who are working unreasonable course loads just to make ends meet. They generally don’t have a lot of time to either carefully craft a course or give students a lot of (or any) individual attention. They often have little choice but to take what the publisher is giving them as their course outline and run with it. In and of itself, the direct adoption of a publisher’s curriculum isn’t necessarily bad for many of these courses. The whole idea of a core course is that it helps all students getting a particular degree or a particular major to master certain competencies that they should have. There is a strong argument for consistency of curriculum across core courses. But the current situation neither guarantees consistency of curriculum nor saves the instructor time for either thoughtful customization of the curriculum or any other purpose. There is still a lot of hand assembly required to pull together reading assignments, assessments, slides and lecture notes, and so on. It is generally not a creative process because there is little time for creativity, but it is nevertheless a labor-intensive process and one that is prone to introduce variation in hitting those core competencies without any checks or even necessarily a lot of reflection on it.

A Better Compromise
If instructors are going to adopt a third-party course curriculum anyway, then we should at least use technology to remove the hand assembly. Why not deliver the readings, multimedia, assignments and assessments, neatly integrated with a basic syllabus, as one ready-to-use digital package for the students? At its most basic, this is what “courseware” is and what MindTap does. It provides students and instructors with a ready-to-go complete course structure with all the materials and assessments placed in a logical and easily navigable order. Joel Spolsky once defined poor user interface design as forcing users to make choices that they don’t care about. That is also an apt description for 80% of the pre-semester course preparation process that instructors go through with these big survey courses. Pre-assembling the elements of the vanilla version of the course frees up the instructors’ time to focus on the customizations that they actually do care about. To begin with, the course structure is already assembled and visible, which makes it easier for the instructor to think about its total shape. Removing unwanted content or changing content order is trivially easy, making the roughing in of the course structure very quick.
But things get really interesting when you start looking at adding to the learning path structure in MindTap rather than just moving or deleting things. In ed tech discussions, we tend to talk about APIs as if the main differentiation is having them versus not having them. Can you or can you not integrate Google Docs into a course? But in reality, the specifics of the integration can make an enormous difference in how practically useful the added functionality is to teachers and students. Do you want to make a folder of your documents (like your syllabus) available to the students at all times in the course with one or two clicks, or do you want to insert your own supplemental document right into the course reading, zero clicks away for the student and on their default navigation path? These two types of integration serve fundamentally different purposes in the course. In MindTap, you can do both and more. And importantly, making these different customizations is intuitive and almost trivially easy. Radical customization of the course structure is very much possible. But both because there is far less instructor time wasted with hand assembly of course elements and because customizations are visible and visualizable in the learning path structure, the percentage of time spent on meaningful instructional activities, whether that’s course customization or student interaction, is likely to be higher. For this reason, the MindApp model and the learning path structure are MindTap’s crown jewels.

Table Stakes
Of course, MindTap doesn’t have a monopoly on useful courseware platform design. For example, WileyPLUS enables instructors to see which course materials and assessments are associated with which learning objectives. This helps instructors to align what they’re teaching and assessing on to what they think the student should be learning.

More importantly, none of these innovations from any of the platforms are going to magically change poor large lecture classes into great educational experiences. The key to solving that problem is not the technology by itself but the learning design that it enables. The classroom flipping craze is a craze precisely because it is a learning design that can improve the pedagogical impact of these large survey classes. But anyone who has actually tried to flip their class will tell you that it’s not easy to do well. Faculty need pedagogical models other than the ones that they learned from their own professors, including the practical tips and support necessary to make those models work in the real world. They need course designs based on learning science and collected experience of innovators, and supported by technology. The MindTap platform doesn’t provide that. No technology platform does. And as far as I can tell, Cengage is not yet designing courseware for MindTap that even attempts to do this.

But in order to accomplish the bigger goal, we first need to strike a new balance regarding course design customization. It’s not a question of “more” versus “less.” There will always be times when it is wise to allow a skilled instructor to tune a course. But there needs to be more of a sophisticated collaboration between the individual instructor, a curriculum design team (whether that team works for a textbook publisher or a university), and the other instructors teaching the course at the same institution in order to arrive at better pedagogical approaches that can be adopted and adapted to best effect by individual teachers. In order to accomplish that, you need to start with a combination of platform and content that makes meaningless course assembly unnecessary and meaningful course customization both easy and visible. This is what we mean at e-Literate when we write about “courseware.” And at the moment, MindTap is the best example I know of what a next-generation courseware platform will look like.
I wrote a couple of days ago about replacing my MacBook Pro hard drive with an SSD. At the same time I bought a little SSD to use as the system drive for my desktop. I fitted that this morning, installed a fresh copy of Fedora 18 and mounted the original 1TB hard drive as a data drive.
Like the MacBook Pro, my desktop is a few years old, but still has plenty of grunt (Quad Core and 8G RAM) for what I need it for. I do run the odd VM on it, but any heavy stuff is run on my server, so there is no incentive to go out and buy the latest kit for what is essentially just a client PC.
The addition of the SSD means the startup time is much better and it just feels a lot more responsive. Most apps start up almost instantly. Even GIMP, which used to take an age to start, is mega quick. I’ve put a couple of VMs on it and, not surprisingly, they are fast to start up too. Overall I’m really pleased with the outcome.
The funny thing is, I never noticed how noisy spinning rust was until I switched to these SSDs. The Mac is silent and runs for a lot longer before the fan kicks in. The desktop is also silent, until I pull something from the data disk, at which point I hear that slight grinding noise.
I don’t think I would invest in large capacity SSDs for home until the prices drop considerably, but having witnessed the before and after results on these two old machines, I can’t imagine ever running without an SSD system disk again.
Update: I worked through some of the suggestions here to enable TRIM support and reduce wear.
Prize Winners: Oracle E-Business Suite R12 Integration and OA Framework Development and Extension Cookbook
A couple of weeks ago I started a competition to win 2 copies of Oracle E-Business Suite R12 Integration and OA Framework Development and Extension Cookbook by Andy Penver. Thanks to Packt for donating the prizes. The competition closed yesterday and the lucky winners are:
- Ajay Sharma
I’ve sent your email addresses to my contact at Packt, who will contact you to deliver your e-book.
Tim…
It is indeed going to be a super busy rest of the month for the entire team, as Data Guard configurations need to be done for over 41 RAC databases. We will be pretty engaged and occupied for the next 3 weeks creating standby databases and configuring the Data Guard setup so that we end up with a fully functional DR environment.
We went through a similar exercise a few months ago to test the DR capabilities of the database and application, and now it's time to put a permanent DR configuration in place. Therefore, anticipate a lot of blogging about DR topics here in the coming days.
Wish me luck, people.
In my forum discussion about free buffer waits I came across a term that I didn’t understand: “inode lock contention”. I’m pretty sure I had seen this same term years ago on one of Steve Adams’ pages on IO. But I didn’t really understand what the term meant, so it was hard to tell whether this was something I was seeing on our production system that was experiencing “free buffer waits”.
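If you want to check your own system for the same symptom, the cumulative numbers are in v$system_event. This is just a sketch of the kind of query I mean, run from a DBA account:

select
    event,
    total_waits,
    time_waited          -- in centiseconds
from
    v$system_event
where
    event = 'free buffer waits';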
First I had to figure out what an inode was. I knew that it had something to do with the way Unix filesystems work, but reading this article really helped clear up what inodes are, at least on HP-UX. Inodes are small chunks of bytes that are used to define a Unix filesystem. On HP-UX’s VxFS filesystems, a type 1 inode can point to up to 10 extents of one or more contiguous 8K blocks on a large filesystem. The filesystem I’ve been testing on appears to have 32 meg extents, if I’m reading this output from lvdisplay correctly:
LV Size (Mbytes)            1472000
Current LE                  46000
Total size of 1,472,000 meg divided by 46,000 logical extents = 32 meg per extent.
Since the inode can point to 1 to 10 extents, it could point to between 32 and 320 meg.
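To put that in perspective for a particular table, the same back-of-envelope arithmetic can be done from dba_segments. This is only a sketch (T1 is a made-up table name), using the 320 meg per-inode coverage figure from above:

select
    segment_name,
    round(bytes/1024/1024)        mb,
    ceil(bytes/1024/1024/320)     min_inode_ranges   -- at 10 x 32 meg extents per inode
from
    dba_segments
where
    owner        = user
and segment_name = 'T1';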
My test case had 15 tables that were more than 1 gigabyte each. It seems like each table should span multiple inodes, so even if there is locking at the inode level it looks like it won’t lock the entire table at once. Still, it seems unlikely to me that every time a table is updated, reads from all the other parts of the table pointed to by the same inode are really blocked by an inode lock. Yet that is what this document suggests:
“During a read() system call, VxFS will acquire the inode lock in shared mode, allowing many processes to read a single file concurrently without lock contention. However, when a write() system call is made, VxFS will attempt to acquire the lock in exclusive mode. The exclusive lock allows only one write per file to be in progress at a time, and also blocks other processes reading the file. These locks on the VxFS inode can cause serious performance problems when there are one or more file writers and multiple file readers.”
It uses the term “file”, but I assume that if you have a large file with multiple inodes it means it will lock just the pieces associated with the one inode that points to the blocks being written. The article goes on to explain how you can use the “cio” option to enable concurrent IO and eliminate this inode contention, preventing writers from blocking readers. But I’ve been testing with just the direct IO options, not the cio option, and seeing great results. So, would I see even better improvement with concurrent IO?
I didn’t want to mess with our current filesystem mount options, since testing had proven them to be so effective, but I found that glance, a performance monitoring tool like top, has an option to display inode waits. So, I took a test that was running with direct IO and had 15 merge statements loading data into the same empty table at once, and ran glance to see if there were any inode waits. There were none.
So, I don’t know if I can depend on this statistic in glance or not. It appears that the direct IO mount options are all we need.
There may be some case within Oracle 11.2.0.3 on HP-UX 11.31 where you can be hampered by inode lock contention despite having direct IO enabled, but my tests have not confirmed it, and I’ve banged pretty hard on my test system with a couple of different types of tests.
The new release of BI Publisher, 11.1.1.7, has a very nice new feature for those of you wanting to build reports on top of the BI Server data model. In previous releases you would need to either write the logical SQL yourself, or build an Answers request, copy the SQL from the Advanced tab and paste it into the BIP data modeler.
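If you have not seen it before, the logical SQL you would copy from the Advanced tab looks something like this; the subject area and column names below are invented purely for illustration:

SELECT
   "Time"."Year"        saw_0,
   "Products"."Brand"   saw_1,
   "Facts"."Revenue"    saw_2
FROM "Sample Sales"
ORDER BY saw_0, saw_1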
With the new release comes the ability to create reports without the need for a data model at all. You have the option when creating a new report to use a subject area directly.
Once you have selected the subject area you are interested in, you can decide whether to continue into the wizard to help you build the layout, or to strike out on your own and build the layout yourself.
If you go for the latter and load up the layout editor, you get to see in the data tree all of the data items you would see in the Answers builder. It’s then a case of dragging and dropping the columns into the layout, just as you would normally with a sample data source.
Once you are back to the report editor, the final step is to add some parameters.
This is a little different to a conventional BIP report. There is no data model definition per se, i.e. the logical SQL is not stored; rather, what is stored are the columns you added to the layout and the subject area(s) you pulled them from. Yes, you can go across subject areas, but you need to know whether it is going to make sense, or even work, before you add more. To add more subject areas, click on the subject area name where the data model name normally resides. You’ll then get a shuttle dialog that lets you add more subject areas, and you can then add their columns in the layout builder.
Getting back to the parameters, on the report editor page, click the Parameters link (top right). This will open the parameters dialog.
You can add parameters and set how they will be displayed: whether folks can select all; whether they see check boxes, a drop box or a text box; and whether other parameters should be limited by the choice made for this box. Everything you get with a regular BIP parameter.
Finally, the report rendered with the parameters.
If you have a need to build a more highly formatted report on the BI Server data then this is definitely the way to go. This approach really does open up BIP reporting to business users. No need to write SQL, just pick the columns you want and format them in a simple to use interface.
Before you ask, you cannot build report layouts in MS Word or Excel for this type of data source, not yet anyhoo :0)
A new component that showed up in the JDeveloper 11.1.1.7.0 release is the af:listView component. This component will become more and more popular as more people target tablet devices with ADF Faces UIs. The component allows you to create a scrollable list from a collection of data, and it also fetches in ranges so you don't get too much network traffic. If you have ever used a contacts list on a smart phone, you'll recognize the list view's source of inspiration - check out the runtime demo of the component here.
The component was actually backported into 11.1.1.7.0 from the 12c version - and while the 12c version of JDeveloper has better design-time support for adding and binding a listView to a page, in the current release the work will mostly be manual.
However, for the lazy developer there are some shortcuts you can take to create the list component faster.
Here is a short video that shows you how to leverage an existing table component on your page to make the creation of the list component easier and with more functionality.
Here’s a quick and dirty script to create a procedure (in the SYS schema – so be careful) to check the Hakan Factor for an object. If you’re not familiar with the Hakan Factor, it’s the value that gets set when you use the command “alter table minimize records_per_block;”.
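If you’ve not come across the command before, here’s a minimal sketch of how the Hakan Factor typically gets set (the table name and data are invented): arrange the table so that no block holds more rows than you want, then issue the command, and Oracle records the current maximum rows per block as the limit for future inserts.

create table t1 (n number);

insert into t1
select rownum from dual connect by level <= 1000;
commit;

delete from t1 where mod(n,100) != 0;          -- leave only a few rows in each block
commit;

alter table t1 minimize records_per_block;     -- stores the current max rows/block as the Hakan Factor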
I was prompted to publish this note by an item on the OTN SQL forum describing a problem with partition exchange with a table when there were bitmap indexes in place and the table had been changed to have some extra columns added. (Problem as yet unresolved as I publish).
If you start playing with the Hakan Factor, you’ll find that there are some odd little bugs in what gets stored and how it gets used. (SQL updated to use bitand() to reflect comments below and Karsten Spang’s blog note; also edited following a comment on OTN to show the rest of the spare1 flag bits)
create or replace procedure show_hakan(
	i_table in varchar2,
	i_owner in varchar2 default user
)
as
	m_obj	number(8,0);
	m_flags	varchar2(12);
	m_hakan	number(8,0);
begin
	/* created by show_hakan.sql */
	select
		obj#,
/*
		case
			when (spare1 > 5 * power(2,15)) then (spare1 - 5 * power(2,15))
			when (spare1 > power(2,17))     then (spare1 - power(2,17))
			when (spare1 > power(2,15))     then (spare1 - power(2,15))
			else spare1
		end		hakan
*/
		to_char(
			bitand(spare1, to_number('ffff8000','xxxxxxxx')),
			'xxxxxxxx'
		)				flags,
		bitand(spare1, 32767)		hakan	-- 0x7fff
	into
		m_obj, m_flags, m_hakan
	from
		tab$
	where
		obj# in (
			select	object_id
			from	dba_objects
			where	object_name = upper(i_table)
			and	object_type = 'TABLE'
			and	owner = upper(i_owner)
		)
	;

	dbms_output.put_line(
		'Hakan factor for object ' || m_obj ||
		' (' || i_owner || '.' || i_table || ') is ' || m_hakan ||
		' with flags ' || m_flags
	);
end;
/

drop public synonym show_hakan;
create public synonym show_hakan for show_hakan;
grant execute on show_hakan to public;
You’ll notice that I’ve done an “upper()” on the table and owner – that means you’re in trouble if you have created schemas or tables with mixed-case names (but you wouldn’t do that in a production system, would you?)

Update – A little bug history
One of the odd details of the Hakan factor is that the value it shows is one less than the number of rows that will be stored in a block; and since it looks as if the factor is not allowed to drop to zero, you can’t hack the Hakan factor to force one row per block.
So here’s a (trivial and sub-optimal) piece of code to check current number of rows per block in a simple heap table (assuming the tablespace consists of a single file):
select
	ct, count(*)
from
	(
	select
		dbms_rowid.rowid_block_number(rowid),
		count(*)	ct
	from
		t1
	group by
		dbms_rowid.rowid_block_number(rowid)
	)
group by
	ct
order by
	ct
;
Here’s the output of a session, running under 9.2.0.8, cut and pasted from the screen:
SQL> @afiedt.buf

        CT   COUNT(*)
---------- ----------
         9          1
        16          1

SQL> alter table t1 nominimize records_per_block;
SQL> alter table t1 minimize records_per_block;
SQL> execute show_hakan('t1')
Hakan factor for object 48865 (TEST_USER.t1) is: 15

SQL> alter table t1 move;
SQL> @afiedt.buf

        CT   COUNT(*)
---------- ----------
        10          1
        15          1

SQL> alter table t1 nominimize records_per_block;
SQL> alter table t1 minimize records_per_block;
SQL> execute show_hakan('t1')
Hakan factor for object 48865 (TEST_USER.t1) is: 14
Every time you moved the table, 9.2.0.8 (and earlier) used the actual stored value of the Hakan Factor to rebuild the table; but if you regenerated the Hakan Factor, the stored value was one less than the actual row count. So if you kept repeating the process, the number of rows per block would decrease by one each time and the table would get bigger and bigger.
It’s a silly example – but the real-world relevance was that a direct path insert behaved differently from a normal insert and this could result in a significant amount of wasted space if you were doing bulk loads in your overnight batch; so the code changed in 10g to make the normal and direct path inserts consistent with each other, but the change went the wrong way and, as a side effect, you get one more row per block than suggested by the Hakan Factor – and you can’t trick the Hakan factor into enforcing one row per block any more.
Well, the honest-to-goodness truth is...I follow the instructions of people who are smarter about it than I am. It's a practice I follow quite often. Albert Einstein used to say that he never memorized anything he could look up. As a lazy guy, that sounds really good to me; it's worked out well.
So, in terms of steps for upgrading OBIEE to 11.1.1.7 (or anything else regarding OBIEE, for that matter), I look to the very bright people at Rittman Mead. You can find all you'll ever want to know about the upgrade here. That's it.
Google (or Bing or whatever your search engine of choice happens to be) is your friend.