While I haven't seen one in action yet, the flash pack seems to be six Sun Flash Accelerator F40 PCIe cards, each of which has a capacity of 400 GB. These cards run amazingly fast, with read speeds of more than 2 GB/second (write speed is about half that, at 1+ GB/second). These cards normally sell for almost $6K each, so Oracle is providing the flash add-on pack for roughly what you'd pay buying them on your own (but you'd then have to get them into the Exalytics machine all on your own).
This Matters If You Own Essbase
Why would you want this? Essbase, primarily. Essbase uses a ton of disk I/O, and one of the ways Exalytics can speed up Essbase is by pulling your cubes into a RAMDisk (since you have 1 TB of RAM to play with). At some point, though, it has to get that data from physical drives to the RAMDisk (unless you're building all your cubes in memory at start-up each time). Having blazingly fast flash drives with 0.25-millisecond read latency allows you to store your cubes on the flash drives and then pull them into RAM much more quickly than reading from traditional drives.
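To put rough numbers on that staging step – a back-of-the-envelope illustration of my own, assuming a 100 GB set of cubes, the 2 GB/second flash read speed quoted above, and a conventional disk subsystem sustaining around 200 MB/second:

$$t_{\text{flash}} \approx \frac{100\ \text{GB}}{2\ \text{GB/s}} = 50\ \text{s} \qquad\text{vs.}\qquad t_{\text{disk}} \approx \frac{100\ \text{GB}}{0.2\ \text{GB/s}} = 500\ \text{s}$$

That's an order of magnitude less time just to get the cubes into RAM, before you even count the near-zero seek latency on random reads.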
We have tested Essbase running on flash drives and it helps everything (particularly minimizes the negative effects of fragmentation since seek time drops to basically nothing on flash). For customers buying Exalytics primarily for Essbase, the Exalytics Flash Upgrade Kit should be strongly considered with every Exalytics purchase (and if you already own Exalytics, buy it to put on top).
OBIEE is much less affected by hard drives, so while it may help OBIEE, this really matters a lot more to Essbase customers.
Oracle EPM Fully Supported on Exalytics
Since we're on the subject of Exalytics, now that EPM 11.1.2.3 is out, all Oracle EPM/Hyperion components certified to run on Linux will run on Exalytics PS2. These include:
- Administration Services
- Calculation Manager
- EPM Workspace
- Essbase Server
- Essbase Studio Server
- Financial Reporting
- Interactive Reporting (32-bit only)
- Oracle HTTP Server
- Profitability and Cost Management
- Production Reporting (32-bit only)
- Provider Services
- Reporting and Analysis Framework Services and Web Application
- Shared Services
- Web Analysis
The title is not an original bon mot by me – it’s been said often, by others, and by many with more experience than I have in developing standards. It is with mixed emotions that I feel compelled to talk about a (generally good and certainly well-intentioned) standards organization: the US National Institute of Standards and Technology (NIST). I should state at the outset that I have a lot of respect for NIST. In the past, I have even urged a Congressional committee (House Science and Technology, if memory serves) to try to allocate more money to NIST for cybersecurity standards work. I’ve also met a number of people who work at NIST, some of whom have since left and brought their considerable talents to other government agencies. I ran into one of them recently and mentioned that I still wore a black armband years after he left NIST, because he had done such great work there and I missed working with him. All that said, I’ve seen a few trends at NIST recently that are – of concern.
When in Doubt, Hire a Consultant
I’ve talked in other blog entries about the concern I have that so much of NIST’s outwardly-visible work seems to be done not by NIST but by consultants. I’m not down on consultants for all purposes, mind you – what is having your tires rotated and your oil changed except “using a car consultant?” However, in the area of developing standards or policy guidance it is of concern, especially when, as has been the case recently, the number of consultants working on a NIST publication or draft document is greater than the number of NIST employees contributing to it. There are business reasons, often, to use consultants. But you cannot, should not, and must not “outsource” a core mission, or why are you doing it? This is true in spades for government agencies. Otherwise, there is an entire beltway’s worth of people just aching to tell you about a problem you didn’t know you had, propose a standard (or regulation) for it, write the standard/regulation, interpret it and “certify” that Other Entities meet it. To use a song title, “Nice Work If You Can Get It.”* Some recent consultant-heavy efforts are all over the map, perhaps because there isn’t a NIST employee to say, "you say po-TAY-to, I say po-TAH-to, let's call the whole thing off." ** (Or at least make sure the potato standard is Idaho russet – always a good choice.)
Another explanation – not intentionally sinister but definitely a possibility – is that consultants’ business models are often tied to repeat engagements. A short, concise, narrowly-tailored and readily understandable standard isn’t going to generate as much business for them as a long, complex and “subject to interpretation – and five people will interpret this six different ways” – document.
In short: I really don’t like reading a document like NISTIR 7622 (more on which below) where most of the people who developed it are consultants. NIST’s core mission is standards development: NIST needs to own their core mission and not farm it out.
Son of FISMA
I have no personal experience with the Federal Information Security Management Act of 2002 (FISMA) except the amount of complaining I hear about it second hand, which is considerable. The gist of the complaints is that FISMA asks people to do a lot of stuff that looks earnestly security oriented, not all of which is equally important.
Why should we care? To quote myself (in an obnoxiously self-referential way): “time, money and (qualified security) people are always limited.” That is, the more security degenerates into a list of the 3000 things you Must Do To Appease the Audit Gods, the less real security we will have (really, who keeps track of 3000 Must Dos, much less does them? It sounds like a demented Girl Scout merit badge). And, in fact, the one thing you read about FISMA is that many government agencies aren’t actually compliant because they missed a bunch of FISMA checkboxes. Especially since knowledgeable resources (that is, good security people) are limited, it’s much better to do the important things well than to maintain the farce that you can check 3000 boxes, which certainly cannot all be equally important. (It’s not even clear how many of these requirements contribute to actual security as opposed to supporting the No Auditor Left Behind Act.)
If the scuttlebutt I hear is accurate, the only thing that could make FISMA worse is – you guessed it – adding more checkboxes. It is thus with considerable regret that I heard recently that NIST updated NIST Special Publication 800-53 (which NIST has produced as part of its statutory responsibilities under FISMA). The Revision 4 update included more requirements in the area of supply chain risk management and software assurance and trustworthiness. Now why would I, a maven of assurance, object to this? Because a) we already have actual standards around assurance, b) having FISMA-specific requirements means that pretty much every piece of Commercial Off-the-Shelf (COTS) software will have to be designed and built to be FISMA compliant or COTS software/hardware vendors can’t sell into the Federal government, and c) we don’t want a race by other governments to come up with competing standards, to the point where we’re checking not 3000 but 9000 or 12000 boxes and probably can’t produce a single piece of global COTS at all, let alone one that meets all 12000 requirements. (Another example is the set of supply chain/assurance requirements in the telecom sector in India that include a) asking for details about country of origin and b) specific contractual terms that buyers anywhere in the supply chain are expected to use. An unintended result is that a vendor will need to a) disclose sensitive supply chain data (which itself may be a trade secret) and b) modify processes around global COTS to sell into one country.)
Some of the new NIST guidance is problematic for any COTS supplier. To provide one example, consider:
“The artifacts generated by these development activities (e.g., functional specifications, high-level/low-level designs, implementation representations [source code and hardware schematics], the results from static/dynamic testing and code analysis (emphasis mine)) can provide important evidence that the information systems (including the components that compose those systems) will be more reliable and trustworthy. Security evidence can also be generated from security testing conducted by independent, accredited, third-party assessment organizations (e.g., Common Criteria Testing Laboratories (emphasis mine), Cryptographic/Security Testing Laboratories, and other assessment activities by government and private sector organizations.)”
For a start, to the extent that components are COTS, such “static testing” is certainly not going to happen by a third party, nor will the results be provided to a customer. Once you allow random customers – especially governments – access to your source code or to static analysis results, you might as well gift wrap your code and send it to a country that engages in industrial espionage, because no vendor, having agreed to this for one government, will ever be able to say no to Nation States That Steal Stuff. (And static analysis results, to the extent some vulnerabilities are not fixed yet, just provide hackers a road map for how and where to break in.) Should vendors do static analysis themselves? Sure, and many do. It’s fair for customers to ask whether this is done, and how a supplier ensures that the worst stuff is fixed before the supplier ships product. But it is worth noting – again – that if these tools were easy to use and relatively error free, everyone would have been at a high level of tools usage maturity years ago. Using static analysis tools is like learning Classical Greek – very hard, indeed. (OK, Koine Greek isn’t too bad but Homeric Greek or Linear B, fuhgeddabout it.)
With reference to the Common Criteria (CC), the difficulty now is that vendors have a much harder time doing CC evaluations than in the past because of other forces narrowing CC evaluations into a small set of products that have Protection Profiles (PPs). The result has been and will be for the foreseeable future – fewer evaluated products. The National Information Assurance Partnership (NIAP) – the US evaluation scheme – has ostensibly good reasons for their “narrowed/focused” CC-directions. But it is more than a little ironic that the NIST 800-53 revision should mention CC evaluations as an assurance measure at a time when the pipeline of evaluated products is shrinking, in large part due to the directions taken by another government entity (NIAP). What is industry to make of this apparent contradiction? Besides corporate head scratching, that is.
There are other – many other – sections I could comment upon, but one sticks out as worthy of notice:
“Supply chain risk is part of the advanced persistent threat (APT).”
It’s bad enough that “supply chain risk” is such a vague term that it encompasses basically any and all risk of buying from a third party. (Including “buying a crummy product” which is not actually a supply chain-specific risk but a risk of buying any and all products.) Can bad guys try to corrupt the supply chain? Sure. Does that make any and all supply chain risks “part of APT?” Heck, no. We have enough hysteria about supply chain risk and APT without linking them together for Super-Hysteria.
To sum up, I don’t disagree that customers in some cases – and for some, not all, applications – may wish for higher levels of assurance or have a heightened awareness of cyber-specific supply chain threats (e.g., counterfeiting and deliberate insertion of malware in code). However, incorporating supply chain provisions and assurance requirements into NIST 800-53 has the unintended effect of subjecting any and all COTS products sold to government agencies – which is all of them, as far as I know – to FISMA.
What if the state of Idaho decided that every piece of software had to attest to the fact that No Actual Moose were harmed during the production of this software and that any moose used in code production all had background checks? What if every other state enumerated specific assurance requirements and specific supply chain risk management practices? What if they conflict with each other, or with the NIST 800-53 requirements? I mean really, why are these specific requirements called out in NIST 800-53 at all? There really aren’t that many ways to build good software. FISMA as interpreted by NIST 800-53 really, really shouldn’t roll its own.
IT Came from Outer Space – NISTIR 7622
I’ve already opined at length about how bad the NIST Interagency Report (NISTIR) 7622 is. I had 30 pages of comments on the first 80-page draft. The second draft only allowed comments of the Excel Spreadsheet form: “Section A.b, change ‘must’ to ‘should,’ for the reason ‘because ‘must’ is impossible’” and so on. This format didn’t allow for wholesale comments such as “it’s unclear what problem this section is trying to solve and represents overreach, fuzzy definition and fuzzier thinking.” NISTIR 7622 was and is so dreadful that an industry association signed a letter that said, in effect, NISTIR 7622 was not salvageable, couldn’t be edited to something that could work, and needed to be scrapped in toto.
I have used NISTIR 7622 multiple times as a negative example: most recently, to an audience of security practitioners as to why they need to be aware of what regulations are coming down the pike and speak up early and often. I also used it in the context of a (humorous) paper I did at the recent RSA Conference with a colleague, the subject of which was described as “doubtless-well-intentioned legislation/regulation-that-has-seriously-unfortunate-yet-doubtless-unintended-consequences.” That’s about as tactful as you can get.
Alas, Dracula does rise from the grave,*** because I thought I heard noises at a recent Department of Homeland Security event that NISTIR 7622 was going to move beyond “good advice” and morph into a special publication. (“Run for your lives, store up garlic and don’t go out after dark without a cross!”) The current version of NISTIR 7622 – after two rounds of edits and heaven knows how many thousands of hours of scrutiny – is still unworkable, overscoped and completely unclear: you have a better chance of reading Linear B**** than understanding this document (and for those who don’t already know, Linear B is not merely “all Greek to me” – it’s actually all Greek to anybody). Ergo, NISTIR 7622 needs to die the true death: the last thing anyone should do with it is make a special publication out of it. It’s doubling down on dreck. Make it stop. Now. Please.
The last section is, to be fair, not really about NIST per se. NIST has been tasked, by virtue of a recent White House Executive Order, with developing a framework for improving cybersecurity. As part of that tasking, NIST has published a Request For Information (RFI) seeking industry input on said framework. NIST has also scheduled several meetings to actively draw in thoughts and comments from those outside NIST. As a general rule, and NISTIR 7622 notwithstanding, NIST is very good at eliciting and incorporating feedback from a broad swath of stakeholders. It’s one of their strengths and one of the things I like about them. More importantly, I give major kudos to NIST and its Director Pat Gallagher for forcefully making the point that NIST would not interfere with IT design, development and manufacture, in the speech he gave when he kicked off NIST’s work on the Framework: “the Framework must be technology neutral and it must enable critical infrastructure sectors to benefit from a competitive [technology] market. (…) In other words, we will not be seeking to tell industry how to build your products or how to run your business.”
The RFI responses are posted publicly and are, well, all over the map. What is concerning to me is the apparent desire of some respondents to have the government tell industry how to run their businesses. More specifically, how to build software, how to manage supply chain risk, and so forth. No, no, and no. (Maybe some of the respondents are consultants lobbying the government to require businesses to hire these consultants to comply with this or that mandate.)
For one thing, “security by design” concepts have already been working their way into development for a number of years: many companies are now staking their reputations on the security of their products and services. Market forces are working. Also, it’s a good time to remind people that more transparency is reasonable – for example, to enable purchasers to make better risk-based acquisition decisions – but when you buy COTS you don’t get to tell the provider how to build it. That’s called “custom code” or “custom development.” Just as I don’t get to walk into <insert name of low-end clothing retailer here> and tell them that I expect my “standard off-the-shelf blue jeans” to ex post facto be tailored to me specifically, made of “organic, local and sustainable cotton” (leaving aside the fact that nobody grows cotton in Idaho), oh, and embroidered with not merely rhinestones but diamonds. The retailer’s response should be “pound sand/good luck with that.” It’s one thing to ask your vendor “tell me what you did to build security into this product” and “tell me how you help mitigate counterfeiting” but something else for a non-manufacturing entity – the government – to dictate exactly how industry should build products and manage risk. Do we really want the government telling industry how to build products? Further, do we really want a US-specific set of requirements for how to build products for a global marketplace? What’s good for the (US) goose is good for the (European/Brazilian/Chinese/Russian/Indian/Korean/name your foreign country) gander.
An illustrative set of published responses to the NIST RFI – and my response to the response – follows:
1. “NIST should likewise recognize that Information Technology (IT) products and services play a critical role in addressing cybersecurity vulnerabilities, and their exclusion from the Framework will leave many critical issues unaddressed.”
Comment: COTS is general purpose software and not built for all threat environments. If I take my regular old longboard and attempt to surf Maverick’s on a 30 foot day and “eat it,” as I surely will, not merely because of my lack of preparation for 30-foot waves but because you need, as every surfer knows, a “rhino chaser” or “elephant gun” board for those conditions, is it the longboard shaper’s fault? Heck, no. No surfboard is designed for all surf conditions; neither is COTS designed for all threat environments. Are we going to insist on products designed for one-size-fits-all threat conditions? If so, we will all, collectively, “wipe out.” (Can’t surf small waves well on a rhino chaser. Can’t walk the board on one, either.)
Nobody agrees on what, precisely, constitutes critical infrastructure. Believe it or not, some governments appear to believe that social media should be part of critical national infrastructure. (Clearly, the World As We Know It will come to an end if I can’t post a picture of my dog Koa on Facebook.) And even if certain critical infrastructure functions – say, power generation – depend on COTS hardware and software, the surest way to weaken their security is to apply an inflexible and country-specific regulatory framework to that COTS hardware and software. We have an existing standard for the evaluation of COTS IT: it’s called the Common Criteria (see below). Let’s use it rather than reinvent the digital wheel.
2. “Software that is purchased or built by critical infrastructure operators should have a reasonable protective measures applied during the software development process.”
Comment: Thus introducing an entirely new and undefined term into the assurance lexicon: “protective measures.” I’ve worked in security – actually, the security of product development – for 20 years and I have no idea what this means. Does it mean that every product should self-defend? I confess, I rather like the idea of applying the Marine Corps ethos – “every Marine a rifleman” – to commercial software. Every product should understand when it is under attack and every product should self-defend. It is a great concept, but we do not, as an industry, know how to do that – yet. Does “protective measures” mean “quality measures?” Does it mean “standard assurance measures?” Nobody knows. And any term that is this nebulous will be interpreted by every reader as Something Different.
3. “Ultimately, <Company X> believes that the public-private establishment of baseline security assurance standards for the ICT industry should cover all key components of the end-to-end lifecycle of ICT products, including R&D, product development, procurement, supply chain, pre-installation product evaluation, and trusted delivery/installation, and post-installation updates and servicing.”
Comment: I can see the religious wars over tip-of-tree vs. waterfall vs. agile development methodologies. There is no single development methodology, and there is no single set of assurance practices that will work for every organization (for goodness’ sake, you can’t even find a single vulnerability analysis tool that works well against all code bases).
Too many in government and industry cannot express concerns or problem statements in simple, declarative sentences, if at all. They don’t, therefore, have any business attempting to standardize how all commercial products are built (what problem will this solve, exactly?). Also, if there is an argument for baseline assurance requirements, it certainly can’t be for everything – or are we arguing that “FindEasyRecipes.com” is critical infrastructure and needs to be built to withstand hostile nation-state attacks that attempt to steal your brioche recipe, if not your tips on how to get sugar to caramelize at altitude?
4. “Application of this technique to the Common Criteria for Information Technology Security Evaluation revealed a number of defects in that standard. The journal Information and Software Technology will soon publish an article describing our technique and some of the defects we found in the Common Criteria.”
Comment: Nobody ever claimed the Common Criteria was perfect. What it does have going for it is a) it’s an ISO standard and b) by virtue of the Common Criteria Recognition Arrangement (CCRA), evaluating once against the Common Criteria gains you recognition in 20-some other countries. Putting it differently, the quickest way to make security much, much worse is to have a Balkanization of assurance requirements. (Taking a horse and jumping through mauve, pink, and yellow hoops doesn’t make the horse any better, but it does enrich the hoop manufacturers, quite nicely.) In the security realm, doing the same thing four times doesn’t give you four times the security, it reduces security by four times, as limited (skilled) resource goes to doing the same thing four different ways. If we want better security, improve the Common Criteria and, by the way, major IT vendors and the Common Criteria national schemes – which come from each CCRA member country’s information assurance agency, like the NSA in the US – have been hard at work for the last few years applying their considerable security expertise and resources to do just that. Having state-by-state or country-by-country assurance requirements will make security worse – much, much worse.
5. “…vendor adoption of industry standard security models. In addition, we also believe that initiatives to motivate vendors to more uniformly adopt vulnerability and log data categorization, reporting and detection automation ecosystems will be a significant step in ensuring security tools can better detect, report and repair security vulnerabilities.”
Comment: There are so many flaws in this, one hardly knows where to start. There are existing vulnerability “scoring” standards – namely, the Common Vulnerability Scoring System (CVSS), ***** though there are some challenges with it, such as the fact that the value of the data compromised should make a difference in the score: a “breach” of Aunt Gertrude’s Whiskey Sauce Recipe is not, ceteris paribus, as dire as a breach of Personally Identifiable Information (PII) if for no other reason than a company can incur large fines for the latter, far exceeding Aunt Gertrude’s displeasure at the former. Even if she cuts you out of her will.
Also, there is work going on to standardize descriptions of product vulnerabilities (that is, the format and type). However, not all vendors release the exact same amount of information when they announce security vulnerabilities and should not be required to. Oracle believes that it is not necessary to release either exploit code or the exact type of vulnerability; e.g., buffer overflow, cross-site request forgery (CSRF) or the like because this information does not help customers decide whether to apply a patch or not: it merely enables hackers to break into things faster. Standardize how you refer to particular advisory bulletin elements and make them machine readable? Sure. Insist on dictating business practices (e.g., how much information to release) – heck, no. That’s between a vendor and its customer base. Lastly, security tools cannot, in general “repair” security vulnerabilities – typically, only patch application can do that.
6. “All owners and operators of critical infrastructure face risk from the supply chain. Purchasing hardware and software potentially introduce security risk into the organization. Creation of a voluntary vendor certification program may help drive innovation and better security in the components that are essential to delivery of critical infrastructure services.”
Comment: The insanity of the following comment astounds: “Purchasing hardware and software potentially introduce security risk into the organization.” News flash: all business involves “risk.” Not doing something is a risk. So, what else is new? Actually, attempting to build everything yourself also involves risk – not being able to find qualified people, the cost (and ability) to maintain a home-grown solution, and so forth. To quote myself again: “Only God created something from nothing: everyone else has a supply chain.”****** In short, everyone purchases something from outside their own organization. Making all purchases into A Supply Chain Risk as opposed to, say, a normal business risk is silly and counterproductive. It also makes it far less likely that specific, targeted supply chain threats can be addressed at all if “buying something – anything – is a risk” is the threat definition.
At this point, I think I’ve said enough. Maybe too much. Again, I appreciate NIST as an organization, and as I said above, the direction they have set for the Framework (not to $%*& with IT innovation) is really to their credit. I believe NIST needs to in-source more of their standards/policy development, because it is their core mission and because consultants have every incentive to create perpetual work for themselves (and none whatsoever to be precise and focused). NIST should adopt a less-is-more mantra vis-a-vis security. It is better to ask organizations to do a few critical things well than to ask them to do absolutely everything – with not enough resource (which is a collective industry problem and not one likely to be solved any time soon). Lastly, we need to remember that we are a proud nation of innovators. Governments generally don’t do well when they tell industry how to do their core mission – innovate – and, absent a truly compelling public policy argument for so doing, they shouldn’t try.
*”Nice Work If You Can Get It,” lyrics by Ira Gershwin, music by George Gershwin. Don’t you just love Gershwin?
** “Let’s Call The Whole Thing Off.” Another gem by George and Ira Gershwin.
*** Which reminds me – I really hate the expression “there are no silver bullets.” Of course there are silver bullets. How many vampires and werewolves do you see wandering around?
****Speaking of which, I just finished a fascinating if short read: The Man Who Deciphered Linear B: The Story of Michael Ventris.
*****CVSS is undergoing revision.
****** If you believe the account in Genesis, that is.
In the two previous postings in this series on the Oracle BI Apps 11.1.1.7.1, we looked at the release at a high level, and then at the product architecture including the new configuration and functional setup tools. From a technology and developer perspective, though, probably the most interesting thing about this new release is its use of Oracle Data Integrator as the ETL tool rather than Informatica, and the doing-away with the DAC for load orchestration and monitoring.
This introduction of ODI brings a number of potential benefits to customers and developers and gives Oracle the opportunity to simplify the product architecture, but bear in mind that there’s no migration path from the earlier 7.9.x releases to this version, with Informatica customers instead having to wait until the “patch set 2” version due in the next twelve months; even then, migration between tools won’t be automatic, with existing Informatica-based installations expected to stay on Informatica unless they choose to re-implement using ODI.
So how does ODI work within this new release, and how has the DAC been replaced? Let’s take a look in this final piece in our short series on Oracle BI Apps 11.1.1.7.1, starting with the overall role that ODI plays in the platform architecture.
Existing ODI developers will know that the tool uses two repositories, known as the Master and Work repositories, to store details of data sources and targets, mappings, data models and other aspects of an ETL project. Within the BI Apps these two repositories are stored in a schema called prefix_ODI_REPO, for example DEV_ODI_REPO, and are accompanied by a new schema called prefix_BIACOMP, again for example DEV_BIACOMP. The BIACOMP schema contains tables used by the various new WebLogic-based BI Apps supporting applications, and contains details of the functional setup of the BI Apps, load plans that have been generated, and so forth. There’s also another schema called prefix_BIACOMP_IO which is used for read-write access to the BIACOMP schema, and all of these are held in a repository database alongside the usual schemas used for OBIEE, MDS and so forth.
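As a quick sanity check after installation, you can confirm that all three schemas exist in the repository database – a minimal sketch, assuming the default DEV_ prefix used in the examples above:

select username
from   dba_users                        -- standard Oracle data dictionary view
where  username in ('DEV_ODI_REPO',     -- ODI master and work repositories
                    'DEV_BIACOMP',      -- BI Apps Components schema
                    'DEV_BIACOMP_IO')   -- read-write access schema for BIACOMP
order by username;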
The major difference in using ODI within this environment is that it’s treated as an “embedded” ETL tool, so that in most circumstances you won’t need to use ODI Studio itself to kick off load plans, monitor their execution, set up sources and targets and so forth. This was the original vision for Informatica within the original BI Apps, but Oracle are able to do this far more effectively with ODI as they own all parts of the tech stack, can alter ODI to make it easier to embed, have control over ODI’s various metadata APIs, and so forth. What this means in practice is that the setup of the ODI topology (to connect to the ERP sources, and the target data warehouse) is done for you via a web-based application called the Oracle BI Applications Configuration Manager, and you can kick off and then monitor your running ETL jobs from Configuration Manager and from ODI Console, the web-based operator tool that’s been around since the 11g release of ODI. The screenshot below shows Configuration Manager setting up the source database ODI topology entry, with the details that you provide then being pushed through to the ODI master repository:
Setting up a new BI Apps system involves using the Configuration Manager to define the connections through to the various source systems, then select the BI Apps modules (Financial Analytics, for example, and then the various subject areas within it) that you wish to implement. There are then a number of steps you can perform to set up system-wide settings, for example to select default currencies or languages, and then you come to run your first ODI load plan – which in this instance copies settings from your source system into the relevant tables in the BIACOMP schema, performing automatically the task that you had to do via the various domain configuration spreadsheets in the earlier 7.9.x releases – the screenshot below shows this ODI load plan listed out and having run successfully.
You can then view the execution steps and outcome either in ODI Console (embedded within Configuration Manager), or over at ODI Studio, using the Operator navigator.
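If you’d rather check progress with SQL than use either console, the same execution details are held in the ODI work repository tables; here’s a minimal sketch – assuming the standard ODI 11g SNP_SESSION table and the DEV_ODI_REPO schema from earlier – that lists recent sessions and their status:

select sess_no,        -- internal session number
       sess_name,      -- scenario / load plan step name
       sess_status,    -- 'D' = done, 'E' = error, 'R' = running
       sess_beg,       -- session start timestamp
       sess_end        -- session end timestamp (null while running)
from   dev_odi_repo.snp_session
order by sess_beg desc;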
Moving over to ODI Studio, the folders (or “adapters”) that in Informatica used to hold workflows and mappings for the various source systems are contained within the BI Apps project, within the Work repository and the Designer navigator. In the screenshot below you can also see the Fusion Apps adapter that’s not supported in this particular release, and the ETL Data Lineage adapter that should get enabled in an upcoming patch release.
In the screenshot above you can also see that one of the loading tasks, SDE_ORA_APAgingBucketsDimenson, is a package that (were you to expand the Interfaces entry) makes reference to a regular, and also a temporary, interface.
Packages in ODI perform the same role as Informatica workflows in earlier releases of the BI Apps, and each package runs some steps to refresh variables, work out whether it’s doing a full or incremental load, and then call the relevant ODI interface. Interfaces in ODI for the BI Apps typically load from other temporary interfaces, with these temporary interfaces performing the role of mapplets in the Informatica version of the BI Apps, as you can see in the screenshot on the left below. On the right, you can see the flow for another mapping, along with one of the custom KMs that come as part of the BI Apps 11.1.1.7.1 package.
Individual packages are then assembled into the equivalent of BI Apps 7.9.x “execution plans” through a new JEE application called the Load Plan Generator, which also gets installed into ODI Studio as a plug-in so you can develop new data loading routines away from the full production setup. As you can see in the final screenshot below, these load plans are then visible from within ODI Studio (whether you generated them there, or from Configuration Manager), and like all ODI 11g load plans you can view the outcome of each load plan instance run, restart it if this feature is enabled, and so forth.
So there you have it – how ODI is used within the BI Apps 11.1.1.7.1. I’m going to take a break now as it’s almost time for the Atlanta run of the Rittman Mead BI Forum 2013, but once I’m back in the UK I’ll try and put something together for the blog on pulling together your first ETL run. Until then – have fun with the release.
This is going to be a more personal blog post than I typically make here at e-Literate.
The open letter from San José State University’s philosophy department in protest of the edX JusticeX course being taught at SJSU is getting a lot of attention, as is the follow-up statement from the SJSU faculty senate. I have some concerns with both of these letters—particularly the one from the philosophy department—but before I get into them, I’d like to emphasize my points of agreement and solidarity with the department:
- As a former philosophy major and a former teacher of philosophy courses to seventh and eighth graders, I strongly believe that a course in social justice is critical to every American’s education.
- I also strongly agree that, in order for such a course to be effective, it must be up-to-date, relevant to the students, and involve in-depth facilitated discussion.
- I agree that there is a bit of a bait-and-switch going on, possibly unintentionally, with rhetoric that starts by claiming MOOCs provide superior pedagogy to lecture classes (which is probably somewhat true) and then moves to swapping out discussion classes for MOOCs instead.
- I agree that some MOOC fans (though by no means all of them) have simplistic notions of how MOOCs can make university education cheaper without thinking through the consequences either to the quality of education or the fiscal health of the colleges and universities that still provide tremendous value to our nation and our culture.
- I agree that intellectual diversity is very important, particularly when discussing complex issues that are essential to a functioning democracy, and that the potential for an intellectual monoculture is a concern worth taking very seriously.
- While I have no knowledge of the negotiations between edX and SJSU, I strongly agree that such partnerships should be conceived and implemented with active consultation and collaboration with faculty unless there is exceptionally strong justification to do otherwise.
Despite all this common ground on values that are dear to me, I find aspects of the department’s letter to be deeply problematic.
To begin with, there is this:
Good quality online courses and blended courses (to which we have no objections) do not save money, but pre-packaged ones do, and a lot.
That statement is demonstrably false. Good quality online courses and blended courses can, in fact, save money. How do we know? For starters, the National Center for Academic Transformation has a long list of course redesign projects they have been doing in collaboration with colleges and universities since 1999, many of which have achieved substantial cost savings. And some of them actually achieved substantial improvement in outcomes while achieving substantial cost savings. Nor is NCAT alone. There is a growing body of empirically backed academic literature showing that we can teach more students more effectively for less money across a variety of subjects. Some subjects are easier to redesign than others. But cost savings in high-quality courses is possible as a general proposition (and does not require open content licensing, by the way). The SJSU philosophy department’s blanket denial of this possibility is not credible.
As a result, the authors of the letter are also less credible when they write,
In addition to providing students with an opportunity to engage with active scholars, expertise in the physical classroom, sensitivity to its diversity, and familiarity with one’s own students is just not available in a one-size-fits-all blended course produced by an outside vendor….When a university such as ours purchases a course from an outside vendor, the faculty cannot control the design or content of the course; therefore we cannot develop and teach content that fits with our overall curriculum and is based on both our own highly developed and continuously renewed competence and our direct experience of our students’ abilities and needs.
There appears to be a significant disconnect here. On the one hand, the department argues (correctly, in my view) that philosophy students gain great benefit from “the opportunity to engage with active scholars.” On the other hand, they assert that the philosophy department has “expertise in the physical classroom” and a “highly developed and continuously renewed competence” despite the overwhelming likelihood that most of the faculty have not had significant opportunities to engage with active scholars in pedagogy-related fields.
They could have made their case just as effectively without foreclosing the possibility of improving on what they already do. As the letter from the SJSU Faculty Association notes in response to the improved completion rates of the edX course,
The pedagogical infrastructure and work that has gone into the preparation of the edX material could easily be replicated if SJSU made a commitment to pedagogy and made training in pedagogy central to all faculty.
This is a defensible argument that the philosophy department could have made. But it didn’t. Instead, it implicitly denied the existence of the scholarship of teaching and explicitly blamed the university’s financial issues on “industry” for “demanding that public universities devote their resources to providing ready-made employees, while at the same time…resisting paying the taxes that support public education.” The collective effect of these rhetorical moves is to absolve the department of all responsibility for addressing the real problems the university is facing.
By ignoring the scholarship of teaching, the department missed an opportunity to engage the MOOC question in a different way. Rather than thinking of MOOCs as products to be bought or rejected, they could have approached them as experiments in teaching methods that can be validated, refuted, or refined through the collective efforts of a scholarly community. Researchers collaborate across university boundaries all the time. The same can be true in the scholarship of teaching. The faculty could have demanded access to the edX data and the freedom to adjust the course design. The letter authors seem deeply invested in positioning the edX course as something that is locked down from a third-party commercial vendor. But in reality, the edX course is developed by a faculty member and provided by a university-based non-profit entity. Perhaps the department felt that there wasn’t sufficient opportunity in this particular course design to make a request to have a collaboration worthwhile. But their rhetoric gives no indication that there is any room for such exploration under any circumstances, or indeed that the department has anything to learn about use of educational technology that could lead to either improved outcomes or lower costs.
Equally disturbing is the tendency in both letters to dismiss the fiscal crisis as something caused solely by greedy capitalists. It’s worth requoting the earlier referenced comment from the philosophy department letter here:
Industry is demanding that public universities devote their resources to providing ready-made employees, while at the same time they are resisting paying the taxes that support public education.
To begin with, “industry” isn’t alone in demanding that public universities devote their resources to producing employable graduates. Students and their parents are asking for it too, as are individual human taxpayers. On this last point, I am not a Californian, but I understand that individual human taxpayers have an unusually direct say regarding tax rates in the state of California. The purpose of education as a public good is a serious and complicated question that deserves more careful treatment from people who should know better.
Nor are taxes the only issue. While it is true that there has been progressive defunding of public colleges and universities in the United States, it is also true that tuition costs have been rising dramatically across the country in private as well as public schools. And it is true that the public colleges and universities in California in particular are struggling with unanticipated swelling enrollments as they strive to meet the as-yet-unfulfilled moral imperative of universal access to education. Given all of this, it is not a morally defensible position to simply point the finger at the rich guys and say, “It’s their fault. Make them fix it.” To the degree that course redesign can positively impact student access to education, faculty have a moral obligation to be leading the charge. And from a strategic perspective, they are more likely to prevent dumb ideas—such as gutting quality residential education in favor of least-common-denominator, video-driven xMOOCs—from taking hold.
But perhaps the worst aspect of the simplistic finger pointing is the way in which it pollutes the civic discourse. It encourages individual stakeholders to harden into an “us vs. them” position that reduces the likelihood of citizens coming together to solve real, hard problems that are deeply intertwined with issues of social justice. Here’s an example of a comment made on this blog in response to a post about the California SB 520 bill:
Remember that when the Nazis led the people into the gas chamber they told them that it was a refreshing shower after a long train ride. Do not be fooled! This sweet sounding bill is the gas chamber of good education in California. Once we are in the questions will be pointless. As the pellets drop we will realize we should have questioned things sooner.
Setting aside the fact that the only justifiable use of genocide as an analogy is when talking about another genocide, this kind of rhetoric is enormously damaging to the possibility of a productive dialectic regarding how to solve the very real and complicated problems that our system of higher education faces, including both the need to increase access and the complexities of funding that imperative. And, sadly, this comment was written by a member of the SJSU philosophy department.
Dominic Brooks published a note recently about some very nasty SQL – originally thinking that it was displaying a run-time problem due to the extreme number of copies of the lnnvl() function the optimizer had produced. In fact it turned out to be a parse-time problem rather than a run-time problem, but when I first read Dominic’s note I was sufficiently surprised that I decided to try modelling the query.
Unfortunately the query had more than 1,000 predicates (OR’ed together), and some of them included in-lists. Clearly, writing this up by hand wasn’t going to be a good idea, so I wrote a script to generate both the data and the query, as follows – first a table to query:
create table t1
as
with generator as (
    select  --+ materialize
        rownum id
    from    dual
    connect by level <= 1e4
)
select
    rownum          id1,
    rownum          id2,
    rownum          id,
    lpad(rownum,10) v1,
    rpad('x',100)   padding
from
    generator   v1,
    generator   v2
where
    rownum <= 1e5
;

create index t1_i1 on t1(id1, id2);

begin
    dbms_stats.gather_table_stats(
        ownname     => user,
        tabname     => 'T1',
        method_opt  => 'for all columns size 1'
    );
end;
/
Then a piece of code to write a nasty query:
set pagesize 0
set feedback off
set termout off

spool temp1.sql

prompt select * from t1 where 1 = 2

select
    'or (id1 = ' || rownum || ' and id2 = ' || (rownum + 1) || ')'
from
    t1
where
    rownum <= 750
union all
select
    'or ( id1 = ' || (rownum + 1000) || ' and id2 in (' || rownum || ',' || (rownum+1) || '))'
from
    t1
where
    rownum <= 250
;

prompt /

spool off
Here’s an example of the text generated by the code – with the parameters set to 5 and 3 respectively (and notice how I’ve rigged the query so that it doesn’t return any data, whatever the optimizer thinks):
select * from t1
where
    1 = 2
or  (id1 = 1 and id2 = 2)
or  (id1 = 2 and id2 = 3)
or  (id1 = 3 and id2 = 4)
or  (id1 = 4 and id2 = 5)
or  (id1 = 5 and id2 = 6)
or  ( id1 = 1001 and id2 in (1,2))
or  ( id1 = 1002 and id2 in (2,3))
or  ( id1 = 1003 and id2 in (3,4))
/
So here’s the plan from the above query:
---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |     8 |  1008 |    16   (0)| 00:00:01 |
|   1 |  CONCATENATION                |       |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID | T1    |     1 |   126 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | T1_I1 |     1 |       |     2   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID | T1    |     1 |   126 |     3   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | T1_I1 |     1 |       |     2   (0)| 00:00:01 |
|   6 |   TABLE ACCESS BY INDEX ROWID | T1    |     1 |   126 |     3   (0)| 00:00:01 |
|*  7 |    INDEX RANGE SCAN           | T1_I1 |     1 |       |     2   (0)| 00:00:01 |
|   8 |   INLIST ITERATOR             |       |       |       |            |          |
|   9 |    TABLE ACCESS BY INDEX ROWID| T1    |     5 |   630 |     7   (0)| 00:00:01 |
|* 10 |     INDEX RANGE SCAN          | T1_I1 |     5 |       |     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("ID1"=1003)
       filter("ID2"=3 OR "ID2"=4)
   5 - access("ID1"=1002)
       filter((LNNVL("ID1"=1003) OR LNNVL("ID2"=3) AND LNNVL("ID2"=4)) AND
              ("ID2"=2 OR "ID2"=3))
   7 - access("ID1"=1001)
       filter((LNNVL("ID1"=1002) OR LNNVL("ID2"=2) AND LNNVL("ID2"=3)) AND
              (LNNVL("ID1"=1003) OR LNNVL("ID2"=3) AND LNNVL("ID2"=4)) AND
              ("ID2"=1 OR "ID2"=2))
  10 - access(("ID1"=1 AND "ID2"=2 OR "ID1"=2 AND "ID2"=3 OR "ID1"=3 AND
              "ID2"=4 OR "ID1"=4 AND "ID2"=5 OR "ID1"=5 AND "ID2"=6))
       filter((LNNVL("ID1"=1001) OR LNNVL("ID2"=1) AND LNNVL("ID2"=2)) AND
              (LNNVL("ID1"=1002) OR LNNVL("ID2"=2) AND LNNVL("ID2"=3)) AND
              (LNNVL("ID1"=1003) OR LNNVL("ID2"=3) AND LNNVL("ID2"=4)))
As you can see, the first five predicates end up in line 10 of the plan with 10 repetitions (5 * 2) of the lnnvl() function. The last three predicates show up in lines 3, 5, and 7 – and on each line we see two more lnnvl() calls than on the previous one – just imagine, then, how many lnnvl() calls the optimizer will have added to the query plan by the time we have 750 occurrences in the inlist iterator (line 8) and 250 occurrences of the slightly more complex predicate. Here are the relevant CPU stats (from v$sesstat) from running the generated script on 11.2.0.3, on Windows 32-bit, 2.8GHz CPU:
Name                             Value
----                             -----
recursive cpu usage              1,848
CPU used when call started       1,854
CPU used by this session         1,854
DB time                          1,870
parse time cpu                   1,847
parse time elapsed               1,862
Clearly the parse time is extreme – though not as dramatic as in Dominic’s example; but having set up the first draft of the sample code it’s easy enough to change the number of occurrences of each type of predicate, and it’s pretty easy to make longer in-lists in the more complex of the two types of predicate. It’s not too difficult to get an execution plan that mimics Dominic’s in length and time to parse.
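If you want to capture the same statistics for your own session while experimenting, here's a minimal sketch using the standard v$statname and v$mystat dynamic performance views – run it before and after parsing the query and difference the values:

select sn.name, st.value
from   v$statname sn,
       v$mystat   st           -- statistics for the current session only
where  st.statistic# = sn.statistic#
and    sn.name in (
           'recursive cpu usage',
           'CPU used when call started',
           'CPU used by this session',
           'DB time',
           'parse time cpu',
           'parse time elapsed'
       );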
It’s not just the parse times that are interesting when you start doing this, by the way – it’s worth playing around to see what happens. It’s probably best to run the query to pull the plans from memory if you want to see the plans, though – if you try using “explain plan” then you start using memory in the SGA for some of the work: in one of my examples I had to abort the instance after a few minutes.
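For reference, pulling the plan of the most recent statement from memory is a one-liner with dbms_xplan.display_cursor – remember to set serveroutput off first, otherwise the “last statement” your session ran will be SQL*Plus’s hidden dbms_output call rather than your query:

set serveroutput off

-- null, null means "the last statement executed by this session"
select * from table(dbms_xplan.display_cursor(null, null, 'typical'));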
The singer is Canadian astronaut Commander Chris Hadfield who has been tweeting and posting pictures from space – be careful, you may get hooked: https://twitter.com/Cmdr_Hadfield/status/332819772989378560/photo/1

Update:
When I posted the link to the video it had received 1.5M views; less than 24 hours later it’s up to roughly 7M. (And they weren’t all Richard Foote). Clearly the images have caught the imagination of a lot of people. If you have looked at the twitter stream it’s equally inspiring – and not just for the pictures.
With Oracle OpenWorld 2013 in San Francisco on the horizon you may have already seen some mention of the Oracle Excellence Awards. But do you know what these awards are all about?
The Oracle Excellence Awards recognize the achievements of members of the Oracle community in eleven award categories across the spectrum of roles involved in making things happen in enterprise IT. Several categories will be of particular interest to Oracle Technology Network members. These include:
Oracle Fusion Middleware Innovation
This category recognizes accomplishments in the following sub-categories:
- Oracle Exalogic Elastic Cloud
- Oracle Cloud Application Foundation
- Oracle Service-Oriented Architecture & Business Process Management
- Oracle WebCenter
- Oracle Identity Management
- Oracle Data Integration
- Oracle Application Development Framework and Fusion Development
- Business Analytics (Oracle BI, Oracle EPM, and Oracle Exalytics)
Oracle Magazine Technologist of the Year
Recognizes individual accomplishment in the following categories:
- Big Data Architect
- Cloud Architect
- Database Developer
- Enterprise Architect
- Mobile Architect
- Social Architect
Winners in each category in the Oracle Excellence Awards get complimentary passes to Oracle OpenWorld 2013 in San Francisco, along with other benefits. This is a big deal!
Nominations for the categories listed above close June 21, 2013. So if you or someone you know is worthy of this recognition, what are you waiting for?
Click the links above for more information.
Well, Brighton is now a wrap and we’re all now over in Atlanta, getting ready for the second leg of the 2013 Rittman Mead BI Forum, running from this Wednesday, 15th May 2013, through to Friday, 17th May. Photos from the Brighton event are up on Flickr now, but for anyone who’s coming down to the Georgia Tech Hotel & Conference Center later this week, this posting contains the detailed agenda for the event, along with a preview of what’s coming in terms of social events, guest speakers and the masterclass.
Wednesday starts with the optional one-day masterclass, this year on Oracle Data Integration and led by myself, Stewart Bryson and Michael Rainey. I previewed the data integration masterclass previously on the blog, and the planned timetable for the masterclass looks like this:
Day 1 : Optional Oracle Data Integration Masterclass, followed by Registration, Drinks and Keynote/Meal
10.00 – 11.00 : Welcome, and Introduction to Oracle Data Integrator 11g (Stewart Bryson)
11.00 – 11.15 : Morning Coffee
11.15 – 11.45 : ODI and the Oracle Reference Architecture for Information Management (Stewart Bryson)
11.45 – 12.45 : ODI and GoldenGate – A Perfect Match… (Michael Rainey)
12.45 – 13.30 : Lunch
13.30 – 14.30 : ODI and Hadoop, MapReduce and Big Data Sources (Mark Rittman)
14.30 – 15.30 : The Three R’s of ODI Fault Tolerance : Resuming, Restarting and Restoring (Stewart Bryson)
15.30 – 16.30 : Scripting and Automating ODI using Groovy and the ODI SDK (Michael Rainey)
The event itself officially opens at 4pm on Wednesday, May 15th 2013 with registration taking place then, and a drinks reception in the hotel bar from 5pm to 6pm. At 6pm we have the Oracle keynote led by Jack Berkowitz and Philippe Lions, and then an informal meal in the hotel restaurant from 7pm – 10pm.
The main conference then opens at 8am on the Thursday morning, with registration open from 8am – 8.45am, opening remarks from myself at 8.45am and the first session starting at 9am. Here’s the timetable as planned for Thursday:
Day 2 : Main Conference Sessions, Guest Speaker and Gala Meal
8.45am – 9.00am : Opening Remarks Mark Rittman, Rittman Mead
9.00am – 10.00am : Rene Kuipers, VX Company, “It’s all in the genes – The power of Oracle Exadata and the Oracle Database”
10.00am – 10.30am : Morning coffee
10.30am – 11.30am : Jack Berkowitz, Oracle : “OBI Presentation, Interaction and Mobility”
11.30am – 12.30pm : Venkatakrishnan J, Rittman Mead, “In Memory Analytics – TimesTen, Essbase 11.1.2.3 – Analysis – A Comparison”
12.30pm – 1.15pm : Lunch
1.15pm – 1.30pm : TED Session 1 : Kevin McGinley – “OBIEE and OEID: What if…?”
1.30pm – 1.45pm : TED Session 2 : Jon Mead, Rittman Mead, “Why I want to be working with Business Intelligence in 5 years time”
1.45pm – 2.00pm : TED Session 3 : Jeremy Harms – “A BI Publisher Beginner’s MacGyver-Hack for Financial Reporting with OBIEE: A Quickie!”
2.15pm – 3.15pm : Alan Lee, Oracle, “Update on BI Metadata Architecture and Design Tool”
3.15pm – 3.45pm : Afternoon coffee and beers
3.45pm – 4.45pm : Jeff McQuigg, KPI Partners Inc, “Performance Tuning the BI Apps with a Performance Layer”
After the first day’s presentations we’ll take a short break, and then convene again back in the conference room at 5pm for our special guest speaker session, this year being provided by Method R’s Cary Millsap, who many of you will know from his Optimizing Oracle Performance book and his “response time” approach to performance tuning. Just after Cary’s session at around 6.30pm we’ll then be taken by coach to “4th and Swift”, the venue for the gala meal, where we’ll be from around 7pm through to around 10pm.
5.00pm – 6.00pm : Guest Keynote: Cary Millsap – “Thinking Clearly about Performance”
6.30pm – 7.00pm : Depart for Restaurant
7.00pm – 10.00pm : Gala Meal – 4th and Swift, Atlanta
Day 3 : Main Conference Sessions, and Close
The final day of the BI Forum is all about big data and the BI Apps, with a special session from Pythian’s Alex Gorbachev on Hadoop and Oracle Data Warehousing, sessions by Oracle on Big Data and OBIEE, a big data debate, and an extended session by Oracle’s Florian Schouten and Accenture’s Kevin McGinley on the BI Apps 11.1.1.7.1.
We also have sessions on Endeca, OBIEE time-series analysis and extending OBIEE using plug-ins, so hopefully everyone will be able to stay until 5pm when the event will close.
8.30am – 9.30am : Tim Vlamis, Vlamis Software Solutions Inc, “Forecasting and Time Series Analysis in Oracle BI”
9.30am – 10.30am : Special Guest: Alex Gorbachev, Pythian – “Hadoop versus the Relational Data Warehouse.”
10.30am – 11.00am : Morning Coffee
11.00am – 12.00pm : Christian Screen, Capgemini, “How to Create a Plug-In for Oracle BI 11g”
12.00pm – 1.00pm : Marty Gubar and Alan Lee – OBIEE and Hadoop/Big Data
1.00pm – 1.45pm : Lunch
1.45pm – 2.45pm : Debate – “Big Data – Hype, or the Future of Oracle BI/DW?”
2.45pm – 4.15pm : Florian Schouten (Oracle) and Kevin McGinley (Accenture) – Oracle BI Apps 11g and ODI
4.15pm – 5.00pm : Adam Seed, Rittman Mead – “Endeca – Looking beyond the general demos”
You’ll notice we’ve brought back the popular “debate” section this year, with this year’s topic being “Big Data – Hype, or the Future of BI/DW?”. I’ll be looking for volunteers to argue the case for either of the two sides in the debate, so if you’ve got a view on whether big data is going to be the salvation of BI, whether it’ll turn us into the COBOL programmers of the future, or whether it’s just a load of hot air (or you just like having an argument), let me know when you arrive and we’ll pull the debating teams together.
Other than that – have a safe journey over, and see at least some of you in Atlanta later in the week!
Fishbowl Solutions was recently featured on Oracle’s Blog during WebCenter Partners Week, showcasing our mobile application for iPhone/Android – FishbowlToGo. Mobility product manager Kim Negaard authored a post detailing how our newest mobility venture helps WebCenter customers get the most from their investment.
Access Oracle WebCenter Content on your iPhone or Android with FishbowlToGo
Fishbowl Solutions has been working with Oracle WebCenter customers since 2010 to extend WebCenter Content to mobile devices. We started working with mobile sales force enablement and have since extended our offerings to meet expanding customer needs. We are excited to announce the release of our newest mobile app, FishbowlToGo.
Read the whole blog post here: http://bit.ly/ZHLDxX
According to recent research, cloud deployments continue to rise as enterprises finally grasp how these technologies can offer efficiency, agility and a leaner business model. As more companies embrace the cloud, however, realizing these benefits may depend on support from a third party, such as DBA services, for effective implementation.
SmallBusiness reported that, in fact, 70 percent of small and medium-sized enterprises (SMEs) in a Fasthosts study said that cloud adoption will be a critical factor for growth over the next 12 months. Simon Yeoman, general manager of Fasthosts, commented on the implications of the study's findings.
'"Many large enterprises have firmly established their cloud strategies but SMEs have up until now found the concept of cloud quite alien and therefore haven't integrated it into business operations," he said, according to the news source. "The results of this survey demonstrate that SMEs are starting to think seriously about the cloud and that they are taking important steps to use it to their business advantage."
When asked which aspect of business these companies felt the cloud would be most helpful in, 38 percent cited flexibility and scalability.
A major reason that more firms have turned to a cloud model is that software-as-a-service (SaaS) has enabled companies of all sizes and budgets to quickly integrate the latest technologies at an affordable rate. Business 2 Community contributor Sara Harold revealed that for many SMEs, SaaS has transformed the IT infrastructure, offering dramatic savings as well as more powerful computing. Harold noted that these factors allow firms to experiment with new IT concepts and tools and adapt to a rapidly changing business environment.
Another key driver of cloud initiatives is the transition from capital expenditures to only paying for operating costs. Harold explained that SaaS and the cloud offer low subscription-based payment models, so there are no technological obstacles or need for hefty investments in hardware, maintenance and upgrades. As an example, she pointed out that ten years ago enterprises had to buy multiple copies of virus protection software and constantly invest in new solutions as technologies became more advanced. However, now businesses can purchase a single-user license and scale this software up in the cloud as the business expands, addressing new risks and needs.
One of the most important aspects of the cloud is that it is easier and more cost-effective to adjust the technology based on actual company demands, which allows for smarter investments and budgeting as well as boosts the bottom line.
RDX offers a full suite of cloud migration and administrative services that can be tailored to meet any customer's needs. To learn more about these cloud migration and support services, please visit our Cloud DBA Service page or contact us.
When I started my work in IT, I was in a very small shop. Even though we had people in several places in the same state, everything was very centralized and ran from 9 to 5. Because we were basically only 2 people, our action plan was a talk over the lunch table and that would be it; we would go ahead and execute it after 5 PM, and I won’t lie, sometimes before 5 :).
Over the years I have learned that whether you are a 2-guy shop or a team of 15 separated by oceans and miles apart, communication is the most important thing to have on your team, and one of the means of communication is having an action plan in place for any major or medium change you make in your organization. First, it will generate discussion amongst your teammates, and it will reduce the possibility of errors when you are faced with time and pressure constraints during the implementation.
This might sometimes feel like a mundane and boring task, as it will take an effort to come up with it and it will take time to verify it, but when game day comes along you will see the great benefit of having an action plan.
Another great benefit of having an action plan is that it gives you a road map if you need to roll back your change, and that is also critical, because a major change or rollback is normally not done by only one person. Take, for example, a change that takes about 7 or 8 hours to complete, plus 1 or 2 more hours of UAT (User Acceptance Testing), at the end of which the application team decides that a rollback is needed. You are probably not in a good state of mind to do the rollback after 8 hours of continuous work; if you have an action plan, one of your teammates can step in and you can have a rest, even if it is just to go to the kitchen, have a sandwich and a coke, and forget about that pressure for 10 minutes.
As with life and with us being human, having an action plan doesn’t mean that everything will go smoothly or that there won’t be an error in it, but believe me, it greatly reduces the possibility of error compared to executing from memory or working from a plan that nobody else has reviewed.
I do hope that you already have an action plan as part of your major and medium changes, but if you don’t, it is time to get FIT-ACER. Here is an example of one (kudos to Cesar Sanchez, as it is his Action Plan Template); use it and modify it to your needs, it is a good start.
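To make this less abstract, here is a tiny, hypothetical sketch (my own toy example, not Cesar's template) of an action plan treated as structured data, so that every step carries an owner, a verification, and a rollback. The step names, owners, and commands are made up for illustration:

```java
// A toy action plan held as data rather than prose. Everything here
// (owners, commands, checks) is hypothetical, purely for illustration.
import java.util.List;

public class ActionPlan {
    record Step(String owner, String action, String verify, String rollback) {}

    public static void main(String[] args) {
        List<Step> plan = List.of(
            new Step("DBA 1", "Take full RMAN backup", "Backup completes clean in the logs", "n/a"),
            new Step("DBA 1", "Apply database patch", "opatch lsinventory shows the patch", "Restore from the RMAN backup"),
            new Step("DBA 2", "Bounce listeners", "lsnrctl status returns OK", "Revert listener.ora from the saved copy"),
            new Step("App team", "Run UAT checklist", "Sign-off from the application owner", "Hand the rollback steps above to whoever is fresh")
        );
        // Print it as the checklist you would hand to a teammate on game day
        plan.forEach(s -> System.out.printf("[%s] %s | verify: %s | rollback: %s%n",
                s.owner(), s.action(), s.verify(), s.rollback()));
    }
}
```

The format matters far less than the discipline: if every step has a named owner, a verification, and a rollback, any teammate can pick the plan up at hour 8 and you can go have that sandwich.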
Welcome back to the WebCenter Blog.
Last week, we presented a number of different partner solutions for WebCenter. This week we will be focusing a bit more intently on the value of Content Management in the enterprise and, to start things off, we'll be hearing from our partner, aurionPro SENA, about their offerings for WebCenter, including their mobile app and Accounts Payable solutions.
The buzz throughout the halls of recent conferences spotlights “glamorous” technologies: cloud, social, mobile. It’s the mantra of industry analysts and has been adopted by pretty much everyone. Cloud, social, mobile. The ‘triad’ is unavoidable. Many of our customers are asking to implement Facebook or Yammer-style intranet solutions, and everyone’s asking for mobile delivery of more and more content. We’re proud that we've done some of the most innovative work to date in building mobile apps and social/collaborative platforms leveraging the WebCenter suite (a few examples are included near the end of this post). But WebCenter is not just about cutting edge use cases.
aurionPro SENA has been working with WebCenter and its underlying technologies from the very beginning. In fact, ten of our technical, sales, and executive leaders were long-time Optika, Stellent, and/or Oracle employees (including our newest leadership team member, Ed Jackowiak, who previously was leading Oracle’s efforts to build and grow the North America IDM business). With over a decade of focused experience, and Specialized Status in both WebCenter Content and WebCenter Portal, we’ve seen and solved pretty much every use case possible…both the glamorous and the unglamorous.
You don’t see many keynote speeches these days focused on streamlining accounts payable processes. But WebCenter is an unsung hero for even the most commonplace use case. WebCenter’s Image Processing and Records Management solutions have saved huge amounts of hard dollars for dozens of our clients by automating manually-intensive and error-prone processing tasks. Replacement of legacy and homegrown systems with an Oracle WebCenter solution, along with the ability to integrate WebCenter features with back-end systems of record, are the driving factors for achieving these benefits. One of the true industry experts in this field, Sam Harp, previously a long-time employee of Optika, Stellent, and Oracle, has been leading these types of implementations for more than 20 years.
Somewhere in the middle of the glamour curve falls the topic of web solution and mobile app security. It’s certainly a hot topic, but maybe not as glitzy as ‘the triad’. As employees push the adoption of mobile devices in the workplace for convenience and productivity gains, companies are aggressively implementing information security solutions to ensure that sensitive data is protected through every channel that it is being accessed. The good news for WebCenter customers is that Oracle’s Security Suite, Identity Management (IDM), is second-to-none in the industry. It provides everything from single-sign-on functionality all the way through fine-grained access control, an absolute must for regulated and compliance-heavy industries like Financial Services and Healthcare. Implementing security processes such as employee on-boarding and off-boarding and integrating with multiple directory and user profile repositories can be challenging undertakings. Working hand-in-hand with Oracle, aurionPro SENA’s IDM practice, winner of 2 of the last 4 Oracle Excellence Awards in IDM and led by Oracle Deputy CTO and aurionPro SENA President, Swapnil Mehta, ensures successful and secure mobile, content, portal, records management, and image processing implementations.
From the glamorous to the unglamorous, the dedicated WebCenter team at aurionPro SENA has seen it all. In fact, we were delighted to have been recognized for our depth of expertise as the honorable mention in the 2012 Oracle Excellence Award in the WebCenter Category. Here are a few WebCenter solutions of interest that we've built recently:
- ContentiD: aurionPro SENA’s free WebCenter Content mobile app that allows organizations to securely search for and view documents, as well as review and process workflow assignments. All you need is network access to a WebCenter Content server and you’re up and running…there are no server components to install or customizations to make.
Download the app from Apple’s App Store here
- WebCenter Managed Services: Many customers who have asked us to host WebCenter solutions on their behalf often mistakenly assume that the “cloud” is the best path for them. Their real business challenge is to be able to focus their resources on core business objectives and less on complicated day-to-day IT tasks. We’ve built a mature, 24x7 system monitoring solution and a world-class service desk to help offload our clients’ costly and time-consuming support tasks. Some of our largest customers now trust us to manage their WebCenter solutions. Learn more here.
- Innovative Intranet : Hampered by an outdated intranet solution implemented on unsupported software, an industrial components manufacturer turned to aurionPro SENA to envision a secure and fresh new experience for their 100,000+ employees through the implementation of an Oracle WebCenter and Oracle IDM Proof of Concept.
- Business Expanding Extranet : An entertainment services company wanted to provide better collaboration with production companies, studios, and employees in order to gain greater market share through improved relationships. aurionPro SENA helped them achieve their goals through the design and development of a WebCenter-based portal that facilitated electronic data input, replacing manually-intensive, paper-based processes and enabling document collaboration across their user communities.
- Physical Records Management Deployment : A home improvement retail chain needed to replace a homegrown records management system that drove barcoding, labeling, and storage management. aurionPro SENA helped them to implement WebCenter Content: Records to manage 65,000 boxes.
- Streamlined Accounts Payable Process: Inundated by a legacy, paper-driven invoicing process, an electrical services company needed to improve their Accounts Payable process. aurionPro SENA implemented WebCenter Imaging to achieve incredible efficiency gains and dramatically improve visibility into process bottlenecks.
If you’d like to learn more about any of our productized, pre-packaged, or consulting offerings, feel free to get in touch with two of our other long-term Stellent and Oracle experts, Mark Tepsic and Steven Sommer, or visit our website at aurionprosena.com.
For my readers who are preparing for the OCM 11g exam, the environment has just changed (from 13th May 2013 onwards).
Instead of using OEM 10g, you will be using OEM 11g.
The upgrade exam still uses OEM 10g and DB 11gR1 (!), but I did not bother installing OEM 10g and I prepared with OEM 11g.
On Thursday I’ll be flying out to Bulgaria for BGOUG Spring 2013. It’s been about 18 months since I’ve visited the people over there, so I’m really looking forward to getting stuck in.
This will be my first conference of the year, so I’m feeling a little nervous at the moment. I’m sure the adrenalin rush will kick in and get me through.
I’m signed up for the southern leg of the OTN Tour of Latin America (Chile, Peru, Uruguay, Argentina, Brazil), but it will be a while before I get any confirmation, so there are no guarantees yet.
Fun, fun, fun…
Tim…
In my previous posting in this series, I looked at the new 11.1.1.7.1 release of the Oracle BI Applications at a high level, and talked about how this new release uses ODI as the embedded ETL tool instead of Informatica PowerCenter. Support for Informatica will come with patch set 2 (PS2) of BI Apps 11.1.1.7.x, giving customers the choice of which ETL tool to use (with the caveat that customers upgrading from 7.9.x will typically have to stick with Informatica unless they want to completely re-implement using ODI), but for this initial release at least, ODI and some new Fusion Middleware tools take over from Informatica and the DAC, giving us what could well be a much simpler architecture for supplying the underlying data for the BI Apps dashboards.
In this posting then, I’m going to take a closer look at this new product architecture, and I’ll follow it with a more detailed look at how the various bits of ODI functionality replace the workflows, mappings, transformation operators and execution plans provided in earlier releases by Informatica and the DAC. For anyone familiar with the previous, 7.9.x versions of the BI Applications, the architecture diagram below shows the five tiers that this product typically implemented; tiers for the source data and data warehouse/repository databases, an ETL tier for Informatica and the DAC server, then two more tiers for the OBIEE application server and the client web browser.
Communication between the tiers was – to put it politely – “loosely coupled”, with DAC task names corresponding with Informatica workflow names, each workflow containing a single mapping, and all of the connections and sources having to be named “just so”, so that every part of the stack could communicate with all the others. It worked, but it was a lot of work to implement and configure, and once it was up and running in most cases customers were scared to then change it, in case a name or a connection got out of sync and everything then stopped working. Plus – Informatica skills are scarce in the Oracle world, and the DAC is an extra piece of technology that few DBAs really understood properly.
The 11.1.1.7.1 release of the BI Apps simplifies this architecture by removing the separate ETL tier, and instead using Oracle Data Integrator as the embedded ETL tool, with its server functions running as JEE applications within the same WebLogic domain as OBIEE 11g, giving us the overall architecture in the diagram below.
Now anyone who read my series of posts back in 2009 on the 7.9.5.2 release of the BI Apps, which also used ODI as the embedded ETL tool, will know that whilst ODI 10g could do the job of loading data into the BI Apps data warehouse, it lacked the load orchestration capabilities of Informatica and the DAC and wasn’t really set up to dynamically generate what have become, in ODI 11g, load plans. BI Apps 7.9.5.2 turned out to be a one-off release, and in the intervening years Oracle have added the aforementioned load plans along with other functionality aimed at better supporting the BI Apps, along with two new JEE applications that run in WebLogic to replace the old DAC. These new applications, along with the ODI JEE agent, ODI Console and the ODI SDK, are shown in the more detailed BI Applications 11.1.1.7.1 logical architecture diagram shown below.
Oracle BI Applications 11.1.1.7.1 has two main product tiers to it, made up of the following components:
- The Middleware (BI and ETL) tier; a WebLogic domain and associated system components, comprising BI components delivered as part of OBIEE 11.1.1.7 (including Essbase and related applications) as one managed server, and another managed server containing ODI Java components, including three new BI Apps-related ones: Configuration Manager, Functional Setup Manager, and ODI Load Plan Generator
- The Database (DW and Repositories) tier; for the time being, Oracle only, and comprising a data warehouse schema (staging + performance layer), and a repository database containing the OBIEE repository schemas plus new ones to hold the ODI repository and other ETL/configuration metadata used for configuring your system.
Essbase at this stage is installed, but not used for the main BI applications, and all of it uses Fusion Middleware security (application roles and policies) along with the WebLogic Embedded LDAP server to hold users and groups. A special version of RCU is used to set up the new BI Apps-related schemas, and import data into them using Oracle database export files, so that the ODI repository, metadata tables and so forth are all populated prior to the first load taking place. Enterprise Manager Fusion Middleware Control is still used to manage and monitor the overall platform, but there’s now an entry for ODI along with Essbase, the latter of course being delivered as part of the 11.1.1.7 OBIEE platform release.
In the next posting in the series we’ll take a closer look at how ODI uses its JEE agent and mappings imported into its repository to load the BI Apps data warehouse, but what about the two new web-based configuration tools, Oracle BI Applications Configuration Manager (BIACM) and Oracle BI Applications Functional Setup Manager (FSM) – what do they do?
After you install OBIEE 11.1.1.7 and then the BI Applications 11.1.1.7.1, the BI Apps installer extends the BI domain to include FSM, BIACM and the ODI Load Plan Generator, along with some other supporting applications and libraries required for the full product. Load Plan Generator works behind the scenes to build new load plans in a similar way to the Execution Plan “Build” feature in the DAC, and the two web-based tools perform the following functions:
- Oracle BI Applications Configuration Manager performs system-wide setup tasks such as defining sources, selecting BI Apps modules and performing other, “one-only” tasks similar to the Setup feature in the DAC Console.
- Oracle BI Applications Functional Setup Manager is then used to list out, and track progress against, the various tasks required to configure the BI Applications modules, or “Offerings”, that you selected in the Configuration Manager.
Most importantly though, these tools connect directly through to the ODI repository, so data sources you set up here get pushed down to ODI as data servers in the ODI master repository; load plans you set up (for example, to load configuration tables, as in the screenshot below) are ODI load plans, and you can track their progress either from within ODI or from within these applications themselves.
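To make that “pushed down to ODI” point concrete, here’s a minimal sketch of listing those generated load plans through the ODI 11g SDK. This is my own illustration rather than anything from the product documentation: the JDBC URL, repository schema, work repository name and passwords are placeholders, and I’m assuming the load plan finder follows the same bootstrap-and-finder pattern as the rest of the SDK.

```java
// A minimal sketch, assuming the standard ODI 11g SDK bootstrap; connection
// details below are placeholders, not real BI Apps defaults.
import oracle.odi.core.OdiInstance;
import oracle.odi.core.config.MasterRepositoryDbInfo;
import oracle.odi.core.config.OdiInstanceConfig;
import oracle.odi.core.config.PoolingAttributes;
import oracle.odi.core.config.WorkRepositoryDbInfo;
import oracle.odi.core.security.Authentication;
import oracle.odi.domain.runtime.loadplan.OdiLoadPlan;
import oracle.odi.domain.runtime.loadplan.finder.IOdiLoadPlanFinder;

public class ListBiAppsLoadPlans {
    public static void main(String[] args) {
        // Point at the master and work repositories that RCU created for the BI Apps
        MasterRepositoryDbInfo master = new MasterRepositoryDbInfo(
                "jdbc:oracle:thin:@dwhost:1521:orcl", "oracle.jdbc.OracleDriver",
                "DEV_BIA_ODIREPO", "password".toCharArray(), new PoolingAttributes());
        WorkRepositoryDbInfo work = new WorkRepositoryDbInfo("WORKREP", new PoolingAttributes());
        OdiInstance odi = OdiInstance.createInstance(new OdiInstanceConfig(master, work));

        // Authenticate as an ODI user before querying the repository
        Authentication auth = odi.getSecurityManager()
                .createAuthentication("SUPERVISOR", "password".toCharArray());
        odi.getSecurityManager().setCurrentThreadAuthentication(auth);

        // Load plans built by Configuration Manager / Load Plan Generator live in the
        // repository like hand-built ones, so the standard finder should see them
        IOdiLoadPlanFinder finder = (IOdiLoadPlanFinder)
                odi.getTransactionalEntityManager().getFinder(OdiLoadPlan.class);
        for (Object lp : finder.findAll()) {
            System.out.println(((OdiLoadPlan) lp).getName());
        }
        odi.close();
    }
}
```

Because BIACM and FSM write to the same master and work repositories, anything a script like this prints should line up with what you see in those web tools.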
I haven’t had a chance to properly “diff” the RPD used in BI Apps 11.1.1.7.1 with the previous 7.9.x ones, or do a similar exercise for the underlying database data model, but on first glance the new RPD is at least recognisable, albeit with new sources and subject areas for the Fusion Apps, Oracle Transactional BI (OTBI), Real-Time Decisions and the like. The web catalog also looks familiar, but also has new content around the new applications along with additional content for the existing ones.
So, we’re at the point now where we can start to think about loading data into the BI Apps data warehouse, and in tomorrow’s post we’ll take a look at what’s involved in a BI Apps 11.1.1.7.1 ETL load, and also look into how GoldenGate can now be used to extract and stage data prior to loading via ODI. Back tomorrow…
The long-awaited and anticipated 11.1.1.7.1 (PS1) release of the Oracle BI Applications became available early last week, with the software and documentation available for download on OTN. Over the next few blog posts, I’ll be taking an in-depth look at this new release, starting today with an overview of what’s new and any limitations in this initial version, and then over the next few posts taking a look at the product architecture, how it uses Oracle Data Integrator instead of Informatica to do the data loads, and what new content the 11g dashboards contain. For a bit of background into this release you’re best off taking a look at a series of posts I put together towards the end of last year on the BI Apps product roadmap, and I’ll recap on those posts a bit in this one as I go through at a high level what’s in this release.
Although the focus in BI Apps 11.1.1.7.1 is on ODI as the ETL tool, this new release actually delivers a whole new product architecture along with new dashboards, new content, and a new security framework. In addition, there’s now an option to use Oracle GoldenGate to create a new layer in the BI Apps data warehouse data architecture that replicates source data into the warehouse environment, giving you the ability to run the more large-scale ETL processes when you like, rather than when there’s an ETL window for the source systems.
Let’s start off though with a summary of what’s new from a functional perspective, and also what limitations there are for this first release in terms of sources, scope and so forth. BI Apps 11.1.1.7.1 delivers the following set of new features and capabilities:
- Oracle Data Integrator as the embedded ETL tool, along with a whole new FMW11g-centric architecture and set of utilities
- Two new analytic applications – Student Information Analytics, and Indirect Spend Planning
- New content for existing analytic applications including Financial Analytics, HR, Projects, CRM and Procurement & Spend
- Dashboards that are now written for OBIEE 11g rather than 10g, including 11.1.1.7 visualisations such as performance tiles
Now although, in general terms, BI Apps 11.1.1.7.1 covers all (or most…) of the existing analytic application modules along with all of the 7.9.x-era sources (EBS, PeopleSoft, JDE and Siebel), there are some important restrictions that you’ll need to be aware of when making any plans to use this new release, starting with upgrade paths (or lack of them):
- There’s no automatic upgrade path from BI Apps 7.9.x, and no automated migration routine to take you from Informatica to ODI; if you want BI Apps 11.1.1.7.1 now, you’ll have to re-implement rather than upgrade, or you can wait for BI Apps 11.1 PS2, which will support upgrades from earlier releases but (important to note) keeps you on Informatica – any move from Informatica to ODI will need to be done yourself, as a re-implementation
- Only Oracle database sources and targets are supported in this initial release, in practice not a real issue for new implementations, but worth bearing in mind if you planned to use Teradata, for example, as your target data warehouse platform
- Oracle Fusion Applications aren’t supported as a source either, yet, so anyone using this will need to stay on BI Apps 11.1.1.6.x until an upgrade version becomes available
- A few edge-case analytic applications and sources aren’t supported in this release yet – Enterprise Asset Management, for example, is not yet supported for any source, whereas some other applications only support more recent PeopleSoft versions and not JDE, for example. As always, get the most up-to-date supported sources and applications list from Oracle before making any major investment in an implementation or upgrade project.
From a technical perspective though the major difference in this release, compared to the 7.9.6.x versions that preceded it, is the use of Oracle Data Integrator 11g as the embedded ETL tool rather than Informatica. To be clear, Informatica will still be supported as an ETL option for the BI Apps going well into the foreseeable future, but Informatica users will need to wait for the PS2 release due in the next twelve months or so before they can upgrade to the new 11g platform.
In addition, and perhaps more importantly, it’s not envisaged that Informatica customers will move over to ODI unless they use the upgrade as an opportunity to re-implement their system on ODI, moving across customisations themselves and essentially starting with a clean sheet of paper (which may not be a bad thing, if you’re thinking of tidying up your system following years of upgrades, customisations and so forth). What this does mean though is no DAC, no Informatica server and client tools, a new (and hopefully simpler) way of setting up and configuring your system, and in theory a more closely-integrated set of tools all based around the modern, standards-based Fusion Middleware 11g architecture.
In this new world of ODI and the BI Apps, ODI load plans replace Informatica Workflows, whilst ODI packages and interfaces equate to Informatica mappings and maplets. The DAC is no more and is replaced by metadata within the ODI repository and other supporting schemas, with setup and configuration of the warehouse and ETL processes now carried out by two web-based tools, BI Applications Configuration Manager and Functional Setup Manager. The closer integration between these tools, along with a chance for Oracle to re-think the BI Apps setup process, should lead to easier configuration and customisations, but if you’re an Informatica developer it’s a whole new world, and the 11g platform makes a lot more use of Fusion Middleware platform functionality, particularly around security and user provisioning.
So – all very exciting, but quite daunting in terms of what needs to be learnt, and the new processes that need to be thought through and put together before you can start making use of the new 11.1.1.7.1 feature set. We’ll start tomorrow then by taking a closer look at the BI Apps 11.1.1.7.1 technical architecture, including the new configuration tools and where ODI sits in the new product architecture, based on our first impressions of the product.
So MindTap just won a CODiE award for “Best Post-secondary Personalized Learning Solution.” In and of itself, this isn’t a big deal. No offense intended to current or prior winners, but the CODiEs often feel like awards for “Best Instant Coffee” or “Best New Technology Product by an Important Sponsor of Our Awards Program.” They’re not exactly signals of breakthrough educational product design. But I’m glad that the award was given in this case because I think MindTap does represent an important innovation that addresses some of the trends that we’ve been blogging about here at e-Literate (which was one of the reasons that I was enticed to work on MindTap at Cengage for a while).
MindTap is not a “personalized learning solution.” While it does allow students to do things like integrate their Evernote accounts and choose whether they want to read or listen to texts, the level of personalization for the learners is not terribly different from other products on the market. (And it certainly is nowhere near as radical as the vision for a Personalized Learning Environment which came from the UK’s JISC and elsewhere, and from which terms like “personalized learning solution” and “personalized learning experience” have been bastardized). Nor are there adaptive analytics or other sorts of machine-driven personalization in the product at this time. Rather, the key differentiator in the current incarnation of MindTap is the way in which it creates a more refined and complete learning experience out of the box while still enabling faculty to customize those experiences to the needs of their students in pretty significant and, in some cases, new ways. This is exactly where the textbook, LMS, and MOOC markets are all headed, and MindTap got there first.
The Problem to be Solved
In order to understand the value of a product like MindTap, it’s important to understand where textbook publishers do and do not compete. You’re not going to see a lot of MindTap-style products for courses like “Advanced Topics in International Trade Policy,” “Research in Genetics,” “Greek Film,” or “Intermediate Killer Shark Genre.” These smaller courses are relatively uninteresting to textbook publishers because they don’t have the scale necessary to generate significant revenues, and they are also better suited to hand-crafted course designs that are tailored to the strengths of the particular professor doing the teaching and can be highly tailored to the needs and interests of the students in the class. Rather, the courses in question are more like “Introduction to Psychology,” “General Biology I,” “Microeconomics,” or “Survey of Western Civilization.” (English Composition is an anomaly in this categorization because of the way it is taught.) These courses are generally taught in large lecture halls with little or no writing—and when there is writing, it is often graded quickly on a narrow range of criteria by overworked graduate students—and relatively generic syllabi (particularly in non-elite institutions).
A lot of the heated debate over whether college is “broken” revolves around these sorts of classes without ever explicitly defining the scope of the problem. Those who say school is broken and needs to be disrupted tend to argue as if all college courses are giant, boring lecture courses. Those who argue against the “school is broken” meme tend to characterize these large lecture-centric courses as exceptions. Neither characterization is entirely accurate. On one hand, there are huge swaths of courses in just about any college catalog that are not large lecture courses. On the other hand, because the large lecture courses are concentrated in core curriculum and core major classes, most students have to take a handful of these courses in order to graduate.
Regardless of how pervasive or rare you think these courses are, everybody seems to agree that they are not terribly effective. But what should be done about the problem? Shrinking the class size is simply not going to happen, given both budget realities and the moral imperative to increase access to education. And yet, the current situation is bad not only for the students but also for the instructors. Keep in mind that the people teaching these survey courses are disproportionately either junior faculty who are doing all kinds of other duties to earn tenure or adjuncts who are working unreasonable course loads just to make ends meet. They generally don’t have a lot of time to either carefully craft a course or give students a lot of (or any) individual attention. They often have little choice but to take what the publisher is giving them as their course outline and run with it. In and of itself, the direct adoption of a publisher’s curriculum isn’t necessarily bad for many of these courses. The whole idea of a core course is that it helps all students getting a particular degree or a particular major to master certain competencies that they should have. There is a strong argument for consistency of curriculum across core courses. But the current situation neither guarantees consistency of curriculum nor saves the instructor time for either thoughtful customization of the curriculum or any other purpose. There is still a lot of hand assembly required to pull together reading assignments, assessments, slides and lecture notes, and so on. It is generally not a creative process because there is little time for creativity, but it is nevertheless a labor-intensive process and one that is prone to introduce variation in hitting those core competencies without any checks or even necessarily a lot of reflection on it.
A Better Compromise
If instructors are going to adopt a third-party course curriculum anyway, then we should at least use technology to remove the hand assembly. Why not pull the readings, multimedia, assignments and assessments, neatly integrated with a basic syllabus, into one ready-to-use digital package for the students? At its most basic, this is what “courseware” is and what MindTap does. It provides students and instructors with a ready-to-go complete course structure with all the materials and assessments placed in a logical and easily navigable order. Joel Spolsky once defined poor user interface design as forcing users to make choices that they don’t care about. That is also an apt description for 80% of the pre-semester course preparation process that instructors go through with these big survey courses. Pre-assembling the elements of the vanilla version of the course frees up the instructors’ time to focus on the customizations that they actually do care about. To begin with, the course structure is already assembled and visible, which makes it easier for the instructor to think about its total shape. Removing unwanted content or changing content order is trivially easy, making the roughing in of the course structure very quick.
But things get really interesting when you start looking at adding to the learning path structure in MindTap rather than just moving or deleting things. In ed tech discussions, we tend to talk about APIs as if the main differentiation is having them versus not having them. Can you or can you not integrate Google Docs into a course? But in reality, the specifics of the integration can make an enormous difference in how practically useful the added functionality is to teachers and students. Do you want to make a folder of your documents (like your syllabus) available to the students at all times in the course with one or two clicks, or do you want to insert your own supplemental document right into the course reading, zero clicks away for the student and on their default navigation path? These two types of integration serve fundamentally different purposes in the course. In MindTap, you can do both and more. And importantly, making these different customizations is intuitive and almost trivially easy. Radical customization of the course structure is very much possible. But both because there is far less instructor time wasted with hand assembly of course elements and because customizations are visible and visualizable in the learning path structure, the percentage of time spent on meaningful instructional activities, whether that’s course customization or student interaction, is likely to be higher. For this reason, the MindApp model and the learning path structure are MindTap’s crown jewels.
Table Stakes
Of course, MindTap doesn’t have a monopoly on useful courseware platform design. For example, WileyPLUS enables instructors to see which course materials and assessments are associated with which learning objectives. This helps instructors to align what they’re teaching and assessing on to what they think the student should be learning. More importantly, none of these innovations from any of the platforms are going to magically change poor large lecture classes into great educational experiences. The key to solving that problem is not the technology by itself but the learning design that it enables. The classroom flipping craze is a craze precisely because it is a learning design that can improve the pedagogical impact of these large survey classes. But anyone who has actually tried to flip their class will tell you that it’s not easy to do well. Faculty need pedagogical models other than the ones that they learned from their own professors, including the practical tips and support necessary to make those models work in the real world. They need course designs based on learning science and collected experience of innovators, and supported by technology. The MindTap platform doesn’t provide that. No technology platform does. And as far as I can tell, Cengage is not yet designing courseware for MindTap that even attempts to do this. But in order to accomplish the bigger goal, we first need to strike a new balance regarding course design customization. It’s not a question of “more” versus “less.” There will always be times when it is wise to allow a skilled instructor to tune a course. But there needs to be more of a sophisticated collaboration between the individual instructor, a curriculum design team (whether that team works for a textbook publisher or a university), and the other instructors teaching the course at the same institution in order to arrive at better pedagogical approaches that can be adopted and adapted to best effect by individual teachers. In order to accomplish that, you need to start with a combination of platform and content that makes meaningless course assembly unnecessary and meaningful course customization both easy and visible. This is what we mean at e-Literate when we write about “courseware.” And at the moment, MindTap is the best example I know of what a next-generation courseware platform will look like.
I wrote a couple of days ago about replacing my MacBook Pro hard drive with SSD. At the same time I bought a little SSD to use as the system drive for my desktop. I fitted that this morning, installed a fresh copy of Fedora 18 and mounted the original 1TB hard drive as a data drive.
Like the MacBook Pro, my desktop is a few years old, but still has plenty of grunt (Quad Core and 8G RAM) for what I need it for. I do run the odd VM on it, but any heavy stuff is run on my server, so there is no incentive to go out and buy the latest kit for what is essentially just a client PC.
The addition of the SSD means the start-up time is much better and it just feels a lot more responsive. Most apps start up almost instantly. Even GIMP, which used to take an age to start, is mega quick. I’ve put a couple of VMs on it and, not surprisingly, they are fast to start up too. Overall I’m really pleased with the outcome.
The funny thing is, I never noticed how noisy spinning rust was until I switched to these SSDs. The Mac is silent and runs for a lot longer before the fan kicks in. The desktop is also silent, until I pull something from the data disk, at which point I hear that slight grinding noise.
I don’t think I would invest in large capacity SSDs for home until the prices drop considerably, but having witnessed the before and after results on these two old machines, I can’t imagine ever running without an SSD system disk again.
Update: I worked through some of the suggestions here to enable TRIM support and reduce wear.