
Feed aggregator

Brian Whitmer No Longer in Operational Role at Instructure

Michael Feldstein - Wed, 2015-03-11 09:17

By Phil Hill

Just over a year and a half ago, Devlin Daley left Instructure, the company he co-founded. It turns out that both founders have made changes as Brian Whitmer, the other company co-founder, left his operational role in 2014 but is still on the board of directors. For some context from the 2013 post:

Instructure was founded in 2008 by Brian Whitmer and Devlin Daley. At the time Brian and Devlin were graduate students at BYU who had just taken a class taught by Josh Coates, where their assignment was to come up with a product and business model to address a specific challenge. Brian and Devlin chose the LMS market based on the poor designs and older architectures dominating the market. This design led to the founding of Instructure, with Josh eventually providing seed funding and becoming CEO by 2010.

Brian had a lead role until last year in Instructure’s usability design and in its open architecture and support for LTI standards.

The reason for Brian’s departure (based on both Brian’s comments and Instructure’s statements) is his family. Brian’s daughter has Rett Syndrome:

Rett syndrome is a rare non-inherited genetic postnatal neurological disorder that occurs almost exclusively in girls and leads to severe impairments, affecting nearly every aspect of the child’s life: their ability to speak, walk, eat, and even breathe easily.

As Instructure grew, Devlin became the road show guy while Brian stayed mostly at home, largely due to family. Brian’s personal experiences have led him to create a new company: CoughDrop.

Some people are hard to hear — through no fault of their own. Disabilities like autism, cerebral palsy, Down syndrome, Angelman syndrome and Rett syndrome make it harder for many individuals to communicate on their own. Many people use Augmentative and Alternative Communication (AAC) tools in order to help make their voices heard.

We work to help bring out the voices of those with complex communication needs through good tech that actually makes things easier and supports everyone in helping the individual succeed.

This work sounds a lot like early Instructure, as Brian related to me this week.

Augmentative Communication is a lot like LMS space was, in need of a reminder of how things can be better.

By the middle of 2014, Brian had left all operational duties, although he remains on the board and plans to continue there as an adviser.

How will this affect Instructure? I would look at Brian’s key roles in usability and the open platform to see whether Instructure keeps up his vision. From my view, usability is just baked into the company’s DNA[1] and will likely not suffer. The question is more on the open side. Brian led the initiative for the App Center, as I described in 2013:

The key idea is that the platform is built to easily add and support multiple applications. The apps themselves will come from EduAppCenter, a website that launched this past week. There are already more than 100 apps available, with the apps built on top of the Learning Tools Interoperability (LTI) specification from IMS global learning consortium. There are educational apps available (e.g. Khan Academy, CourseSmart, Piazza, the big publishers, Merlot) as well as general-purpose tools (e.g. YouTube, Dropbox, WordPress, Wikipedia).

The apps themselves are wrappers that pre-integrate and give structured access to each of these tools. Since LTI is the most far-reaching ed tech specification, most of the apps should work on other LMS systems. The concept is that other LMS vendors will also sign on to the edu-apps site, truly making them interoperable. Whether that happens in reality remains to be seen.

What the App Center will bring once it is released is the simple ability for Canvas end-users to add the apps themselves. If a faculty member adds an app, it will be available for their courses, independent of whether any other faculty use that setup. The same applies for students who might, for example, prefer to use Dropbox to organize and share files rather than native LMS capabilities.

The actual adoption by faculty and institutions of this capability takes far longer than people writing about it (myself included) would desire. It takes time and persistence to keep up the faith. The biggest risk that Instructure faces by losing Brian’s operational role is whether they will keep this vision and maintain their support for open standards and third-party apps – opening up the walled garden, in other words.

Melissa Loble, Senior Director of Partners & Programs at Instructure[2], will play a key role in keeping this open vision alive. I have not heard anything indicating that Instructure is changing, but this is a risk from losing a founder who internally ‘owned’ this vision.

I plan to share some other HR news from the ed tech market in future posts, but for now I wish Brian the best with his new venture – he is one of the truly good guys in ed tech.

Update: I should have given credit to Audrey Watters, who prompted me to get a clear answer on this subject.

  1. Much to Brian’s credit
  2. Formerly Associate Dean of Distance Ed at UC Irvine and key player in Walking Dead MOOC

The post Brian Whitmer No Longer in Operational Role at Instructure appeared first on e-Literate.

APEX Connect June 2015

Denes Kubicek - Wed, 2015-03-11 07:39
APEX Connect in Düsseldorf in June 2015 is going to be the biggest APEX-only event in Germany so far. You should consider joining us.

APEX Connect in Düsseldorf in June 2015 will be the biggest APEX gathering to date. Sign up and help us make it even bigger and more successful. Many interesting talks and, above all, many interesting people from the APEX world will be there. It is an excellent opportunity to learn plenty of new things. The registration form can be found here. The prices are moderate and entirely reasonable.

Categories: Development

Announcement: DB12c certified with EM12c

Jean-Philippe Pinte - Wed, 2015-03-11 00:55


It is now possible to use a 12.1.0.2.1 database for the Oracle Enterprise Manager 12.1.0.4 repository (OMR): http://ora.cl/tY3


Adding MySQL driver to Spring Boot CLI Groovy Demo

Pas Apicella - Tue, 2015-03-10 15:18
I previously showed how you can use the Spring Boot CLI with Groovy to create a simple RESTful application that says Hello World, as shown in the link below.

http://theblasfrompas.blogspot.com.au/2015/02/spring-boot-hello-world-from-command.html

If you want to extend that demo to include an additional dependency JAR, such as the MySQL driver, you can do the following.

1. You can add extensions to the CLI using the install command, as shown below, to add the MySQL driver. It is installed in the lib folder of the Spring Boot CLI installation directory.

> spring install mysql:mysql-connector-java:5.1.34

2. Package the application into a JAR that now includes the MySQL driver JAR file, so you can connect to a MySQL instance from your application. You still need to write the connection code yourself, but the driver is now packaged in the JAR that is created, enabling you to do so.

> spring jar -cp /usr/local/Cellar/springboot/1.2.1.RELEASE/lib/mysql-connector-java-5.1.34.jar hello.jar hello.groovy

Note: If you find that you reach the limits of the CLI tool, you will probably want to look at converting your application to a full Gradle or Maven built Groovy project.

More Information

http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#cli-using-the-cli


Categories: Fusion Middleware

Dana Center and New Mathways Project: Taking curriculum innovations to scale

Michael Feldstein - Tue, 2015-03-10 15:01

By Phil Hill

Last week the University of Texas’ Dana Center announced a new initiative to digitize their print-based math curriculum and expand to all 50 community colleges in Texas. The New Mathways Project is ‘built around three mathematics pathways and a supporting student success course’, and they have already developed curriculum in print:

Tinkering with the traditional sequence of math courses has long been a controversial idea in academic circles, with proponents of algebra saying it teaches valuable reasoning skills. But many two-year college students are adults seeking a credential that will improve their job prospects. “The idea that they should be broadly prepared isn’t as compelling as organizing programs that help them get a first [better-paying] job, with an eye on their second and third,” says Uri Treisman, executive director of the Charles A. Dana Center at UT Austin, which spearheads the New Mathways Project. [snip]

Treisman’s team has worked with community-college faculty to create three alternatives to the traditional math sequence. The first two pathways, which are meant for humanities majors, lead to a college-level class in statistics or quantitative reasoning. The third, which is still in development, will be meant for science, technology, engineering, and math majors, and will focus more on algebra. All three pathways are meant for students who would typically place into elementary algebra, just one level below intermediate algebra.

When starting, the original problem was viewed as ‘fixing developmental math’. As they got into the design, the team restated the problem to be solved as ‘developing coherent pathways through gateway courses into modern degrees of study that lead to economic mobility’. The Dana Center worked with the Texas Association of Community Colleges to develop the curriculum, which is focused on active learning and group work that can be tied to the real world.

The Dana Center approach is based on four principles:

  • Courses students take in college math should be connected to their field of study.
  • The curriculum should accelerate or compress to allow students to move through developmental courses in one year.
  • Courses should align more closely with student support, and sophisticated learning support will be connected to campus support structures.
  • Materials should be connected to a context-sensitive improvement strategy.

What they have found is that there are multiple programs nationwide working along roughly the same principles, including the California improvement project, the Accelerated Learning Project at Baltimore City College, and work in Tennessee at Austin Peay College. In their view, the fact that independent groups have come to similar conclusions adds validity to the overall concept.

One interesting aspect of the project is that it is targeted at an entire state’s community college system – this is not a pilot approach. After winning a Request for Proposal selection, Pearson[1] will integrate the active-learning content into a customized mix of MyMathLabs, Learning Catalytics, StatCrunch and CourseConnect tools. Given the Dana Center’s small size, one differentiator for Pearson was its size and ability to help a program move to scale.

Another interesting aspect is the partnership approach with TACC. As shared on the web site:

  • A commitment to reform: The TACC colleges have agreed to provide seed money for the project over 10 years, demonstrating a long-term commitment to the project.
  • Input from the field: TACC member institutions will serve as codevelopers, working with the Dana Center to develop the NMP course materials, tools, and services. They will also serve as implementation sites. This collaboration with practitioners in the field is critical to building a program informed by the people who will actually use it.
  • Alignment of state and institutional policies: Through its role as an advocate for community colleges, TACC can connect state and local leaders to develop policies to support the NMP goal of accelerated progress to and through coursework to attain a degree.

MDRC, the same group analyzing CUNY’s ASAP program, will provide independent reporting of the results. There should be implementation data available by the end of the year, with randomized controlled studies to be released in 2016.

To me, this is a very interesting initiative to watch. Given MDRC’s history of thorough documentation, we should be able to learn plenty of lessons from the state-wide deployment.

  1. Disclosure: Pearson is a client of MindWires Consulting.

The post Dana Center and New Mathways Project: Taking curriculum innovations to scale appeared first on e-Literate.

Smart HR IT Brings New Insights to Age-Old HR Challenges

Linda Fishman Hoyle - Tue, 2015-03-10 14:44

A Guest Post written by Oracle's Aaron Lazenby, Profit Magazine

In the age of digital disruption, there’s plenty to distract executives from their core mission. Chief human resource officers (CHROs) are no exception; new technologies (such as big data analytics, social recruiting, and gamification) have the potential to transform the people function. But executives have to balance the adoption of new technologies with the demands of maintaining the current business.

For Joyce Westerdahl, (pictured left), CHRO at Oracle, the key is keeping her eye on the business. “Stay absolutely focused on what’s happening at your own company and in your own industry,” she says. Here, Westerdahl talks to Profit about how she assesses her department’s technology needs, how she approaches new IT trends, and what HR managers should be paying attention to in order to succeed.

Lazenby (pictured right): What drives Oracle’s talent management strategy?

Westerdahl: Our main focus is to make Oracle a destination employer. We want this to be a place where people can grow their careers; a place where employees are challenged but have the support they need to get the job done. And we want our employees to stay—for the sake of their own professional development and the growth of the company. Having the right technology in place—to automate processes, improve the employee experience, and add new insights—is a key part of how we make that possible.

But when I look at the talent management challenges we face at Oracle, for example, I don’t think things have changed that much in the past 20 years. There has always been a war for talent. Even during tough recessions, we are always competing for top talent. We are always looking for better ways to recruit the right people. What have changed are the tools we have at our disposal for finding and engaging them.

Lazenby: How has Oracle’s HR strategy influenced the products the company creates?

Westerdahl: Our acquisition journey has created an incredibly diverse environment, from both a talent and a technology perspective. When we add companies with different business models, platforms, and cultures to the Oracle family, we have a unique opportunity to learn about other businesses from the inside and translate that into new product functionality. Integrating and onboarding acquired employees and transitioning the HR processes and technologies becomes a stream of knowledge, best practices, and requirements that feeds into product development.

When I look back at two of our key acquisitions—PeopleSoft in 2005 and Sun in 2010—I am reminded of how big a difference technology can make in HR’s ability to support a robust acquisition strategy. The offer process is complex and critical in an acquisition. With PeopleSoft, we created 7,500 US paper offer packages for new employees and loaded them onto FedEx trucks for distribution. It was a resource- and time-intensive, manual process that took three weeks. Five years later when we acquired Sun, we used technology to automate the process and it took less than an hour to generate more than 11,000 US offers. We were able to generate and distribute offers and receive acceptances in about a week.

Lazenby: How are trends like big data affecting HR?

Westerdahl: The volume of new HR data has become astounding over the past couple years. Being able to harness and leverage data is critical to HR’s ability to add strategic business value. When we develop an analytics strategy for Oracle HR, we start with a strategy around what the business wants to achieve. Then we translate that into data requirements and measures: What do we need to know? How can we measure our efforts to make sure we’re on track? What are the key trend indicators, and what is the process for translating the data into actionable HR efforts?

There is tremendous value in being able to use data to improve the employee experience so we can attract, engage, and develop the right talent for our business. For example, an employee engagement survey we conducted revealed that new employees were having a hard time onboarding at Oracle. We could also see this reflected in data that measures time to productivity for new employees. But with the survey, we had another measure that showed us not only the productivity aspect but also the employee frustration aspect. It’s a richer view of the problem, which helped us shape the right solution.

Lazenby: What do you think HR managers can learn from Oracle’s experience?

Westerdahl: I think the challenges HR faces are mostly the same as they always have been: how to recruit the best talent, how to onboard recruits when we are growing quickly, and how to retain and develop employees. But now technology supports new ways of doing things, so you have to decide how to use IT to solve these age-old HR challenges within your business. The key is turning things upside down and viewing things from a fresh perspective. There is no one-size-fits-all approach.

Converting a Classic PIA Component to Fluid UI

PeopleSoft Technology Blog - Tue, 2015-03-10 14:17

PeopleSoft just published a new Red Paper that describes how to convert an existing Classic Component to a new Fluid UI Component.  It's a great resource for anyone who is interested in learning more about Fluid UI and the steps required to move a Pixel-Perfect Component to a new responsive Fluid UI Component.
The Red Paper is called Converting Classic PIA Components to PeopleSoft Fluid User Interface.  You can find it on My Oracle Support, document ID 1984833.1.

There are a few things in the document that I want to point out.

  1. It gives great instruction with an example of how to convert a Component to Fluid UI. 
  2. There is a very important section on classic controls that are not supported in Fluid UI or that require some type of conversion.  I can't stress enough how important that is.
  3. There is an appendix that identifies the delivered PeopleSoft style classes.  You'll find that appendix extremely valuable when you're looking for the right style to get the UI you want.

Of course, converting an existing component is only one way to take advantage of the new Fluid UI.  Refactoring existing components to optimize them for the different form factors, or building new components, are certainly possible as well.  In many cases, developers want to leverage existing components to take advantage of tried and tested business logic.


Notes on HBase

DBMS2 - Tue, 2015-03-10 12:24

I talked with a couple of Cloudera folks about HBase last week. Let me frame things by saying:

  • The closest thing to an HBase company, a la MongoDB/MongoDB or DataStax/Cassandra, is Cloudera.
  • Cloudera still uses a figure of 20% of its customers being HBase-centric.
  • HBaseCon and so on notwithstanding, that figure isn’t really reflected in Cloudera’s marketing efforts. Cloudera’s marketing commitment to HBase has never risen to nearly the level of MongoDB’s or DataStax’s push behind their respective core products.
  • With Cloudera’s move to “zero/one/many” pricing, Cloudera salespeople have little incentive to push HBase hard to accounts other than HBase-first buyers.

Also:

  • Cloudera no longer dominates HBase development, if it ever did.
    • Cloudera is the single biggest contributor to HBase, by its count, but doesn’t make a majority of the contributions on its own.
    • Cloudera sees Hortonworks as having become a strong HBase contributor.
    • Intel is also a strong contributor, as are end user organizations such as Chinese telcos. Not coincidentally, Intel was a major Hadoop provider in China before the Intel/Cloudera deal.
  • As far as Cloudera is concerned, HBase is just one data storage technology of several, focused on high-volume, high-concurrency, low-latency short-request processing. Cloudera thinks this is OK because of HBase’s strong integration with the rest of the Hadoop stack.
  • Others who may be inclined to disagree are in several cases doing projects on top of HBase to extend its reach. (In particular, please see the discussion below about Apache Phoenix and Trafodion, both of which want to offer relational-like functionality.)

Cloudera’s views on HBase history — in response to the priorities I brought to the conversation — include:

  • HBase initially favored consistency over performance/availability, while Cassandra initially favored the opposite choice. Both products, however, have subsequently become more tunable in those tradeoffs.
  • Cloudera’s initial contributions to HBase focused on replication, disaster recovery and so on. I guess that could be summarized as “scaling”.
  • Hortonworks’ early HBase contributions included (but were not necessarily limited to):
    • Making recovery much faster (10s of seconds or less, rather than minutes or more).
    • Some of that consistency vs. availability tuning.
  • “Coprocessors” were added to HBase ~3 years ago, to add extensibility, with the first use being in security/permissions.
  • With more typical marketing-oriented version numbers:
    • HBase .90, the first release that did a good job on durability, could have been 1.0.
    • HBase .92 and .94, which introduced coprocessors, could have been Version 2.
    • HBase .96 and .98 could have been Version 3.
    • The recent HBase 1.0 could have been 4.0.

The HBase roadmap includes:

  • A kind of BLOB/CLOB (Binary/Character Large OBject) support.
    • Intel is heavily involved in this feature.
    • The initial limit is 10 megabytes or so, due to some limitations in the API (I didn’t ask why that made sense). This happens to be all the motivating Chinese customer needs for the traffic photographs it wants to store.
  • Various kinds of “multi-tenancy” support (multi-tenancy is one of those terms whose meaning is getting stretched beyond recognition), including:
    • Mixed workload support (short-request and analytic) on the same nodes.
    • Mixed workload support on different nodes in the same cluster.
    • Security between different apps in the same cluster.
  • (Still in the design phase) Bottleneck Whack-A-Mole, with goals including but not limited to:
    • Scale-out beyond the current assumed limit of ~1200 nodes.
    • More predictable performance, based on smaller partition sizes.
  • (Possibly) Multi-data-center fail-over.

Not on the HBase roadmap per se are global/secondary indexes. Rather, we talked about projects on top of HBase which are meant to provide those. One is Apache Phoenix, which supposedly (see the sketch after this list):

  • Makes it simple to manage compound keys. (E.g., City/State/ZipCode)
  • Provides global secondary indexes (but not in a fully ACID way).
  • Offers some very basic JOIN support.
  • Provides a JDBC interface.
  • Offers efficiencies in storage utilization, scan optimizations, and aggregate calculations.
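
To make that concrete, here is a minimal sketch of what this looks like through Phoenix’s SQL layer. The table and index names are invented for illustration; the compound primary key is mapped onto the HBase row key, and the secondary index is maintained by Phoenix as a separate HBase table (not fully ACID, per the above).

CREATE TABLE IF NOT EXISTS addresses (
    city       VARCHAR NOT NULL,
    state      VARCHAR NOT NULL,
    zip        VARCHAR NOT NULL,
    population BIGINT
    CONSTRAINT pk PRIMARY KEY (city, state, zip));

-- Global secondary index on a non-key column.
CREATE INDEX addresses_zip_idx ON addresses (zip);

-- With the index in place, a lookup by zip can avoid a full table scan.
SELECT city, state FROM addresses WHERE zip = '02134';

In practice these statements would be issued over Phoenix’s JDBC interface (connection URLs of the form jdbc:phoenix:<zookeeper-quorum>), which is the JDBC point in the list above.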

Another such project is Trafodion — supposedly the Welsh word for “transaction” — open sourced by HP. This seems to be based on NonStop SQL and Neoview code, which counter-intuitively have always been joined at the hip.

There was a lot more to the conversation, but I’ll stop here for two reasons:

  • This post is pretty long already.
  • I’m reserving some of the discussion until after I’ve chatted with vendors of other NoSQL systems.

Related link

  • My July 2011 post on HBase offers context, as do the comments on it.
Categories: Other

Some stuff on my mind, March 10, 2015

DBMS2 - Tue, 2015-03-10 10:27

I found yesterday’s news quite unpleasant.

  • A guy I knew and had a brief rivalry with in high school died of colon cancer, a disease that I’m at high risk for myself.
  • GigaOm, in my opinion the best tech publication — at least for my interests — shut down.
  • The sex discrimination trial around Kleiner Perkins is undermining some people I thought well of.

And by the way, a guy died a few days ago snorkeling at the same resort I like to go to, evidently doing less risky things than I on occasion have.

So I want to unclutter my mind a bit. Here goes.

1. There are a couple of stories involving Sam Simon and me that are too juvenile to tell on myself, even now. But I’ll say that I ran for senior class president, in a high school where the main way to campaign was via a single large poster, against a guy with enough cartoon-drawing talent to be one of the creators of the Simpsons. Oops.

2. If one suffers from ulcerative colitis as my mother did, one is at high risk of getting colon cancer, as she also did. Mine isn’t as bad as hers was, due to better tolerance for medication controlling the disease. Still, I’ve already had a double-digit number of colonoscopies in my life. They’re not fun. I need another one soon; in fact, I canceled one due to the blizzards.

Pro-tip — never, ever have a colonoscopy without some kind of anesthesia or sedation. Besides the unpleasantness, the lack of meds increases the risk that the colonoscopy will tear you open and make things worse. I learned that the hard way in New York in the early 1980s.

3. Five years ago I wrote optimistically about the evolution of the information ecosystem, specifically using the example of the IT sector. One could argue that I was right. After all: 

  • Gartner still seems to be going strong.
  • O’Reilly, Gartner and vendors probably combine to produce enough good conferences.
  • A few traditional journalists still do good work (in the areas covered by this blog, Doug Henschen comes to mind).
  • A few vendor folks are talented and responsible enough to add to the discussion. A few small-operation folks — e.g. me — are still around.

Still, the GigaOm news is not encouraging.

4. As TechCrunch and Pando reported, plaintiff Ellen Pao took the stand and sounded convincing in her sexual harassment suit against Kleiner Perkins (but of course she hadn’t been cross-examined yet). Apparently there was a major men-only party hosted by partner Al Gore, a candidate I first supported in 1988. And partner Ray Lane, somebody who at Oracle showed tremendous management effectiveness, evidently didn’t do much to deal with Pao’s situation.

Blech.

At some point I want to write about a few women who were prominent in my part of the tech industry in the 1980s — at least Ann Winblad, Esther Dyson, and Sandy Kurtzig, maybe analyst/investment banker folks Cristina Morgan and Ruthann Quindlen as well. We’ve come a long way since those days (when, in particular, I could briefly list a significant fraction of the important women in the industry). There seems to be a lot further yet to go.

5. All that said — I’m indeed working on some cool stuff. Some of it is evident from recent posts. Other work may be reflected in an upcoming set of posts that focus on NoSQL, business intelligence, and — I hope — the intersection of the two areas.

6. Speaking of recent posts, I did one on marketing for young companies that brings a lot of advice and tips together. I think it’s close to being a must-read.

Categories: Other

Loading CSV files with special characters in Oracle DB

Dimitri Gielis - Tue, 2015-03-10 10:08
I often need to load the data of Excel or CSV files into the Oracle Database.

Ever gotten those annoying question marks when you try to load the data? Or, instead of question marks, do you just get empty blanks when the file uses special characters? Here's an example:


My database characterset is UTF-8, so ideally you want to load your data UTF-8 encoded.
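
If you are not sure what your database character set actually is, a quick query against the standard NLS_DATABASE_PARAMETERS dictionary view will confirm it before you start loading:

select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');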
With Excel I've not found an easy way to specify UTF-8 encoding when saving to a CSV file. Although in Excel (OS X) - Preferences - General - Web Options - Encoding I specified UTF-8, it still saves the file as Western (Mac OS Roman).
I have two workarounds for the issue. The first is to open the file in a text editor, e.g. BBEdit, click the encoding option, and select UTF-8.

Another way is to open Terminal and use the iconv command line tool to convert the file:

iconv -t UTF8 -f MACROMAN < file.csv > file-utf8.csv

If you get a CSV file and you want to import it in Excel first, the best way I found is to create a new Workbook and import the CSV file (instead of opening directly). You can import either by using File - Import or Data - Get External Data - Import Text File. During the import you can specify the File origin and you can see which data format works for you.


After the manipulations in Excel you can save again as CSV, as outlined above, to make sure your resulting CSV file is UTF-8 encoded.
Finally, to import the data, you can use APEX, SQL Developer or SQLcl to load your CSV file into your table.
Categories: Development

PeopleTools 8.54: Performance Monitor Enhancements

David Kurtz - Tue, 2015-03-10 04:09
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Transaction History Search Component
There are a number of changes:
  • You can specify multiple system identifiers.  For example, you might be monitoring Portal, HR and CRM.  Now you can search across all of them in a single search.
    • It has always been the case that when you drill into the Performance Monitoring Unit (PMU), by clicking on the tree icon, you would see the whole of a PMU that invoked services from different systems.
  • You can also specify multiple transaction types, rather than have to search each transaction type individually.
This is a useful enhancement when searching for a specific transaction or a small number of transactions.  However, I do not think it will save you from having to query the underlying transactions table.
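
When I do go to the underlying table, a query along these lines is usually my starting point. Treat it purely as a sketch: the table name PSPMTRANSHIST and the column names are from memory of the PPM schema, PM_TRANS_DURATION is assumed to be in milliseconds, and both should be checked against your own PeopleTools release.

SELECT pm_trans_defn_id
,      COUNT(*)                    AS num_trans
,      AVG(pm_trans_duration)/1000 AS avg_duration_secs
FROM   pspmtranshist
WHERE  pm_mon_strt_dttm >= SYSDATE - 1
GROUP BY pm_trans_defn_id
ORDER BY avg_duration_secs DESC;
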
PPM Archive Process
The PPM archive process (PSPM_ARCHIVE) has been significantly rewritten in PeopleTools 8.54.  In many places, it still uses this expression to identify rows to be archived or purged:
%DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60)
This expands to
ROUND((CAST(( CAST(SYSTIMESTAMP AS TIMESTAMP)) AS DATE) - CAST((X.PM_MON_STRT_DTTM) AS DATE)) * 1440, 0)
   >= (PM_MAX_HIST_AGE * 24 *  60)
which has no chance of using an index.  This used to cause performance problems when the archive process had not been run for a while and the high water marks on the history tables had built up.

Now the archive process works hour by hour, and this will use the index on the timestamp column.
"... AND X.PM_MON_STRT_DTTM <= SYSDATE - PM_MAX_HIST_AGE 
and (PM_MON_STRT_DTTM) >= %Datetimein('" | DateTimeValue(&StTime) | "')
and (PM_MON_STRT_DTTM) <= %DateTimeIn('" | DateTimeValue(&EndTime) | "')"
Tuxedo Queuing
Since Performance Monitor was first introduced, event 301 has never reported the length of the inbound message queues in Tuxedo.  The reported queue length was always zero.  This may have been fixed in PeopleTools 8.53, but I have only just noticed it.
Java Management Extensions (JMX) Support
There have been some additions to Performance Monitor that suggest it will be possible to extract performance metrics using JMX.  The implication is that the Oracle Enterprise Manager Application Management Pack for PeopleSoft will be able to do this.  However, so far I haven't found any documentation.  The new component is not mentioned in the PeopleTools 8.54: Performance Monitor documentation.
  • New Table
    • PS_PTPMJMXUSER - keyed on PM_AGENTID
  • New Columns
    • PSPMSYSDEFAULTS - PTPHONYKEY.  So far I have only seen it set to 0.
    • PSPMAGENT - PM_JMX_RMI_PORT.  So far only seen it set to 1
  • New Component

    ©David Kurtz, Go-Faster Consultancy Ltd.

    Log Buffer #413, A Carnival of the Vanities for DBAs

    Pythian Group - Mon, 2015-03-09 21:15

    This Log Buffer Edition scours the Internet and brings you some of the fresh blog posts from Oracle, SQL Server and MySQL.

    Oracle:

    Most of Kyle's servers tend to be Linux VMs on VMware ESX without any graphical desktops set up, so it can be disconcerting trying to install Oracle with its graphical “runInstaller” being the gateway we have to cross to achieve installation.

    Working around heartbeat issues caused by tracing or by regexp

    APEX 5 EA Impressions: Custom jQuery / jQuery UI implementations

    Introduction to the REST Service Editor, Generation (PART 2)

    Due to recent enhancements and importance within Oracle’s storage portfolio, StorageTek Storage Archive Manager 5.4 (SAM-QFS) has been renamed to Oracle Hierarchical Storage Manager (Oracle HSM) 6.0.

    SQL Server:

    There are different techniques to optimize the performance of SQL Server queries, but wouldn't it be great if we had some recommendations before we started planning or optimizing queries so that we didn't have to start from scratch every time? This is where you can use the Database Engine Tuning Advisor utility to get recommendations based on your workload.

    Data Mining Part 25: Microsoft Visio Add-Ins

    Stairway to Database Source Control Level 3: Working With Others (Centralized Repository)

    SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.

    Questions About SQL Server Transaction Log You Were Too Shy To Ask

    MySQL:

    The post shows how you can easily read the VCAP_SERVICES PostgreSQL credentials within your Java code using the Maven repo. This assumes you're using the ElephantSQL PostgreSQL service. A single connection won't be ideal, but for demo purposes it might be all you need.

    MariaDB 5.5.42 Overview and Highlights

    How to test if CVE-2015-0204 FREAK SSL security flaw affects you

    Using master-master for MySQL? To be frank, we need to get rid of that architecture. We are skipping the active-active setup and show why master-master, even for failover reasons, is the wrong decision.

    Resources for Highly Available Database Clusters: ClusterControl Release Webinar, Support for Postgres, New Website and More

    Categories: DBA Blogs

    Recovering an Oracle Database with Missing Redo

    Pythian Group - Mon, 2015-03-09 21:14
    Background

    I ran into a situation where we needed to recover from an old online backup which (due to some issues with the RMAN “KEEP” command) was missing the archived redo log backups/files needed to make the backup consistent.  The client wasn't concerned about data that changed during the backup; they were interested in checking some very old data from long before this online backup had started.

    Visualizing the scenario using a timeline (not to scale):

      |-------|------------------|---------|------------------|
      t0      t1                 t2        t3                 t4
              Data is added                                   Present
    

    The client thought that some data had become corrupted and wasn't sure when, but knew that it wasn't recent, so the flashback technologies were not an option.  Hence they wanted a restore of the database into a new temporary server as of time t1, which was in the distant past.

    An online (hot) backup was taken between t2 and t3 and was considered to be old enough or close enough to t1; however, the problem was that all archived redo log backups were missing. The client was certain that the particular data they were interested in would not have changed during the online backup.

    Hence the question is: without the necessary redo data to make the online backup consistent (between times t2 and t3) can we still open the database to extract data from prior to when the online backup began?  The official answer is “no” – the database must be made consistent to be opened.  And with an online backup the redo stream is critical to making the backed up datafiles consistent.  So without the redo vectors in the redo stream, the files cannot be made consistent with each other and hence the database cannot be opened.  However the unofficial, unsupported answer is that it can be done.

    This article covers the unsupported and unofficial methods for opening a database with consistency corruption so that certain data can be extracted.

    Other scenarios can lead to the same situation.  Basically this technique can be used to open the Oracle database any time the datafiles cannot be made consistent.

     

    Demo Setup

    To illustrate the necessary steps I’ve setup a test 12c non-container database called NONCDB.  And to simulate user transactions against it I ran a light workload using the Swingbench Order Entry (SOE) benchmark from another computer in the background.

    Before beginning any backups or recoveries I added two simple tables to the SCOTT schema and some rows to represent the “old” data (with the words “OLD DATA” in the C2 column):

    SQL> create table scott.parent (c1 int, c2 varchar2(16), constraint parent_pk primary key (c1)) tablespace users;
    
    Table created.
    
    SQL> create table scott.child (c1 int, c2 varchar2(16), foreign key (c1) references scott.parent(c1)) tablespace soe;
    
    Table created.
    
    SQL> insert into scott.parent values(1, 'OLD DATA 001');
    
    1 row created.
    
    SQL> insert into scott.parent values(2, 'OLD DATA 002');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS A');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS B');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS C');
    
    1 row created.
    
    SQL> insert into scott.child  values(2, 'OLD DETAILS D');
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    

     

    Notice that I added a PK-FK referential integrity constraint and placed each table in a different tablespace so they could be backed up at different times.

    These first entries represent my “old data” from time t1.

     

    The Online Backup

    The next step is to perform the online backup.  For simulation purposes I’m adjusting the steps a little bit to try to represent a real life situation where the data in my tables is being modified while the backup is running.  Hence my steps are:

    • Run an online backup of all datafiles except for the USERS tablespace.
    • Add some more data to my test tables (hence data going into the CHILD table is after the SOE tablespace backup and the data into the PARENT table is before the USERS tablespace backup).
    • Record the current archived redo log and then delete it to simulate the lost redo data.
    • Backup the USERS tablespace.
    • Add some post backup data to the test tables.

    The actual commands executed in RMAN are:

    $ rman
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Thu Feb 26 15:59:36 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    RMAN> connect target
    
    connected to target database: NONCDB (DBID=1677380280)
    
    RMAN> backup datafile 1,2,3,5;
    
    Starting backup at 26-FEB-15
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=46 device type=DISK
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00005 name=/u01/app/oracle/oradata/NONCDB/datafile/SOE.dbf
    input datafile file number=00001 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_b2k8dsno_.dbf
    input datafile file number=00002 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_sysaux_b2k8f3d4_.dbf
    input datafile file number=00003 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_undotbs1_b2k8fcdm_.dbf
    channel ORA_DISK_1: starting piece 1 at 26-FEB-15
    channel ORA_DISK_1: finished piece 1 at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp tag=TAG20150226T155942 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:11:16
    Finished backup at 26-FEB-15
    
    Starting Control File and SPFILE Autobackup at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/autobackup/2015_02_26/o1_mf_s_872698259_bgzb0647_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 26-FEB-15
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> insert into scott.parent values (3, 'NEW DATA 003');
    
    Statement processed
    
    RMAN> insert into scott.child  values (3, 'NEW DETAILS E');
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> select sequence# from v$log where status='CURRENT';
    
     SEQUENCE#
    ----------
            68
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> alter database backup controlfile to '/tmp/controlfile_backup.bkp';
    
    Statement processed
    
    RMAN> backup datafile 4;
    
    Starting backup at 26-FEB-15
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00004 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_users_b2k8gf7d_.dbf
    channel ORA_DISK_1: starting piece 1 at 26-FEB-15
    channel ORA_DISK_1: finished piece 1 at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp tag=TAG20150226T165814 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 26-FEB-15
    
    Starting Control File and SPFILE Autobackup at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/autobackup/2015_02_26/o1_mf_s_872701095_bgzdrrrh_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 26-FEB-15
    
    RMAN> alter database backup controlfile to '/tmp/controlfile_backup.bkp';
    
    Statement processed
    
    RMAN> insert into scott.parent values (4, 'NEW DATA 004');
    
    Statement processed
    
    RMAN> insert into scott.child  values (4, 'NEW DETAILS F');
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> exit
    
    
    Recovery Manager complete.
    $
    

     

    Notice in the above steps that since I'm using Oracle Database 12c I'm able to execute normal SQL commands from RMAN – this is an RMAN 12c new feature.

     

    Corrupting the Backup

    Now I’m going to corrupt my backup by removing one of the archived redo logs needed to make the backup consistent:

    SQL> set pages 999 lines 120 trims on tab off
    SQL> select 'rm '||name stmt from v$archived_log where sequence#=68;
    
    STMT
    ------------------------------------------------------------------------------------------------------------------------
    rm /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_68_bgzcnv04_.arc
    
    SQL> !rm /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_68_bgzcnv04_.arc
    
    SQL>
    

     

    Finally I’ll remove the OLD data to simulate the data loss (representing t4):

    SQL> select * from scott.parent order by 1;
    
            C1 C2
    ---------- ----------------
             1 OLD DATA 001
             2 OLD DATA 002
             3 NEW DATA 003
             4 NEW DATA 004
    
    SQL> select * from scott.child order by 1;
    
            C1 C2
    ---------- ----------------
             1 OLD DETAILS A
             1 OLD DETAILS B
             1 OLD DETAILS C
             2 OLD DETAILS D
             3 NEW DETAILS E
             4 NEW DETAILS F
    
    6 rows selected.
    
    SQL> delete from scott.child where c2 like 'OLD%';
    
    4 rows deleted.
    
    SQL> delete from scott.parent where c2 like 'OLD%';
    
    2 rows deleted.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    

     

    Attempting a Restore and Recovery

    Now let’s try to recover from our backup on a secondary system so we can see if we can extract that old data.

    After copying over all of the files, the first thing to do is to try a restore as per normal:

    $ rman target=/
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Mon Mar 2 08:40:12 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database (not started)
    
    RMAN> startup nomount;
    
    Oracle instance started
    
    Total System Global Area    1577058304 bytes
    
    Fixed Size                     2924832 bytes
    Variable Size                503320288 bytes
    Database Buffers            1056964608 bytes
    Redo Buffers                  13848576 bytes
    
    RMAN> restore controlfile from '/tmp/controlfile_backup.bkp';
    
    Starting restore at 02-MAR-15
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=12 device type=DISK
    
    channel ORA_DISK_1: copied control file copy
    output file name=/u01/app/oracle/oradata/NONCDB/controlfile/o1_mf_b2k8d9nq_.ctl
    output file name=/u01/app/oracle/fast_recovery_area/NONCDB/controlfile/o1_mf_b2k8d9v5_.ctl
    Finished restore at 02-MAR-15
    
    RMAN> alter database mount;
    
    Statement processed
    released channel: ORA_DISK_1
    
    RMAN> restore database;
    
    Starting restore at 02-MAR-15
    Starting implicit crosscheck backup at 02-MAR-15
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=12 device type=DISK
    Crosschecked 4 objects
    Finished implicit crosscheck backup at 02-MAR-15
    
    Starting implicit crosscheck copy at 02-MAR-15
    using channel ORA_DISK_1
    Crosschecked 2 objects
    Finished implicit crosscheck copy at 02-MAR-15
    
    searching for all files in the recovery area
    cataloging files...
    cataloging done
    
    using channel ORA_DISK_1
    
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_b2k8dsno_.dbf
    channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_sysaux_b2k8f3d4_.dbf
    channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_undotbs1_b2k8fcdm_.dbf
    channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/NONCDB/datafile/SOE.dbf
    channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp
    channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp tag=TAG20150226T155942
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:46
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_users_b2k8gf7d_.dbf
    channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp
    channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp tag=TAG20150226T165814
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    Finished restore at 02-MAR-15
    
    RMAN>
    

     

    Notice that it did restore the datafiles from both the SOE and USERS tablespaces; however, we know that those are inconsistent with each other.

    Attempting to do the recovery should give us an error due to the missing redo required for consistency:

    RMAN> recover database;
    
    Starting recover at 02-MAR-15
    using channel ORA_DISK_1
    
    starting media recovery
    
    archived log for thread 1 with sequence 67 is already on disk as file /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_67_bgzcn05f_.arc
    archived log for thread 1 with sequence 69 is already on disk as file /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_69_bgzdqo9n_.arc
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_bh914cx2_.dbf'
    
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/02/2015 08:44:21
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of archived log for thread 1 with sequence 68 and starting SCN of 624986 found to restore
    
    RMAN>
    

     

    As expected we got the dreaded ORA-01547, ORA-01194, ORA-01110 errors meaning that we don’t have enough redo to make the recovery successful.

     

    Attempting a Recovery

    Now the crux of the situation. We’re stuck with the common inconsistency error which most seasoned DBAs should be familiar with:

    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_bh914cx2_.dbf'
    
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/02/2015 08:44:21
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of archived log for thread 1 with sequence 68 and starting SCN of 624986 found to restore

     

    And of course we need to be absolutely positive that we don’t have the missing redo somewhere.  For example in an RMAN backup piece on disk or on tape somewhere from an archive log backup that can be restored.  Or possibly still in one of the current online redo logs.  DBAs should explore all possible options for retrieving the missing redo vectors in some form or another before proceeding.
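
    Just as a sketch of what that search might look like (adjust the sequence number to your own gap): from RMAN, LIST BACKUP OF ARCHIVELOG FROM SEQUENCE 68; will show whether any backup piece still contains the missing sequence, and from SQL*Plus the standard views will show whether it still survives as an online or archived log:

    select group#, thread#, sequence#, status, archived
      from v$log;

    select name, sequence#, deleted, status
      from v$archived_log
     where sequence# = 68;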

    However, if we’re absolutely certain of the following we can continue:

    1. We definitely can’t find the missing redo anywhere.
    2. We absolutely need to extract data from prior to the start of the online backup.
    3. Our data definitely wasn’t modified during the online backup.

     

    The natural thing to check first when trying to open the database after an incomplete recovery is the fuzziness and PIT (Point In Time) of the datafiles from SQL*Plus:

    SQL> select fuzzy, status, checkpoint_change#,
      2         to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
      3         count(*)
      4    from v$datafile_header
      5   group by fuzzy, status, checkpoint_change#, checkpoint_time
      6   order by fuzzy, status, checkpoint_change#, checkpoint_time;
    
    FUZZY STATUS  CHECKPOINT_CHANGE# CHECKPOINT_TIME        COUNT(*)
    ----- ------- ------------------ -------------------- ----------
    NO    ONLINE              647929 26-FEB-2015 16:58:14          1
    YES   ONLINE              551709 26-FEB-2015 15:59:43          4
    
    SQL>
    

     

    The fact that there are two rows returned and that not all files have FUZZY=NO indicates that we have a problem and that more redo is required before the database can be opened with the RESETLOGS option.

    But our problem is that we don’t have that redo and we’re desperate to open our database anyway.

     

    Recovering without Consistency

    Again, recovering without consistency is not supported and should only be attempted as a last resort.

    Opening the database with the data in an inconsistent state is actually pretty simple.  We simply need to set the “_allow_resetlogs_corruption” hidden initialization parameter and set the undo management to “manual” temporarily:

    SQL> alter system set "_allow_resetlogs_corruption"=true scope=spfile;
    
    System altered.
    
    SQL> alter system set undo_management='MANUAL' scope=spfile;
    
    System altered.
    
    SQL> shutdown abort;
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    SQL>
    

     

    Now, will the database open? The answer is still: “probably not”.  Giving it a try we get:

    SQL> alter database open resetlogs;
    alter database open resetlogs
    *
    ERROR at line 1:
    ORA-01092: ORACLE instance terminated. Disconnection forced
    ORA-00600: internal error code, arguments: [2663], [0], [551715], [0], [562781], [], [], [], [], [], [], []
    Process ID: 4538
    Session ID: 237 Serial number: 5621
    
    
    SQL>
    

     

    Doesn’t look good, right?  Actually the situation is not that bad.

    To put it simply this ORA-00600 error means that a datafile has a recorded SCN that’s ahead of the database SCN.  The current database SCN is shown as the 3rd argument (in this case 551715) and the datafile SCN is shown as the 5th argument (in this case 562781).  Hence a difference of:

    562781 - 551715 = 11066

    In this example, that’s not too large of a gap.  But in a real system, the difference may be more significant.  Also if multiple datafiles are ahead of the current SCN you should expect to see multiple ORA-00600 errors.
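
    To get a rough sense of where things stand while the database is mounted, the controlfile and datafile header checkpoint SCNs can be compared. This is only an approximate indicator (a sketch using the standard v$ views, since the SCN in the 5th argument of the ORA-00600 is an internal dependent SCN rather than a header checkpoint), but it is handy for watching the numbers creep upward between restart attempts:

    select checkpoint_change# from v$database;

    select min(checkpoint_change#), max(checkpoint_change#)
      from v$datafile_header;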

    The solution to this problem is quite simple: roll forward the current SCN until it exceeds the datafile SCN.  The database automatically generates a number of internal transactions on each startup, hence the way to roll forward the database SCN is simply to perform repeated shutdowns and startups.  Depending on how big the gap is, it may be necessary to repeatedly shutdown abort and startup – the gap between the 5th and 3rd parameters to the ORA-00600 will decrease each time.  However, eventually the gap will reduce to zero and the database will open:

    SQL> connect / as sysdba
    Connected to an idle instance.
    SQL> shutdown abort
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    Database opened.
    SQL>
    

     

    Now presumably we want to query or export the old data so the first thing we should do is switch back to automatic undo management using a new undo tablespace:

    SQL> create undo tablespace UNDOTBS2 datafile size 50M;
    
    Tablespace created.
    
    SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile;
    
    System altered.
    
    SQL> alter system set undo_management='AUTO' scope=spfile;
    
    System altered.
    
    SQL> shutdown abort
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    Database opened.
    SQL>
    

     

    Finally the database is opened (although the data is inconsistent) and the “old” data can be queried:

    SQL> select * from scott.parent;
    
            C1 C2
    ---------- ----------------
             1 OLD DATA 001
             2 OLD DATA 002
             3 NEW DATA 003
    
    SQL> select * from scott.child;
    
            C1 C2
    ---------- ----------------
             1 OLD DETAILS A
             1 OLD DETAILS B
             1 OLD DETAILS C
             2 OLD DETAILS D
    
    SQL>
    

     

    As we can see, all of the “old” data (rows beginning with “OLD”) from before the backup began (before t2) is available, and, as expected, only part of the data inserted during the backup is present (the rows where C1=3) – that's our data inconsistency.

    We’ve already seen that we can SELECT the “old” data.  We can also export it:

    $ expdp scott/tiger dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y
    
    Export: Release 12.1.0.2.0 - Production on Mon Mar 2 09:39:11 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    Starting "SCOTT"."SYS_EXPORT_SCHEMA_02":  scott/******** dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 640 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
    . . exported "SCOTT"."CHILD"                             5.570 KB       4 rows
    . . exported "SCOTT"."PARENT"                            5.546 KB       3 rows
    Master table "SCOTT"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.SYS_EXPORT_SCHEMA_02 is:
      /u01/app/oracle/admin/NONCDB/dpdump/OLD_DATA.dmp
    Job "SCOTT"."SYS_EXPORT_SCHEMA_02" successfully completed at Mon Mar 2 09:39:46 2015 elapsed 0 00:00:34
    
    $
    

     

    At this point we've queried or extracted that critical old data, which was the whole point of the exercise, and we should immediately discard the restored database.  Remember that it has data inconsistencies, possibly including in internal tables, and hence shouldn't be used for anything beyond querying or extracting that “old” data.  Frequent crashes or other bizarre behavior from this restored database should be expected.  So get in, get the data, get out, and get rid of it!
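
    For completeness, once the dump file has been copied off to a healthy environment, the extracted data could be loaded into a clean schema with Data Pump import along these lines (an illustrative sketch only – the credentials and the remapped target schema name are placeholders, not values from this example):

    $ impdp system/manager dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y tables=SCOTT.PARENT,SCOTT.CHILD remap_schema=SCOTT:SCOTT_OLD table_exists_action=SKIP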

     

    Conclusion

    If “desperate times call for desperate measures” and you're in the situation described in detail above – you need the data, you're missing the necessary redo vectors, and you're not concerned about the relevant data having been modified during the backup – then there are options.

    The “more redo needed for consistency” error stack should be familiar to most DBAs:

    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    

    And they may also be somewhat familiar with the “_allow_resetlogs_corruption” hidden initialization parameter.  However, don't let the resulting ORA-00600 error make the recovery attempt seem unsuccessful:

    ORA-00600: internal error code, arguments: [2663], [0], [551715], [0], [562781], [], [], [], [], [], [], []
    

    This error can be overcome, and the database can likely still be opened so that the necessary data can be queried or extracted.

    Note: this process has been tested with Oracle Database 10g, Oracle Database 11g, and Oracle Database 12c.

    Categories: DBA Blogs

    Oracle Database Tools updated - check out SQLcl

    Dimitri Gielis - Mon, 2015-03-09 16:31
    Today Oracle released new versions of:

    Also, Oracle REST Data Services 3 got a new EA2 version. You may want to check Kris Rice's blog for the new features.
    I already blogged about all of the tools before, but not yet about SQLcl. This is a command line tool that I call "SQL*Plus on steroids" (or, as Jeff calls it, SQL Developer meets SQL*Plus). It's particularly useful when you're on your server and quickly need to run some queries, or if you're a command line guy/girl all the time - then this tool is for you.
    Here's a screenshot of how to connect to your database with SQLcl from Linux.

    Typing help will show you a list of quick shortcuts.
    For example, if you type APEX you get a list of your APEX applications.

    What I really like about SQLcl is how nicely it formats the output. With SQL*Plus you had to set column widths, page sizes etc.; SQLcl is smart enough to do that for you.
    Next to that, you can quickly output your query in JSON format by typing "set sqlformat json":
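
    (An illustrative sketch only, run against the classic scott.emp table - the exact JSON layout varies by SQLcl version:)

    SQL> set sqlformat json
    SQL> select ename, job from emp where rownum <= 2;
    {"results":[{"columns":[{"name":"ENAME","type":"VARCHAR2"},{"name":"JOB","type":"VARCHAR2"}],"items":[{"ename":"SMITH","job":"CLERK"},{"ename":"ALLEN","job":"SALESMAN"}]}]}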

    There are many more features - a good starting point is this presentation and video by Jeff Smith.
    Categories: Development

    Development on Windows 8.1 Phone and Tablet

    Oracle AppsLab - Mon, 2015-03-09 14:59

    This is a follow up to my previous post (“Where are the Mobile Windows Devices?“) in which I gave my initial impressions of mobile windows devices.  As part of our assessment of these devices we also developed a few apps and this post details how that went.

    Getting Started

    Windows Phone 8.1 applications have to be developed on Windows 8.1.  I am using a Mac so I installed Windows 8.1 Enterprise Trial (90-day Free Trial) in a Parallels VM.  In order to run the Phone Emulator (which is also a VM and so I was running a VM in a VM), I had to enable Nested Virtualization in Parallels.

    Development is done in Visual Studio; I don't think you can use any other IDE. You can download a version of Visual Studio Express for free.

    Finally, you’ll need a developer license to develop and test a Windows Store app before the Store can certify it. When you run Visual Studio for the first time, it prompts you to obtain a developer license. Read the license terms and then click I Accept if you agree. In the User Account Control (UAC) dialog box, click Yes if you want to continue. It was $19 for a developer license.

    Development

    There are 2 distinct ways to develop applications on the Windows Platform.

    Using the Windows Runtime (WinRT)

    Applications built with WinRT are called "Windows Runtime apps"; there are 2 types of these:

    • “Windows Phone Store apps” are WinRT apps that run on the Windows Phone.
    • “Windows Store apps” that run on a Windows device such as a PC or tablet.

    What's really cool is that Visual Studio provides a universal Windows app template that lets you create a Windows Store app (for PCs, tablets, and laptops) and a Windows Phone Store app in the same project. When your work is finished, you can produce app packages for the Windows Store and Windows Phone Store with a single action to get your app out to customers on any Windows device. These applications can share a lot of their code, both business logic and presentation layer.

    Even better, you can create Windows Runtime apps using the programming languages you’re most familiar with, like JavaScript, C#, Visual Basic, or C++. You can even write components in one language and use them in an app that’s written in another language.  Windows Runtime apps can use the Windows Runtime, a native API built into the operating system. This API is implemented in C++ and bindings (called “projections”) are created for  JavaScript, C#, Visual Basic, and C++ in a way that feels natural for each language.

    Note that this is very different from the Phonegap/Cordova approach, which also lets you write apps in JavaScript. Universal Windows Apps do not run in a UIWebView/WebView; they are native applications for which (some of) the application logic runs through the JavaScript engine. This means that they do not suffer from the challenges we face with Phonegap/Cordova (not being able to use cutting-edge features, performance issues, etc.), yet you still get the benefit of using the language you are already familiar with.

    This also allows you to use existing JavaScript libraries and CSS templates, no porting required. You can even write one app using multiple languages, leveraging the dynamic nature of JavaScript for app logic while leveraging languages like C# and C++ for more computationally intensive tasks.

    Traditional (Not using the WinRT)

    Applications that do not use the WinRT are called Windows desktop apps and are executables or browser plug-ins that run in the Windows desktop environment. These apps are typically written with the Win32 and COM, .NET, WPF, or Direct3D APIs. There are also Windows Phone Silverlight apps, which are Windows Phone apps that use the Windows Phone Silverlight UI Framework instead of the Windows Runtime and can be sold in the Windows Phone Store.

    Deployment

    To deploy to my device I had to first “developer unlock” my phone (instructions).

    Deployment is a breeze from Visual Studio, just hook up your phone, select your device and hit deploy. The application gets saved to your phone and it opens. It appears in the apps list like all other apps.  You can also “side-load” applications to other windows machines for testing purpose, just package your application up in Visual Studio, put it on a USB stick, stick it in the other Tablet/PC and run the install script created by the packaging process.

    I created 2 simple applications: one was a C# Universal Application and one was a JavaScript/CSS3/HTML5 Universal Application. I was able to deploy and run both on a tablet, desktop and phone without any problem. They were very simple applications, but I could not see any performance difference between the C# application and the JS application.

    Additional Findings

    For the best user experience when developing Universal Apps using JS/HTML5/CSS3, you should develop Single Page Applications (SPAs).  This ensures there are no weird "page loads" in the middle of your app running.  Users will not expect this from their application - remember, these are universal apps and could be run by a user on a desktop.

    State can be easily shared between devices by automatically roaming app settings and state, along with Windows settings, between trusted devices on which the user is logged in with the same Microsoft account.

    Applications on the Windows App Store come with built-in crash analytics. This is one of the valuable services you get in exchange for your annual registration with the Store - no need to build it yourself.
    Conclusion

    As a JavaScript developer myself I am extremely excited by the fact that I can develop native applications on the Windows Platform using tools that I am already familiar with.  Furthermore, with Windows 10 it seems that Microsoft is doubling down on Universal Apps and with that OS Upgrade, my JavaScript apps can soon also be deployed to the HoloLens, Surface Hub, and IoT devices like the Raspberry Pi 2!

    Thoughts On Student Engagement

    Floyd Teter - Mon, 2015-03-09 13:55
    For those of us in the Higher Education portion of Oracle’s ecosystem, the big conference of the year - Alliance - is less than one week away.  But I already have a suspicion about the hot topic this year.  I’m betting on the subject of student engagement.

    There was a time when student engagement was all about educational institutions reaching out to students and potential students.  But there were only a few ways to get that done:  advertising, public relations, events.  And the schools controlled the discussion.  Because it wasn’t really a discussion as much as a series of one-way broadcasts from the universities to the students.
    But things have changed as new technologies have taken root in higher education. Social media, chat apps, mobile…now, not only can the students and potential students talk back to the universities, but they can also talk to each other.  So the schools no longer have control of the discussion.  While there are significant upsides to this turning of the tables, there’s also a downside…the schools, to a very great degree, are in the dark about the tastes, preferences, and habits of their students and potential students.  This is especially true in talking about “digital body language”.  What technology do those students and potential students use? What are their technology habits?  How can they be reached?  How can we learn more about what is important to them?  The real crux of successful student engagement is hidden in distracting complexities.
    A real challenge in all this comes from a distraction over platforms.  There are lots of social and communication platforms out there coming and going:  Facebook, Twitter, Snapchat, Instagram, Webchat…you get the idea.  Platforms come and go, and nobody has any idea of the next big thing.  But you can’t ignore them, because your students and potential students are already there.
    Another clear shift is that the days of individual and isolated decision-making are gone.  People want to check in with the groups that are important to them and know what other people are doing before making a decision.  So we have different people, all with different needs and hot buttons, all interacting with each other in a variety of networks to influence individual decisions and choices.  So decision making is much more complex.
    These complexities distract from the real point of student engagement - schools learning about and adapting to their constituencies by talking with and listening to students and potential students.
    To eliminate the complexities and efficiently get to the crux of student engagement in today’s environment, schools need more analysis in order to get the planning, design, and execution of the education process matched with the needs and wants of their students and potential students.  In other words, you have to learn about digital body language without getting wrapped around the axle about platforms and social networks.  You have to be able to engage in the discussions with your students and potential students where they are, when they are there…while not getting bogged down by the platforms and networks yourself.  It’s a challenge.  I’m sure we’ll hear more at Alliance.

    Oracle Fusion Middleware Partner Community Award for Outstanding ACM/BPM Contribution 2015

    Andrejus Baranovski - Mon, 2015-03-09 11:38
    We have received the award for Outstanding ACM/BPM Contribution 2015. This award was given to me and my colleague, Danilo Schmiedel, by the Oracle Fusion Middleware Partner Community for successful work on joint ACM/BPM and ADF projects. Thanks, and really proud to be a part of the community!


    I'm looking forward to sharing interesting ideas about ADF/ACM/BPM/Mobile on this blog in the future.

    Webcast: Expanding Your Digital Marketing Experience

    WebCenter Team - Mon, 2015-03-09 09:29
    Accelerating Digital Business & Marketing Transformation

    Successful digital marketing starts with optimizing lead conversion by empowering marketers to manage the online experience with a minimum of IT involvement. Research has shown a correlation between user experience across channels and higher conversion rates: the more consistent, continuous and unified the customer's digital experience is, the higher the conversion rates. In fact, many companies have seen as much as a 70% increase through personalized web experiences.

    The key to delivering a continuous, personalized, unified digital experience across channels and systems is to combine the use of Web Experience Management (WEM) and marketing automation systems allowing marketers to leverage the web and email, without IT involvement. This powerful combination:
    • Creates a unified engagement platform that gives marketers a robust suite for customer acquisition
    • Speeds time-to-market with the ability to quickly and easily publish content and sites without IT involvement
    • Separates and enables marketing agility from IT stability
    Register now for this webcast.

    Live Webcast: March 12, 2015 - 10:00 a.m. PT / 1:00 p.m. ET
    Featured Speaker: Joshua Duhl, Senior Principal Product Manager, Oracle

    Where are the Mobile Windows Devices?

    Oracle AppsLab - Mon, 2015-03-09 09:27

    That was one of the questions one of Oracle's executives asked when we presented our new Cloud UX Lab.  The short answer was that there were none.  As far as I am aware, we never did any testing of any of our prototypes and applications on Windows Phones or tablets because, frankly, we thought it didn't matter.  Windows Phones (and tablets) are a distant third to the 2 behemoths in this space, Android and iOS, and even lost market share in the year just wrapped up (2.7%) compared to 2013 (3.3%), according to IDC.  However, they are predicted to do better in the years ahead (although these predictions have been widely off in the past), and it seems that there is some pressure from our Enterprise Apps customers to look at the Windows Mobile platform, hence the question.  Never afraid of a challenge, we ordered a Surface Pro 3 and a Nokia Lumia 1520, used them for a few weeks, ran some tests, wrote some apps and jotted down our findings, leading to this blog post.

    Initial impressions Surface Pro 3

    I'm going to be short about the Surface Pro 3: it's basically a PC without a physical keyboard (although you can get one if you want) but with a touch screen and a stylus.  It even runs the same version of Windows 8.1 as your PC.  I must admit that the Tiles seem more practical on the tablet than on a PC, but I could do without the constant reminders to "upgrade Windows" and "upgrade Defender," complete with mandatory reboots, just like on your PC.  The most infuriating part about this is that the virtual keyboard does not automatically pop up when you tap on an input field - just like on your PC, which doesn't have the concept of a virtual keyboard.  Instead you have to explicitly open it to be able to type anything.

    Fortunately, there are some advantages too, e.g. anything that runs on your Windows PC probably will run fine on the Windows tablet, confirmed by our tests.  It has a USB 3.0 port that works just like … a USB port.  Plug in a USB Drive and you can instantly access it, just like on your PC, quite handy for when you have to side-load applications (more on that in a later post).

    The whole package is also quite pricey, similar to a premium laptop.  It's more of a competitor for the likes of Apple's MacBook Air than the iPad, I think.  People who try to use their iPads as little laptops are probably better off with this.

    Lumia 1520

    The phone on the other hand is a different beast.  The Windows 8.1 Phone OS, unlike the tablet version, is a smartphone OS.  As such, it has none of the drawbacks that the tablet displayed.  My first impression of the phone was that it is absolutely huge.  It measures 6 inches across and dwarfs my iPhone 6, which I already thought was big.  It's even bigger than the iPhone 6+ and the Samsung Galaxy Note 4.  My thumb can reach less than 50% of the screen; this is not a phone you can handle with one hand.

    iPhone 4S vs iPhone 6 vs Lumia 1520

    Initial setup was relatively quick.  It comes "preinstalled" with a bunch of apps, although they are not really installed on the phone yet; they get installed on first boot.  It took about 10-15 minutes for all the "preinstalled" phone apps to be installed.

    The screen is absolutely gorgeous, with bright colors and supremely fine detail, courtesy of a 367ppi AMOLED ClearBlack screen.  It also performs very well outside, in bright light.  It has an FM radio which uses your headphone cable as the antenna (no headphones, no radio), a USB port and a microSD slot.  It also has a dedicated, two-stage camera shutter button.  There's no physical mute button though.  The Tiles work really well on the phone.  They are much easier to tap than the app icons on either Android or iOS, and you can resize them.

    I tried installing the same apps as I have on my iPhone, but this was unfortunately where I hit my first giant snag.  I knew the ecosystem was underdeveloped compared to Android and iOS, but I didn’t know it was this bad.  Staples on my iPhone like Feedly, Flickr, VLC, Instapaper and Pocket don’t exist on the Windows Phone platform.  You also won’t find a dedicated app to listen to your Amazon Prime music or watch your movies.  If you want to watch the latest exploits of the Lannisters, you are also going to have to do it on another device, no HBO Go or XFinity on the Windows Phone.  There is also no version of Cisco VPN, which means it’s a non-starter for Oracle employees as that is the only way to access our intranet.  Weirder still, there is no Chrome or Firefox available on Windows Phones, which means I had to do all my testing on the version of IE that came with the phone (gulp!).

    Impressions after a week of usage

    I used the Lumia as my main phone for a week (poor pockets); I just popped the micro SIM card from my iPhone into the Lumia and it worked.  I really got hooked on the constantly updating Live Tiles.  News, stock prices, weather, calendar notifications, Facebook notifications etc. get pushed straight to my main screen without my having to open any apps.  I can glance and drill down if I want to, or just ignore them.  They are a little bit of a distraction with their constant flipping motion, but overall very cool.

    The other thing that was very noticeable was that the top notification bar is actually transparent and so it doesn’t seem like you lose that part of your screen, I liked that.

    The Windows Store has a try-before-you-buy feature, something that would be a godsend on the iPhone: my kids love to buy games and then drop them within a day never to be used again.  You can also connect the Windows Phone to your XBox One and use it as an input device/remote control.

    Another feature that I highly appreciated, especially as a newbie to the Windows Phone, was the smart learning notifications (not sure if that is the official name).  Rather than dumping all the help information on you when you open the app for the first time, the phone seems to be monitoring what you do and how you do it.  If there is a better/easier way of doing that task, after repeated use it will let you know, in a completely non-condescending way, that "You are doing it wrong."  This seems to be a much better approach: if you tell me how to use all the features the first time I open the app, I will have forgotten by the time I actually want to use a particular feature - or worse, I might never use that feature, so you wasted my time telling me about it.

    As for overall performance, there was some noticeable "jank" in the phone's animations; it just didn't feel as buttery smooth as the iPhone 6.

    The camera

    The camera really deserves its own chapter.  The 1520 is the sister phone of the Lumia 1020, which has a whopping 41 megapixel image sensor.  The 1520 has to make do with 20 megapixels, but that is still at least double what you find in most smartphones.  Megapixel count isn't everything, but it does produce some wonderful pictures.  One of the reasons that Nokia went with these large sensors is that they wanted to support better zooming.  You can't optically zoom with a phone camera - you would need a much bigger lens for that - so a phone does digital zooming, which typically leads to a pixelated mess when you zoom in.  Unless, of course, you start with a very high resolution image, which is what Nokia did.

    One of the interesting features of the photo app is that it supports "lenses."  These are plugins you can install in the photo app that add features not available out-of-the-box.  There are dozens of these lenses - it's basically an app store within an app - adding features like (Instagram-style) filters, 360 shots, panoramic pictures and so on.  One lens promises to make you look better in selfies (it didn't work on me).  One really neat lens is Nokia's "Refocus" lens, which brings a Lytro-like variable depth of field to your phone, and it works great too.

    Refocus

    In the same lens app you can also filter out all colors except for the object you click on, called “color pop,” so you get this effect:

    color pop

    Color pop in action

    In the app, you can keep clicking on other objects (e.g. the table) to pop their color.

    Other than the 20 megapixel sensor, the phone is also equipped with a top notch Carl Zeiss lens.  The phone has a physical, dedicated, two-stage shutter button, half-press for focus and full press for taking the picture.  It also has a larger-than-usual degree of manual control. You’ll find the usual settings for flash mode, ISO, white balance and exposure compensation but also parameters for shutter speed and focus. The latter two are not usually available on mobile phones.  The camera also performs really well in low light conditions.

    Summary

    I like the phone and its OS, and I really like the camera.  The Tiles also work really well on a phone.  I dislike the performance, the size and the lack of applications; the latter is a deal-breaker for me.  I had some trepidation about going cold turkey on Windows Phone for the week, but it turned out alright.  However, I was happy to switch back to my iPhone 6 at the end of the week.
    I'm a bit more on the fence about the tablet.  If you get the physical keyboard it might work out better, but then you basically have a laptop, so I'm not sure what the point is.  The fact that it runs Windows has its advantages (everything runs just as it does on Windows) and disadvantages (keyboard issues).

    I can’t wait to get my hands on Windows 10 and a HoloLens :-)

    Happy Coding!

    Mark.

    Flashback logging

    Jonathan Lewis - Mon, 2015-03-09 08:44

    When database flashback first appeared many years ago I commented (somewhere, but don’t ask me where) that it seemed like a very nice idea for full-scale test databases if you wanted to test the impact of changes to batch code, but I couldn’t really see it being a good idea for live production systems because of the overheads.

    Features and circumstances change, of course, and someone recently pointed out that if your production system is multi-terabyte, you're running with a Data Guard standby, and some minor catastrophe forces you to switch to the standby, then you don't really want to be running without a standby for the time it would take to restore and recover an old backup to create a new one; there may be cases where you could instead flashback the original primary to a point before the catastrophe and turn it into the standby from that point onward. Sounds like a reasonable argument to me - but you might still need to think very carefully about how to minimise the impact of enabling database flashback, especially if your database is a data warehouse, DSS, or mixed system.

    Imagine you have batch processes that revolve around loading data into an empty table with a couple of indexes - it's a production system, so you're running with archivelog mode enabled, and then you're told to switch on database flashback. How much impact will that have on your current loading strategies? Here's a little bit of code to help you on your way - I create an empty table as a clone of the view all_objects, create one index, then insert 1.6M rows into it. I've generated 4 different sets of results: flashback on or off, then either maintaining the index during loading or marking it unusable and rebuilding it after the load. Here's the minimum code:

    
    create table t1 segment creation immediate tablespace test_8k
    as
    select * from all_objects
    where   rownum < 1
    ;
    
    create index t1_i1 on t1(object_name, object_id) tablespace test_8k_assm_auto;
    -- alter index t1_i1 unusable;
    
    insert /*+ append */ into t1
    with object_data as (
            select --+ materialize
                    *
            from
                    all_objects
            where
                    rownum <= 50000
    ),
    counter as (
            select  --+ materialize
                    rownum id
            from dual
            connect by
                    level <= 32
    )
    select
            /*+ leading (ctr obj) use_nl(obj) */
            obj.*
    from
            counter         ctr,
            object_data     obj
    ;
    
    -- alter index t1_i1 rebuild;
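
    -- Not shown above: the flashback on/off switch between test runs. Roughly the
    -- following (a sketch only - from 11.2 onwards flashback can normally be enabled
    -- while the database is open, provided db_recovery_file_dest and
    -- db_recovery_file_dest_size are already set):
    --
    --     select flashback_on from v$database;
    --     alter database flashback on;
    --     alter database flashback off;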
    
    

    Here's a quick summary of the timings I got before I talk about the effects (running 11.2.0.4):

    Flashback off:
    Maintain index in real time: 138 seconds
    Rebuild index at end: 66 seconds

    Flashback on:
    Maintain index in real time: 214 seconds
    Rebuild index at end: 112 seconds

    It is very important to note that these timings do not allow you to draw any generic conclusions about optimum strategies for your systems. The only interpretation you can put on them is that different circumstances may lead to very different timings, so it’s worth looking at what you could do with your own systems to find good strategies for different cases.

    Most significant, probably, is the big difference between the two options where flashback is enabled - if you've got to use it, how do you do damage limitation? Here are some key figures, namely the file I/O stats and some instance activity stats, I/O stats first:

    
    "Real-time" maintenance
    ---------------------------------
    Tempfile Stats - 09-Mar 11:41:57
    ---------------------------------
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
        1       1,088      22,454      20.638        .063         296        .000     1,011      22,455        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp
    
    ---------------------------------
    Datafile Stats - 09-Mar 11:41:58
    ---------------------------------
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
        3      24,802      24,802       1.000        .315      24,802        .315           0        .000          2,386      20,379        .239    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
        5         718      22,805      31.762        .001           5        .000         713        .002            725      22,814        .002    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
        6       8,485       8,485       1.000        .317       8,485        .317           0        .000            785       6,938        .348    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf
    
    Mark Unusable and Rebuild
    ---------------------------------
    Tempfile Stats - 09-Mar 11:53:04
    ---------------------------------
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
        1       1,461      10,508       7.192        .100           1        .017       407      10,508        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp
    
    ---------------------------------
    Datafile Stats - 09-Mar 11:53:05
    ---------------------------------
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
        3          17          17       1.000       5.830          17       5.830           0        .000             28          49       1.636    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
        5         894      45,602      51.009        .001           2        .002         892        .001            721      22,811        .026    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
        6       2,586       9,356       3.618        .313         264       3.064       2,322        .001          2,443       9,214        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf
    
    

    There are all sorts of interesting differences in these results due to the different ways in which Oracle handles the index. For the "real-time" maintenance the session accumulates the key values and rowids as it writes the table, then sorts them, then does a cache-based bulk update to the index. For the "rebuild" strategy Oracle simply scans the table after it has been loaded, sorts the key values and rowids, then writes the index to disc using direct path writes; you might expect the total work done to be roughly the same in both cases - but it's not.

    I’ve shown 4 files: the temporary tablespace, the undo tablespace, the tablespace holding the table and the tablespace holding the index; and the first obvious difference is the number of blocks written and read and the change in average read size on the temporary tablespace. Both sessions had to spill to disc for the sort, and both did a “one-pass” sort; the difference in the number of blocks written and read appears because the “real-time” session wrote the sorted data set back to the temporary tablespace one more time than it really needed to – it merged the sorted data in a single pass but wrote the data back to the temporary tablespace before reading it again and applying it to the index (for a couple of points on tracing sorts, see this posting). I don’t know why Oracle chose to use a much smaller read slot size in the second case, though.

    The next most dramatic thing we see is that real-time maintenance introduced 24,800 single block reads with 20,000 blocks written to the undo tablespace (with a few thousand more that would eventually be written by dbwr - I should have included a "flush buffer_cache" in my tests), compared to virtually no activity in the "rebuild" case. The rebuild generates no undo; real-time maintenance (even starting with an empty index) generates undo because (in theory) someone might look at the index and need to see a read-consistent image of it. So it's not surprising that we see a lot of writes to the undo tablespace - but where did the reads come from? I'll answer that question later.

    It's probably not a surprise to see the difference in the number of blocks read from the table's tablespace. When we rebuild the index we have to do a tablescan to acquire the data; but, again, we can ask why we saw 22,800 blocks read from the table's tablespace when we were doing the insert with real-time maintenance. On a positive note those reads were multiblock reads, but what caused them? Again, I'll postpone the answer.

    Finally we see that the numbers of blocks read (reason again postponed) and written to the index's tablespace are roughly similar. The writes differ because the rebuild is doing direct path writes, while the real-time maintenance is done in the buffer cache, so there are some outstanding index blocks still to be written. The reads are similar, though one test is exclusively single block reads and the other is doing (small) multiblock reads - which is just a little bit more efficient.  The difference in the number of reads is because the rebuild was done at the default pctfree=10 while the index maintenance was a massive "insert in order" which would have packed the index leaf blocks at 100%.

    To start the explanation – here are the most significant activity stats – some for the session, a couple for the instance:

    
    "Real-time" maintenance
    -----------------------
    Name                                                                     Value
    ----                                                                     -----
    physical reads for flashback new                                        33,263
    redo entries                                                           118,290
    redo size                                                          466,628,852
    redo size for direct writes                                        187,616,044
    undo change vector size                                            134,282,356
    flashback log write bytes                                          441,032,704
    
    Rebuild
    -------
    Name                                                                     Value
    ----                                                                     -----
    physical reads for flashback new                                           156
    redo entries                                                            35,055
    redo size                                                          263,801,792
    redo size for direct writes                                        263,407,628
    undo change vector size                                                122,156
    flashback log write bytes                                          278,036,480
    
    

    The big clue is the "physical reads for flashback new". When you modify a block, if it hasn't been dumped into the flashback log recently (as defined by the hidden _flashback_barrier_interval parameter) then the original version of the block has to be written to the flashback log before the change can be applied; moreover, if a block is being "newed" (Oracle-speak for being reformatted for a new use) it will also be written to the flashback log. Given the way that the undo tablespace works it's not surprising if virtually every block you modify in the undo tablespace has to be written to the flashback log before you use it. The 33,263 blocks read for "flashback new" consist of the 24,800 blocks read from the undo tablespace while we were maintaining the index in real time plus a further 8,460 from "somewhere" - which, probably not coincidentally, matches the number of blocks read from the index tablespace as we create the index. The odd thing is that we don't see the 22,800 reads on the table's tablespace (which don't occur when flashback is off) reported as "physical reads for flashback new"; this looks like a reporting error to me.

    So the volume of undo requires us to generate a lot of flashback log as well as the usual increase in the amount of redo. As a little side note, we get confirmation from these stats that the index was rebuilt using direct path writes – there’s an extra 75MB of redo for direct writes.
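
    If you want to pull the equivalent figures on your own system, the statistics quoted above come straight from the standard statistics views - v$mystat for the current session, v$sysstat for the instance as a whole. A minimal sketch for the session-level figures:

    select
            sn.name, ms.value
    from
            v$mystat ms, v$statname sn
    where
            ms.statistic# = sn.statistic#
    and     sn.name in (
                    'physical reads for flashback new',
                    'redo size',
                    'redo size for direct writes',
                    'undo change vector size',
                    'flashback log write bytes'
            )
    order by
            sn.name
    ;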

    Summary

    If you are running with flashback enabled in a system that's doing high volume data loading, remember that the "physical reads for flashback new" could be a major expense. This is particularly expensive on index maintenance, which can result in a large number of single block reads of the undo tablespace. The undo costs you three times - once for the basic cost of undo (and associated redo), once for the extra reads, and once for writing the flashback log. Although you have to do tablescans to rebuild indexes, the cost of an (efficient, possibly direct path) tablescan may be much less than the penalty of the work relating to flashback.

    Footnote: since you can’t (officially) load data into a table with an unusable unique index or constraint, you may want to experiment with using non-unique indexes to support unique/PK constraints and disabling the constraints while loading.
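
    One possible pattern for that experiment is sketched below (untested here, and the index and constraint names are purely illustrative - check the behaviour on your own version before relying on it):

    create index t1_pk_i on t1(object_id);                  -- deliberately non-unique

    alter table t1 add constraint t1_pk
            primary key(object_id) using index t1_pk_i;     -- PK enforced by the non-unique index

    alter table t1 disable constraint t1_pk keep index;
    alter index t1_pk_i unusable;

    -- direct path load goes here

    alter index t1_pk_i rebuild;
    alter table t1 enable constraint t1_pk;                 -- fails if duplicates arrived; consider enable novalidate or an exceptions table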