
Feed aggregator

APEX 5.0: pimping the Login page

Dimitri Gielis - Wed, 2015-03-11 17:29
When you create a new application in APEX 5.0, the login page probably looks like this:


I love the built-in login page of APEX itself - luckily it's easy enough to build that in our own apps too. Thank you APEX Dev team!

The first step is to change the region template to Login Region Template:


We want to add a nice icon on top of the Login text. You can use the Icon CSS Class in the Region options - in this case I opted for fa-medkit:

Next up is making the Login button bigger and having it span the full width, like the items. In APEX 5.0 you can use the Template Options to do that:

Once stretched, the Login button fills the entire width.
Next up is getting some icons in the username and password fields. For the username we use the "icon-login-username" CSS class. Instead of showing the label we make it hidden and specify a placeholder, so before you start typing you see the word username, and once you start typing the text disappears.

For the password field we do the same thing, but for the css class we specify "icon-login-password".


Finally your login screen looks like this:


Great? Absolutely - and so easy with APEX 5.0!

What's next? Is there anything better? Euh... yes, what about live validation?
Sure we can do that in APEX 5.0 without too much hassle :)) Thanks once again APEX Dev Team!

In the item, make sure it is set to Value Required and add the following span in the Post Text:


That will give you a nice visual indication if you entered text:


Cool? Creating login pages in APEX 5.0 is ... (you fill in the word)

Interested in more? We're doing an APEX 5.0 UI Training in May.
Categories: Development

Delphix User Group Presentation

Bobby Durrett's DBA Blog - Wed, 2015-03-11 16:30

My Delphix user group presentation went well today. 65 people attended.  It was great to have so much participation.

Here are links to my PowerPoint slides and a recording of the WebEx:

Slides: PowerPoint

Recording: WebEx

Also, I want to thank two Delphix employees, Ann Togasaki and Matthew Yeh.  Ann did a great job of converting my text bullet points into a visually appealing PowerPoint.  She also translated my hand drawn images into useful drawings.  Matthew did an amazing job of taking my bullet points and my notes and adding meaningful graphics to my text-only slides.

I could not have put the PowerPoint together in time without Ann and Matthew’s help and they did a great job.

Also, for the first time I wrote out my script word for word and added it to the notes on the slides.  So, you can see what I intended to say with each slide.

Thank you to Adam Leventhal of Delphix for inviting me to do this first Delphix user group WebEx presentation.  It was a great experience for me and I hope that it was useful to the user community as well.

– Bobby

Categories: DBA Blogs

Webcast Replay: Public Sector FMW: Mobility Solutions – Re-Think Mobile

WebCenter Team - Wed, 2015-03-11 14:48

Mobile is the digital disruptor that has transformed industries and organizations big and small. Mobile transformations are everywhere, across all industries in organizations of all sizes. The enterprise mobile market is expected to bring in $140 billion by 2020, and yet today 7 in 10 enterprises are still struggling to keep pace with new mobile devices and systems. We know that access to relevant information, anywhere and anytime is expected, yet connecting to back-end systems and securing the corporate data is both complex and a necessity.

Watch this webinar to learn how customers are re-thinking their enterprise mobile strategy and unifying the client, content, context, security and cloud in their enterprise mobile strategy. Through case studies and live demonstrations, Oracle Gold Partner 3Di will present how customers like you have successfully addressed these questions using Oracle Technologies and 3Di's solutions, innovations and services.

Find out more here

Product links:

  • http://www.oracle.com/technetwork/middleware/webcenter/suite/overview/index.html
  • http://www.oracle.com/us/technologies/bpm/overview/index.html
  • http://www.oracle.com/technetwork/developer-tools/maf/overview/index.html

Case Studies:

  • https://blogs.oracle.com/webcenter/entry/los_angeles_department_of_building
  • https://blogs.oracle.com/fusionmiddleware/entry/ladwp_transformed_customer_experience_with


Three Weeks with the Nike+ Fuelband SE

Oracle AppsLab - Wed, 2015-03-11 11:42

I don’t like wearing stuff on my wrist, but in my ongoing quest to learn more about the wearables our users wear, I have embarked on a journey.

For science! And for better living through math, a.k.a. the quantified self.

And because I’ll be at HCM World later this month talking about wearables, and because wearables are a thing, and we have a Storify to prove it, and we need to understand them better, and the Apple Watch is coming (squee!) to save us all from our phones and restore good old face time (not that Facetime) and and and. Just keep reading.

Moving on, I just finished wearing the Nike+ Fuelband SE for three weeks, and today, I’m starting on a new wearable. It’s a surprise, just wait three weeks.

Now that I’ve compiled a fair amount of anecdotal data, I figured a loosely organized manifest of observations (not quite a review) was in order.

The band

The Fuelband isn’t my first fitness tracker; you might recall I wore the Misfit Shine for a few months. Unlike the minimalist Shine, the Fuelband has quite a few more bells and whistles, starting with its snazzy display.

Check out a teardown of the nifty little bracelet, some pretty impressive stuff inside there, not bad for a shoe and apparel company.

I’ve always admired the design aspects of Nike’s wearables, dating back to 2012 when Noel (@noelportugal) first started wearing one. So, it was a bit sad to hear about a year ago that Nike was closing that division.

Turns out the Fuelband wasn’t dead, and when Nike finally dropped an Android version of the Nike+ Fuelband app, I sprang into action, quite literally.

Anyway, the band didn’t disappoint. It’s lightweight and can be resized using a nifty set of links that can be added or removed.


The fit wasn’t terribly tight, and the band is surprisingly rigid, which eventually caused a couple areas on my wrist to rub a little raw, no biggie.

The biggest surprise was the first pinch I got closing the clasp. After a while, it got easier to close and less pinchy, but man that first one was a zinger.

The battery life was good, something that I initially worried about, lasting about a week per full charge. Nike provides an adapter cord, but the band’s clasp can be plugged directly into  a USB port, which is a cool feature, albeit a bit awkward looking.

It’s water-resistant too, which is a nice plus.

Frankly, the band is very much the same one that Noel showed me in 2012, and the lack of advancement is one of the complaints users have had over the years.

The app and data

Entering into this, I fully expected to be sucked back into the statistical vortex that consumed me with the Misfit Shine, and yeah, that happened again. At least, I knew what to expect this time.

Initial setup of the band requires a computer and a software download, which isn’t ideal. Once that was out of the way, I could do everything using the mobile app.

The app worked flawlessly, and it looks good, more good design from Nike. I can’t remember any sync issues or crashes during the three-week period. Surprising, considering Nike resisted Android for so long. I guess I expected their foray into Android to be janky.

I did find one little annoyance. The app doesn’t support the Android Gallery for adding a profile picture, but that’s the only quibble I have.

Everything on the app is easily figured out; there’s a point system, NikeFuel. The band calculates steps and calories too, but NikeFuel is Nike’s attempt to normalize effort for specific activities, which also allows for measurement and competition among participants.

The default NikeFuel goal for each day is 2,000, a number that can be configured. I left it at 2,000 because I found that to be easy to reach.

The app includes Sessions too, which allow the wearer to specify the type of activity s/he is doing. I suppose this makes the NikeFuel calculation more accurate. I used Sessions as a way to categorize and compare workouts.

I tested a few Session types and was stunned to discover that the elliptical earned me less than half the NikeFuel of running on a treadmill for the same duration.


Update: Forgot to mention that the app communicates in real time with the band (vs. periodic syncing), so you can watch your NikeFuel increase during a workout, pretty cool.

Overall, the Android app and the web app at nikeplus.com are both well-done and intuitive. There’s a community aspect too, but that’s not for me. Although I did enjoy watching my progress vs. other men my age in the web app.

One missing feature of the Fuelband, at least compared to its competition, is the lack of sleep tracking. I didn’t really miss this at first, but now that I have it again, with the surprise wearable I’m testing now, I’m realizing I want it.

Honestly, I was a bit sad to take off the Fuelband after investing three weeks into it. Turns out, I really liked wearing it. I even pondered continuing its testing and wearing multiple devices to do an apples-to-apples comparison, but Ultan (@ultan) makes that look good. I can’t.

So, stay tuned for more wearable reviews, and find me at HCM World if you’re attending.

Anything to add? Find the comments.

#db12c now certified for #em12c repository (MOS Note: 1987905.1) with some restrictions

DBASolved - Wed, 2015-03-11 11:06

Last October (2014), at Oracle Open World 2014, I posted about a discussion where there was confusion about whether Oracle Database 12c was supported as the Oracle Management Repository (OMR).  At the time, Oracle had put a temporary suspension on support for the OMR running on Oracle Database 12c.

Over the last week or so, in discussions with some friends, I heard that there may be an announcement on this topic soon.  As of yesterday, I was provided a MOS note number to reference (1987905.1) for OMR support on database 12c.  In checking out the note, it appears that the OMR can now be run on a database 12c instance (12.1.0.2) with some restrictions.

These restrictions are:

  • Must apply database patch 20243268
  • Must apply patchset 12.1.0.2.1 (OCT PSU) or later

This note (1987905.1) is welcomed by many in the community who want to build their OMS on the latest database version.  What is missing from the note is whether installing the OMR into a pluggable database (PDB) is supported.  Guess the only way to find out is to try building a new Oracle Enterprise Manager 12c on top of a pluggable database and see what happens.  At least for now, Oracle Database 12c is supported as the OMR.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Flashback Logging

Jonathan Lewis - Wed, 2015-03-11 09:21

One of the waits that is specific to ASSM (automatic segment space management) is the “enq: FB – contention” wait. You find that the “FB” enqueue has the following description and wait information when you query v$lock_type, and v$event_name:


SQL> execute print_table('select * from v$lock_type where type = ''FB''')
TYPE                          : FB
NAME                          : Format Block
ID1_TAG                       : tablespace #
ID2_TAG                       : dba
IS_USER                       : NO
DESCRIPTION                   : Ensures that only one process can format data blocks in auto segment space managed tablespaces

SQL> execute print_table('select * from v$event_name where name like ''enq: FB%''')
EVENT#                        : 806
EVENT_ID                      : 1238611814
NAME                          : enq: FB - contention
PARAMETER1                    : name|mode
PARAMETER2                    : tablespace #
PARAMETER3                    : dba
WAIT_CLASS_ID                 : 1893977003
WAIT_CLASS#                   : 0
WAIT_CLASS                    : Other

This tells us that a process will acquire the lock when it wants to format a batch of blocks in a segment in a tablespace using ASSM – and prior experience tells us that this is a batch of 16 consecutive blocks in the current extent of the segment; and when we see a wait for an FB enqueue we can assume that two sessions have simultaneously tried to format the same new batch of blocks and one of them is waiting for the other to complete the format. In some ways, this wait can be viewed (like the “read by other session” wait) in a positive light – if the second session weren’t waiting for the first session to complete the block format it would have to do the formatting itself, which means the end-user has a reduced response time. On the other hand the set of 16 blocks picked by a session is dependent on its process id, so the second session might have picked a different set of 16 blocks to format, which means in the elapsed time of one format call the segment could have had 32 blocks formatted – this wouldn’t have improved the end-user’s response time, but it would mean that more time would pass before another session had to spend time formatting blocks. Basically, in a highly concurrent system, there’s not a lot you can do about FB waits (unless, of course, you do some clever partitioning of the hot objects).

There is actually one set of circumstances where you can have some control of how much time is spent on the wait, but before I mention it I’d like to point out a couple more details about the event itself. First, the parameter3/id2_tag is a little misleading: you can work out which blocks are being formatted (if you really need to), but the “dba” is NOT a data block address (which you might think if you look at the name and a few values). There is a special case when the FB enqueue is being held while you format the blocks in the 64KB extents that you get from system allocated extents, and there’s probably a special case (which I haven’t bothered to examine) if you create a tablespace with uniform extents that aren’t a multiple of 16 blocks, but in the general case the “dba” consists of two parts – a base “data block address” and a single (hex) digit offset identifying which batch of 16 blocks will be formatted.

For example: a value of 0x01800242 means start at data block address 0x01800240, count forward 2 * 16 blocks then format 16 blocks from that point onwards. Since the last digit can only range from 0x0 to 0xf this means the first 7 (hex) digits of a “dba” can only reference 16 batches of 16 blocks, i.e. 256 blocks. It’s not coincidence (I assume) that a single bitmap space management block can only cover 256 blocks in a segment – the FB enqueue is tied very closely to the bitmap block.
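As a worked illustration of that arithmetic (this query is not from the original note – it just decodes the example value above, and uses DBMS_UTILITY to translate the first block to be formatted into a file and block number):

with fb as (
        select to_number('01800242', 'xxxxxxxx') as id2 from dual
)
select  to_char(trunc(id2/16) * 16, 'XXXXXXXX')          as base_dba,      -- last hex digit masked off
        mod(id2, 16)                                      as batch_offset,  -- which batch of 16 blocks
        dbms_utility.data_block_address_file(
                trunc(id2/16) * 16 + mod(id2, 16) * 16)   as file#,         -- first block to be formatted
        dbms_utility.data_block_address_block(
                trunc(id2/16) * 16 + mod(id2, 16) * 16)   as block#
from    fb
;

Applying the same arithmetic to the value released in the trace further down (0x018002c2) gives file 6, block 736 – which lines up with the sixteen “db file sequential read” waits on blocks 736 to 751.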

So now it’s time to ask why this discussion of the FB enqueue appears in an article titled “Flashback Logging”. Enable the 10704 trace at level 10, along with the 10046 trace at level 8 and you’ll see. Remember that Oracle may have to log the old version of a block before modifying it and if it’s a block that’s being reused it may contribute to “physical reads for flashback new” – here’s a trace of a “format block” event:


*** 2015-03-10 12:50:35.496
ksucti: init session DID from txn DID:
ksqgtl:
        ksqlkdid: 0001-0023-00000014

*** 2015-03-10 12:50:35.496
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0023-00000014
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0023-00000014
ksqgtl: RETURNS 0
WAIT #140627501114184: nam='db file sequential read' ela= 4217 file#=6 block#=736 blocks=1 obj#=192544 tim=1425991835501051
WAIT #140627501114184: nam='db file sequential read' ela= 674 file#=6 block#=737 blocks=1 obj#=192544 tim=1425991835501761
WAIT #140627501114184: nam='db file sequential read' ela= 486 file#=6 block#=738 blocks=1 obj#=192544 tim=1425991835502278
WAIT #140627501114184: nam='db file sequential read' ela= 522 file#=6 block#=739 blocks=1 obj#=192544 tim=1425991835502831
WAIT #140627501114184: nam='db file sequential read' ela= 460 file#=6 block#=740 blocks=1 obj#=192544 tim=1425991835503326
WAIT #140627501114184: nam='db file sequential read' ela= 1148 file#=6 block#=741 blocks=1 obj#=192544 tim=1425991835504506
WAIT #140627501114184: nam='db file sequential read' ela= 443 file#=6 block#=742 blocks=1 obj#=192544 tim=1425991835504990
WAIT #140627501114184: nam='db file sequential read' ela= 455 file#=6 block#=743 blocks=1 obj#=192544 tim=1425991835505477
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=744 blocks=1 obj#=192544 tim=1425991835505985
WAIT #140627501114184: nam='db file sequential read' ela= 591 file#=6 block#=745 blocks=1 obj#=192544 tim=1425991835506615
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=746 blocks=1 obj#=192544 tim=1425991835507157
WAIT #140627501114184: nam='db file sequential read' ela= 489 file#=6 block#=747 blocks=1 obj#=192544 tim=1425991835507684
WAIT #140627501114184: nam='db file sequential read' ela= 375 file#=6 block#=748 blocks=1 obj#=192544 tim=1425991835508101
WAIT #140627501114184: nam='db file sequential read' ela= 463 file#=6 block#=749 blocks=1 obj#=192544 tim=1425991835508619
WAIT #140627501114184: nam='db file sequential read' ela= 685 file#=6 block#=750 blocks=1 obj#=192544 tim=1425991835509400
WAIT #140627501114184: nam='db file sequential read' ela= 407 file#=6 block#=751 blocks=1 obj#=192544 tim=1425991835509841

*** 2015-03-10 12:50:35.509
ksqrcl: FB,16,18002c2
ksqrcl: returns 0

Note: we acquire the lock (ksqgtl), read 16 blocks by “db file sequential read”, write them to the flashback log (buffer), format them in memory, release the lock (ksqrcl). That lock can be held for quite a long time – in this case 13 milliseconds. Fortunately all the single block reads after the first have been accelerated by O/S prefetching; your timings may vary.

The higher the level of concurrent activity the more likely it is that processes will collide trying to format the same 16 blocks (the lock is exclusive, so the second will request and wait, then find that the blocks are already formatted when it finally gets the lock). This brings me to the special case where waits for the FB enqueue might have a noticeable impact … if you’re running parallel DML and Oracle decides to use “High Water Mark Brokering”, which means the parallel slaves are inserting data into a single segment instead of each using its own private segment and leaving the query co-ordinator to clean up round the edges afterwards. I think this is most likely to happen if you have a tablespace using fairly large extents and Oracle thinks you’re going to process a relatively small amount of data (e.g. small indexes on large tables) – the trade-off is between collisions between processes and wasted space from the private segments.


Brian Whitmer No Longer in Operational Role at Instructure

Michael Feldstein - Wed, 2015-03-11 09:17

By Phil HillMore Posts (301)

Just over a year and a half ago, Devlin Daley left Instructure, the company he co-founded. It turns out that both founders have made changes as Brian Whitmer, the other company co-founder, left his operational role in 2014 but is still on the board of directors. For some context from the 2013 post:

Instructure was founded in 2008 by Brian Whitmer and Devlin Daley. At the time Brian and Devlin were graduate students at BYU who had just taken a class taught by Josh Coates, where their assignment was to come up with a product and business model to address a specific challenge. Brian and Devlin chose the LMS market based on the poor designs and older architectures dominating the market. This design led to the founding of Instructure, with Josh eventually providing seed funding and becoming CEO by 2010.

Brian had a lead role until last year for Instructure’s usability design and for its open architecture and support for LTI standards.

The reason for Brian’s departure (based on both Brian’s comments and Instructure statements) is his family. Brian’s daughter has Rett Syndrome:

Rett syndrome is a rare non-inherited genetic postnatal neurological disorder that occurs almost exclusively in girls and leads to severe impairments, affecting nearly every aspect of the child’s life: their ability to speak, walk, eat, and even breathe easily.

As Instructure grew, Devlin became the road show guy while Brian stayed mostly at home, largely due to family. Brian’s personal experiences have led him to create a new company: CoughDrop.

Some people are hard to hear — through no fault of their own. Disabilities like autism, cerebral palsy, Down syndrome, Angelman syndrome and Rett syndrome make it harder for many individuals to communicate on their own. Many people use Augmentative and Alternative Communication (AAC) tools in order to help make their voices heard.

We work to help bring out the voices of those with complex communication needs through good tech that actually makes things easier and supports everyone in helping the individual succeed.

This work sounds a lot like early Instructure, as Brian related to me this week.

Augmentative Communication is a lot like LMS space was, in need of a reminder of how things can be better.

By the middle of 2014, Brian had left all operational duties, although he remains on the board (and plans to stay on the board and act as an adviser).

How will this affect Instructure? I would look at Brian’s key roles in usability and open platform to see if Instructure keeps up his vision. From my view the usability is just baked into the company’s DNA[1] and will likely not suffer. The question is more on the open side. Brian led the initiative for the App Center as I described in 2013:

The key idea is that the platform is built to easily add and support multiple applications. The apps themselves will come from EduAppCenter, a website that launched this past week. There are already more than 100 apps available, with the apps built on top of the Learning Tools Interoperability (LTI) specification from IMS global learning consortium. There are educational apps available (e.g. Khan Academy, CourseSmart, Piazza, the big publishers, Merlot) as well as general-purpose tools (e.g. YouTube, Dropbox, WordPress, Wikipedia).

The apps themselves are wrappers that pre-integrate and give structured access to each of these tools. Since LTI is the most far-reaching ed tech specification, most of the apps should work on other LMS systems. The concept is that other LMS vendors will also sign on to the edu-apps site, truly making them interoperable. Whether that happens in reality remains to be seen.

What the App Center will bring once it is released is the simple ability for Canvas end-users to add the apps themselves. If a faculty adds an app, it will be available for their courses, independent of whether any other faculty use that set up. The same applies for students who might, for example, prefer to use Dropbox to organize and share files rather than native LMS capabilities.

The actual adoption by faculty and institutions of this capability takes far longer than people writing about it (myself included) would desire. It takes time and persistence to keep up the faith. The biggest risk that Instructure faces by losing Brian’s operational role is whether they will keep this vision and maintain their support for open standards and third-party apps – opening up the walled garden, in other words.

Melissa Loble, Senior Director of Partners & Programs at Instructure[2], will play a key role in keeping this open vision alive. I have not heard anything indicating that Instructure is changing, but this is a risk from losing a founder who internally ‘owned’ this vision.

I plan to share some other HR news from the ed tech market in future posts, but for now I wish Brian the best with his new venture – he is one of the truly good guys in ed tech.

Update: I should have given credit to Audrey Watters, who prompted me to get a clear answer on this subject.

  1. Much to Brian’s credit
  2. Formerly Associate Dean of Distance Ed at UC Irvine and key player in Walking Dead MOOC

The post Brian Whitmer No Longer in Operational Role at Instructure appeared first on e-Literate.

APEX Connect June 2015

Denes Kubicek - Wed, 2015-03-11 07:39
APEX Connect in Düsseldorf in June 2015 is going to be the biggest APEX-only event in Germany so far. You should consider joining us.

APEX Connect in Düsseldorf in June 2015 will be the biggest APEX meetup yet. Sign up and help us make it even bigger and more successful. Many interesting talks and, above all, many interesting personalities from the APEX world will be there. It is an excellent opportunity to learn a lot of new things. The registration form can be found here. The prices are moderate and entirely reasonable.

Categories: Development

Announcement: DB12c certified with EM12c

Jean-Philippe Pinte - Wed, 2015-03-11 00:55


It is now possible to use a 12.1.0.2.1 database for the Oracle Enterprise Manager 12.1.0.4 repository (OMR): http://ora.cl/tY3


Adding MySQL driver to Spring Boot CLI Groovy Demo

Pas Apicella - Tue, 2015-03-10 15:18
I previously showed how you can use the Spring Boot CLI to create a simple RESTful application in Groovy that says Hello World, as shown in the link below.

http://theblasfrompas.blogspot.com.au/2015/02/spring-boot-hello-world-from-command.html

If you want to extend that demo to include an additional dependency JAR file, such as the MySQL driver, you would do the following:

1. You can add extensions to the CLI using the install command, as shown below, to add the MySQL driver. It is installed in the lib folder of the Spring Boot CLI installation directory.

> spring install mysql:mysql-connector-java:5.1.34

2. Package the application into a JAR which now includes the MySQL driver JAR file, enabling you to connect to a MySQL instance from your application. You still need to write the connection code yourself, but the driver is now packaged in the generated JAR so that code will work.

> spring jar -cp /usr/local/Cellar/springboot/1.2.1.RELEASE/lib/mysql-connector-java-5.1.34.jar hello.jar hello.groovy

Note: If you find that you reach the limit of the CLI tool, you will probably want to look at converting your application to a full Gradle- or Maven-built “groovy project”.

More Information

http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#cli-using-the-cli


Categories: Fusion Middleware

Dana Center and New Mathways Project: Taking curriculum innovations to scale

Michael Feldstein - Tue, 2015-03-10 15:01

By Phil HillMore Posts (301)

Last week the University of Texas’ Dana Center announced a new initiative to digitize their print-based math curriculum and expand to all 50 community colleges in Texas. The New Mathways Project is ‘built around three mathematics pathways and a supporting student success course’, and they have already developed curriculum in print:

Tinkering with the traditional sequence of math courses has long been a controversial idea in academic circles, with proponents of algebra saying it teaches valuable reasoning skills. But many two-year college students are adults seeking a credential that will improve their job prospects. “The idea that they should be broadly prepared isn’t as compelling as organizing programs that help them get a first [better-paying] job, with an eye on their second and third,” says Uri Treisman, executive director of the Charles A. Dana Center at UT Austin, which spearheads the New Mathways Project. [snip]

Treisman’s team has worked with community-college faculty to create three alternatives to the traditional math sequence. The first two pathways, which are meant for humanities majors, lead to a college-level class in statistics or quantitative reasoning. The third, which is still in development, will be meant for science, technology, engineering, and math majors, and will focus more on algebra. All three pathways are meant for students who would typically place into elementary algebra, just one level below intermediate algebra.

When starting, the original problem was viewed as ‘fixing developmental math’. As they got into the design, the team restated the problem to be solved as ‘developing coherent pathways through gateway courses into modern degrees of study that lead to economic mobility’. The Dana Center worked with the Texas Association of Community Colleges to develop the curriculum, which is focused on active learning and group work that can be tied to the real world.

The Dana Center approach is based on four principles:

  • Courses students take in college math should be connected to their field of study.
  • The curriculum should accelerate or compress to allow students to move through developmental courses in one year.
  • Courses should align more closely with student support, and sophisticated learning support will be connected to campus support structures.
  • Materials should be connected to a context-sensitive improvement strategy.

What they have found is that there are multiple programs nationwide working roughly along the same principles, including the California improvement project, the Accelerated learning project at Baltimore City College, and work in Tennessee at Austin Peay College. In their view, the fact that independent bodies have come to similar conclusions adds validity to the overall concept.

One interesting aspect of the project is that it is targeted for an entire state’s community college system – this is not a pilot approach. After winning a Request for Proposal selection, Pearson[1] will integrate the active-learning content into a customized mix of MyMathLabs, Learning Catalytics, StatCrunch and CourseConnect tools. Given the Dana Center’s small size, one differentiator for Pearson was their size and ability to help a program move to scale.

Another interesting aspect is the partnership approach with TACC. As shared on the web site:

  • A commitment to reform: The TACC colleges have agreed to provide seed money for the project over 10 years, demonstrating a long-term commitment to the project.
  • Input from the field: TACC member institutions will serve as codevelopers, working with the Dana Center to develop the NMP course materials, tools, and services. They will also serve as implementation sites. This collaboration with practitioners in the field is critical to building a program informed by the people who will actually use it.
  • Alignment of state and institutional policies: Through its role as an advocate for community colleges, TACC can connect state and local leaders to develop policies to support the NMP goal of accelerated progress to and through coursework to attain a degree.

MDRC, the same group analyzing CUNY’s ASAP program, will provide independent reporting of the results. There should be implementation data available by the end of the year, and randomized controlled studies to be released in 2016.

To me, this is a very interesting initiative to watch. Given MDRC’s history of thorough documentation, we should be able to learn plenty of lessons from the state-wide deployment.

  1. Disclosure: Pearson is a client of MindWires Consulting.

The post Dana Center and New Mathways Project: Taking curriculum innovations to scale appeared first on e-Literate.

Smart HR IT Brings New Insights to Age-Old HR Challenges

Linda Fishman Hoyle - Tue, 2015-03-10 14:44

A Guest Post written by Oracle's Aaron Lazenby, Profit Magazine

In the age of digital disruption, there’s plenty to distract executives from their core mission. Chief human resource officers (CHROs) are no exception; new technologies (such as big data analytics, social recruiting, and gamification) have the potential to transform the people function. But executives have to balance the adoption of new technologies with the demands of maintaining the current business.

For Joyce Westerdahl, (pictured left), CHRO at Oracle, the key is keeping her eye on the business. “Stay absolutely focused on what’s happening at your own company and in your own industry,” she says. Here, Westerdahl talks to Profit about how she assesses her department’s technology needs, how she approaches new IT trends, and what HR managers should be paying attention to in order to succeed.

Lazenby (pictured right): What drives Oracle’s talent management strategy?

Westerdahl: Our main focus is to make Oracle a destination employer. We want this to be a place where people can grow their careers; a place where employees are challenged but have the support they need to get the job done. And we want our employees to stay—for the sake of their own professional development and the growth of the company. Having the right technology in place—to automate processes, improve the employee experience, and add new insights—is a key part of how we make that possible.

But when I look at the talent management challenges we face at Oracle, for example, I don’t think things have changed that much in the past 20 years. There has always been a war for talent. Even during tough recessions, we are always competing for top talent. We are always looking for better ways to recruit the right people. What have changed are the tools we have at our disposal for finding and engaging them.

Lazenby: How has Oracle’s HR strategy influenced the products the company creates?

Westerdahl: Our acquisition journey has created an incredibly diverse environment, from both a talent and a technology perspective. When we add companies with different business models, platforms, and cultures to the Oracle family, we have a unique opportunity to learn about other businesses from the inside and translate that into new product functionality. Integrating and onboarding acquired employees and transitioning the HR processes and technologies becomes a stream of knowledge, best practices, and requirements that feeds into product development.

When I look back at two of our key acquisitions—PeopleSoft in 2005 and Sun in 2010—I am reminded of how big a difference technology can make in HR’s ability to support a robust acquisition strategy. The offer process is complex and critical in an acquisition. With PeopleSoft, we created 7,500 US paper offer packages for new employees and loaded them onto FedEx trucks for distribution. It was a resource- and time-intensive, manual process that took three weeks. Five years later when we acquired Sun, we used technology to automate the process and it took less than an hour to generate more than 11,000 US offers. We were able to generate and distribute offers and receive acceptances in about a week.

Lazenby: How are trends like big data affecting HR?

Westerdahl: The volume of new HR data has become astounding over the past couple years. Being able to harness and leverage data is critical to HR’s ability to add strategic business value. When we develop an analytics strategy for Oracle HR, we start with a strategy around what the business wants to achieve. Then we translate that into data requirements and measures: What do we need to know? How can we measure our efforts to make sure we’re on track? What are the key trend indicators, and what is the process for translating the data into actionable HR efforts?

There is tremendous value in being able to use data to improve the employee experience so we can attract, engage, and develop the right talent for our business. For example, an employee engagement survey we conducted revealed that new employees were having a hard time onboarding at Oracle. We could also see this reflected in data that measures time to productivity for new employees. But with the survey, we had another measure that showed us not only the productivity aspect but also the employee frustration aspect. It’s a richer view of the problem, which helped us shape the right solution.

Lazenby: What do you think HR managers can learn from Oracle’s experience?

Westerdahl: I think the challenges HR faces are mostly the same as they always have been: how to recruit the best talent, how to onboard recruits when we are growing quickly, and how to retain and develop employees. But now technology supports new ways of doing things, so you have to decide how to use IT to solve these age-old HR challenges within your business. The key is turning things upside down and viewing things from a fresh perspective. There is no one-size-fits-all approach.

Converting a Classic PIA Component to Fluid UI

PeopleSoft Technology Blog - Tue, 2015-03-10 14:17

PeopleSoft just published a new Red-Paper that describes how to convert an existing Classic Component to a new Fluid UI Component.  It's a great resource for anyone that is interested in learning more about Fluid UI and the steps required to move a Pixel-Perfect Component to a new responsive Fluid UI Component.
The Red-Paper is called Converting Classic PIA Components to PeopleSoft Fluid User Interface.  You can find it on My Oracle Support, document Id 1984833.1

There are a few things I want to point out that are in the document. 

  1. It gives great instruction with an example of how to convert a Component to Fluid UI. 
  2. There is a very important section on classic controls that are not supported in Fluid UI, or require some type of conversion.  I can't stress how important that is.
  3. There is an appendix that identifies the delivered PeopleSoft style classes.  You'll find that appendix extremely valuable when you're looking for the right style to get the UI you want.

Of course converting an existing component is only one way to take advantage of the new Fluid UI.  Refactoring existing components to optimize them for the different form factors, or building new components, are certainly possible approaches.  In many cases, developers want to leverage existing components to take advantage of tried and tested business logic.


Notes on HBase

DBMS2 - Tue, 2015-03-10 12:24

I talked with a couple of Cloudera folks about HBase last week. Let me frame things by saying:

  • The closest thing to an HBase company, ala MongoDB/MongoDB or DataStax/Cassandra, is Cloudera.
  • Cloudera still uses a figure of 20% of its customers being HBase-centric.
  • HBaseCon and so on notwithstanding, that figure isn’t really reflected in Cloudera’s marketing efforts. Cloudera’s marketing commitment to HBase has never risen to nearly the level of MongoDB’s or DataStax’s push behind their respective core products.
  • With Cloudera’s move to “zero/one/many” pricing, Cloudera salespeople have little incentive to push HBase hard to accounts other than HBase-first buyers.

Also:

  • Cloudera no longer dominates HBase development, if it ever did.
    • Cloudera is the single biggest contributor to HBase, by its count, but doesn’t make a majority of the contributions on its own.
    • Cloudera sees Hortonworks as having become a strong HBase contributor.
    • Intel is also a strong contributor, as are end user organizations such as Chinese telcos. Not coincidentally, Intel was a major Hadoop provider in China before the Intel/Cloudera deal.
  • As far as Cloudera is concerned, HBase is just one data storage technology of several, focused on high-volume, high-concurrency, low-latency short-request processing. Cloudera thinks this is OK because of HBase’s strong integration with the rest of the Hadoop stack.
  • Others who may be inclined to disagree are in several cases doing projects on top of HBase to extend its reach. (In particular, please see the discussion below about Apache Phoenix and Trafodion, both of which want to offer relational-like functionality.)

Cloudera’s views on HBase history — in response to the priorities I brought to the conversation — include:

  • HBase initially favored consistency over performance/availability, while Cassandra initially favored the opposite choice. Both products, however, have subsequently become more tunable in those tradeoffs.
  • Cloudera’s initial contributions to HBase focused on replication, disaster recovery and so on. I guess that could be summarized as “scaling”.
  • Hortonworks’ early HBase contributions included (but were not necessarily limited to):
    • Making recovery much faster (10s of seconds or less, rather than minutes or more).
    • Some of that consistency vs. availability tuning.
  • “Coprocessors” were added to HBase ~3 years ago, to add extensibility, with the first use being in security/permissions.
  • With more typical marketing-oriented version numbers:
    • HBase .90, the first release that did a good job on durability, could have been 1.0.
    • HBase .92 and .94, which introduced coprocessors, could have been Version 2.
    • HBase .96 and .98 could have been Version 3.
    • The recent HBase 1.0 could have been 4.0.

The HBase roadmap includes:

  • A kind of BLOB/CLOB (Binary/Character Large OBject) support.
    • Intel is heavily involved in this feature.
    • The initial limit is 10 megabytes or so, due to some limitations in the API (I didn’t ask why that made sense). This happens to be all the motivating Chinese customer needs for the traffic photographs it wants to store.
  • Various kinds of “multi-tenancy” support (multi-tenancy is one of those terms whose meaning is getting stretched beyond recognition), including:
    • Mixed workload support (short-request and analytic) on the same nodes.
    • Mixed workload support on different nodes in the same cluster.
    • Security between different apps in the same cluster.
  • (Still in the design phase) Bottleneck Whack-A-Mole, with goals including but not limited to:
    • Scale-out beyond the current assumed limit of ~1200 nodes.
    • More predictable performance, based on smaller partition sizes.
  • (Possibly) Multi-data-center fail-over.

Not on the HBase roadmap per se are global/secondary indexes. Rather, we talked about projects on top of HBase which are meant to provide those. One is Apache Phoenix, which supposedly:

  • Makes it simple to manage compound keys. (E.g., City/State/ZipCode)
  • Provides global secondary indexes (but not in a fully ACID way).
  • Offers some very basic JOIN support.
  • Provides a JDBC interface.
  • Offers efficiencies in storage utilization, scan optimizations, and aggregate calculations.
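For readers who haven't used Phoenix, here is a rough sketch of what those claims look like in its SQL dialect – the table, column and index names are invented for illustration, and the syntax is a sketch rather than something checked against a specific release:

-- Compound row key (e.g. City/State/ZipCode) declared as a composite primary key
CREATE TABLE service_locations (
    city      VARCHAR NOT NULL,
    state     VARCHAR NOT NULL,
    zip_code  VARCHAR NOT NULL,
    provider  VARCHAR
    CONSTRAINT pk PRIMARY KEY (city, state, zip_code));

-- Global secondary index, maintained by Phoenix in HBase tables of its own
CREATE INDEX service_locations_zip ON service_locations (zip_code);

-- Issued through Phoenix's JDBC driver; the index lets this avoid a full table scan
SELECT city, state FROM service_locations WHERE zip_code = '02134';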

Another such project is Trafodion — supposedly the Welsh word for “transaction” — open sourced by HP. This seems to be based on NonStop SQL and Neoview code, which counter-intuitively have always been joined at the hip.

There was a lot more to the conversation, but I’ll stop here for two reasons:

  • This post is pretty long already.
  • I’m reserving some of the discussion until after I’ve chatted with vendors of other NoSQL systems.

Related link

  • My July 2011 post on HBase offers context, as do the comments on it.
Categories: Other

Some stuff on my mind, March 10, 2015

DBMS2 - Tue, 2015-03-10 10:27

I found yesterday’s news quite unpleasant.

  • A guy I knew and had a brief rivalry with in high school died of colon cancer, a disease that I’m at high risk for myself.
  • GigaOm, in my opinion the best tech publication — at least for my interests — shut down.
  • The sex discrimination trial around Kleiner Perkins is undermining some people I thought well of.

And by the way, a guy died a few days ago snorkeling at the same resort I like to go to, evidently doing less risky things than I on occasion have.

So I want to unclutter my mind a bit. Here goes.

1. There are a couple of stories involving Sam Simon and me that are too juvenile to tell on myself, even now. But I’ll say that I ran for senior class president, in a high school where the main way to campaign was via a single large poster, against a guy with enough cartoon-drawing talent to be one of the creators of the Simpsons. Oops.

2. If one suffers from ulcerative colitis as my mother did, one is at high risk of getting colon cancer, as she also did. Mine isn’t as bad as hers was, due to better tolerance for medication controlling the disease. Still, I’ve already had a double-digit number of colonoscopies in my life. They’re not fun. I need another one soon; in fact, I canceled one due to the blizzards.

Pro-tip — never, ever have a colonoscopy without some kind of anesthesia or sedation. Besides the unpleasantness, the lack of meds increases the risk that the colonoscopy will tear you open and make things worse. I learned that the hard way in New York in the early 1980s.

3. Five years ago I wrote optimistically about the evolution of the information ecosystem, specifically using the example of the IT sector. One could argue that I was right. After all: 

  • Gartner still seems to be going strong.
  • O’Reilly, Gartner and vendors probably combine to produce enough good conferences.
  • A few traditional journalists still do good work (in the areas covered by this blog Doug Henschen comes to mind).
  • A few vendor folks are talented and responsible enough to add to the discussion. A few small-operation folks — e.g. me — are still around.

Still, the GigaOm news is not encouraging.

4. As TechCrunch and Pando reported, plaintiff Ellen Pao took the stand and sounded convincing in her sexual harassment suit against Kleiner Perkins (but of course she hadn’t been cross-examined yet). Apparently there was a major men-only party hosted by partner Al Gore, a candidate I first supported in 1988. And partner Ray Lane, somebody who at Oracle showed tremendous management effectiveness, evidently didn’t do much to deal with Pao’s situation.

Blech.

At some point I want to write about a few women who were prominent in my part of the tech industry in the 1980s — at least Ann Winblad, Esther Dyson, and Sandy Kurtzig, maybe analyst/investment banker folks Cristina Morgan and Ruthann Quindlen as well. We’ve come a long way since those days (when, in particular, I could briefly list a significant fraction of the important women in the industry). There seems to be a lot further yet to go.

5. All that said — I’m indeed working on some cool stuff. Some of it is evident from recent posts. The rest may be reflected in an upcoming set of posts that focus on NoSQL, business intelligence, and — I hope — the intersection of the two areas.

6. Speaking of recent posts, I did one on marketing for young companies that brings a lot of advice and tips together. I think it’s close to being a must-read.

Categories: Other

Loading CSV files with special characters in Oracle DB

Dimitri Gielis - Tue, 2015-03-10 10:08
I often need to load the data of Excel or CSV files into the Oracle Database.

Ever got those annoying question marks when you try to load the data? Or, instead of question marks, do you just get empty blanks when the file uses special characters? Here's an example:


My database characterset is UTF-8, so ideally you want to load your data UTF-8 encoded.
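If you're not sure which characterset your own database uses, a quick query against the standard data dictionary view will tell you:

select value
  from nls_database_parameters
 where parameter = 'NLS_CHARACTERSET';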
With Excel I've not found an easy way to specify UTF-8 as the encoding when saving to a CSV file. Although in Excel (OSX) - Preferences - General - Web Options - Encoding I specified UTF-8, it still saves the file as Western (Mac OS Roman).
I've two workarounds I use to get around the issue. The first is to open the file in a text editor, e.g. BBEdit, click the encoding option and select UTF-8.

Another way is to open Terminal and use the iconv command line tool to convert the file

iconv -t UTF8 -f MACROMAN < file.csv > file-utf8.csv

If you get a CSV file and you want to import it in Excel first, the best way I found is to create a new Workbook and import the CSV file (instead of opening it directly). You can import either by using File - Import or Data - Get External Data - Import Text File. During the import you can specify the File origin and you can see which data format works for you.


After the manipulations in Excel you can save again as CSV as outlined above to make sure your resulting CSV file is UTF-8 encoded.
Finally to import the data you can use APEX, SQL Developer or SQLcl to load your CSV file into your table.
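Another option, if you prefer to do the load in SQL, is an external table that states the characterset explicitly. This is only a sketch – the directory, file and column names are examples you would replace with your own:

create or replace directory csv_dir as '/u01/load';

create table customers_ext (
  cust_id   number,
  cust_name varchar2(100)
)
organization external (
  type oracle_loader
  default directory csv_dir
  access parameters (
    records delimited by newline
    characterset utf8
    fields terminated by ',' optionally enclosed by '"'
  )
  location ('file-utf8.csv')
);

select * from customers_ext;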
Categories: Development

PeopleTools 8.54: Performance Monitor Enhancements

David Kurtz - Tue, 2015-03-10 04:09
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Transaction History Search Component
There are a number of changes:
  • You can specify multiple system identifiers.  For example, you might be monitoring Portal, HR and CRM.  Now you can search across all of them in a single search.
    • It has always been the case that when you drill into the Performance Monitoring Unit (PMU), by clicking on the tree icon, you would see the whole of a PMU that invoked services from different systems.
  • You can also specify multiple transaction types, rather than have to search each transaction type individually.
This is a useful enhancement when searching for a specific transaction or a small number of transactions.  However, I do not think it will save you from having to query the underlying transactions table.
PPM Archive Process
The PPM archive process (PSPM_ARCHIVE) has been significantly rewritten in PeopleTools 8.54.  In many places, it still uses this expression to identify rows to be archived or purged:
%DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60)
This expands to
ROUND((CAST(( CAST(SYSTIMESTAMP AS TIMESTAMP)) AS DATE) - CAST((X.PM_MON_STRT_DTTM) AS DATE)) * 1440, 0)
   >= (PM_MAX_HIST_AGE * 24 *  60)
which has no chance of using an index.  This used to cause performance problems when the archive process had not been run for a while and the high water marks on the history tables had built up.

Now the archive process works hour by hour, and this will use the index on the timestamp column.
"... AND X.PM_MON_STRT_DTTM <= SYSDATE - PM_MAX_HIST_AGE 
and (PM_MON_STRT_DTTM) >= %Datetimein('" | DateTimeValue(&StTime) | "')
and (PM_MON_STRT_DTTM) <= %DateTimeIn('" | DateTimeValue(&EndTime) | "')"
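To see why that matters, compare the shape of the two predicates. The sketch below uses PSPMTRANSHIST as an example history table and a literal age in place of PM_MAX_HIST_AGE:

-- Old style: the functions wrapped around PM_MON_STRT_DTTM prevent use of an index on that column
SELECT COUNT(*) FROM pspmtranshist x
WHERE ROUND((CAST(CAST(SYSTIMESTAMP AS TIMESTAMP) AS DATE) - CAST(x.PM_MON_STRT_DTTM AS DATE)) * 1440, 0) >= 7 * 24 * 60;

-- New style: a simple range predicate on the column, so an index range scan is possible
SELECT COUNT(*) FROM pspmtranshist x
WHERE x.PM_MON_STRT_DTTM >= TIMESTAMP '2015-03-01 00:00:00'
AND   x.PM_MON_STRT_DTTM <  TIMESTAMP '2015-03-01 01:00:00';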
Tuxedo Queuing
Since Performance Monitor was first introduced, event 301 has never reported the length of the inbound message queues in Tuxedo.  The reported queue length was always zero.  This may have been fixed in PeopleTools 8.53, but I have only just noticed it.
Java Management Extensions (JMX) Support
There have been some additions to Performance Monitor that suggest that it will be possible to extract performance metrics using JMX.  The implication is that the Oracle Enterprise Manager Application Management Pack for PeopleSoft will be able to do this.  However, so far I haven't found any documentation. The new component is not mentioned in the PeopleTools 8.54: Performance Monitor documentation.
  • New Table
    • PS_PTPMJMXUSER - keyed on PM_AGENTID
  • New Columns
    • PSPMSYSDEFAULTS - PTPHONYKEY.  So far I have only seen it set to 0.
    • PSPMAGENT - PM_JMX_RMI_PORT.  So far only seen it set to 1
  • New Component

    ©David Kurtz, Go-Faster Consultancy Ltd.

    Log Buffer #413, A Carnival of the Vanities for DBAs

    Pythian Group - Mon, 2015-03-09 21:15

    This Log Buffer Edition scours the Internet and brings some of the fresh blog posts from Oracle, SQL Server and MySQL.

    Oracle:

    Most of Kyle’s servers tend to be Linux VMs on VMware ESX without any graphics desktops setup, so it can be disconcerting trying to install Oracle with its graphical “runInstaller” being the gateway we have to cross to achieve installation.

    Working around heartbeat issues caused by tracing or by regexp

    APEX 5 EA Impressions: Custom jQuery / jQuery UI implementations

    Introduction to the REST Service Editor, Generation (PART 2)

    Due to recent enhancements and importance within Oracle’s storage portfolio, StorageTek Storage Archive Manager 5.4 (SAM-QFS) has been renamed to Oracle Hierarchical Storage Manager (Oracle HSM) 6.0.

    SQL Server:

    There are different techniques to optimize the performance of SQL Server queries but wouldn’t it be great if we had some recommendations before we started planning or optimizing queries so that we didn’t have to start from the scratch every time? This is where you can use the Database Engine Tuning Advisor utility to get recommendations based on your workload.

    Data Mining Part 25: Microsoft Visio Add-Ins

    Stairway to Database Source Control Level 3: Working With Others (Centralized Repository)

    SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.

    Questions About SQL Server Transaction Log You Were Too Shy To Ask

    MySQL:

    The post shows how you can easily read the VCAP_SERVICES postgresql credentials within your Java Code using the maven repo. This assumes you’re using the ElephantSQL Postgresql service. A single connection won’t be ideal but for demo purposes might just be all you need.

    MariaDB 5.5.42 Overview and Highlights

    How to test if CVE-2015-0204 FREAK SSL security flaw affects you

    Using master-master for MySQL? To be frank, we need to get rid of that architecture. We are skipping the active-active setup and showing why master-master, even for failover reasons, is the wrong decision.

    Resources for Highly Available Database Clusters: ClusterControl Release Webinar, Support for Postgres, New Website and More

    Categories: DBA Blogs

    Recovering an Oracle Database with Missing Redo

    Pythian Group - Mon, 2015-03-09 21:14
    Background

    I ran into a situation where we needed to recover from an old online backup which (due to some issues with the RMAN “KEEP” command) was missing the archived redo log backups/files needed to make the backup consistent.  The client wasn’t concerned about data that changed during the backup; they were interested in checking some very old data from long before this online backup had started.

    Visualizing the scenario using a timeline (not to scale):

      |-------|------------------|---------|------------------|
      t0      t1                 t2        t3                 t4
              Data is added                                   Present
    

    The client thought that some data had become corrupted and wasn’t sure when, but knew that it wasn’t recent, so the flashback technologies were not an option.  Hence they wanted a restore of the database into a new temporary server as of time t1, which was in the distant past.

    An online (hot) backup was taken between t2 and t3 and was considered to be old enough, or close enough, to t1; however, the problem was that all archived redo log backups were missing. The client was certain that the particular data they were interested in would not have changed during the online backup.

    Hence the question is: without the necessary redo data to make the online backup consistent (between times t2 and t3) can we still open the database to extract data from prior to when the online backup began?  The official answer is “no” – the database must be made consistent to be opened.  And with an online backup the redo stream is critical to making the backed up datafiles consistent.  So without the redo vectors in the redo stream, the files cannot be made consistent with each other and hence the database cannot be opened.  However the unofficial, unsupported answer is that it can be done.

    This article covers the unsupported and unofficial methods for opening a database with consistency corruption so that certain data can be extracted.

    Other scenarios can lead to the same situation.  Basically this technique can be used to open the Oracle database any time the datafiles cannot be made consistent.

     

    Demo Setup

    To illustrate the necessary steps I’ve setup a test 12c non-container database called NONCDB.  And to simulate user transactions against it I ran a light workload using the Swingbench Order Entry (SOE) benchmark from another computer in the background.

    Before beginning any backups or recoveries I added two simple tables to the SCOTT schema and some rows to represent the “old” data (with the words “OLD DATA” in the C2 column):

    SQL> create table scott.parent (c1 int, c2 varchar2(16), constraint parent_pk primary key (c1)) tablespace users;
    
    Table created.
    
    SQL> create table scott.child (c1 int, c2 varchar2(16), foreign key (c1) references scott.parent(c1)) tablespace soe;
    
    Table created.
    
    SQL> insert into scott.parent values(1, 'OLD DATA 001');
    
    1 row created.
    
    SQL> insert into scott.parent values(2, 'OLD DATA 002');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS A');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS B');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS C');
    
    1 row created.
    
    SQL> insert into scott.child  values(2, 'OLD DETAILS D');
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    

     

Notice that I added a PK-FK referential integrity constraint and placed each table in a different tablespace so they could be backed up at different times.

    These first entries represent my “old data” from time t1.

     

    The Online Backup

The next step is to perform the online backup.  For simulation purposes I’m adjusting the steps a little to represent a real-life situation where the data in my tables is being modified while the backup is running.  Hence my steps are:

    • Run an online backup of all datafiles except for the USERS tablespace.
    • Add some more data to my test tables (hence data going into the CHILD table is after the SOE tablespace backup and the data into the PARENT table is before the USERS tablespace backup).
    • Record the current archived redo log and then delete it to simulate the lost redo data.
    • Backup the USERS tablespace.
    • Add some post backup data to the test tables.

    The actual commands executed in RMAN are:

    $ rman
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Thu Feb 26 15:59:36 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    RMAN> connect target
    
    connected to target database: NONCDB (DBID=1677380280)
    
    RMAN> backup datafile 1,2,3,5;
    
    Starting backup at 26-FEB-15
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=46 device type=DISK
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00005 name=/u01/app/oracle/oradata/NONCDB/datafile/SOE.dbf
    input datafile file number=00001 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_b2k8dsno_.dbf
    input datafile file number=00002 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_sysaux_b2k8f3d4_.dbf
    input datafile file number=00003 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_undotbs1_b2k8fcdm_.dbf
    channel ORA_DISK_1: starting piece 1 at 26-FEB-15
    channel ORA_DISK_1: finished piece 1 at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp tag=TAG20150226T155942 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:11:16
    Finished backup at 26-FEB-15
    
    Starting Control File and SPFILE Autobackup at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/autobackup/2015_02_26/o1_mf_s_872698259_bgzb0647_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 26-FEB-15
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> insert into scott.parent values (3, 'NEW DATA 003');
    
    Statement processed
    
    RMAN> insert into scott.child  values (3, 'NEW DETAILS E');
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> select sequence# from v$log where status='CURRENT';
    
     SEQUENCE#
    ----------
            68
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> alter database backup controlfile to '/tmp/controlfile_backup.bkp';
    
    Statement processed
    
    RMAN> backup datafile 4;
    
    Starting backup at 26-FEB-15
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00004 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_users_b2k8gf7d_.dbf
    channel ORA_DISK_1: starting piece 1 at 26-FEB-15
    channel ORA_DISK_1: finished piece 1 at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp tag=TAG20150226T165814 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 26-FEB-15
    
    Starting Control File and SPFILE Autobackup at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/autobackup/2015_02_26/o1_mf_s_872701095_bgzdrrrh_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 26-FEB-15
    
    RMAN> alter database backup controlfile to '/tmp/controlfile_backup.bkp';
    
    Statement processed
    
    RMAN> insert into scott.parent values (4, 'NEW DATA 004');
    
    Statement processed
    
    RMAN> insert into scott.child  values (4, 'NEW DETAILS F');
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> exit
    
    
    Recovery Manager complete.
    $
    

     

Notice in the above steps that, since I’m using Oracle Database 12c, I’m able to execute normal SQL commands from RMAN – this is a new RMAN 12c feature.

     

    Corrupting the Backup

    Now I’m going to corrupt my backup by removing one of the archived redo logs needed to make the backup consistent:

    SQL> set pages 999 lines 120 trims on tab off
    SQL> select 'rm '||name stmt from v$archived_log where sequence#=68;
    
    STMT
    ------------------------------------------------------------------------------------------------------------------------
    rm /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_68_bgzcnv04_.arc
    
    SQL> !rm /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_68_bgzcnv04_.arc
    
    SQL>
    

     

    Finally I’ll remove the OLD data to simulate the data loss (representing t4):

    SQL> select * from scott.parent order by 1;
    
            C1 C2
    ---------- ----------------
             1 OLD DATA 001
             2 OLD DATA 002
             3 NEW DATA 003
             4 NEW DATA 004
    
    SQL> select * from scott.child order by 1;
    
            C1 C2
    ---------- ----------------
             1 OLD DETAILS A
             1 OLD DETAILS B
             1 OLD DETAILS C
             2 OLD DETAILS D
             3 NEW DETAILS E
             4 NEW DETAILS F
    
    6 rows selected.
    
    SQL> delete from scott.child where c2 like 'OLD%';
    
    4 rows deleted.
    
    SQL> delete from scott.parent where c2 like 'OLD%';
    
    2 rows deleted.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    

     

    Attempting a Restore and Recovery

    Now let’s try to recover from our backup on a secondary system so we can see if we can extract that old data.

    After copying over all of the files, the first thing to do is to try a restore as per normal:

    $ rman target=/
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Mon Mar 2 08:40:12 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database (not started)
    
    RMAN> startup nomount;
    
    Oracle instance started
    
    Total System Global Area    1577058304 bytes
    
    Fixed Size                     2924832 bytes
    Variable Size                503320288 bytes
    Database Buffers            1056964608 bytes
    Redo Buffers                  13848576 bytes
    
    RMAN> restore controlfile from '/tmp/controlfile_backup.bkp';
    
    Starting restore at 02-MAR-15
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=12 device type=DISK
    
    channel ORA_DISK_1: copied control file copy
    output file name=/u01/app/oracle/oradata/NONCDB/controlfile/o1_mf_b2k8d9nq_.ctl
    output file name=/u01/app/oracle/fast_recovery_area/NONCDB/controlfile/o1_mf_b2k8d9v5_.ctl
    Finished restore at 02-MAR-15
    
    RMAN> alter database mount;
    
    Statement processed
    released channel: ORA_DISK_1
    
    RMAN> restore database;
    
    Starting restore at 02-MAR-15
    Starting implicit crosscheck backup at 02-MAR-15
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=12 device type=DISK
    Crosschecked 4 objects
    Finished implicit crosscheck backup at 02-MAR-15
    
    Starting implicit crosscheck copy at 02-MAR-15
    using channel ORA_DISK_1
    Crosschecked 2 objects
    Finished implicit crosscheck copy at 02-MAR-15
    
    searching for all files in the recovery area
    cataloging files...
    cataloging done
    
    using channel ORA_DISK_1
    
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_b2k8dsno_.dbf
    channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_sysaux_b2k8f3d4_.dbf
    channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_undotbs1_b2k8fcdm_.dbf
    channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/NONCDB/datafile/SOE.dbf
    channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp
    channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp tag=TAG20150226T155942
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:46
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_users_b2k8gf7d_.dbf
    channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp
    channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp tag=TAG20150226T165814
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    Finished restore at 02-MAR-15
    
    RMAN>
    

     

Notice that it did restore the datafiles from both the SOE and USERS tablespaces; however, we know that those are inconsistent with each other.

    Attempting to do the recovery should give us an error due to the missing redo required for consistency:

    RMAN> recover database;
    
    Starting recover at 02-MAR-15
    using channel ORA_DISK_1
    
    starting media recovery
    
    archived log for thread 1 with sequence 67 is already on disk as file /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_67_bgzcn05f_.arc
    archived log for thread 1 with sequence 69 is already on disk as file /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_69_bgzdqo9n_.arc
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_bh914cx2_.dbf'
    
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/02/2015 08:44:21
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of archived log for thread 1 with sequence 68 and starting SCN of 624986 found to restore
    
    RMAN>
    

     

As expected, we got the dreaded ORA-01547, ORA-01194, and ORA-01110 errors, meaning that we don’t have enough redo to make the recovery successful.

     

    Attempting a Recovery

    Now the crux of the situation. We’re stuck with the common inconsistency error which most seasoned DBAs should be familiar with:

    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_bh914cx2_.dbf'
    
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/02/2015 08:44:21
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of archived log for thread 1 with sequence 68 and starting SCN of 624986 found to restore

     

And of course we need to be absolutely positive that we don’t have the missing redo somewhere: for example, in an RMAN backup piece on disk or on tape from an archived log backup that could be restored, or possibly still in one of the current online redo logs.  DBAs should explore all possible options for retrieving the missing redo vectors in some form or another before proceeding.
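
A few quick checks, run on the source system and its backup infrastructure where the redo might still exist, can confirm this.  A minimal sketch, using this demo’s missing sequence 68 (adjust the sequence number for your own case):

RMAN> list backup of archivelog sequence 68;
RMAN> list copy of archivelog sequence 68;

SQL> select group#, sequence#, status from v$log;
SQL> select name, deleted, status from v$archived_log where sequence# = 68;

The RMAN commands look for any backup piece or image copy still holding that log, while the V$ queries show whether the sequence is still sitting in an online redo log or is known to the controlfile as an archived copy.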

    However, if we’re absolutely certain of the following we can continue:

    1. We definitely can’t find the missing redo anywhere.
    2. We absolutely need to extract data from prior to the start of the online backup.
    3. Our data definitely wasn’t modified during the online backup.

     

The natural thing to check first when trying to open the database after an incomplete recovery is the fuzziness and PIT (Point In Time) of the datafiles from SQL*Plus:

    SQL> select fuzzy, status, checkpoint_change#,
      2         to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
      3         count(*)
      4    from v$datafile_header
      5   group by fuzzy, status, checkpoint_change#, checkpoint_time
      6   order by fuzzy, status, checkpoint_change#, checkpoint_time;
    
    FUZZY STATUS  CHECKPOINT_CHANGE# CHECKPOINT_TIME        COUNT(*)
    ----- ------- ------------------ -------------------- ----------
    NO    ONLINE              647929 26-FEB-2015 16:58:14          1
    YES   ONLINE              551709 26-FEB-2015 15:59:43          4
    
    SQL>
    

     

    The fact that there are two rows returned and that not all files have FUZZY=NO indicates that we have a problem and that more redo is required before the database can be opened with the RESETLOGS option.

    But our problem is that we don’t have that redo and we’re desperate to open our database anyway.

     

    Recovering without Consistency

    Again, recovering without consistency is not supported and should only be attempted as a last resort.

    Opening the database with the data in an inconsistent state is actually pretty simple.  We simply need to set the “_allow_resetlogs_corruption” hidden initialization parameter and set the undo management to “manual” temporarily:

    SQL> alter system set "_allow_resetlogs_corruption"=true scope=spfile;
    
    System altered.
    
    SQL> alter system set undo_management='MANUAL' scope=spfile;
    
    System altered.
    
    SQL> shutdown abort;
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    SQL>
    

     

    Now, will the database open? The answer is still: “probably not”.  Giving it a try we get:

    SQL> alter database open resetlogs;
    alter database open resetlogs
    *
    ERROR at line 1:
    ORA-01092: ORACLE instance terminated. Disconnection forced
    ORA-00600: internal error code, arguments: [2663], [0], [551715], [0], [562781], [], [], [], [], [], [], []
    Process ID: 4538
    Session ID: 237 Serial number: 5621
    
    
    SQL>
    

     

    Doesn’t look good, right?  Actually the situation is not that bad.

To put it simply, this ORA-00600 error means that a datafile has a recorded SCN that’s ahead of the database SCN.  The current database SCN is shown as the 3rd argument (in this case 551715) and the datafile SCN is shown as the 5th argument (in this case 562781).  Hence a difference of:

    562781 - 551715 = 11066

    In this example, that’s not too large of a gap.  But in a real system, the difference may be more significant.  Also if multiple datafiles are ahead of the current SCN you should expect to see multiple ORA-00600 errors.
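
To get a rough sense of how far ahead the restored datafiles are, the checkpoint SCNs recorded in the datafile headers can be compared with the controlfile checkpoint while the database is mounted.  This is only an indicator (the exact values in the ORA-00600 arguments come from block-level SCNs), but it helps set expectations for how much rolling forward is needed:

SQL> select checkpoint_change# from v$database;
SQL> select file#, checkpoint_change# from v$datafile_header order by checkpoint_change#;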

The solution to this problem is quite simple: roll the current SCN forward until it exceeds the datafile SCN.  The database automatically generates a number of internal transactions on each startup; hence the way to roll the database SCN forward is simply to perform repeated shutdowns and startups.  Depending on how big the gap is, it may be necessary to shutdown abort and startup repeatedly – the gap between the 5th and 3rd parameters to the ORA-00600 will decrease each time.  Eventually, though, the gap will reduce to zero and the database will open:

    SQL> connect / as sysdba
    Connected to an idle instance.
    SQL> shutdown abort
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    Database opened.
    SQL>
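
When the SCN gap is large, that bounce cycle can be scripted rather than typed by hand.  The following is a rough shell sketch, not part of the original procedure: it assumes OS authentication as SYSDBA works, the log file path is arbitrary, and it simply keeps retrying until the startup output reports that the database opened.

#!/bin/sh
# Keep bouncing the instance until the open finally succeeds (a sketch; use with care).
while :; do
  sqlplus -S / as sysdba <<EOF > /tmp/startup_attempt.log 2>&1
startup
exit
EOF
  grep -q "Database opened." /tmp/startup_attempt.log && break   # stop once it opens
  # The failed open usually terminates the instance; the abort below is then harmless.
  sqlplus -S / as sysdba <<EOF > /dev/null 2>&1
shutdown abort
exit
EOF
done
echo "Open succeeded."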
    

     

    Now presumably we want to query or export the old data so the first thing we should do is switch back to automatic undo management using a new undo tablespace:

    SQL> create undo tablespace UNDOTBS2 datafile size 50M;
    
    Tablespace created.
    
    SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile;
    
    System altered.
    
    SQL> alter system set undo_management='AUTO' scope=spfile;
    
    System altered.
    
    SQL> shutdown abort
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    Database opened.
    SQL>
    

     

    Finally the database is opened (although the data is inconsistent) and the “old” data can be queried:

    SQL> select * from scott.parent;
    
            C1 C2
    ---------- ----------------
             1 OLD DATA 001
             2 OLD DATA 002
             3 NEW DATA 003
    
    SQL> select * from scott.child;
    
            C1 C2
    ---------- ----------------
             1 OLD DETAILS A
             1 OLD DETAILS B
             1 OLD DETAILS C
             2 OLD DETAILS D
    
    SQL>
    

     

As we can see, all of the “old” data (rows that begin with “OLD”) from before the backup began (before t2) is available.  Only part of the data inserted during the backup is there: the C1=3 row made it into PARENT but not into CHILD, as would be expected since their tablespaces were backed up at different times – that’s our data inconsistency.

    We’ve already seen that we can SELECT the “old” data.  We can also export it:

    $ expdp scott/tiger dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y
    
    Export: Release 12.1.0.2.0 - Production on Mon Mar 2 09:39:11 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    Starting "SCOTT"."SYS_EXPORT_SCHEMA_02":  scott/******** dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 640 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
    . . exported "SCOTT"."CHILD"                             5.570 KB       4 rows
    . . exported "SCOTT"."PARENT"                            5.546 KB       3 rows
    Master table "SCOTT"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.SYS_EXPORT_SCHEMA_02 is:
      /u01/app/oracle/admin/NONCDB/dpdump/OLD_DATA.dmp
    Job "SCOTT"."SYS_EXPORT_SCHEMA_02" successfully completed at Mon Mar 2 09:39:46 2015 elapsed 0 00:00:34
    
    $
    

     

At this point we’ve either queried or extracted that critical old data, which was the point of the exercise, and we should immediately discard the restored database.  Remember, it has data inconsistencies, possibly including in internal tables, and hence it shouldn’t be used for anything beyond querying or extracting that “old” data.  Frequent crashes or other bizarre behavior of this restored database should be expected.  So get in, get the data, get out, and get rid of it!
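
Before throwing the restored database away, the dump produced above can be loaded into a known-good database.  A minimal sketch, assuming the dump file has been copied into that database’s DATA_PUMP_DIR directory and that the SCOTT schema and target tablespaces already exist there:

$ impdp scott/tiger dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp \
    tables=SCOTT.PARENT,SCOTT.CHILD table_exists_action=append nologfile=y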

     

    Conclusion

If “desperate times call for desperate measures” and you’re in the situation described in detail above, where you need the data, are missing the necessary redo vectors, and are not concerned about the relevant data having been modified during the backup, then there are options.

    The “more redo needed for consistency” error stack should be familiar to most DBAs:

    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    

And they may also be somewhat familiar with the “_allow_resetlogs_corruption” hidden initialization parameter.  However, don’t let the resulting ORA-00600 error make the recovery attempt seem unsuccessful:

    ORA-00600: internal error code, arguments: [2663], [0], [551715], [0], [562781], [], [], [], [], [], [], []
    

This error can be overcome and the database can likely still be opened so that the necessary data can be queried or extracted.

    Note: this process has been tested with Oracle Database 10g, Oracle Database 11g, and Oracle Database 12c.

    Categories: DBA Blogs

    Oracle Database Tools updated - check out SQLcl

    Dimitri Gielis - Mon, 2015-03-09 16:31
    Today Oracle released new versions of:

Also Oracle REST Data Services 3 got a new EA2 version. You may want to check Kris Rice's blog for new features.
I already blogged about all of the tools before, but not yet about SQLcl. This is a command-line tool, which I call "SQL*Plus on steroids" (or, as Jeff calls it, SQL Developer meets SQL*Plus). It's particularly useful when you're on your server and quickly need to run some queries. And if you're a command-line guy/girl all the time, this tool is for you.
Here's a screenshot showing how to connect to your database with SQLcl from Linux.
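
In text form, that connection looks roughly like this (the unzip location, credentials, host, port, and service name are just placeholders):

$ cd sqlcl/bin
$ ./sql scott/tiger@//localhost:1521/orcl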

    Typing help will show you a list of quick shortcuts.
For example, if you type APEX you get a list of your APEX applications.

What I really like about SQLcl is how nicely it formats the output. With SQL*Plus you had to set column widths, page sizes, etc. Not with SQLcl: it's smart about it and formats the results for you.
Next to that, you can quickly output your query results in JSON format by typing "set sqlformat json":
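
For example (the table queried here is just an illustration):

SQL> set sqlformat json
SQL> select ename, job from emp where rownum <= 3;

The result comes back as a JSON document instead of a grid; other values such as csv and ansiconsole work the same way.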

There are many more features - a good starting point is this presentation and video by Jeff Smith.
    Categories: Development