
Feed aggregator

Pearson’s Efficacy Listening Tour

Michael Feldstein - Thu, 2014-09-11 14:06

Back around New Year, Michael wrote a post examining Pearson’s efficacy initiative and calling on the company to engage in active discussions with various communities within higher education about defining “efficacy” with educators rather than for educators. It turns out that post got a fair bit of attention within the company. It was circulated in a company-wide email from CEO John Fallon, and the blog post and all the comments were required reading for portions of the company leadership. After a series of discussions with the company, we, through our consulting company, have been hired by Pearson to facilitate a few of these conversations. We also asked for and received permission to blog about them. Since this is an exception to our rule that we don’t blog about our paid engagements, we want to tell you a little more about the engagement, our rationale for blogging about it, and the ground rules.

The project itself is fairly straightforward. We’re facilitating conversations with a few different groups of educators in different contexts. The focus of each conversation is how they define and measure educational effectiveness in their respective contexts. There will be some  discussion of Pearson’s efficacy efforts at a high level, but mainly for the purpose of trying to map what the educators are telling us about their practices to how Pearson is thinking about efficacy in the current iteration of their approach. After doing a few of these, we’ll bring together the participants along with other educators in a culminating event. At this meeting, the participants will hear a summary of the lessons learned from the earlier conversations, learn a bit more about Pearson’s efficacy work, and then break up into mixed discussion groups to provide more feedback on how to move the efficacy conversation forward and how Pearson’s own efforts can be improved to make them maximally useful to educators.

Since both e-Literate readers and Pearson seemed to get a lot of value from our original post on the topic, we believe there would be value in sharing some of the ongoing conversation here as well. So we asked for and received permission from Pearson to blog about it. Here are the ground rules:

  • We are not getting paid to blog and are under no obligation to blog.
  • Our blog posts do not require prior editorial review by Pearson.
  • Discussions with Pearson during the engagement are considered fair game for blogging unless they are explicitly flagged as otherwise.
  • On the other hand, we will ask Pearson customers for approval prior to writing about their own campus initiatives (and, in fact, will extend that courtesy to all academic participants).

The main focus of these posts, like the engagement itself, is likely to be on how the notion of efficacy resonates (or doesn’t) with various academic communities in various contexts. Defining and measuring the effectiveness of educational experiences—when measurement is possible and sensible—is a subject with much broader application than Pearson’s product development, which is why we are making an exception to our blogging recusal policy for our consulting engagements and why we appreciate Pearson giving us a free hand to write about what we learn.

The post Pearson’s Efficacy Listening Tour appeared first on e-Literate.

All Access Pass to Oracle Support

Chris Warticki - Thu, 2014-09-11 12:00

Looking for tips, recommendations and resources to help you keep your Oracle applications and systems running at peak performance? Want to find out how to get more out of your Oracle Premier Support coverage?

More than 500 experts from across Services and Support will be on hand at Oracle OpenWorld to answer your questions and share best practices for adopting and optimizing Oracle technology.

  • Find out what Oracle experts know about the best tools, tips and resources for supporting and upgrading Oracle technology. Attend one of our “Best Practices” sessions.
  • Stop by the Oracle Support Stars Bar to talk with support experts. Open daily @ Moscone West, Exhibition hall 3461.
  • See Oracle support tools in action at one of our demos.
View the schedule of all of our Oracle Premier Support activities at Oracle OpenWorld for more information. Visit the Services and Support Oracle OpenWorld Website to discover how you can take advantage of all Oracle OpenWorld has to offer. See you there!

Why use CASE when NVL will do?

Tony Andrews - Thu, 2014-09-11 11:42
I've found that many developers are reluctant to use "new" features like CASE expressions and ANSI joins. (By new I mean: this millennium.) But now that they have started to, they get carried away. I have seen this several times recently:

    CASE WHEN column1 IS NOT NULL THEN column1 ELSE column2 END

Before they learned to use CASE, I'm sure they would have written the much simpler:

    NVL (column1, column2)
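
For example (a minimal sketch; some_table is just a hypothetical table name), both statements below return the same result, but the NVL version says it in one call:

    -- verbose: a CASE expression spelling out the null check
    SELECT CASE WHEN column1 IS NOT NULL THEN column1 ELSE column2 END AS result
    FROM   some_table;

    -- equivalent and simpler: NVL substitutes column2 when column1 is null
    SELECT NVL(column1, column2) AS result
    FROM   some_table;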

check if using tcps part II

Laurent Schneider - Thu, 2014-09-11 11:31

in your current session, as written there, check sys_context('USERENV', 'NETWORK_PROTOCOL')
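
For example, a minimal check run from the session itself:

    select sys_context('USERENV', 'NETWORK_PROTOCOL') as protocol from dual;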

in another session, you could grab some hints out of the network service banner. Do the maths: when the banner shows it is not using any of the non-SSL protocols, it probably is using TCPS…


select sid,program,
  case when program not like 'ora___@% (P%)' then
  (select max(case
when NETWORK_SERVICE_BANNER like '%TCP/IP%' 
      then 'TCP'
when NETWORK_SERVICE_BANNER like '%Bequeath%' 
      then 'BEQUEATH'
when NETWORK_SERVICE_BANNER like '%IPC%' 
      then 'IPC'
when NETWORK_SERVICE_BANNER like '%SDP%' 
      then 'SDP'
when NETWORK_SERVICE_BANNER like '%NAMED P%' 
      then 'Named pipe'
when NETWORK_SERVICE_BANNER is null 
      then 'TCPS' end)
    from V$SESSION_CONNECT_INFO i 
    where i.sid=s.sid) end protocol
  from v$session s;

       SID PROGRAM         PROTOCOL
---------- --------------- --------
       415 sqlplus(TNS V1- BEQUEATH
       396 sqlplus(TNS V1- IPC     
         6 Toad            TCP     
         9 Toad            TCPS    
         1 oracle(DIAG)            
       403 Toad            TCP     

Rittman Mead/Oracle Data Integration Speakeasy @ Oracle Open World

Rittman Mead Consulting - Thu, 2014-09-11 10:59

If you are attending Oracle Open World this year and fancy a bit of a different experience, come and join Rittman Mead and Oracle’s Data Integration teams for drinks and networking at 7pm on Tuesday 30th September at the Local Edition speakeasy on Market Street.

We will be providing a couple of hours of free drinks with the opportunity to quiz our leading data integration experts and Oracle’s data integration team about any aspect of the data integration toolset, architecture and our innovative implementation approaches, and to relax and kick back at the end of a long day. So whether you want to know about how ODI can facilitate your big data strategy, or implement data quality and data governance across your enterprise data architecture, please come along.

The Local Edition is located at 691 Market St, San Francisco, CA and the event runs from 7pm to 9pm. Please register here.

For further information on this event and the sessions we are presenting at Oracle Open World contact us at info@rittmanmead.com.

Categories: BI & Warehousing

GAO Report: Yes, student debt is growing problem

Michael Feldstein - Thu, 2014-09-11 10:57

In case anyone needed additional information to counter the Brookings-fed meme that “Americans who borrowed to finance their education are no worse off today than they were a generation ago”, the U.S. Government Accountability Office (GAO) released a report yesterday with some significant findings. As reported at Inside Higher Ed by Michael Stratford:

More than 700,000 households headed by Americans 65 or older now carry student debt, according to a report released Wednesday by the U.S. Government Accountability Office. And the amount of debt owed by borrowers 65 and older jumped from $2.8 billion in 2005 to $18.2 billion last year. [snip]

Between 2004 and 2010, for instance, the number of households headed by individuals 65 to 74 with student loan debt more than quadrupled, going from 1 percent to 4 percent of all such families. During that same period, the rate of borrowing among Americans under 44 years old increased between 40 and 80 percent, even though borrowing among that age group is far more prevalent than it is among senior citizens.

I have been highly critical of the Brookings Institution and its report and update. This new information from the GAO goes outside the selective Brookings data set of households headed by people aged 20 – 40, but it should be considered by anyone trying to draw conclusions about student debt holders.

Noting that Brookings analysis is based on “Americans who borrowed to finance their education” and the GAO report is on student debt holders, it is worth asking if we’re looking at a similar definition. For the most part, yes, as explained at IHE:

While some of the debt reflects loans taken out by parents on behalf of their children, the vast majority — roughly 70 to 80 percent of the outstanding debt — is attributable to the borrowers’ own education. Parent PLUS loans accounted for only about 27 percent of the student debt held by borrowers 50 to 64 years old, and an even smaller share for borrowers over 65.

Go read at least the entire IHE article, if not the entire GAO report.

Student debt is a growing problem in the US, and the Brookings Institution conclusions are misleading at best.

The post GAO Report: Yes, student debt is growing problem appeared first on e-Literate.

OOW - Focus On Support and Services for Engineered Systems

Chris Warticki - Thu, 2014-09-11 08:00

Focus On Support and Services for Engineered Systems

Monday, Sep 29, 2014

Conference Sessions

Best Practices for Maintaining and Supporting Oracle Enterprise Manager
Farouk Abushaban, Senior Principal Technical Analyst, Oracle
2:45 PM - 3:30 PM, Intercontinental - Grand Ballroom C, CON8567

Oracle Exadata: Maintenance and Support Best Practices
Christian Trieb, CDO, Paragon Data GmbH
Jaime Figueroa, Senior Principal Technical Support Engineer, Oracle
Bennett Fleisher, Customer Support Director, Oracle
4:00 PM - 4:45 PM, Moscone South - 310, CON8259

Wednesday, Oct 01, 2014

Conference Sessions

Sys Admin Best Practices: Maintaining Oracle Server and Storage Systems
Daniel Green, Sr Technical Support Engineer, Oracle
Jeff Nieusma, Senior Principal Engineer, Oracle
3:30 PM - 4:15 PM, Intercontinental - Intercontinental C, CON8313

Thursday, Oct 02, 2014

Conference Sessions

Real-World Oracle Maximum Availability Architecture with Oracle Engineered Systems
Bill Callahan, Director, Products and Technology, CCC Information Services
Jim Mckinstry, Consulting Practice Director, Oracle
9:30 AM - 10:15 AM, Intercontinental - Grand Ballroom B, CON2335

Optimize Oracle SuperCluster with Oracle Advanced Monitoring and Resolution
Erik Carlson, Vice President IT, Jabil Circuit, Inc.
George Mccormick, Field Sales Representative, Oracle
9:30 AM - 10:15 AM, Marriott Marquis - Salon 4/5/6*, CON2388

Extreme Analytics with Oracle Exalytics
Phil Scott, Senior Principal Instructor, Oracle
9:30 AM - 10:15 AM, Moscone West - 3016, CON8594

Optimizing Oracle Exadata with Oracle Support Services: A Client View from KPN
Eric Zonneveld, Ing., KPN NV
Jan Dijken, Principal Advanced Support Engineer, Oracle
1:15 PM - 2:00 PM, Moscone South - 305, CON7054

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3-minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Monday, Sep 29

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings, where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908

Tuesday, Sep 30

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings, where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908

Wednesday, Oct 01

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings, where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 3:45 PM, Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add it to your Schedule.

Oracle WebCenter & Oracle BPM Customer Appreciation Reception

WebCenter Team - Thu, 2014-09-11 06:27
Oracle WebCenter & Oracle BPM Customer Appreciation Reception

You’re invited to the Oracle WebCenter & Oracle BPM Customer Appreciation Reception!

Oracle WebCenter & Oracle Business Process Management (BPM) partners Aurionpro, AVIO Consulting, Bezzotech, Fishbowl Solutions, Keste, Redstone Content Solutions, TekStream & VASSIT invite you to a private cocktail reception at one of San Francisco's stunning National Historic Landmarks. Please join us and other Oracle WebCenter & Oracle BPM customers for heavy hors d'oeuvres and cocktails at this exclusive reception. You do not need to be an Oracle OpenWorld attendee to join us at the reception. If you will be in the San Francisco area, please RSVP to attend!

Oracle WebCenter & Oracle BPM Customer Appreciation Reception

Monday, September 29, 2014
6:30 p.m. – 8:30 p.m.

Old Mint, Mint Plaza
San Francisco, CA 94103
+1.415.537.1105

Register Now


Please RSVP by September 22, 2014. You will receive an e-mail notification confirming your attendance for the event.

Don’t miss the opportunity to talk with fellow customers and executives from Oracle WebCenter & Oracle BPM Product Management, Product Marketing, and Oracle’s premier Oracle WebCenter & Oracle BPM partners. We look forward to seeing you at this event!

Partners

If you are an employee or official of a government organization, please click here for important ethics information regarding this event.


Oracle Upgrade – from R12.2.3 to R12.2.4

Online Apps DBA - Thu, 2014-09-11 04:43
Last week, I performed an ERP upgrade from 12.2.3 to 12.2.4. I would like to share the document containing the steps performed for our environment. This is just to give an idea; users must review the following documents and take environment-specific action: Oracle E-Business Suite Release 12.2.4 Readme (Doc ID 1617458.1), Applying the Latest [...]

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

APEX 5.0: Bye bye Tabs, welcome to Navigation Lists

Dimitri Gielis - Thu, 2014-09-11 02:22
In previous versions of Oracle APEX (< 5.0) you could use Tabs for the navigation in your application.


Tabs were not that flexible; they typically sat on top of your page with a specific look and feel. Since APEX 4.x I started to move away from Tabs in most cases; instead I would use a List with the "Page Tabs" template if people wanted that look and feel.

APEX 5.0 introduces the concept of a "Navigation List" that replaces the tabs. It's the same mechanism as before (a normal List which you find in Shared Components), but you can define in your User Interface which list to use as your Navigation List.

Go to Shared Components > User Interface Attributes:


Next in the User Interface section, click on Desktop (or the User Interface you want to adapt):


In the Attributes section you can define the List you want to use as "Navigation List"


Behind the scenes the Navigation List is put on the screen where the #NAVIGATION_LIST# token is specified in your Page Template.

The Navigation List is another example where APEX 5.0 makes common developer behaviour more declarative and embeds it in the product.
Categories: Development

2002 Honda passport timing belt replacement

EBIZ SIG BLOG - Wed, 2014-09-10 19:14
The Honda Passport was a sport-utility vehicle sold by the Japanese maker from 1994 through 2002. It was replaced in 2003 by the Honda Pilot, a crossover utility vehicle that shared some of the underpinnings of the Honda Odyssey minivan. Unlike the Pilot, which followed the lead of the Toyota Highlander in putting a mid-size crossover body on the underpinnings of what was essentially a car, the Passport was built on a rear-wheel-drive truck chassis with all-wheel drive as an option. The ride quality and handling reflected its truck origins, so the Pilot was a striking step forward when it replaced the Passport.

The Passport was actually a re-badged Isuzu Rodeo, a truck-based SUV built in Indiana at a plant that Subaru and Isuzu shared at the time. The first-generation Passport, sold from 1994 through 1997, offered a choice of a 120-horsepower 2.6-liter four-cylinder engine paired with a five-speed manual gearbox, or a 175-hp 3.2-liter V-6 with an available four-speed automatic transmission. Rear-wheel drive was standard, and all-wheel drive could be ordered as an option. Trim levels were base and EX.
2002 honda passport check engine light flashing
In 1998, a second-generation Passport was introduced. It was still based on a truck chassis, but it came with more comfort and safety features than the earlier model and was considerably more refined. The four-door sport-utility vehicle came standard with a 205-hp 3.2-liter V-6, matched with a five-speed manual gearbox on base versions, though a four-speed automatic transmission was also available.

The second Passport was offered in two trim levels: the LX could be ordered with the five-speed manual, with four-wheel drive as an option, and the more upscale EX came with the four-speed automatic, again with either drive option. While the spare tire on the base LX was mounted on a swinging bracket on the tailgate, the EX relocated it to a carrier beneath the cargo area. For the 2000 model year, the Honda Passport received a handful of updates, including optional 16-inch wheels on the LX and available two-tone paint treatments.
2002 honda passport transmission dipstick location
When considering the Passport as a used car, buyers should know that the 1998-2002 models were recalled in October 2010 because of frame corrosion in the area where the rear suspension was mounted. Any vehicles without visible corrosion were treated with a rust-resistant compound, but reinforcement brackets were to be installed in those with more severe rust. In some cases, the damage was so severe that Honda simply repurchased the vehicles from their owners. Used-car shoppers looking at Passports should be sure to find out whether the car has been through the recall, and what, if anything, was done.
2002 honda passport keyless remote
2002 honda passport o2 sensor location
2002 honda passport picture gallery
2002 honda passport transmission problems
2002 honda passport starter replacement
Categories: APPS Blogs


Creating a Pivotal GemFireXD Data Source Connection from IntelliJ IDEA 13.x

Pas Apicella - Wed, 2014-09-10 19:04
In order to create a Pivotal GemFireXD Data Source Connection from IntelliJ 13.x , follow the steps below. You will need to define a GemFireXD driver , prior to creating the Data Source itself.

1. Bring up the Databases panel.

2. Define a GemFireXD Driver as follows


3. Once defined, select it by using the following options. You're using the driver you created at #2 above:

+ -> Data Source -> com.pivotal.gemfirexd.jdbc.ClientDriver 

4. Create a Connection as shown below. You will need to have a running GemFireXD cluster at this point in order to connect.



5.  Once connected you can browse objects as shown below.



6. Finally we can run DML/DDL directly from IntelliJ as shown below.
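
As an illustration (a minimal sketch; the table and column names below are made up for the example), this is the kind of DDL and DML you might run from the IntelliJ console against the cluster:

    -- create a small test table
    CREATE TABLE demo_customer (
      id   INT PRIMARY KEY,
      name VARCHAR(50)
    );

    -- insert a row and read it back to confirm the connection works
    INSERT INTO demo_customer (id, name) VALUES (1, 'Pas');
    SELECT * FROM demo_customer;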


Categories: Fusion Middleware

Documentum upgrade project - D2EventSenderMailMethod & bug with Patch 12

Yann Neuhaus - Wed, 2014-09-10 18:55

We started the Documentum upgrade during winter time and our jobs ran successfully, following the defined schedule. Once we moved to summer time we hit an issue: a job scheduled, for instance, at 4:00 AM was executed at 4:00 AM, but it also started again every 2 minutes until 5:00 AM. We had this issue on all our 6.7SP2P009 repositories - on upgraded as well as on new repositories.

Before opening an SR in powerlink, I first checked the date and time with the following query.

On the content server using idql:

 

1> select date(now) as date_now from dm_docbase_config
2> go
date_now          
------------------------
4/9/2014 17:47:55       
(1 row affected)
1>

 

The date and time were correct. EMC confirmed a bug and asked us to install Patch 12, which solved the issue.

  Patch 12 and D2EventSenderMailMethod

Unfortunately, Patch 12 introduced a bug in D2EventSenderMailMethod, which no longer works: mails could not be sent out. D2EventSenderMailMethod is a requirement for D2; it is used for D2 mails but also for some workflow functionality. By default, EMC said, if the event is not managed (i.e. configured) by D2, the default Documentum mail method is executed.

To test the mail issue, I used the dm_ContentWarning job by setting the -percent_full parameter to 5 (lower than the value displayed by df -k).

In $DOCUMENTUM/dba/log//MethodServer/test67.log the following error was displayed:

 

Wrong number of arguments (31) passed to entry point 'Mail'.

 

And by setting the trace flag for the dm_event_sender method we saw:


2014-05-08T12:53:45.504165      7260[7260]      0100007b8000c978        TRACE LAUNCH [MethodServer]: ./dmbasic -f./dm_event_sender.ebs -eMail  --   "test67"  "xxx May 08 12:53:25 2014"  "DM_SYSADMIN"  "Take a look at /dev/mapper/vg00-lvpkgs--it's 81% full!!!"  "ContentWarning"  "0900007b8000aeb3"  "nulldate"  "10"  "dm_null_id"  " "  "dmsd"  "test67"  "event"  " "  "test67" ""  "undefined"  "dmsd"  "1b00007b80003110"  "5/8/2014 12:53:28"  "0"  "dm_document"  "text"  "3741"  "text"  "cs.xxy.org"  "80"  ""  "/soft/opt/documentum/share/temp/3799691ad29ffd73699c0e85b792ea66"  "./dm_mailwrapper.sh"  " " dmProcess::Exec() returns: 1

 

It worked when I updated the dm_server_config object:

 

update dm_server_config objects set mail_method = 'dm_event_template_sender'

 

EMC confirmed that this is a bug and that it should be fixed with D2 3.1 P05.

Using Oracle GoldenGate for Trickle-Feeding RDBMS Transactions into Hive and HDFS

Rittman Mead Consulting - Wed, 2014-09-10 15:13

A few months ago I wrote a post on the blog around using Apache Flume to trickle-feed log data into HDFS and Hive, using the Rittman Mead website as the source for the log entries. Flume is a good technology to use for this type of capture requirement as it captures log entries, HTTP calls, JMS queue entries and other “event” sources easily, has a resilient architecture and integrates well with HDFS and Hive. But what if the source you want to capture activity for is a relational database, for example Oracle Database 12c? With Flume you’d need to spool the database transactions to file, whereas what you really want is a way to directly connect to the database engine and capture the changes from source.

Which is exactly what Oracle GoldenGate does, and what most people don’t realise is that GoldenGate can also load data into HDFS and Hive, as well as the usual database targets. Hive and HDFS aren’t fully-supported targets yet, but you can use the Oracle GoldenGate for Java adapter to act as the handler process and then land the data in HDFS files or Hive tables on your target Hadoop platform. My Oracle Support has two tech notes, “Integrating OGG Adapter with Hive (Doc ID 1586188.1)” and “Integrating OGG Adapter with HDFS (Doc ID 1586210.1)”, that give example implementations of the Java adapters you’d need for these two target types, with the overall end-to-end process for landing Hive data looking like the diagram below (the HDFS one just swaps Hive for HDFS at the handler adapter stage).


This is also a good example of the sorts of technology we’d use to implement the “data factory” concept within the new Oracle Information Management Reference Architecture, the part of the architecture that moves data between the Hadoop and NoSQL-based Data Reservoir and the relationally-stored enterprise information store; in this case, trickle-feeding transactional data from the Oracle database into Hadoop, perhaps to archive it at lower cost than we could in an Oracle database, or to add transaction activity data to a Hadoop-based application.


So I asked my colleague Nelio Guimaraes to set up a GoldenGate capture process on our Cloudera CDH5.1 Hadoop cluster, using GoldenGate 12.1.2.0.0 for our source Oracle 11gR2 database and Oracle GoldenGate for Java, downloadable separately on edelivery.oracle.com under Oracle Fusion Middleware > Oracle GoldenGate Application Adapters 11.2.1.0.0 for JMS and Flat File Media Pack. In our example, we’re going to capture activity on the SCOTT.DEPT table in the Oracle database, and then perform the following steps to set up replication from it into a replica Hive table:

  1. Create a table in Hive that corresponds to the table in Oracle database.
  2. Create a table in the Oracle database and prepare the table for replication.
  3. Configure the Oracle GoldenGate Capture to extract transactions from the Oracle database and create the trail file.
  4. Configure the Oracle GoldenGate Pump to read the trail and invoke the custom adapter
  5. Configure the property file for the Hive handler
  6. Code, Compile and package the custom Hive handler
  7. Execute a test. 
Setting up the Oracle Database Source Capture

Let’s go into the Oracle database first, check the table definition, and then connect to Hadoop to create a Hive table of the same column definition.

[oracle@centraldb11gr2 ~]$ sqlplus scott/tiger
SQL*Plus: Release 11.2.0.3.0 Production on Thu Sep 11 01:08:49 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
SQL> describe DEPT
 Name Null? Type
 ----------------------------------------- -------- ----------------------------
 DEPTNO NOT NULL NUMBER(2)
 DNAME VARCHAR2(14)
 LOC VARCHAR2(13)
SQL> exit
...
[oracle@centraldb11gr2 ~]$ ssh oracle@cdh51-node1
Last login: Sun Sep 7 16:11:36 2014 from officeimac.rittmandev.com
[oracle@cdh51-node1 ~]$ hive
...
create external table dept
(
 DEPTNO string, 
 DNAME string, 
 LOC string
) row format delimited fields terminated by '\;' stored as textfile
location '/user/hive/warehouse/department'; 
exit
...

Then I install Oracle Golden Gate 12.1.2 on the source Oracle database, just as you’d do for any Golden Gate install, and make sure supplemental logging is enabled for the table I’m looking to capture. Then I go into the ggsci Golden Gate command-line utility, to first register the user it’ll be connecting as, and what table it needs to capture activity for.

[oracle@centraldb11gr2 12.1.2]$ cd /u01/app/oracle/product/ggs/12.1.2/
[oracle@centraldb11gr2 12.1.2]$ ./ggsci
$ggsci> DBLOGIN USERID sys@ctrl11g, PASSWORD password sysdba
$ggsci> ADD TRANDATA SCOTT.DEPT COLS(DEPTNO), NOKEY

GoldenGate uses a number of components to replicate data from source to targets, as shown in the diagram below.

For our purposes, though, there are just three that we need to configure: the Extract component, which captures table activity on the source; the Pump process that moves data (or the “trail”) from the source database to the Hadoop cluster; and the Replicat component that takes that activity and applies it to the target tables. In our example, the extract and pump processes will be as normal, but we need to create a custom “handler” for the target Hive table that uses the Golden Gate Java API and the Hadoop FS Java API.

The tool we use to set up the extract and capture process is ggsci, the command-line Golden Gate Software Command Interface. I’ll use it first to set up the Manager process that runs on both source and target servers, giving it a port number and connection details into the source Oracle database.

$ggsci> edit params mgr
PORT 7809
USERID sys@ctrl11g, PASSWORD password sysdba
PURGEOLDEXTRACTS /u01/app/oracle/product/ggs/12.1.2/dirdat/*, USECHECKPOINTS

Then I create two configuration files, one for the extract process and one for the pump process, and then use those to start those two processes.

$ggsci> edit params ehive
EXTRACT ehive
USERID sys@ctrl11g, PASSWORD password sysdba
EXTTRAIL /u01/app/oracle/product/ggs/12.1.2/dirdat/et, FORMAT RELEASE 11.2
TABLE SCOTT.DEPT;
$ggsci> edit params phive
EXTRACT phive
RMTHOST cdh51-node1.rittmandev.com, MGRPORT 7809
RMTTRAIL /u01/app/oracle/product/ggs/11.2.1/dirdat/rt, FORMAT RELEASE 11.2
PASSTHRU
TABLE SCOTT.DEPT;
$ggsci> ADD EXTRACT ehive, TRANLOG, BEGIN NOW
$ggsci> ADD EXTTRAIL /u01/app/oracle/product/ggs/12.1.2/dirdat/et, EXTRACT ehive
$ggsci> ADD EXTRACT phive, EXTTRAILSOURCE /u01/app/oracle/product/ggs/12.1.2/dirdat/et
$ggsci> ADD RMTTRAIL /u01/app/oracle/product/ggs/11.2.1/dirdat/rt, EXTRACT phive

As the Java event handler on the target Hadoop platform won’t be able to ordinarily get table metadata for the source Oracle database, we’ll use the defgen utility on the source platform to create the parameter file that the replicat process will need.

$ggsci> edit params dept
defsfile ./dirsql/DEPT.sql
USERID ggsrc@ctrl11g, PASSWORD ggsrc
TABLE SCOTT.DEPT;
./defgen paramfile ./dirprm/dept.prm NOEXTATTR

Note that NOEXTATTR means no extra attributes; because the version on target is a generic and minimal version, the definition file with extra attributes won’t be interpreted. Then, this DEPT.sql file will need to be copied across to the target Hadoop platform where you’ve installed Oracle GoldenGate for Java, to the /dirsql folder within the GoldenGate install. 

[oracle@centraldb11gr2 12.1.2]$ ssh oracle@cdh51-node1
oracle@cdh51-node1's password: 
Last login: Wed Sep 10 17:05:49 2014 from centraldb11gr2.rittmandev.com
[oracle@cdh51-node1 ~]$ cd /u01/app/oracle/product/ggs/11.2.1/
[oracle@cdh51-node1 11.2.1]$ pwd
/u01/app/oracle/product/ggs/11.2.1
[oracle@cdh51-node1 11.2.1]$ ls dirsql/
DEPT.sql

Then, going back to the source Oracle database platform, we’ll start the Golden Gate Manager process, and then the extract and pump processes.

[oracle@cdh51-node1 11.2.1]$ ssh oracle@centraldb11gr2
oracle@centraldb11gr2's password: 
Last login: Thu Sep 11 01:08:18 2014 from bdanode1.rittmandev.com
GGSCI (centraldb11gr2.rittmandev.com) 7> start mgr
Manager started.
 
GGSCI (centraldb11gr2.rittmandev.com) 8> start ehive
 
Sending START request to MANAGER ...
EXTRACT EHIVE starting
 
GGSCI (centraldb11gr2.rittmandev.com) 9> start phive
 
Sending START request to MANAGER ...
EXTRACT PHIVE starting

Setting up the Hadoop / Hive Replicat Process

Setting up the Hadoop side involves a couple of similar steps to the source capture side; first we configure the parameters for the Manager process, then configure the extract process that will pull table activity off of the trail file, sent over by the pump process on the source Oracle database.

[oracle@centraldb11gr2 12.1.2]$ ssh oracle@cdh51-node1
oracle@cdh51-node1's password: 
Last login: Wed Sep 10 21:09:38 2014 from centraldb11gr2.rittmandev.com
[oracle@cdh51-node1 ~]$ cd /u01/app/oracle/product/ggs/11.2.1/
[oracle@cdh51-node1 11.2.1]$ ./ggsci
$ggsci> edit params mgr
PORT 7809
PURGEOLDEXTRACTS /u01/app/oracle/product/ggs/11.2.1/dirdat/*, usecheckpoints, minkeepdays 3
$ggsci> add extract tphive, exttrailsource /u01/app/oracle/product/ggs/11.2.1/dirdat/rt
$ggsci> edit params tphive
EXTRACT tphive
SOURCEDEFS ./dirsql/DEPT.sql
CUserExit ./libggjava_ue.so CUSEREXIT PassThru IncludeUpdateBefores
GETUPDATEBEFORES
TABLE SCOTT.DEPT;

Now it’s time to create the Java handler that will write the trail data to the HDFS files and Hive table. The My Oracle Support Doc.ID 1586188.1 I mentioned at the start of the article has a sample Java program called SampleHandlerHive.java that writes incoming transactions into an HDFS file within the Hive directory, and also writes them to a file on the local filesystem. To get this working on our Hadoop system, we created a new Java source code file from the content in SampleHandlerHive.java, updated the path in hadoopConf.addResource to point to the correct location of core-site.xml, hdfs-site.xml and mapred-site.xml, and then compiled it as follows:

export CLASSPATH=/u01/app/oracle/product/ggs/11.2.1/ggjava/ggjava.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/*
javac -d . SampleHandlerHive.java

Successfully executing the above command created SampleHandlerHive.class under /u01/app/oracle/product/ggs/11.2.1/dirprm/com/mycompany/bigdata. To create the JAR file that the GoldenGate for Java adapter will need, I then need to change directory to the /dirprm directory under the Golden Gate install, and then run the following commands:

jar cvf myhivehandler.jar com
chmod 755 myhivehandler.jar

I also need to create a properties file for this JAR to use, in the same /dirprm directory. This properties file amongst other things tells the Golden Gate for Java adapter where in HDFS to write the data to (the location where the Hive table keeps its data files), and also references any other JAR files from the Hadoop distribution that it’ll need to get access to.

[oracle@cdh51-node1 dirprm]$ cat tphive.properties 
#Adapter Logging parameters. 
gg.log=log4j
gg.log.level=info
 
#Adapter Check pointing  parameters
goldengate.userexit.chkptprefix=HIVECHKP_
goldengate.userexit.nochkpt=true
 
# Java User Exit Property
goldengate.userexit.writers=jvm
jvm.bootoptions=-Xms64m -Xmx512M -Djava.class.path=/u01/app/oracle/product/ggs/11.2.1/ggjava/ggjava.jar:/u01/app/oracle/product/ggs/11.2.1/dirprm:/u01/app/oracle/product/ggs/11.2.1/dirprm/myhivehandler.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/hadoop-common-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/etc/hadoop:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/etc/hadoop/conf.dist:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/hadoop-auth-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/hadoop-hdfs-2.3.0-cdh5.1.0.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/client/protobuf-java-2.5.0.jar
 
#Properties for reporting statistics
# Minimum number of {records, seconds} before generating a report
jvm.stats.time=3600
jvm.stats.numrecs=5000
jvm.stats.display=TRUE
jvm.stats.full=TRUE
 
#Hive Handler.  
gg.handlerlist=hivehandler
gg.handler.hivehandler.type=com.mycompany.bigdata.SampleHandlerHive
gg.handler.hivehandler.HDFSFileName=/user/hive/warehouse/department/dep_data
gg.handler.hivehandler.RegularFileName=cinfo_hive.txt
gg.handler.hivehandler.RecordDelimiter=;
gg.handler.hivehandler.mode=tx

Now, the final step on the Hadoop side is to start its Golden Gate Manager process, and then start the Replicat and apply process.

GGSCI (cdh51-node1.rittmandev.com) 5> start mgr
 
Manager started. 
 
GGSCI (cdh51-node1.rittmandev.com) 6> start tphive
 
Sending START request to MANAGER ...
EXTRACT TPHIVE starting

Testing it All Out

So now that I’ve got the extract and pump processes running on the Oracle Database side, and the apply process running on the Hadoop side, let’s do a quick test and see if it’s working. I’ll start by looking at what data is in each table at the beginning.

SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
        50 TESTE          PORTO
        60 NELIO          STS
        70 RAQUEL         AVES

7 rows selected.

Over on the Hadoop side, there’s just one row in the Hive table:

hive> select * from customer;

OK 80MARCIA   ST

Now I’ll go back to Oracle and insert a new row in the DEPT table:

SQL> insert into dept (deptno, dname, loc)
  2  values (75, 'EXEC','BRIGHTON'); 

1 row created. 
SQL> commit; 

Commit complete.

And, going back over to Hadoop, I can see Golden Gate has added that record to the Hive table, by the Golden Gate for Java adapter writing the transaction to the underlying HDFS file.

hive> select * from customer;

OK 80MARCIA   ST
75 EXEC       BRIGHTON

So there you have it; Golden Gate replicating Oracle RDBMS transactions into HDFS and Hive, to complement Apache Flume’s ability to replicate log and event data into Hadoop. Moreover, as Michael Rainey explained in this three-part blog series, Golden Gate is closely integrated into the new 12c release of Oracle Data Integrator, making it even easier to manage Golden Gate replication processes within your overall data loading project, and giving Hadoop developers and Golden Gate users access to the full set of load orchestration and data quality features in that product rather than having to rely on home-grown scripting, or Oozie.

Categories: BI & Warehousing

Oracle: How to move OMF datafiles in 11g and 12c

Yann Neuhaus - Wed, 2014-09-10 13:22

With OMF datafiles, you don't manage the datafile names. But how do you set the destination when you want to move them to another mount point? Let's see how easily (and online) this works in 12c, and how to do it with minimal downtime in 11g.

 

Testcase

Let's create a tablespace with two datafiles. It's OMF and goes into /u01:

 

SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata' scope=memory;
System altered.

SQL> show parameter db_create_file_dest
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /u01/app/oracle/oradata

SQL> create tablespace DEMO_OMF datafile size 5M;
Tablespace created.

SQL> alter tablespace DEMO_OMF add datafile size 5M;
Tablespace altered.

 

And I want to move those files in /u02.

 

12c online move

Here is how I generate my MOVE commands for all datafiles in /u01:

 

set echo off linesize 1000 trimspool on pagesize 0 feedback off
spool _move_omf.rcv
prompt set echo on;;
prompt report schema;;
prompt alter session set db_create_file_dest='/u02/app/oracle/oradata';;
select 'alter database move datafile '||file_id||';' from dba_data_files where file_name like '/u01/%' 
/
prompt report schema;;
spool off

 

which generates the following:

 

set echo on;
report schema;
alter session set db_create_file_dest='/u02/app/oracle/oradata';
alter database move datafile 7;
alter database move datafile 2;
report schema;

 

This works straightforwardly and online. That is the right solution if you are on 12c Enterprise Edition. The OMF destination is set at session level here. The move is done online, without any lock. The only overhead is that writes occur twice during the move operation. And in 12c we can run any SQL statement from RMAN, which is great.

 

11g backup as copy

How do you manage that in 11g? I like to do it with an RMAN COPY. If you're in ARCHIVELOG mode then you can move the datafiles one by one: back each one up as a copy, take it offline, switch to the copy, recover it, and bring it online. This is the fastest way. You can avoid the recovery step by taking the tablespace offline first, but:

  • you will have to wait for the earliest open transaction to finish.
  • your downtime includes the whole copy; when activity is low, the recovery approach is probably faster.

 

Here is how I generate my RMAN commands for all datafiles in /u01:

 

set echo off linesize 1000 trimspool on pagesize 0 feedback off
spool _move_omf.rcv
prompt set echo on;;
prompt report schema;;
with files as (
 select file_id , file_name , bytes from dba_data_files where file_name like '/u01/%' and online_status ='ONLINE' 
)
select stmt from (
select 00,bytes,file_id,'# '||to_char(bytes/1024/1024,'999999999')||'M '||file_name||';' stmt from files
union all
select 10,bytes,file_id,'backup as copy datafile '||file_id||' to destination''/u02/app/oracle/oradata'';' stmt from files
union all
select 20,bytes,file_id,'sql "alter database datafile '||file_id||' offline";' from files
union all
select 30,bytes,file_id,'switch datafile '||file_id||' to copy;' from files
union all
select 40,bytes,file_id,'recover datafile '||file_id||' ;' from files
union all
select 50,bytes,file_id,'sql "alter database datafile '||file_id||' online";' from files
union all
select 60,bytes,file_id,'delete copy of datafile '||file_id||';' from files
union all
select 90,bytes,file_id,'report schema;' from files
union all
select 91,bytes,file_id,'' from files
order by 2,3,1
)
/

 

which generates the following:

 

set echo on;
report schema;
#          5M /u01/app/oracle/oradata/DEMO/datafile/o1_mf_demo_omf_b0vg07m8_.dbf;
backup as copy datafile 2 to destination'/u02/app/oracle/oradata';
sql "alter database datafile 2 offline";
switch datafile 2 to copy;
recover datafile 2 ;
sql "alter database datafile 2 online";
delete copy of datafile 2;
report schema;

 

(I have reproduced the commands for one datafile only here.)

And I can run it in RMAN. Run it as a cmdfile or in a run block so that it stops if an error is encountered. Of course, it's better to run the commands one by one and check that the datafiles are online at the end. Note that this does not cover the SYSTEM tablespace, for which the database must be closed.

Online datafile move is my favorite Oracle 12c feature. It's the first new feature that you will practice if you come to our 12c new features workshop. And in any version, RMAN is my preferred way to manipulate database files.

Brookings Institution analysis on student debt becoming a farce

Michael Feldstein - Wed, 2014-09-10 12:39

I have previously written about the deeply flawed Brookings Institution analysis on student debt with its oft-repeated lede:

These data indicate that typical borrowers are no worse off now than they were a generation ago …

Their data is based on the triennial Survey of Consumer Finances (SCF) by the Federal Reserve Board, with the report based on 2010 data. With the release of the 2013 SCF data, Brookings Institution put out an update this week on their report, and they continue with the lede:

The 2013 data confirm that Americans who borrowed to finance their educations are no worse off today than they were a generation ago. Given the rising returns to postsecondary education, they are probably better off, on average. But just because higher education is still a good investment for most students does not mean that high and rising college costs should be left unquestioned.

This conclusion is drawn despite the following observations of changes from 2010 – 2013 in their own update:

  • The share of young (age 20 – 40) households with student debt rose from 36% to 38%;
  • The average amount of debt per household rose 14%;
  • The distribution of debt holders rose by 50% for debt levels of $20k – $75k and dropped by 19% for debt levels of $1k – $10k; and
  • Wage income is stagnant, at roughly the same level as ~1999, yet debt amounts have risen by ~50% in that same time period (see below).

Wage and borrowing over time

Brookings’ conclusion from this chart?

The upshot of the 2013 data is that households with education debt today are still no worse off than their counterparts were more than 20 years ago. Even though rising debt continued to cut into stagnant incomes, the average household with debt is better off than it used to be.

The strongest argument that Brookings presents is that the median monthly payment-to-income ratios have stayed fairly consistent at ~4% over the past 20 years. What they fail to mention is that households are taking much longer to pay off student loans now.

More importantly, the Brookings analysis ignores the simple and direct measurement of loan delinquency. See this footnote from the original report [emphasis added]:

These statistics are based on households that had education debt, annual wage income of at least $1,000, and that were making positive monthly payments on student loans. Between 24 and 36 percent of borrowers with wage income of at least $1,000 were not making positive monthly payments, likely due to use of deferment and forbearance …

That’s what I call selective data analysis. In the same SCF report that Brookings used for its update:

Delinquencies

The delinquency rate for student loans has gone up ~50% from 2010 to 2013!

How can anyone claim that Americans with student debt are no worse off when:

  • More people have student debt;
  • The average amount of debt has risen;
  • Wage income has not risen; and
  • The delinquency rate for student loans has risen.

None of the secondary spreadsheet jockeying from Brookings counters these basic facts. This ongoing analysis by Brookings on student debt is a farce.

The post Brookings Institution analysis on student debt becoming a farce appeared first on e-Literate.

What the Apple Watch Tells Us About the Future of Ed Tech

Michael Feldstein - Wed, 2014-09-10 12:20

Nothing.

So please, if you’re thinking about writing that post or article, don’t.

I’m begging you.

The post What the Apple Watch Tells Us About the Future of Ed Tech appeared first on e-Literate.

ADF BC View Object Change Notification Listener

Andrejus Baranovski - Wed, 2014-09-10 11:15
ADF BC allows you to define triggers that listen for row changes at the VO level. We can listen for row updates, inserts and deletes. This can be useful if you would like to invoke a specific audit method or call custom methods to populate dependent transient VOs with updated data.

To enable such triggers, you must add a listener for the VO; this can be done during VO creation, from the standard create method:


ADF BC API methods such as rowInserted, rowUpdated and rowDeleted can be overridden. These methods will be invoked automatically by the framework when a change happens. You can check the rowUpdated method: I'm getting the changed attribute names (the framework actually calls this method for each change separately), getting the changed value from the current row, and also retrieving the posted value:


The CountryId attribute is set to be refreshed on update/insert; this means a change event should be triggered for it as well, even though we do not change this attribute directly:


We should do a test now. Change two attributes - Street Address and State Province - and press the Save button:


The rowUpdated method is invoked two times, the first time for the Street Address change (the method is invoked before data is posted to the DB):


The second invocation is for the State Province change. This means we can't get all changes in a single call; each change is logged separately. It would be much more useful to get all changes in the current row through a single call:


After data is posted, the Country ID attribute is updated - this change is tracked successfully:


Let's try to create new row:


The rowInserted method is invoked; however, it doesn't get data yet - the key is null:


Right after the rowInserted event, the rowUpdated event is called in the same transaction, and we can access data from that method. This means rowUpdated is generally more reliable:


Try to remove a record:


The rowDeleted method is invoked; row data is accessed and the key is printed correctly:


Download sample application - ADFBCListenerApp.zip.

Webinar: 21st Century Education Goes Digital with Oracle WebCenter

21st Century Education Goes Digital with Oracle WebCenter

Learn how The Digital Campus with WebCenter can address top-of-mind issues for creating exceptional digital learning experiences, put content in context for the user, and optimize business processes.

The global education market is undergoing a fundamental transformation — from the printed textbook and physical classroom to newer digital, online and mobile experiences. Today, students can learn anywhere, anytime, from anyone on any device, bridging administrative and academic systems into a single universal view.

Oracle WebCenter is at the center of innovation and engagement for any digital enterprise looking to empower exceptional experiences for students, faculty, administrators and researchers. It powerfully connects people, processes, and information with the most complete portfolio of portal, content management, Web experience management and collaboration technologies to enable student success.

Join this special event featuring the University of Pretoria, Fishbowl Solutions and Oracle, whose experts will illustrate successful design patterns and solution delivery for:

  • Student Portals. Create rich, interactive student experiences
  • Digital Repository. Deliver advanced content capture, tagging and sharing while securing enterprise data
  • Admissions. Leverage image capture and business process design to enable improved self-service

Attendees will benefit from the use-case insights and strategies of a world-renowned university as well as a pre-built solution approach from Oracle and solutions partner Fishbowl to enable a truly modern digital campus.

Audio information:

Dial-in Numbers: U.S. / Canada: 877-698-7943 (toll free)
International: 706-679-0060 (chargeable)
Passcode: solutions2

Register Now

Sep 11, 2014
10:00 AM PT | 01:00 PM ET

If you are an employee or official of a government organization, please click here for important ethics information regarding this event.

The post Webinar: 21st Century Education Goes Digital with Oracle WebCenter appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other