
Feed aggregator

Securing Big Data - Part 2 - understanding the data required to secure it

Steve Jones - Wed, 2015-01-07 09:00
In the first part of Securing Big Data I talked about the two different types of security: the traditional IT and ACL security that needs to be done to match traditional solutions with an RDBMS. But that is pretty much where those systems stop in terms of security, which means they don't address the real threats out there, which have to do with cyber attacks and social engineering. An ACL is only…
Categories: Fusion Middleware

Fit for Work: A Team Experience of Wearable Technology

Usable Apps - Wed, 2015-01-07 04:08

By Sandra Lee (@sandralee0415)

What happens when co-workers try out wearable technology? Misha Vaughan (@mishavaughan), Director of Oracle Applications Communications and Outreach, explored just that.

“Instead of a general perspective, I wanted the team to have a personal experience of wearable technology”, said Misha. So, she gave each member of her team a Fitbit Flex activity tracker to use. The exercise proved insightful, with team members providing useful personal and enterprise-related feedback on device usage.

Fitbit Flex Awaits

Your Fitbit Flex awaits [Photo: Sandra Lee]

Team Dynamic and Initial Reactions

It was a free choice for team members to wear the Fitbit device or not. Those that did were inspired and enjoyed comparing activities and goals. Shannon Whiteman, Communication Operations and Online Training Manager, loved the competitive aspect. “If I saw someone had 100 more steps than I did, I’d take the stairs and walk an extra 101 steps to beat them.” Kathy Miedema, Senior Market Research Analyst, noted that the Fitbit “really motivated and validated my personal fitness activity”.

Fitbit Dashboard for Ultan O'Broin

Example of recorded activity: Ultan O’Broin’s (@usableapps) Fitbit dashboard

The exercise also provided observations on team dynamics in the workplace. Some chose not to wear the device whether for personal reasons, set-up issues, or lack of time; a reminder that although fun to try, such devices are not for everyone, and that’s OK.

The Fashion Perspective

Sarahi Mireles, User Experience Developer in Mexico, tried the Fitbit, but it didn’t fit her lifestyle, saying that “the interest is there [for wearables in general], but the design attraction is lacking.” Sarahi feels the ideal fitness tracker for her world is one with interchangeable looks, so she can wear it to work and to dinner. This typical user need is where fashion designers like Tory Burch offer value to technology developers, in this case partnering with Fitbit to make devices look more like beautiful bracelets and necklaces.

Tory Burch bracelet for Fitbit

Tory Burch for Fitbit metal-hinged bracelet

The Enterprise Employee Investment

Fitness plays a role in work/life balance, and health, happiness, and productivity are intrinsically linked. Overall, wellness contributes to the bottom line in a big way. Oracle is focused on such solutions too, researching user experiences that best engage, promote and support employee wellness.

Oracle HCM Cloud Wellness Page Prototype

Oracle HCM Cloud: Employee Wellness page prototype

Externally, at HCM World for example, Oracle offered analysts and customers complimentary Fitbit Zip devices for a voluntary wellness competition, with the winner receiving a donation to the American Cancer Society.

Karen Scipi (@karenscipi), Senior Usability Engineer, reflected that companies like Oracle, in facilitating the use of the fitness device, are placing importance on employee health and fitness as an “employee investment.” Healthier individuals are happier and therefore more productive employees.

Jeremy Ashley (@jrwashley), Vice President of Applications User Experience, already leads his team in embracing wellness within the workplace, participating in the American Heart Association Bay Walk, for example. He explained how encouraging and measuring activity during the working day, whether through walking meetings or using activity trackers, is a meaningful way to identify with the Oracle Applications Cloud User Experience strategy too.  

Jeremy described how sensors in activity trackers—along with smart watches, heads-up displays, smart phones, and beacons—are part of the Internet of Things: the ubiquitous connectivity of devices and the Cloud that shapes the daily experiences of today's enterprise users.

Your Data and the Enterprise Bottom Line

From the business perspective, employee activity data gathered from corporate wellness programs could lead to negotiated discounts and rewards for users from health care companies, for example; one possible incentive for enterprise adoption. Gamification, encouraging team members to engage and interact in collaborative and productive ways at work through challenges and competitions, is another strategy for driving uptake of workplace wellness programs.

Ultan O’Broin, Director of User Experience, who travels globally out of Ireland, noted that although he personally hasn’t experienced any negative reactions to wearable technology, the issue of privacy of the data gathered, especially in Europe, is a huge concern.  

Data accuracy, letting employees voluntarily opt in or out of fitness and wellness programs, privacy issues, and what to do with that data once it's collected all need to be addressed to reassure users and customers alike. Having HR involved in tracking, storing and using employee activity data is an enterprise dimension being explored.

User Experience Trends

Smart watch usage is on the rise, combining the ability to unobtrusively track activity with other glanceable UI capabilities. Analysts now predict a shift in usage patterns as smart watches begin to replace fitness bands, but time will tell in this fast-moving space.

Regardless of your wearable device of choice, and the fashion, personal privacy, employee data, and corporate deployment considerations we’ve explored, wearable technology and wellness programs are enterprise happenings that are here to stay. It’s time to get on board and think about how your business can benefit.

Perhaps your team could follow Misha’s great initiative and explore wearable technology user experience for yourselves? Let us know in the comments!

You can read more about Oracle Applications User Experience team’s innovation and exploration of wearable technology on the Usable Apps Storify social story.

Series of SaaS Implementation Workshops for EMEA Partners

We are pleased to announce a series of SaaS/Cloud Implementation Workshops. Oracle will organize several workshops between January and June 2015. It will be possible to join the…

We share our skills to maximize your revenue!
Categories: DBA Blogs

How Effective is Blogging for Software Developer Outreach?

Usable Apps - Wed, 2015-01-07 02:05

By Joe Dumas, Oracle Applications User Experience

When you blog, are you reaching the right audience? Is blogging an effective way to spread your message? These are some of the questions that the Oracle Applications User Experience (OAUX) Communications and Outreach team asked me to help answer.

The team made the Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook available for free on the web. They announced its availability on the Usable Apps blog.

Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook

Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook in use.

The eBook contains user experience design guidance and examples for building the Oracle Applications Cloud simplified UI. The target audience was developers building applications with Oracle ADF in the Oracle Java Cloud Service. To download the eBook (in a tablet-friendly format of choice), developers registered their name and email address on the eBook landing page.

To gather the information for analysis, I created a short online survey of questions and, using that database of thousands of email addresses, invited those registered users to complete the survey, without either obligation or incentive.

Of course, developers might have heard about the eBook in other ways, such as attending an OAUX workshop or visiting the Usable Apps website.

However, when I tabulated the survey results, more than half of the respondents had found out about the eBook from the blog.

Furthermore, I found that of those who used the book extensively, some 70% said they had first heard about it from the blog.

I also found that the survey respondents were mostly the very people for whom the book was intended. 70% of respondents made user interface design decisions for applications development teams, and all either worked for Oracle Partners or were applications development consultants for Oracle products.

I’ll explore in a further blog article what parts of the eBook developers found most useful, along with other insights. But as a taster, I can tell you now that we received positive comments again and again from developers who were “thrilled” with the content.

In these days of pervasive social media and other communications channels, and of ongoing debate about the effectiveness of different online platforms, these findings show that blogs are indeed an effective way to reach a target audience, especially one committed to finding ways to work faster and smarter.

Do you communicate with developers or other information technology professionals using a blog? How often do you blog, and why? Share your experience in the comments.

For more eBook goodness from OAUX, download the Oracle Applications Cloud UX Strategy and Trends eBook too. More details are on the Voice of User Experience (VoX) blog.

Change Subversion password in Jdeveloper

Darwin IT - Wed, 2015-01-07 01:50
On most projects I used TortoiseSVN for versioning with Subversion. It gives me the feeling of being more in control of my working copy. In the past I used JDeveloper's built-in versioning tool on a project, due to a lack of administrator rights for installing Tortoise. And although you need to give it some time to get used to it, the versioning is actually pretty good.

On my current project I gave it a go again, since the customer is used to do versioning from inside the IDE (be it Eclipse or JDeveloper).

One thing that I remember being hard to do was changing your Subversion password. I remember that I could not find it in JDeveloper 11g, and in JDeveloper 12c it was hard to find as well. So I started getting afraid that I would have to hack in the preference or property files. But it turns out that you can do it pretty easily from the tool, albeit in a less obvious place.

In JDeveloper, go to the Team menu and choose Versions.

It will open the Versions Browser or Navigator:
You can right click on the particular connection (in my example the blurred one):

And here you can change your password.
You can also add or remove repository connections, and import and export them. That is handy because SQL Developer also has a Subversion client; of course, since it's based on the same framework.


Some versioning features I'm happy about in JDeveloper:
  • The pending changes tool is neat: it gives a nice overview of pending changes, divided over tabs for outgoing (your own changes), incoming (updates from coworkers), and candidates (added files that are not versioned yet)
  • Integrated in project navigator in JDeveloper, where you can do changes on file, project and application level.
I'm less happy about or getting used to:
  • Versioning functionality is scattered over the IDE in different context menus. Not only because of the different versioning levels, but also because preferences are set in different places. See the Versions Navigator above, whereas I tend to look for these settings in the Preferences menu.
  • There are folders that you don't want to have versioned, for example: files generated at build like compiled classes, processes and jar/war/ear files. You want to be able to set the 'svn:ignore' property for those. But those folders and files tend to be invisible in the project navigator. So you have to add the parent folders of those files and folders as a resource, set the property, commit and remove them as a resource again. 
  • When I want to set keywords (like Id, author, etc.) for svn keyword replacement, JDeveloper doesn't know of the possible values. 
But for most jobs the SVN integration is impressively complete.

Happy versioning in JDeveloper!

Test locally first, then deploy to the cloud.

Eric Rajkovic - Tue, 2015-01-06 22:12
As recommended by Cheuk Chau on Medium, I read ‘How to Develop a Daily Writing Habit’ - I’ll try to put it into practice here with my second piece of advice, which should have been my #1:
Run a local copy of your application first, before testing it on your cloud instance.
While it’s simple to run directly on the Cloud, and we have a lot of samples available to get started, working locally first is the best way to be ready when trouble comes your way.
Samples are available from several sources. One challenge you will face: there are many ways to get code deployed, and you need to find the best option for you and your organization. Here are a few:
  1. JDeveloper workspace, using 11.1.1.7.1 build.
  2. ant build.xml (part of the SDK samples)
  3. maven pom.xml (part of the SDK samples)
  4. java cli (part of the SDK samples)
  5. the web console
The other important step is to check the output of the deployment and minimize the number of errors and warnings you see.
In one case, I started to look at some errors I was seeing first on my Cloud instance, assuming it was a new issue. After a few trials, I finally ran another test with my local setup and found that I was getting the same error: I had made a mistake while editing some of the source code for my application. Catching it with local testing first would have been cheaper for me.

For my next post, my advice is going to be of a different form: what you should not try. The ‘do not’ advice is as valuable as the ‘do’ advice. In some cases you have to learn from your own mistakes, but sometimes it’s cheaper to leverage someone else’s.

My Oracle Support Essentials Webcasts

Chris Warticki - Tue, 2015-01-06 17:58

My Oracle Support Essentials Webcasts

January 2015



Oracle Support delivers various seminars on Support Policies, Processes and Proactive Tools via live web conference and recorded sessions. Available to you as an Oracle Support user free of charge, these seminars, available in multiple languages and in different time zones, help ensure you optimize the value offered by Oracle Support.

Click on the corresponding session link to register for an upcoming seminar. Bookmark Note 553747.1 for your quick access to the latest live My Oracle Support Essentials webcasts schedule.

LIVE WEBCASTS

All times are listed as US Eastern / Central Europe / Singapore; follow the corresponding Register link for each session.

Customer User Administration
  • English: Jan 19, 09:00 PM / Jan 20, 03:00 AM / Jan 20, 10:00 AM
  • English: Jan 20, 04:00 AM / Jan 20, 10:00 AM / Jan 20, 05:00 PM
  • English: Jan 20, 11:00 AM / Jan 20, 05:00 PM / Jan 21, 12:00 AM
  • Spanish: Jan 20, 10:00 AM / Jan 20, 04:00 PM / Jan 20, 11:00 PM

Finding Answers in My Oracle Support
  • English: Jan 20, 09:00 PM / Jan 21, 03:00 AM / Jan 21, 10:00 AM
  • English: Jan 21, 04:00 AM / Jan 21, 10:00 AM / Jan 21, 05:00 PM
  • English: Jan 21, 01:00 PM / Jan 21, 07:00 PM / Jan 22, 02:00 AM
  • French: Jan 21, 10:00 AM / Jan 21, 04:00 PM / Jan 21, 11:00 PM
  • Portuguese: Jan 21, 07:30 AM / Jan 21, 01:30 PM / Jan 21, 08:30 PM
  • Spanish: Jan 21, 10:00 AM / Jan 21, 04:00 PM / Jan 21, 11:00 PM

Introduction to Premier Support
  • English: Jan 13, 09:00 PM / Jan 14, 03:00 AM / Jan 14, 10:00 AM
  • English: Jan 14, 04:00 AM / Jan 14, 10:00 AM / Jan 14, 05:00 PM
  • English: Jan 14, 01:00 PM / Jan 14, 07:00 PM / Jan 15, 02:00 AM
  • Spanish: Jan 14, 10:00 AM / Jan 14, 04:00 PM / Jan 14, 11:00 PM

My Oracle Support Basics
  • English: Jan 14, 09:00 PM / Jan 15, 03:00 AM / Jan 15, 10:00 AM
  • English: Jan 15, 04:00 AM / Jan 15, 10:00 AM / Jan 15, 05:00 PM
  • English: Jan 15, 01:00 PM / Jan 15, 07:00 PM / Jan 16, 02:00 AM
  • French: Jan 15, 10:00 AM / Jan 15, 04:00 PM / Jan 15, 11:00 PM
  • Portuguese: Jan 15, 07:30 AM / Jan 15, 01:30 PM / Jan 15, 08:30 PM
  • Spanish: Jan 15, 10:00 AM / Jan 15, 04:00 PM / Jan 15, 11:00 PM

Oracle Cloud Support
  • English: Jan 12, 09:00 PM / Jan 13, 03:00 AM / Jan 13, 10:00 AM
  • English: Jan 13, 04:00 AM / Jan 13, 10:00 AM / Jan 13, 05:00 PM
  • English: Jan 13, 01:00 PM / Jan 13, 07:00 PM / Jan 14, 02:00 AM
  • Spanish: Jan 13, 10:00 AM / Jan 13, 04:00 PM / Jan 13, 11:00 PM

Service Request Flow and Best Practices
  • English: Jan 26, 09:00 PM / Jan 27, 03:00 AM / Jan 27, 10:00 AM
  • English: Jan 27, 04:00 AM / Jan 27, 10:00 AM / Jan 27, 05:00 PM
  • English: Jan 27, 01:00 PM / Jan 27, 07:00 PM / Jan 28, 02:00 AM
  • French: Jan 27, 10:00 AM / Jan 27, 04:00 PM / Jan 27, 11:00 PM
  • Portuguese: Jan 27, 07:30 AM / Jan 27, 01:30 PM / Jan 27, 08:30 PM
  • Spanish: Jan 27, 10:00 AM / Jan 27, 04:00 PM / Jan 27, 11:00 PM

Support Configuration Based Services Essentials
  • English: Jan 27, 09:00 PM / Jan 28, 03:00 AM / Jan 28, 10:00 AM
  • English: Jan 28, 04:00 AM / Jan 28, 10:00 AM / Jan 28, 05:00 PM
  • English: Jan 28, 01:00 PM / Jan 28, 07:00 PM / Jan 29, 02:00 AM
  • Spanish: Jan 28, 10:00 AM / Jan 28, 04:00 PM / Jan 28, 11:00 PM

Understanding Support Identifier Groups
  • English: Jan 20, 12:00 AM / Jan 20, 06:00 AM / Jan 20, 01:00 PM
  • English: Jan 20, 07:00 AM / Jan 20, 01:00 PM / Jan 20, 08:00 PM
  • English: Jan 20, 01:00 PM / Jan 20, 07:00 PM / Jan 21, 02:00 AM
  • Spanish: Jan 20, 12:00 PM / Jan 20, 06:00 PM / Jan 21, 01:00 AM

Using the My Oracle Support Community Platform
  • English: Jan 21, 09:00 PM / Jan 22, 03:00 AM / Jan 22, 10:00 AM
  • English: Jan 22, 04:00 AM / Jan 22, 10:00 AM / Jan 22, 05:00 PM
  • English: Jan 22, 01:00 PM / Jan 22, 07:00 PM / Jan 23, 02:00 AM
  • Spanish: Jan 22, 10:00 AM / Jan 22, 04:00 PM / Jan 22, 11:00 PM
You may visit My Oracle Support for schedules of other live seminars. To see current local times around the world, visit world clock.

RECORDED TRAININGS

The following recorded My Oracle Support Essentials Webcasts can be viewed on demand. You may refer to Note 603505.1 for other available topics and recordings.


All recordings are in English and can be viewed on demand:
  • Customer User Administration
  • Finding Answers in My Oracle Support
  • Hardware Support Best Practices
  • My Oracle Support Basics
  • My Oracle Support New Features
  • Oracle Cloud Support
  • Service Request Flow and Best Practices
  • Support Configuration Based Services Essentials
  • Understanding Support Identifier Groups
  • Using the My Oracle Support Community Platform
If you have further questions, please contact us by submitting a question in Using My Oracle Support.

Oracle Support

Copyright © 2015, Oracle. All rights reserved.

Generating sample data for your APEX application

Dimitri Gielis - Tue, 2015-01-06 17:30
Do you need some sample data sometimes too? When I'm showing some new concepts at a customer or when I'm doing some training, I just want some "random" data. Well, it's not really random data; it's a specific type of data I need, depending on the column, and it should be text that is somewhat meaningful and not hard to read like "1RT3HFIY".
When my wife is doing design and lay-out and she needs text, she uses Lorem Ipsum. In fact it's built into the Adobe tools she uses, and the text looks readable (although it isn't). It would be so cool if, for example, SQL Developer had a "populate my table(s) with sample data" feature (even taking relationships into account).
Before, I used data from all_objects or generated data with dbms_random and a connect by clause for the amount of records I wanted, but it wasn't ideal. I also looked at scrambling my data, which is nice because existing relations stay intact, but it didn't really work nicely when I needed to scramble a lot of columns. There are also companies offering solutions for generating/scrambling data, but below I want to share what I'm currently doing.
Go to generatedata.com and enter the definition of your table and which kind of data you want per column.

Once the definition is there you can define how you want to receive the data. I found the SQL tab didn't really work well, so I use CSV as output.
Next, in Oracle SQL Developer, I right-click on my table, choose "Import data", and select the CSV. It automatically detects the format and maps it correctly to my table. Hit Next and you have your sample data available :)

You can also load the data straight from the Data Workshop in APEX.
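If you prefer to script the generation step yourself instead of using generatedata.com, a few lines of code will do. This is purely an illustrative sketch (the EMPLOYEES-style columns and the name lists are made up for the example); it produces readable, meaningful-looking rows rather than strings like "1RT3HFIY", and writes a CSV you can import via SQL Developer or the APEX Data Workshop:

```python
import csv
import random

# Hypothetical value pools for readable sample data (not from any real source).
FIRST = ["Anna", "Bram", "Carla", "Dirk", "Els"]
LAST = ["Peeters", "Janssens", "Maes", "Claes", "Willems"]

def sample_rows(n):
    """Generate n sample rows for a hypothetical EMPLOYEES table."""
    rows = []
    for emp_id in range(1, n + 1):
        first = random.choice(FIRST)
        last = random.choice(LAST)
        rows.append({
            "ID": emp_id,                                        # sequential primary key
            "NAME": f"{first} {last}",                           # readable, not '1RT3HFIY'
            "EMAIL": f"{first.lower()}.{last.lower()}@example.com",
            "SALARY": random.randrange(2000, 8000, 100),         # rounded to 100s
        })
    return rows

# Write a CSV ready for SQL Developer's "Import data" or the APEX Data Workshop.
with open("employees.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["ID", "NAME", "EMAIL", "SALARY"])
    writer.writeheader()
    writer.writerows(sample_rows(50))
```

The same idea extends to dates, lookup codes, and foreign keys: draw them from small pools so the generated rows still respect your relationships.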


Categories: Development

Career Day – Oracle Database Administrator

Bobby Durrett's DBA Blog - Tue, 2015-01-06 16:10

I will be talking at my daughter’s high school for career day on Monday, explaining my job as an Oracle Database Administrator.  Wish me luck!

The funny thing is that no one understands what Oracle DBAs do, unless they are one or work closely with one.  I have a feeling that my talk is going to fall flat, but if it helps one of the students in any way it will be worth it.

To me the best thing about being an Oracle DBA is that you can do a pretty interesting and technically challenging job, and companies that are not technology centric will still hire you to do it.  I’ve always been interested in computer technology but have worked for non-technical organizations my entire career – mainly a non-profit ministry and a food distribution company.  Neither company makes computers or sells software!

My other thought is how available computer technology is to students today.  Oracle, in one of the company’s more brilliant moves, made all of its software available for download so students can try out the very expensive software for free.  Plus all the manuals are available online.  What is it like to grow up as a student interested in computer technology in the age of the internet?  I can’t begin to compare it to my days in the 1980s when I was in high school and college.  Did we even have email?  I guess we must have but I can’t remember using it much.  Today a student who owns a laptop and has an internet connection has a world of technology at their fingertips far beyond what I had at their age.

Hopefully I won’t bore the students to tears talking about being an Oracle DBA.  They probably still won’t know what it really is after I’m done.  But at least they will know that such a job exists, and maybe that will be helpful to them.

– Bobby

P.S.  There were over 100 students there.  They were pretty polite, with only a little talking.  Here is a picture with me on the left, my daughter in the center, and a coworker who also spoke at the career day on the right.

careerday

Categories: DBA Blogs

Updated Native Install/Clustering Whitepapers

Anthony Shorten - Tue, 2015-01-06 15:04

The Implementing Oracle ExaLogic and/or Oracle WebLogic Clustering (Doc Id: 1334558.1) and Native Installation Oracle Utilities Application Framework (Doc Id: 1544969.1) whitepapers have been updated with the latest information and advice from customers and partners.

The updates include:

  • Configuring additional parameters for UTF8 sites
  • Compatibility settings for various versions of Oracle WebLogic
  • Java Flight Control configuration for Java 7.

The whitepapers are available from My Oracle Support.

January 27th: Acorn Paper Products Sales Cloud Reference Forum

Linda Fishman Hoyle - Tue, 2015-01-06 14:42

Join us for an Oracle Sales Cloud Customer Reference Forum on Tuesday, January 27, 2015, with Acorn Paper Products' Jake Weissberg, Director of IT and David Karr, Chief Operating Officer. In this session, Weissberg and Karr will share why Oracle Sales Cloud was the right choice to optimize sales team productivity and effectiveness, while gaining executive visibility to the pipeline. They also will talk about how they were able to streamline their sales-to-order process with Oracle E-Business Suite.

Founded by Jack Bernstein in 1946, Acorn Paper Products Company started by selling job lot (over-run) boxes with five employees in an 11,000-square-foot warehouse. Today, Acorn, which is the end-user distribution arm of parent holding company Oak Paper Products Company, is a fourth-generation, family-owned business with more than 500,000 square feet of warehouse space, operating four specialty product divisions: creative services, janitorial and sanitary products, wine packaging, and agricultural packaging.

You can register now to attend the live Forum on Tuesday, January 27, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time and learn more from Acorn Paper Products directly.

Count (*)

Jonathan Lewis - Tue, 2015-01-06 12:04

The old chestnut about comparing speeds of count(*), count(1), count(non_null_column) and count(pk_column) has come up in the OTN database forum (at least) twice in the last couple of months. The standard answer is to point out that they will all execute the same code, and that the corroborating evidence for that claim is that, for a long time, the 10053 trace files have had a rubric reporting: CNT – count(col) to count(*) transformation or, for an even longer time, that the error message file (oraus.msg for the English Language version) has had an error code 10122 which produced (from at least Oracle 8i, if not 7.3):


SQL> execute dbms_output.put_line(sqlerrm(-10122))
ORA-10122: Disable transformation of count(col) to count(*)

But the latest repetition of the question prompted me to check whether a more recent version of Oracle had an even more compelling demonstration, and it does. I extracted the following lines from a 10053 trace file generated by 11.2.0.4 (and I know 10gR2 is similar) in response to selecting count(*), count(1) and count({non-null column}) respectively:


Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(1)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(SAMPLE_ID)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

As you can see, Oracle has transformed all three select lists into count(*), hiding the transformation behind the original column alias. As an outsider’s proof of what’s going on, I don’t think you could get a more positive indicator than that.

 


No Discernible Growth in US Higher Ed Online Learning

Michael Feldstein - Tue, 2015-01-06 11:34

By 2015, 25 million post-secondary students in the United States will be taking classes online. And as that happens, the number of students who take classes exclusively on physical campuses will plummet, from 14.4 million in 2010 to just 4.1 million five years later, according to a new forecast released by market research firm Ambient Insight.

- Campus Technology, 2011

On the positive side, Moody’s notes that the U.S. Department of Education projects a 20-percent growth in master’s degrees and a 9-percent growth in associate degrees, opportunities in both online education and new certificate programs, and a rising earnings premium for those with college degrees.

- Chronicle of Higher Ed, 2014

Q.  How likely would it be that this fraction [% students taking online courses] would grow to become a majority of students over the next five years? A [from institutional academic leaders]. Nearly two-thirds responded that this was “Very likely,” with an additional one-quarter calling it “Likely.” [That’s almost 90% combined]

- Grade Change, Babson Survey 2013

More than two-thirds of instructors (68 percent) say their institutions are planning to expand their online offerings, but they are split on whether or not this is a good idea (36 percent positive, 38 percent negative, 26 percent neutral).

- Inside Higher Ed 2014

Still, the [disruptive innovation] theory predicts that, be it steam or online education, existing consumers will ultimately adopt the disruption, and a host of struggling colleges and universities — the bottom 25 percent of every tier, we predict — will disappear or merge in the next 10 to 15 years.

- Clayton Christensen in NY Times 2013

You could be forgiven for assuming that the continued growth of online education within US higher ed was a foregone conclusion. We all know it’s happening; the question is how to adapt to the new world.

But what if the assumption is wrong? Based on the new official Department of Education / NCES IPEDS data for the Fall 2013 term, for the first time there has been no discernible growth in the number of postsecondary students taking at least one online course in the US.

From 2002 through 2013 the most reliable measure of this metric has been the Babson Survey Research Group (BSRG) annual reporting. While there are questions about the absolute numbers, due to differing definitions of what makes a course “online”, the year-over-year growth numbers have been quite reliable and are the most-referenced numbers available. Starting last year, with the Fall 2012 term, the official IPEDS data began tracking online education, and last week the Fall 2013 data came out – allowing year-over-year comparisons.

I shared the recent overall IPEDS data in this post, noting the following:

By way of comparison, it is worth noting the similarities to the Fall 2012 data. The percentage data (e.g. percent of a sector taking exclusive / some / no DE courses) has not changed by more than 1% (rounded) in any of the data. This unfortunately makes the problems with IPEDS data validity all the more important.

It will be very interesting to see the Babson Survey Research Group data that is typically released in January. While Babson relies on voluntary survey data, as opposed to mandatory federal data reporting for IPEDS, their report should have better longitudinal validity. If this IPEDS data holds up, then I would expect the biggest story for this year’s Babson report to be the first year of no significant growth in online education since the survey started 15 years ago.

I subsequently found out that BSRG is moving this year to use the IPEDS data for online enrollment. So we already have the best data available, and there is no discernible growth. Nationwide there are just 77,493 more students taking at least one online class, a 1.4% increase.

Y-o-Y Analysis

Why The Phrase “No Discernible Growth”?

Even though there was a nationwide increase of 77,493 students taking at least one online course, representing 1.4% growth, there is too much noise in the data for this to be considered real growth. Even with the drop in total enrollment, the percentage of students taking at least one online course only changed from 26.4% to 27.1%.

Just take one school – Suffolk County Community College – which increased by roughly 21,600 student enrollments taking at least one online course from 2012 to 2013 due to a change in how it reports data, not from actual enrollment increases. More than a quarter of the annual nationwide increase can be attributed to this one reporting change[1]. These and similar issues are why I use the phrase “no discernible growth”: the year-over-year changes are now lower than the ability of our data collection methods to accurately measure.
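A back-of-envelope calculation shows how these figures hang together (this is just arithmetic on the numbers quoted above, not additional IPEDS data): a 77,493-student increase described as 1.4% growth implies a base of roughly 5.5 million students taking at least one online course in Fall 2012, against which a single ~21,600-enrollment reporting artifact is clearly material.

```python
# Back-of-envelope check, using only the figures quoted in the text.
increase = 77_493      # additional students taking at least one online course
growth_rate = 0.014    # the reported 1.4% year-over-year growth

implied_base_2012 = increase / growth_rate   # implied Fall 2012 online headcount
suffolk_artifact = 21_600                    # Suffolk's reporting change, not real growth

print(f"implied Fall 2012 online enrollment: {implied_base_2012:,.0f}")
print(f"share of increase from one reporting change: {suffolk_artifact / increase:.0%}")
```

The second line reproduces the "more than a quarter" claim: one school's reporting change accounts for roughly 28% of the entire nationwide increase.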

Combine Babson and IPEDS Growth Data

While we should not directly compare absolute numbers, it is reasonable to combine the BSRG year-over-year historical growth data (2003 – 2012) with the new IPEDS data (2012 – 2013).

Y-o-Y Growth Chart

One thing to notice is that this is really a long-term trend of declining growth in online enrollment. With the release of last year’s BSRG report they specifically called out this trend.

The number of additional students taking at least one online course continued to grow at a rate far in excess of overall enrollments, but the rate was the lowest in a decade.

What has not been acknowledged or fully understood is the significance of this rate hitting zero, at least within the bounds of the noise in data collection.

Implications

Think of the implications if online education has stopped growing in US higher education. Many of the assumptions underlying institutional strategic plans and ed tech vendor market data are based on continued growth in online learning. It is possible that market changes will lead back to year-over-year growth, but for now those assumptions might be wrong.

Rather than focusing just on this year, the more relevant questions concern the future, particularly if you look at the longer-term trends. Have we hit a plateau in the natural level of online enrollment? Will the trend continue to the point where online enrollments actually drop? Will online enrollments bottom out and start to rise again once the newer generation of tools and pedagogical approaches, such as personalized learning or competency-based education, move beyond pilot programs?

I am not one to discount the powerful effect that online education has had and will continue to have in the US, but the growth appears to be at specific schools rather than broad-based increases across sectors. Southern New Hampshire, Arizona State University, Grand Canyon University and others are growing their online enrollments, but University of Phoenix, DeVry University and others are dropping.

One issue to track is the general shift from for-profit enrollment to not-for-profit enrollment, even though the overall rate of online course-taking has remained relatively stable within each sector. There are approximately 80,000 fewer students taking at least one online course at for-profit institutions, while there are approximately 157,000 more students in the same category at public and private not-for-profit institutions.
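The net effect of this sector shift can be sanity-checked against the nationwide figure (a sketch; both inputs are the approximate numbers quoted in this paragraph):

```python
for_profit_change = -80_000      # fewer students taking an online course at for-profits
not_for_profit_change = 157_000  # more students in the same category at not-for-profits

# The sector changes roughly net out to the nationwide increase of ~77,493.
net = for_profit_change + not_for_profit_change
print(net)  # 77000
```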

I suspect the changes will continue to happen in specific areas – number of working adults taking courses, often in competency-based programs, at specific schools and statewide systems with aggressive plans – but it also appears that just making assumptions of broad-based growth needs to be reconsidered.

Update: Please note that the data release is new and these are early results. If I find mistakes in the data or analysis that changes the analysis above, I’ll share in an updated post.

  1. Russ Poulin and I documented these issues in a separate post showing the noise is likely in the low hundreds of thousands.

The post No Discernible Growth in US Higher Ed Online Learning appeared first on e-Literate.

Another Echo Hack from Noel

Oracle AppsLab - Tue, 2015-01-06 10:44

Noel (@noelportugal) spent a lot of time during his holidays geeking out with his latest toy, Amazon Echo. Check out his initial review and his lights hack.

For a guy whose name means Christmas, seems it was a logical leap to use Alexa to control his Christmas tree lights too.

Let’s take a minute to shame Noel for taking portrait video. Good, moving on. Oddly, I found out about this from a Wired UK article about Facebook’s acquisition of Wit.ai, an interesting nugget in its own right.

If you’re interested, check out Noel’s code on GitHub. Amazon is rolling out another batch of Echos to those who signed up back when the device was announced in November.

How do I know this? I just accepted my invitation and bought my very own Echo.

With all the connected home announcements coming out of CES 2015, I’m hoping to connect Alexa to some of the IoT gadgets in my home. Stretch goal for sure, given all the different ecosystems, but maybe this is finally the year that IoT pushes over the adoption hump.

Fingers crossed. The comments you must find.

Performance Problems with Dynamic Statistics in Oracle 12c

Pythian Group - Tue, 2015-01-06 09:55

I’ve been running some tests recently with the new Oracle 12.1.0.2 In-Memory option and ran into an unexpected performance problem. Here is a test case:

create table tst_1 as
with q as (select 1 from dual connect by level <= 100000)
select rownum id, 12345 val, mod(rownum,1000) ref_id  from q,q
where rownum <= 200000000;

Table created.

create table tst_2 as select rownum ref_id, lpad(rownum,10, 'a') name, rownum || 'a' name2
from dual connect by level <= 1000;

Table created.

begin
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'TST_1',
method_opt       => 'for all columns size 1',
degree => 8
);
dbms_stats.gather_table_stats(
ownname          => user,
tabname          =>'TST_2',
method_opt       => 'for all columns size 1'
);
end;
/
PL/SQL procedure successfully completed.

alter table tst_1 inmemory;

Table altered.

select count(*) from tst_1;

COUNT(*)
----------
200000000

Waiting for in-memory segment population:

select segment_name, bytes, inmemory_size from v$im_segments;

SEGMENT_NAME         BYTES INMEMORY_SIZE

--------------- ---------- -------------

TST_1           4629463040    3533963264

Now let’s make a simple two table join:

select name, sum(val) from tst_1 a, tst_2 b where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:00:00.17

Query runs pretty fast. Execution plan has the brand new vector transformation

Execution Plan
----------------------------------------------------------
Plan hash value: 213128033

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                          |     1 |    54 |  7756  (21)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION        |                          |       |       |            |          |
|   2 |   LOAD AS SELECT                  | SYS_TEMP_0FD9D66FA_57B2B |       |       |            |          |
|   3 |    VECTOR GROUP BY                |                          |     1 |    24 |     5  (20)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED    | :KV0000                  |     1 |    24 |     5  (20)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL            | TST_2                    |     1 |    20 |     4   (0)| 00:00:01 |
|   6 |   HASH GROUP BY                   |                          |     1 |    54 |  7751  (21)| 00:00:01 |
|*  7 |    HASH JOIN                      |                          |     1 |    54 |  7750  (21)| 00:00:01 |
|   8 |     VIEW                          | VW_VT_377C5901           |     1 |    30 |  7748  (21)| 00:00:01 |
|   9 |      VECTOR GROUP BY              |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  10 |       HASH GROUP BY               |                          |     1 |    13 |  7748  (21)| 00:00:01 |
|  11 |        KEY VECTOR USE             | :KV0000                  |   200K|  2539K|  7748  (21)| 00:00:01 |
|* 12 |         TABLE ACCESS INMEMORY FULL| TST_1                    |   200M|  1716M|  7697  (21)| 00:00:01 |
|  13 |     TABLE ACCESS FULL             | SYS_TEMP_0FD9D66FA_57B2B |     1 |    24 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter("NAME2"='50a')
   7 - access("ITEM_5"=INTERNAL_FUNCTION("C0") AND "ITEM_6"="C2")
  12 - inmemory(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))
       filter(SYS_OP_KEY_VECTOR_FILTER("A"."REF_ID",:KV0000))

Note
-----
   - vector transformation used for this statement

After having such impressive performance I’ve decided to run the query in parallel:

select /*+ parallel(8) */ name, sum(val) from tst_1 a, tst_2 b
where a.ref_id = b.ref_id and name2='50a'
group by name;

Elapsed: 00:01:02.55

Query elapsed time suddenly jumped from 0.17 seconds to almost 1 minute and 3 seconds, although the second execution runs in 0.6 seconds.
The new plan is:

Execution Plan
----------------------------------------------------------
Plan hash value: 3623951262

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |          |     1 |    29 |  1143  (26)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                     |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)               | :TQ10001 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                    |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                      |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                   | :TQ10000 |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY                 |          |     1 |    29 |  1143  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|*  7 |        HASH JOIN                    |          |   200K|  5664K|  1142  (26)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |         JOIN FILTER CREATE          | :BF0000  |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|*  9 |          TABLE ACCESS FULL          | TST_2    |     1 |    20 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
|  10 |         JOIN FILTER USE             | :BF0000  |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
|  11 |          PX BLOCK ITERATOR          |          |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|* 12 |           TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  1069  (21)| 00:00:01 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"."REF_ID"="B"."REF_ID")
   9 - filter("NAME2"='50a')
  12 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"."REF_ID"))

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 8 because of hint

We can see a Bloom filter instead of the key vector, but this is not the issue. The problem comes from the “dynamic statistics used: dynamic sampling (level=AUTO)” note.
In the 10046 trace file I’ve found nine dynamic sampling queries, one of which was this:

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
  optimizer_features_enable(default) no_parallel result_cache(snapshot=3600)
  */ SUM(C1)
FROM
 (SELECT /*+ qb_name("innerQuery")  */ 1 AS C1 FROM (SELECT /*+
  NO_VECTOR_TRANSFORM ORDERED */ "A"."VAL" "ITEM_1","A"."REF_ID" "ITEM_2"
  FROM "TST_1" "A") "VW_VTN_377C5901#0", (SELECT /*+ NO_VECTOR_TRANSFORM
  ORDERED */ "B"."NAME" "ITEM_3","B"."REF_ID" "ITEM_4" FROM "TST_2" "B" WHERE
  "B"."NAME2"='50a') "VW_VTN_EE607F02#1" WHERE ("VW_VTN_377C5901#0"."ITEM_2"=
  "VW_VTN_EE607F02#1"."ITEM_4")) innerQuery

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1     43.92      76.33          0          5          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     43.92      76.33          0          5          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64     (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  RESULT CACHE  56bn7fg7qvrrw1w8cmanyn3mxr (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   SORT AGGREGATE (cr=0 pr=0 pw=0 time=8 us)
         0          0          0    HASH JOIN  (cr=0 pr=0 pw=0 time=4 us cost=159242 size=2600000 card=200000)
 200000000  200000000  200000000     TABLE ACCESS INMEMORY FULL TST_1 (cr=3 pr=0 pw=0 time=53944537 us cost=7132 size=800000000 card=200000000)
         0          0          0     TABLE ACCESS FULL TST_2 (cr=0 pr=0 pw=0 time=3 us cost=4 size=9 card=1)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  asynch descriptor resize                        1        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  CSS initialization                              1        0.00          0.00
  CSS operation: action                           1        0.00          0.00
  direct path write temp                       6267        0.02         30.37
********************************************************************************

Vector transformation is disabled, the inefficient table order is fixed by the ORDERED hint, and we are waiting for a hash table to be built on the huge TST_1 table.
The dynamic statistics feature has been greatly improved in Oracle 12c, with support for joins and group by predicates; this is why we get such a join during parse time. The new functionality is described in the “Dynamic Statistics (previously known as dynamic sampling)” section of the document Understanding Optimizer Statistics with Oracle Database 12c.

Let’s make a simpler test:

select /*+ parallel(2) */ ref_id, sum(val) from tst_1 a group by ref_id;

Execution Plan
----------------------------------------------------------
Plan hash value: 2527371111

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                   |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                  |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE                    |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH                 | :TQ10000 |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY               |          |  1000 |  9000 |  7949  (58)| 00:00:01 |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR          |          |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS INMEMORY FULL| TST_1    |   200M|  1716M|  4276  (21)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)
   - Degree of Parallelism is 2 because of hint

We can see a “dynamic statistics used” note again. It’s a simple query without predicates on a single table with pretty accurate statistics. From my point of view, there is no reason for dynamic sampling here at all.
Automatic dynamic sampling was introduced in 11g Release 2. A description of this feature can be found in this document: Dynamic sampling and its impact on the Optimizer.
“From Oracle Database 11g Release 2 onwards the optimizer will automatically decide if dynamic sampling will be useful and what dynamic sampling level will be used for SQL statements executed in parallel. This decision is based on size of the tables in the statement and the complexity of the predicates”.
It looks like the algorithm has changed in 12c, and dynamic sampling is now triggered in a broader set of use cases.
This behavior can be disabled at the statement, session or system level using the fix control for bug 7452863. For example:
ALTER SESSION SET "_fix_control"='7452863:0';

Summary

Dynamic statistics has been enhanced in Oracle 12c, but it can lead to longer parse times.
Automatic dynamic statistics is used more often in 12c, which can increase parse time in more cases than before.

Categories: DBA Blogs

Troubleshooting a Multipath Issue

Pythian Group - Tue, 2015-01-06 09:37

Multipathing allows you to configure multiple paths from servers to storage arrays, providing I/O failover and load balancing. Linux uses the device mapper kernel framework to support multipathing.

In this post I will explain the steps taken to troubleshoot a multipath issue, which should provide a glimpse into the tools and technology involved. The problem was reported on a RHEL 6 system where backup software complained that the device from which /boot is mounted does not exist.

Following is the device. You can see the device name is a wwid.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
[..]
/dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1
198337 61002 127095 33% /boot

File /dev/mapper/3600508b1001c725ab3a5a49b0ad9848ep1 is missing under /dev/mapper.

# ll /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Jul 9 2013 control
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpatha -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathap1 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathb -> ../dm-0
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathc -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 9 2013 mpathcp2 -> ../dm-5
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvroot -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 9 2013 vgroot-lvswap -> ../dm-7

From /etc/fstab, it is found that the UUID of the device is specified.

UUID=6dfd9f97-7038-4469-8841-07a991d64026 /boot ext4 defaults 1 2

From blkid, we can see the device associated with the UUID. The blkid command prints the attributes of all block devices in the system.

# blkid
/dev/mapper/mpathcp1: UUID="6dfd9f97-7038-4469-8841-07a991d64026" TYPE="ext4"

Remounting the /boot mount point shows the user friendly name /dev/mapper/mpathcp1.

# df
Filesystem 1K-blocks Used Available Use% Mounted on
[..]
/dev/mapper/mpathcp1 198337 61002 127095 33% /boot

So far, we can see that the system boots with the wwid as the device name, but later the device name is converted into the user friendly name. In the multipath configuration, user_friendly_names is enabled.

# grep user_friendly_names /etc/multipath.conf
user_friendly_names yes

As per Red Hat documentation,

“When the user_friendly_names option in the multipath configuration file is set to yes, the name of a multipath device is of the form mpathn. For the Red Hat Enterprise Linux 6 release, n is an alphabetic character, so that the name of a multipath device might be mpatha or mpathb. In previous releases, n was an integer.”

As the system mounts the right disk after booting up, the problem should be with the user friendly name configuration in the initramfs. Extracting the initramfs file and checking the multipath configuration shows that the user_friendly_names parameter is enabled.

# cat initramfs/etc/multipath.conf
defaults {
user_friendly_names yes

Now the interesting point is that /etc/multipath/bindings is missing in the initramfs, but the file is present on the system. The /etc/multipath/bindings file maps each wwid to its alias.

# cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
mpathc 3600508b1001c725ab3a5a49b0ad9848e
mpatha 36782bcb0005dd607000003b34ef072be
mpathb 36782bcb000627385000003ab4ef14636

An initramfs can be created using the dracut command.

# dracut -v -f test.img 2.6.32-131.0.15.el6.x86_64 2> /tmp/test.out

Building a test initramfs shows that a newly created initramfs includes /etc/multipath/bindings.

# grep -ri bindings /tmp/test.out
I: Installing /etc/multipath/bindings

So this is what is happening:
When the system boots up, multipath in the initramfs looks for /etc/multipath/bindings to find the aliases for user friendly names. It cannot find the file, so it falls back to the wwid. After the system boots up, /etc/multipath/bindings is present and the device names are changed to user friendly names.

It looks like the /etc/multipath/bindings file was created after kernel installation and initrd generation. This might have happened because the multipath configuration was done after kernel installation. Even if the system root device is not on multipath, it is possible for multipath to be included in the initrd; for example, this can happen if the system root device is on LVM. This should be the reason why multipath.conf was included in the initramfs but /etc/multipath/bindings was not.

To solve the issue, we can rebuild the initrd and restart the system. Re-installing the existing kernel or installing a new kernel would also fix the issue, as the initrd would be rebuilt in both cases.

# dracut -v -f 2.6.32-131.0.15.el6.x86_64
Categories: DBA Blogs

Access Oracle GoldenGate JAgent XML from browser

DBASolved - Tue, 2015-01-06 09:26

There are many different ways of monitoring Oracle GoldenGate; I have posted about many of these in earlier blog posts. Additionally, I have talked about the different ways of monitoring Oracle GoldenGate at a few conferences as well (the slides can be found on my slideshare site if desired). In both my blog and presentations I highlight many different approaches; yet I forgot one that I think is really cool! This one was shown to me by an Oracle Product Manager before Oracle Open World 2014 back in October (yes, I’m just now getting around to writing about it).

This approach is using the Oracle GoldenGate Manager (port) to view a user friendly version of the XML that is passed by the Oracle Monitor Agent (JAgent) to monitoring tools like Oracle Enterprise Manager or Oracle GoldenGate Director.  This approach will not work with older versions of the JAgent.

Note: The Oracle Monitor Agent (JAgent) used in this approach is version 12.1.3.0.  It can be found here.  

Note: There is a license requirement to use this approach since it is part of the Management Pack for Oracle GoldenGate. Contact your local sales rep for more info.

After the Oracle Monitor Agent (JAgent) is configured for your environment, the XML can be accessed via any web browser. Within my test environment, I have servers named OEL and FRED. The URLs needed to view this cool feature are:

OEL:
http://oel.acme.com:15000/groups

FRED:
http://fred.acme.com:15000/groups

As you can see, by using the port number (15000) of the Manager process, I can directly tap into the information being fed to the management tools for monitoring. The “groups” directory places you at the top level of the monitoring stack. Clicking on a process group takes you down into the process group and shows additional items being monitored by the JAgent.

In this example, you are looking at the next level down for the process EXT on OEL.  At this point, you can see what is available: monitoring points, messages, status changes and associated files for the extract process.

OEL:
http://oel.acme.com:15000/groups/EXT


Digging further into the stack, you can see what files are associated with the process.  (This is an easy way to identify parameter files without having to go directly to the command line).

OEL:
http://oel.acme.com:15000/groups/EXT/files

OEL:
http://oel.acme.com:15000/groups/EXT/files/dirprm



As you can see, the new Oracle Monitor Agent (JAgent) provides another way of viewing your Oracle GoldenGate environment without needing direct access to the server. Although this is a cool way of looking at an Oracle GoldenGate environment, it does not replace traditional monitoring approaches.

Cool Tip: The OS tool “curl” can be used to dump similar XML output to a file (shown to me by the product team).

$ curl --silent http://oel.acme.com:15000/registry | xmllint --format -

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="/style/registry.xsl"?>
<registry xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://oel.acme.com:15000/schema/registry.xsd">
<process name="PMP" type="4" status="3"/>
<process name="EXT" type="2" mode="1" status="3"/>
<process name="MGR" type="1" status="3"/>
</registry>
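Since the Manager port serves plain XML, the registry can also be consumed programmatically. Below is a hypothetical sketch using Python's standard library; the XML literal is a trimmed copy of the sample output above (in a real script you would fetch it from http://oel.acme.com:15000/registry with urllib instead):

```python
import xml.etree.ElementTree as ET

# Sample registry XML as returned by the Manager port above (schema
# attributes trimmed; normally fetched via urllib.request.urlopen).
registry_xml = """<?xml version="1.0"?>
<registry>
  <process name="PMP" type="4" status="3"/>
  <process name="EXT" type="2" mode="1" status="3"/>
  <process name="MGR" type="1" status="3"/>
</registry>"""

root = ET.fromstring(registry_xml)
statuses = {p.get("name"): p.get("status") for p in root.findall("process")}
print(statuses)  # {'PMP': '3', 'EXT': '3', 'MGR': '3'}
```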

In my opinion, many of the complaints about the original version of the JAgent have been addressed with the latest release of the Oracle Monitor Agent (JAgent). Give it a try!
 
Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Securing Big Data - Part 1

Steve Jones - Tue, 2015-01-06 09:00
As Big Data and its technologies such as Hadoop head deeper into the enterprise so questions around compliance and security rear their heads. The first interesting point in this is that it shows the approach to security that many of the Silicon Valley companies that use Hadoop at scale have taken, namely pretty little really.  It isn't that protecting information has been seen as a massively
Categories: Fusion Middleware

It's Spring for Oracle ACM API's

Darwin IT - Tue, 2015-01-06 08:24
Before the holiday season I was working on a service to receive e-mails in BPM using the UMS email adapter, then process the attachments and the body and upload them to the Oracle ACM case the email was meant for.

I won't go into too much detail here, since there are some articles on the use of the ACM API's, like those by Niall Commiskey.

Unfortunately, until now, there are no WSDL/SOAP or REST services available on the ACM-API's, as they are on the Workflow Task API's.

However, it is not so hard to make the API's available as services. The trick is to wrap them in a set of Java beans: one class with methods that do the work, plus 'request and response beans' for the methods' input parameters and responses.

A few years ago I wrote an article on using Spring components in SOA Suite 11g. This approach is still perfectly usable for SOA/BPM 12c and gives you a WSDL interface on the API's in next to no time.

There is one remark on the API's, though: it concerns the creation of the ACM Stream Service, or actually the creation of the BPMServiceClientFactory to get the context.

In Niall's blog you'll read that you need to set the following context properties:

Map properties = new HashMap();
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.CLIENT_TYPE,
               BPMServiceClientFactory.REMOTE_CLIENT);
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.EJB_PROVIDER_URL,
               "t3://localhost:7001");
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.EJB_SECURITY_CREDENTIALS,
               cPwd);
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.EJB_SECURITY_PRINCIPAL,
               cUser);
caseMgtAPI.mServiceClientFactory =
    BPMServiceClientFactory.getInstance(properties, "default", null);
Since in my case the service runs on the same server as the BPEL/BPM/ACM process engine, there's no need to create a connection (and thus provide a URL) or to authenticate as EJB_SECURITY_PRINCIPAL. So I found that the following suffices:
Map properties = new HashMap();
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.CLIENT_TYPE,
               WorkflowServiceClientFactory.REMOTE_CLIENT);
properties.put(IWorkflowServiceClientConstants.CONNECTION_PROPERTY.EJB_INITIAL_CONTEXT_FACTORY,
               "weblogic.jndi.WLInitialContextFactory");
BPMServiceClientFactory factory = BPMServiceClientFactory.getInstance(properties, null, null);
I would expect that 'WorkflowServiceClientFactory.REMOTE_CLIENT' should be 'WorkflowServiceClientFactory.LOCAL_CLIENT', but I needed to verify that. The code above works in my case.
Update 12-1-2015: When using LOCAL_CLIENT I get the exception:
oracle.bpm.client.common.BPMServiceClientException: Cannot lookup Local EJB from a client. Try annotating it in the referred EJB. Caused by: oracle.bpm.client.common.BPMServiceClientException: Cannot lookup Local EJB from a client. Try annotating it in the referred EJB.
So apparently you need to use REMOTE_CLIENT.

You do need to authenticate with the BPM user that is allowed to query the case, upload documents, etc., as follows:
context = bpmFactory.getBPMUserAuthenticationService().authenticate(userName, userPassword.toCharArray(), null);
Hope this helps a little further in creating services on ACM.

Who is a DBA Leader?

Pakistan's First Oracle Blog - Tue, 2015-01-06 06:00
Sitting behind a big mahogany table, smoking a Cuban cigar, glaring at the person sitting across, one hand holding the receiver of a black phone to the right ear and the other hand holding a mobile phone to the left ear may be the image of a DBA boss in some white-elephant government outfit, but it certainly cannot work in an organization made up of professionals like database administrators. And if such an image, or a similar one, is working in any company, then that company is not great. It's as simple as that.






So who is a DBA leader? The obvious answer is the person who leads a team of database administrators. Sounds simple enough, but it takes a lot to be a true leader. There are many DBA bosses and DBA managers at various layers, but being a DBA leader is very different. If you are a DBA leader, then you should be kinda worshiped. If you work in a team which has a DBA leader, then you are a very lucky person.

A DBA leader is one who leads by example. He walks the talk. He is a doer and not just a talker. He inspires, motivates, and energizes the team members to follow him and then exceed his example. For instance, when a client asks to fix a performance issue in the RAC cluster, the DBA leader first jumps in at the problem and starts collaborating with the team. He analyzes the problem and presents his potential solutions, or at least a line of action. He engages the team and listens to them. He won't just assign the problem to somebody, disappear, and come back at 5pm asking about status. A DBA leader is not superhuman, so he will get problems of which he has no knowledge. He will research the problem with the team and will learn and grow with them. That approach electrifies the team.

A DBA leader is a grateful person. He cannot seem to thank his team enough for doing a great job. When, under his able leadership, the team reaches a solution, then regardless of his own contribution, the DBA leader makes his team look awesome. The team will cherish the fact that the solution was reached through the deep insights of the DBA leader, and yet the leader gave the credit to them.

A DBA leader is the one who is always there. He falls before the team falls, and doesn't become aloof when things don't go well. Things will go wrong and crises will come. In such situations, responsibility is shared, and a DBA leader doesn't shirk from it. In the team of a DBA leader, there are no scapegoats.

A leader of DBAs keeps both the big picture and the details in perspective at the same time. He provides the vision and lives the vision from the front. He learns and then he leads. He does all of this and does it superbly, and that is why he is the star and such a rare commodity, and that is why he is the DBA LEADER.

Categories: DBA Blogs