
Feed aggregator

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-08-26 15:35

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, new content and more. We are also working to release another service update in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-08-26 15:35

It has been a few weeks since my last blog post, but don't worry: I am still interested in blogging about Oracle 12c database security, and indeed I have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

The Social Spotlight is Going to Shine on #OOW14

Linda Fishman Hoyle - Tue, 2014-08-26 15:21

A Guest Post by Mike Stiles, Senior Content Manager for Oracle Social Cloud

Want to see an example of “busy” and “everywhere”? Then keep an eye on the Oracle Social Cloud team as they head into this year’s Oracle OpenWorld. Famous for their motto of “surely we can tackle even more,” Oracle’s top socializers will be all over Moscone, from the Social Intelligence Center in CX Central to 16+ social track sessions to live demos to comprehensive social coverage. Oracle Social Cloud will be trumpeting the social business imperative with live, interactive displays and inspiring speakers from Oracle, General Motors, Chevrolet, FleishmanHillard, Nestle, Polaris, Banc of California, CMP.LY and more.

Catch as many of these highlights as you can. You know how we social people love for people to follow us:

  • Social Intelligence Center: Swing by the Oracle SRM “Social Intelligence Center” in CX Central in Moscone West. We don’t know if it will literally make you smarter, but it is a real world demonstration of how the Oracle Social Cloud’s Social Relationship Management (SRM) platform serves up big data visualizations. Specifically, we’ll be watching the web and social chatter around #OOW14 using advanced analytics and deeper listening. You can see the new graphical representations of social data and global activity, get some great ideas for establishing a Social Intelligence Center at your brand, or see firsthand how the Oracle SRM platform is a mean, modernizing, social management, streamlining machine. And don’t forget to tweet about what you see.
  • “Financial Services: Why Social Media Makes Cents” with Tom Chernaik of CMP.LY; Kevin Salas, Banc of California; and Angela Wells of Oracle Social. Monday, Sept. 29 @ 1:45p.m. [CON8561]
  • “A Sky-High Overview: Oracle Social Cloud” with Meg Bear, Group Vice President of Oracle Social. Tuesday, Sept. 30 @ 10 and 11:45a.m. [TGS8068]
  •  “Show Me the Money: Building the Business Case for Social” with Holly Spaeth of Polaris; Michelle Lapierre of Marriott; Meghan Blauvelt, Nestle; and Angela Wells of Oracle Social. Wednesday, Oct. 1 @ 11:45a.m. [CON8551]
  • “Social Relationship Management: Lessons Learned from the Olympics, Super Bowl, Grammys and More” with Jamie Barbour of Chevrolet; Melissa Schreiber of FleishmanHillard; and Erika Brookes of Oracle Social. Wednesday, Oct. 1 @ 1p.m. [CON8349]
  • “Global Command Centers: Why Social Engagement is Key to Connect with Customers” with Rebecca Harris and Whitney Drake of General Motors; Alan Chumley of FleishmanHillard; and Tara Roberts of Oracle Social. Wednesday, Oct. 1 @ 2:15p.m. [CON8350]
  • “Whose Customer is this Anyway? Rise of the CCO, the CDO and the “New” CMO” with Jeb Dasteel, Oracle’s Chief Customer Officer (CCO); other C-Suite executives; and Erika Brookes of Oracle Social. Wednesday, Oct. 1 @ 3:45p.m. [CON8457]
  • “Leveraging Social Identity to Build Better Customer Relations” with Andrew Jones of the Altimeter Group. Thursday, Oct. 2 @ 11:30a.m. [CON8348]
  • “When Social Data = Opportunity: Leveraging Social Data to Target Custom Audiences” with Michelle Lapierre of Marriott; and Cory Treffiletti of Oracle. Thursday, Oct. 2 @ 12:45p.m. [CON8554]

Want the most thorough coverage of Oracle Social’s OpenWorld activities imaginable? Then by all means make sure you friend and follow us on all our channels, including Twitter, Facebook, Google+, and LinkedIn. And subscribe to our daily Social Spotlight podcast!

We want YOU to contribute to our channels and share the top sights, sounds and takeaways that you’re getting from OpenWorld. Register today and we'll see you in San Francisco September 28 - October 2, 2014!

Functional Overview of the new PeopleSoft Fluid User Experience: The Home Page

PeopleSoft Technology Blog - Tue, 2014-08-26 14:50

PeopleTools 8.54 is a landmark release for Oracle/PeopleSoft, and you are starting to see a lot of information on it, both in this blog and elsewhere.  One of the most important aspects of this release is the new Fluid User Experience.  This is a broad-ranging subject, so you will see posts from a functional perspective (this post), a developer’s perspective, and announcements about Fluid applications that are being delivered by PeopleSoft.

Perhaps you’ve heard about how the Fluid user experience provides support for mobile applications.  While that is certainly true, Fluid offers much more than that.  What the Fluid UX really provides is the ability to access PeopleSoft applications across a variety of form factors from smart phones to tablets to desktops/laptops.  Fluid applications present a common user experience on a variety of devices regardless of screen size.  These applications are efficient and attractive as well, and offer the kind of user experience the modern work force is expecting.  So no matter how your users access PeopleSoft, they will be presented with the same intuitive user experience.  This post is the first of a series covering the main features that you will see in Fluid applications when you install them or if you develop some yourself.  We’ll also cover how to get started with Fluid/PeopleTools 8.54, and how new Fluid application pages will work with your existing applications.

We’ll start today with perhaps the most fundamental feature of the Fluid UX: the Home Page.  This page provides a base or launch pad for users to navigate to their essential work.  Home pages are designed for specific roles, so they contain all the essentials for each role without extraneous menus or content that might distract users.  In this way, Fluid Home Pages are conceptually similar to current home pages or dashboards, but Fluid home pages employ the new responsive UI that renders well on different form factors.

 Let’s look at the main features of the Home page.

The first thing you’ll notice about Home Pages is that they contain a variety of Tiles.  Tiles can serve as navigation mechanisms, but may also convey dynamic information.  Tiles are similar in purpose to pagelets, but are not as interactive.  They are responsive, however, and can automatically change size and position to accommodate different form factors.

Now let’s take a look at the Home Page header, which contains several useful features.  Central is the Home Page drop down menu.  This takes the place of tabs, and enables users to move among all the home pages that they use.  Users may serve in a variety of roles in an enterprise, and they may therefore have more than one Home Page—one for each role they play.  For example, a person may be a manager, but they are also an employee, and as such they have different activities and tasks they perform in both those roles.  They would likely have different home pages for those different roles. 

 Next is the PeopleSoft Search widget.  This provides for a search-centric navigation paradigm, and enables users to search from almost anywhere within their PeopleSoft system.  In addition, with the new PeopleSoft search, users can search across pillars and retrieve results from all their PeopleSoft content.  The search results are even actionable, so users can often complete a task right from the search results page.

Notifications are a handy mechanism that lets users know when there are tasks requiring their attention.  The Notifications widget displays the number of items requiring attention.  When the user clicks the widget a window displays the sum of all notifications from all applications to which the user has access.  The user can act on those items directly from the Notifications window.

Next is the Actions widget.  This menu is configurable, but one of the main actions available is Personalizations. This takes the user to a page where they can add or remove tiles from a Home Page, delete Home Pages, or even create new home pages and configure them.

Finally, and perhaps most importantly, we have the Navigation Bar widget.  This opens the Nav Bar, which enables users to get anywhere in their PeopleSoft system.  The Nav Bar is flexible, configurable, powerful, and intuitive.

The Nav Bar is a rich topic in its own right, and will be covered in a separate blog post in the near future.  In fact, the Fluid user experience is a large subject, so we’ll be posting many more articles describing its features in greater depth.  We’ll also provide a taste of developing Fluid applications.

If you would like more information on the Fluid UX (and everything PeopleSoft) please see the PeopleSoft Information Portal.

We will also be covering the Fluid UX in great depth in several sessions at Oracle Open World.  Come see us at the conference!  You’ll not only acquire useful information, but you can talk with us personally and see live demos of these features.  Here are a few sessions in particular that cover the Fluid UX:

  • A Closer Look at the New PeopleSoft Fluid User Experience (CON7567)
  • PeopleSoft Mobility Deep Dive: PeopleSoft Fluid User Interface and More (CON7588)
  • PeopleSoft Fluid User Interface: A Modern User Experience for PeopleSoft HCM on Any Device (CON7667)
  • PeopleSoft PeopleTools 8.54: PeopleSoft Fluid User Interface in Action (CON7595)
These sessions cover a wide variety of subjects from the functional (for end users and SMEs) to the technical (for developers).

Session Schedule Information OpenWorld 2014 San Francisco

Andrejus Baranovski - Tue, 2014-08-26 12:55
I have received my session schedule information for OpenWorld 2014. This year's event is going to be quite busy, with three sessions. Below you can check the session titles along with times and venues. Looking forward to meeting you in San Francisco!


Session ID: CON2623
Session Title: Oracle ADF Development and Deployment to Oracle Cloud
Venue / Room: Moscone South - 270
Date and Time: 10/1/14, 15:30 - 16:15

Session ID: CON3745 (together with Danilo Schmiedel)
Session Title: Oracle Mobile Suite and Oracle Adaptive Case Management: A Strong Combination to Empower People
Venue / Room: Moscone West - 3018
Date and Time: 10/1/14, 16:45 - 17:30

Session ID: CON2495
Session Title: Data Caching Strategies for Oracle Mobile Application Framework
Venue / Room: Marriott Marquis - Nob Hill C/D
Date and Time: 10/2/14, 12:00 - 12:45

Rittman Mead in the OTN TOUR Latin America 2014

Rittman Mead Consulting - Tue, 2014-08-26 11:28

Another OTN Tour Latin America has come and gone. This is the most important technical event in the region, visiting 12 countries with more than 2,500 attendees over two weeks.

This year Rittman Mead was part of the OTN Tour in Buenos Aires (Argentina) and Montevideo (Uruguay) presenting about ODI and OGG.

We started in Buenos Aires on August 11 for the first day of the OTN Tour in Argentina. I talked about the integration of ODI and OGG 12c, explaining all the technical details of how to configure and implement it. Most of the attendees hadn't worked with these tools (but were curious about them), so I personalised the presentation a little, giving them first an introduction to ODI and OGG.

As the vice-president of the UYOUG (Uruguayan Oracle User Group) I'm part of the organisation of the OTN Tour in my country, so we needed to come back that same Monday to adjust some last details and have everything ready for the event in Uruguay.

Most of the speakers came on Wednesday, and we spent a great day with Michelle Malcher, Kamran Agayev, Hans Forbrich and Mike Dietrich. First, we went to lunch at the Mercado del Puerto, an emblematic place that has lots of “parrillas” (a kind of barbecue), and then we gave them a little city tour which included a visit to El Cerro de Montevideo. Finally we visited one of the most important wineries in Uruguay, Bodega Bouza, where we had a wine tour followed by an amazing tasting of a variety of wines, including Tannat, which is our insignia grape. You know…it is important to be relaxed before a conference :-)

The first day of the event in Uruguay was dedicated exclusively to technical sessions, and on the second day we had the hands-on labs. The conference covered a wide range of topics, from BI Mobile and e-Business Suite to how to upgrade to Oracle Database 12c, Oracle Virtualization and Oracle RAC. All the sessions were packed with attendees.

The next day, we had labs with PCs with the software already installed, but attendees could also come with their own laptops and install all the software needed for the hands-on. We had the famous RAC Attack! led by Kamran, with the help of the ninjas Michelle, Hans and Nelson Calero, and an Oracle Virtualization lab by Hernan Petitti that ran for 7 hours!

It was a great event. You can see more pictures here and download the presentations here. The attendees as well as all the speakers were really happy with the result, and so were we.

This is only the beginning for Rittman Mead in Latin America. There are a lot of things to come, so stay tuned!

Categories: BI & Warehousing

Community Source Is Dead

Michael Feldstein - Tue, 2014-08-26 11:21

As Phil noted in yesterday’s post, Kuali is moving to a for-profit model, and it looks like it is motivated more by sustainability pressures than by some grand affirmative vision for the organization. There has been a long-term debate in higher education about the value of “community source,” which is a particular governance and funding model for open source projects. This debate is arguably one of the reasons why Indiana University left the Sakai Foundation (as I will get into later in this post). At the moment, Kuali is easily the most high-profile and well-funded project that still identifies itself as Community Source. The fact that this project, led by the single most vocal proponent for the Community Source model, is moving to a different model strongly suggests that Community Source has failed.

It’s worth taking some time to talk about why it has failed, because the story has implications for a wide range of open-licensed educational projects. For example, it is very relevant to my recent post on business models for Open Educational Resources (OER).

What Is Community Source?

The term “Community Source” has a specific meaning and history within higher education. It was first (and possibly only) applied to a series of open source software projects funded by the Mellon Foundation, including Sakai, Kuali, Fedora, and DSpace (the latter two of which have merged). As originally conceived, Community Source was an approach that was intended to solve a perceived resource allocation problem in open source. As then-Mellon Foundation Associate Program Officer Chris Mackie put it,

For all that the OSS movement has produced some runaway successes, including projects like Perl, Linux, and Mozilla Firefox, there appear to be certain types of challenges that are difficult for OSS to tackle. Most notably, voluntaristic OSS projects struggle to launch products whose primary customers are institutions rather than individuals: financial or HR systems rather than Web servers or browsers; or uniform, manageable desktop environments rather than programming languages or operating systems. This limitation may trace to any of several factors: the number of programmers having the special expertise required to deliver an enterprise information system may be too small to sustain a community; the software may be inherently too unglamorous or uninteresting to attract volunteers; the benefits of the software may be too diffuse to encourage beneficiaries to collaborate to produce it; the software may be too complex for its development to be coordinated on a purely volunteer basis; the software may require the active, committed participation of specific firms or institutions having strong disincentives to participate in OSS; and so on. Any of these factors might be enough to prevent the successful formation of an OSS project, and there are many useful types of enterprise software—including much of the enterprise software needed by higher education institutions—to which several of them apply. In short, however well a standard OSS approach may work for many projects, there is little reason to believe that the same model can work for every conceivable software project.

This is not very different from the argument I made recently about OER:

In the early days of open source, projects were typically supported through individual volunteers or small collections of volunteers, which limited the kinds and size of open source software projects that could be created. This is also largely the state of OER today. Much of it is built by volunteers. Sometimes it is grant funded, but there typically is not grant money to maintain and update it. Under these circumstances, if the project is of the type that can be adequately well maintained through committed volunteer efforts, then it can survive and potentially thrive. If not, then it will languish and potentially die.

The Mellon Foundation’s answer to this problem was Community Source, again as described by Chris Mackie:

Under this new model, several institutions contract together to build software for a common need, with the intent of releasing that software as open source. The institutions form a virtual development organization consisting of employees seconded from each of the partners. This entity is governed cooperatively by the partners and managed as if it were an enterprise software development organization, with project and team leads, architects, developers, and usability specialists, and all the trappings of organizational life, including reporting relationships and formal incentive structures. During and after the initial construction phase, the consortial partners open the project and invite in anyone who cares to contribute; over time the project evolves into a more ordinary OSS project, albeit one in which institutions rather than individual volunteers usually continue to play a major role.

A good friend of mine who has been involved in Mellon-funded projects since the early days describes Community Source more succinctly as a consortium with a license. Consortial development is a longstanding and well understood method of getting things done in higher education. If I say to you, “Kuali is a consortium of universities trying to build an ERP system together,” you will probably have some fairly well-developed notions of what the pros and cons of that approach might be. The primary innovation of Community Source is that it adds an open source license to the product that the consortium develops, thus enabling another (outer) circle of schools to adopt and contribute to the project. But make no mistake: Community Source functions primarily like a traditional institutional consortium. This can be best encapsulated by what Community Source proponents refer to as the Golden Rule: “If you bring the gold then you make the rules.”[1]

Proponents of Community Source suggested even from the early days that Community Source is different from open source. Technically, that’s not true, since Community Source projects produce open source software. But it is fair to say that Community Source borrows the innovation of the open source license while maintaining traditional consortial governance and enterprise software management techniques. Indiana University CIO and Community Source proponent Brad Wheeler sometimes refers to Community Source as “the pub between the Cathedral and the Bazaar” (a reference to Eric Raymond’s seminal essay on open source development). More recently, Brad and University of Michigan’s Dean of Libraries James Hilton codified what they consider to be the contrasts between open source and Community Source in their essay “The Marketecture of Community,” which Brad elaborates on in his piece “Speeding Up On Curves.” They represent different models of procuring software in a two-by-two matrix, where the dimensions are “authority” and “influence”:

Note that both of these dimensions are about the degree of control that the purchaser has in deciding what goes into the software. It is fundamentally a procurement perspective. However, procuring software and developing software are very different processes.

A Case Study in Failure and Success

The Sakai community and the projects under its umbrella provide an interesting historical example to see how Community Source has worked and where it has broken down. In its early days, Indiana University and the University of Michigan were primary contributors to Sakai and very much promoted the idea of Community Source. I remember a former colleague returning from a Sakai conference in the summer of 2005 commenting, “That was the strangest open source conference I have ever been to. I have never seen an open source project use the number of dollars they have raised as their primary measure of success.” The model was very heavily consortial in those days, and the development of the project reflected that model. Different schools built different modules, which were then integrated into a portal. As Conway’s Law predicts, this organizational decision led to a number of technical decisions. Modules developed by different schools were of differing quality and often integrated with each other poorly. The portal framework created serious usability problems like breaking the “back” button on the browser. Some of the architectural consequences of this approach took many years to remediate. Nevertheless, Sakai did achieve a small but significant minority of U.S. higher education market share, particularly at its peak a few years ago. Here’s a graph showing the growth of non-Blackboard LMSs in the US as of 2010, courtesy of data from the Campus Computing Project:

Meanwhile, around 2009, Cambridge University built the first prototype of what was then called “Sakai 3.” It was intended to be a ground-up rewrite of a next-generation system. Cambridge began developing it themselves as an experiment out of their Centre for Applied Research in Educational Technologies, but it was quickly seized upon by NYU and several other schools in the Sakai community as interesting and “the future.” A consortial model was spun up around it, and then spun up some more. Under pressure from Indiana University and University of Michigan, the project group created multiple layers of governance, the highest of which eventually required a $500K institutional commitment in order to participate. Numbers of feature requirements and deadlines proliferated, while project velocity slowed. The project hit technical hurdles, principally around scalability, that it was unable to resolve, particularly given ambitious deadlines for new functionality. In mid-2012, Indiana University and University of Michigan “paused investment” in the project. Shortly thereafter, they left the project altogether, taking with them monies that they had previously committed to invest under a Memorandum of Understanding. The project quickly collapsed after that, with several other major investors leaving. (Reread Phil’s post from yesterday with this in mind and you’ll see the implications for measuring Kuali’s financial health.)

Interestingly, the project didn’t die. Greatly diminished in resources but freed from governance and management constraints of the consortial approach, the remaining team not only finally re-architected the platform to solve the scalability problems but also have managed seven major releases since that implosion in 2012. The project, now called Apereo OAE, has returned to its roots as an academic (including learning) collaboration platform and is not trying to be a direct LMS replacement. It has even begun to pick up significant numbers of new adoptees—a subject that I will return to in a future post.

It’s hard to look at the trajectory of this project and not conclude that the Community Source model was a fairly direct and significant cause of its troubles. Part of the problem was the complex negotiations that come along with any consortium. But a bigger part, in my opinion, was the set of largely obsolete enterprise software management attitudes and techniques that come along as a not-so-hidden part of the Community Source philosophy. In practice, Community Source is essentially a project management approach focused on maximizing the control and influence of the IT managers whose budgets are paying for the projects. But those people are often not the right people to make decisions about software development, and the waterfall processes that they often demand in order to exert that influence and control (particularly in a consortial setting) are antithetical to current best practices in software engineering. In my opinion, Community Source is dead primarily because the Gantt Chart is dead.

Not One Problem but Two

Community Source was originally developed to address one problem, which was the challenge of marshalling development resources for complex (and sometimes boring) software development projects that benefit higher education. It is important to understand that, in the 20 years since the Mellon Foundation began promoting the approach, a lot has changed in the world of software development. To begin with, there are many more open source frameworks and better tools for developing good software more quickly. As a result, the number of people needed for software products (including voluntaristic open source projects) has shrunk dramatically—in some cases by as much as an order of magnitude. Instructure is a great example of a software platform that reached first release with probably less than a tenth of the money that Sakai took to reach its first release. But also, we can reconsider that “voluntaristic” requirement in a variety of ways. I have seen a lot of skepticism about the notion of Kuali moving to a commercial model. Kent Brooks’ recent post is a good example. The funny thing about it, though, is that he waxes poetic about Moodle, which has a particularly rich network of for-profit companies upon which it depends for development, including Martin Dougiamas’ company at the center. In fact, in his graphic of his ideal world of all open source, almost every project listed has one or more commercial companies behind it without which it would either not exist or would be struggling to improve:

BigBlueButton is developed entirely by a commercial entity. The Apache web server gets roughly 80% of its contributions from commercial entities, many of which (like IBM) get direct financial benefit from the project. And Google Apps aren’t even open source. They’re just free. Some of these projects have strong methods for incorporating voluntaristic user contributions and taking community input on requirements, while others have weak ones. But across that spectrum of practices, community models, and sustainability models, they manage to deliver value. There is no one magic formula that is obviously superior to the others in all cases. This is not to say that shifting Kuali’s sustainability model to a commercial entity is inevitably a fine idea that will succeed in enabling the software to thrive while preserving the community’s values. It’s simply to say that moving to a commercially-driven sustainability model isn’t inherently bad or evil. The value (or lack thereof) will all depend on how the shift is done and what the Kuali-adopting schools see as their primary goals.

But there is also a second problem we must consider—one that we’ve learned to worry about in the last couple of decades of progress in the craft of software engineering (or possibly a lot earlier, if you want to go back as far as the publication of The Mythical Man Month). What is the best way to plan and execute software development projects in light of the high degree of uncertainty inherent in developing any software with non-trivial complexity and a non-trivial set of potential users? If Community Source failed primarily because consortia are hard to coordinate, then moving to corporate management should solve that problem. But if it failed primarily because it reproduces failed IT management practices, then moving to a more centralized decision-making model could exacerbate the problem. Shifting the main stakeholders in the project from consortium partners to company investors and board members does not require a change in this mindset. No matter who the CEO of the new entity is, I personally don’t see Kuali succeeding unless it can throw off its legacy of Community Source IT consortium mentality and the obsolete, 1990′s-era IT management practices that undergird it.

  1. No, I did not make that up. See, for example, https://chronicle.com/article/Business-Software-Built-by/49147

The post Community Source Is Dead appeared first on e-Literate.

Accelerate your Transformation to Digital

WebCenter Team - Tue, 2014-08-26 09:07
by Dave Gray, Entrepreneur, Author & Consultant

Digital Transformation – The re-alignment of, or new investment in, technology and business models to more effectively engage consumers or employees, digitally

We are in the process of building a global digital infrastructure that quantifies, connects, analyzes and maps the physical and social world we live in. This has already had massive impact on the way we live and work, and it will continue to do so, in ways which are impossible to predict or imagine.

If you work in a company that was imagined, designed and created before this digital infrastructure was developed, there is no question that your business will be impacted. If it hasn’t happened yet, it is simply a matter of time.

This digital transformation is a global phenomenon that is affecting every individual, and every business, on the planet. Everything that can be digital, will be digital. That means every bit of information your company owns, processes or touches. There is no product or service you provide that won’t be affected. The question is, what does it mean to you, and what should you be doing about it?

When technology advances, strategies and business models shift. It’s not simply a matter of taking what you do today and “making it digital.” That’s the difference between translation and transformation. Your products and services don’t need to be “translated” into the digital world. They must be transformed.

Take Kodak, for example. As long as there have been photographs, people have used them to store and share memories of people and events. That hasn’t changed since Eastman Kodak was founded by George Eastman in 1888.

But when technology advances, strategies and business models shift. Kodak was at the leading edge of the research that doomed its own business model. In 1975, Kodak invented the digital camera (although it was kept secret at the time). Kodak engineers even predicted, with startling accuracy, when the digital camera would become a ubiquitous consumer technology. So Kodak had a major advantage over every other company in the world. They were able to predict the death of film and they had about a 15-year head start over everyone else.

Unfortunately, the business of film was so profitable, and the reluctance (or fear) of disrupting its own business was so great, that Kodak had difficulty focusing on anything else.

In 2010, two software engineers, Kevin Systrom and Mike Krieger, founded a company called Instagram, which they sold to Facebook two years later for approximately $1 billion.

When technology advances, strategies and business models shift.

Let’s take another example, Nokia, who commanded 40 percent of the mobile phone market as recently as 2007. Nokia was the clear and unequivocal market leader at that time. People were talking on phones in 2007 and they are still doing that today. But what happened? The phone became a mobile computer, a platform for digital services. And in a digital world, a phone is only as good as the digital services you can get on that phone. So Nokia, who used to compete only with other phone manufacturers, suddenly found itself in competition with computer manufacturers, who already had a strongly-developed ecosystem, strong relationships with application developers, and a compelling business model to offer them. Nokia was unable to make the leap from phone maker to computer maker.

When technology advances, strategies and business models shift.

It doesn’t matter what you make or what service you provide. If it’s a product, it will more and more come to resemble a computer. If it’s a service, it will increasingly become a digital service.

The shift doesn’t happen everywhere all at once. It’s happened with music and books, it’s happening with phones, TV and film, the hospitality and transportation industry, it is just starting to happen with cars, it will soon be happening in health care and other systems that tend to resist change because of bureaucracy and legal regulation.

So what can you do? How can you avoid being the next Kodak or the next Nokia? 

You need to take a different approach to innovation.

How do you manage your innovation initiatives?

Many companies use something called an innovation funnel. 

The idea of the funnel is that you solicit ideas from all over the company, and sometimes even from outside the company. Ideas come in at the top of the funnel and you have several gates that they have to go through, and they get funded at varying levels as they pass the criteria at each gate. If you do it right, the theory is that the best ideas come out and these are the ones that we implement.

The problem with the funnel approach is that nobody puts ideas in at the top. Nothing comes in. Why? Because people look at that funnel and what they see is a sophisticated machine for killing their ideas. The design of the funnel does not encourage people to generate ideas, because basically it’s a suggestion box with a shredder inside. It’s an idea killer. It doesn’t excite people. It doesn’t encourage creativity.

People think: I'm going to have to write a business plan. I'm going to have to do a bunch of market research. I'm going to have to make a lot of spreadsheets and projections. And this is on top of my regular job. Only to have my idea most probably killed in the end. You are saying to people “We welcome your ideas” but what people are thinking is “You don't really welcome my ideas. You welcome my ideas so you can kill them.”

So what happens? Nothing comes into the top of the funnel, and the funnel manager goes, “Why? Why is nobody giving me their ideas?” This is why it's a problem. Because if anyone really has an idea, what are they going to do? They are going to leave, because in most cases, it’s actually easier to do your innovation out in the world today, even with no funding, than it is to do it inside a modern industrial company.

So what's an alternative to the funnel? Connected companies, like Amazon and Google, do it differently. They create a level playing field where anyone in the company can generate ideas and they actually are allowed to spend time working on them, without having to prove anything. They are trusted to innovate.

At Google they call it 20 percent time. For one day a week, or the equivalent, you get the opportunity  to play with your ideas. This kind of approach has to be supported by a culture where people actually really do have passion, energy and things they want to do. And it has to be recognized and supported throughout the organization as something that’s important. 

If you want to do this, efficiency has to take a hit. You can't optimize everything for efficiency and also do experiments. You just can't. Experiments by their very nature are inefficient, because you don’t really know what you’re doing. If you know in advance what the outcome will be, by definition it’s not an experiment.

This approach is less like a funnel and more like a terraced garden. It’s actually similar to a funnel, in a way, but it’s flipped.

Think of it this way. You've got to make space for these ideas to start.

There's no learning in the funnel approach. You don't learn much from making a business plan. You learn when you do experiments, when you put things into the world and start interacting with customers. You learn when you do things. What do you learn from making a business plan? Business plans are science fiction.

As a leader you have to make space for these experiments to happen. 

And some of these experiments – maybe even a lot of them – will yield interesting results.

And you want to have a way to distribute budget and resources to the most promising experiments.

So you set a certain percentage of the budgets throughout the organization, and you say, this money is for funding promising experiments. You can't spend it on operations or improving efficiency. You have to spend it on new ideas that you think are promising. That’s the second level in the terraced garden.

Layer one gives everybody a little bit of elbow room to innovate and experiment. Layer two offers a way for management to pick the plants that look the most interesting and give them a little bit of care and feeding. It might be that you just give someone some extra time. 

The third and fourth layers are where the most promising ideas, the ones that might be worth making big bets on, emerge. You might have a few of these that you think might actually generate the next major stage of growth for the company.

The good thing is that these big bets are all based on things that are already working. This is how venture capitalists invest. They usually don't invest in a really good business plan, because people who make good business plans often don't make good entrepreneurs. People who make good entrepreneurs are people who are out there doing things already.

Amazon's recommendation engine started out as a little weed at the bottom of the garden. Gmail at Google started out as a small 20-percent-time project, somebody was doing it in their spare time.

Venture capitalists invest in companies that are already working. They may not be profitable yet, but they have customers, they have promise, they have figured something out, they have learned something. This is also how companies like Google and Amazon do it.

They don't invest in business plans. They don't say it's got to be a billion dollar opportunity or it's not worth our time. They say, let's try it. Get out there and try things with customers because the billion dollar opportunity may start as a $10,000 opportunity. It may start small.

What are Google’s big bets right now? Google Glass. The self-driving car.

What are the big bets for Amazon? Kindle. Kindle's a big bet. There are lots of dollars going into that one. Amazon web services: the project that says, we're going to take our own infrastructure and we're going to sell it. Even to competitors. We'll sell to anybody.

What are some of the experiments that failed? Amazon tried auctions. And it seems to make sense. “eBay does it. We've got a big audience. We can try it.” They tried it. They tried and they failed.

But what really happened with Amazon auctions? They learned. They realized something about their customers. Amazon is very customer focused. They realized that their customers don't want to wait. They don't want to wait a week to see if they bought something. Amazon customers want it now.

One of the interesting experiments is something called “unique phrases inside this book.” Someone had a hypothesis: “I think some books have not just unique words but unique phrases. If a phrase shows up in one book, you might want to know other books that have that phrase in it. It might be interesting.” Someone's working on that as a little experiment.

What happens if that experiment fails? Those people don't get fired. They are extremely valuable because of what they have learned. They find other teams. They get recruited. Think of a swarm of startups where people are recruiting each other all the time. It's like Silicon Valley inside of Amazon.

This kind of change is not simple or easy. But it’s clear that the future will not be simply more of the past. It will require bold thinking, creativity, and a new approach to innovation. Innovation can’t be the job of an R&D department. It has to be everyone’s job. And if you’re in senior management, it’s your job to create the conditions that will make innovation possible.

You can hear more from Dave on how to transform your business to digital in our Digital Business Thought Leaders webcast "The Digital Experience: A Connected Company’s Sixth Sense".

SQL Server Replication Quick Tips

Pythian Group - Tue, 2014-08-26 07:56

There is a time in every SQL Server DBA's career when a mail comes in with a “replication is not working, please check” message. This article is intended to provide quick tips on how to handle common replication errors and performance problems in a one-way transactional replication topology.

Oh boy, there is a data problem:

You check the replication monitor and get a:

“Transaction sequence number: 0x0003BB0E000001DF000600000000, Command ID: 1”

The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)

Note the sequence number, as it will be used in the following scripts. The command ID is also important to note, because it is not necessarily the whole sequence number that has issues; the problem might be tied to just one command.

Go to the distribution database and run the following command to get the list of articles involved in this issue:

select * from dbo.MSarticles
where article_id in (
select article_id from MSrepl_commands
where xact_seqno = 0x0003BB0E000001DF000600000000)

To get the whole list of commands, you can run the query below:

exec sp_browsereplcmds
@xact_seqno_start = '0x0003BB0E000001DF000600000000',
@xact_seqno_end = '0x0003BB0E000001DF000600000000'

With this last query you can get to the exact command that is failing (by searching for the command number in the command ID column).

You will notice that a transactional replication will typically (depending on the setup) use insert, delete and update stored procedures to replicate the data, so the command you will see here will look something like:

{CALL [sp_MSdel_dboMyArticle] (118)}

That is the stored procedure generated to process delete statements over the dbo.MyArticle table, and in this case it is trying to delete ID 118. Based on the error reported, you will now realize that the issue is that replication is trying to delete ID 118 from MyArticle and the row is not there at the subscriber, so it is trying to delete a non-existent record.
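
Before picking one of the options below, it can help to confirm the situation on both sides first. This is just a minimal sketch using the example article and key from above (dbo.MyArticle, ID 118); substitute your own article and key value:

-- run at the publisher: is the row still there?
select * from dbo.MyArticle where ID = 118

-- run at the subscriber: confirm the row is really missing
select * from dbo.MyArticle where ID = 118

If the row still exists at the publisher but not at the subscriber, re-inserting it at the subscriber (option 1 below) lets the queued delete command succeed; if it is gone from both, skipping the command (option 2) is usually the cleaner fix.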

Options:

  1. You can check the publisher for this record and manually insert it at the subscriber; this will allow the replicated command to succeed and will fix the issue.
  2. You can skip the command. For this specific example you can skip it, as there is no need to delete something that has already been deleted, by removing the command from the MSrepl_commands table. (Beware: only do this when you know what you are doing, as manually removing transactions can result in an unstable replication.) In this example you would use something like:
    Delete from MSrepl_commands
    where xact_seqno = 0x0003BB0E000001DF000600000000 and command_id = 1
  3. Reinitialize. This option is the least popular; you should try to fix the issue before doing this. However, if after skipping the command you still get new errors everywhere, something definitely went wrong and there is no easy way to guarantee that your subscription is up to date and stable. This can be an indicator that someone or something messed around with the data: there was some type of modification at the subscription, and this is causing issues with the replication. Remember that a one-way transactional replication is usually intended to provide a copy of the data so it can be queried; no modification should be made to the data at the subscriber, as it won't replicate back to the publisher.

Query timeouts:

After checking the replication monitor you get a message like:

Query timeout expired
The process is running and is waiting for a response from the server
Initializing…

and then terminating with this error…
Agent ‘MyAgent’ is retrying after an error, YY retries attempted

This can be due to several reasons:

  • Your transaction is taking a long time and needs some tuning. If your transaction is touching too much data or is using a bad query plan, it can result in a long-running query; check your TSQL and see if the execution plan is optimal.
  • There is a problem with the network. If you normally don't have this issue and it just happened out of the blue, check the network; a network failure or a saturated endpoint can increase transfer times, affecting your replication.
  • Server performance. Either the publisher or the subscriber can have a performance problem; too much CPU or memory usage can eventually impact a replication transaction, causing it to time out.
  • The query just needs some more time to complete. If this is the case, you can tweak the timeout setting to give the transaction more time so it can process properly. To do this (a T-SQL alternative is sketched after these steps):
  1. Right click the Replication folder
  2. Click Distributor Properties and select General
  3. Click ‘Profile Defaults’
  4. Choose ‘Distribution Agents’ on left
  5. Click ‘New’ to create a new default agent profile
  6. Choose ‘Default Agent Profile’ from the list displayed (to copy it)
  7. Pick a name for your new profile and update the QueryTimeout value in the right column
  8. Save
  9. Choose to use this profile across all your replication sets; however, I would recommend only applying it to the agent that requires this change
  10. To individually assign the profile, open Replication Monitor and then in the left pane click your replication set
  11. In the right pane, select your desired agent, right click and change the profile to the new one you just created
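
If you prefer to script this change instead of clicking through the dialogs, the agent profiles live in msdb on the distributor. The sketch below is only an illustration: agent_type 3 filters on Distribution Agent profiles, and the profile_id (16) and the 600-second value are placeholders that you must replace with your own profile's id and a timeout that suits your workload:

-- run at the distributor: list distribution agent profiles and their QueryTimeout values
select p.profile_id, p.profile_name, pa.parameter_name, pa.value
from msdb.dbo.MSagent_profiles p
join msdb.dbo.MSagent_parameters pa on pa.profile_id = p.profile_id
where p.agent_type = 3 and pa.parameter_name = 'QueryTimeout'

-- raise QueryTimeout on the custom profile created in the steps above
exec sp_change_agent_parameter
    @profile_id = 16,                  -- placeholder: use your new profile's id
    @parameter_name = 'QueryTimeout',
    @parameter_value = '600'           -- seconds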

Mini Hack on expired subscriptions

When a replication is marked as expired, it will tell you that you need to reinitialize.

To activate it “under the hood”, check the last error in your replication monitor; it will show you the last sequence number that it tried to process. Then run this command (using the corresponding seq_no):

update MSsubscriptions
set status=2
where subscription_seqno=0x0002AADE00005030000100000002

The status column means:

  • 0 = Inactive
  • 1 = Subscribed
  • 2 = Active

You can change it to Active and it will try to process again. Why would you use this? If the subscription expired but your distribution cleanup job hasn't run yet, the agent can try to reprocess everything again. If the issue was related to a network timeout and your network is now back up, you can try this, as it will start from the last sequence number. You can also do this to reproduce the last error reported: it will fail and eventually expire again, but you will have a better idea of why it failed in the first place.
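
Before flipping the status, it is worth confirming what the distributor currently has recorded for the subscription. A minimal sketch, run in the distribution database; the subscriber database name is just a placeholder for whatever identifies your subscription:

select publisher_database_id, article_id, subscriber_db,
       status,              -- 0 = Inactive, 1 = Subscribed, 2 = Active
       subscription_seqno
from MSsubscriptions
where subscriber_db = 'MySubscriberDB'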

Multi threading or “Streams”

A slow replication here means that your commands are experiencing a delay going from the distributor to the subscriber; you can confirm this with performance counters or by quickly inserting a tracer token (http://technet.microsoft.com/en-us/library/ms151846%28v=sql.105%29.aspx).

You can improve performance by adding streams. Normally, the default setting writes the replicated transactions sequentially, one by one; with streams you can add more threads, so if you specify 4 streams you will be processing 4 transactions at a time, meaning a faster turnaround. This can work beautifully, but it can also generate deadlocks and inconsistencies, so I would recommend starting low, adding 1 stream at a time, and stopping when you start seeing problems. Do not go crazy and treat this as a turbo button by adding 30 streams, and like most features, test it in QA first!

To enable this option, follow these steps (a T-SQL sketch of the same change follows the list):

  1. Open Replication Monitor, expand the Publisher and select the Publication in the left pane.
  2. In the right pane, under “All Subscriptions”, you will see a list of all the Subscribers.
  3. Right-click the Subscriber you want to modify and click “View Details”. A new window will appear with the distribution agent session details.
  4. Now click “Action” in the menu bar at the top and select “Distribution Agent Job Properties”; this will open the corresponding job properties.
  5. Go to “Steps” in the left pane, highlight “Run Agent” in the right pane, and click Edit.
  6. A new window will pop up; scroll to the right end of the command section and append the parameter “-SubscriptionStreams 2”.
  7. Save the settings and restart the Distribution Agent job.
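
If you would rather script this, the same switch can be appended to the distribution agent job step with T-SQL. This is only a sketch: the job name below is a placeholder, and the “Run agent.” step is usually step 2, but verify both against your own distribution agent job before updating anything:

-- run on the distributor: append -SubscriptionStreams to the distribution agent job step
declare @job_name sysname = N'MyPublisher-MyDB-MyPublication-MySubscriber-1'  -- placeholder
declare @cmd nvarchar(max)

select @cmd = js.command
from msdb.dbo.sysjobs j
join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
where j.name = @job_name and js.step_name = N'Run agent.'

if @cmd is not null and @cmd not like N'%-SubscriptionStreams%'
    exec msdb.dbo.sp_update_jobstep
        @job_name = @job_name,
        @step_id = 2,                      -- verify the step id on your own job
        @command = @cmd + N' -SubscriptionStreams 2'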

You might encounter some issues when implementing this; you can read this KB for further info:

http://support.microsoft.com/kb/953199

Conclusion

There are many tips on how to fix a replication. Sometimes it is easier to just reinitialize, but that is not always an option when critical systems depend on the subscription being up to date, or when your database is so huge that reinitializing would take days to complete. When possible, try to troubleshoot instead of just restarting the replication from scratch, as it will give you a lot more insight into what is going on.

Categories: DBA Blogs

Starting out with MAF?

Angelo Santagata - Tue, 2014-08-26 07:45
If you're starting out with MAF, Oracle's Mobile Application Framework, then you MUST read this blog entry and make sure everything is set up right. Even if you're a seasoned ADF Mobile developer like me, you wanna check this out; it got me a couple of times!

https://blogs.oracle.com/mobile/entry/10_tips_for_getting_started

%sql: To Pandas and Back

Catherine Devlin - Tue, 2014-08-26 05:03

A Pandas DataFrame has a nice to_sql(table_name, sqlalchemy_engine) method that saves itself to a database.

The only trouble is that coming up with the SQLAlchemy Engine object is a little bit of a pain, and if you're using the IPython %sql magic, your %sql session already has an SQLAlchemy engine anyway. So I created a bogus PERSIST pseudo-SQL command that simply calls to_sql with the open database connection:

%sql PERSIST mydataframe

The result is that your data can make a very convenient round-trip from your database, to Pandas and whatever transformations you want to apply there, and back to your database:



In [1]: %load_ext sql

In [2]: %sql postgresql://@localhost/
Out[2]: u'Connected: @'

In [3]: ohio = %sql select * from cities_of_ohio;
246 rows affected.

In [4]: df = ohio.DataFrame()

In [5]: montgomery = df[df['county']=='Montgomery County']

In [6]: %sql PERSIST montgomery
Out[6]: u'Persisted montgomery'

In [7]: %sql SELECT * FROM montgomery
11 rows affected.
Out[7]:
[(27L, u'Brookville', u'5,884', u'Montgomery County'),
(54L, u'Dayton', u'141,527', u'Montgomery County'),
(66L, u'Englewood', u'13,465', u'Montgomery County'),
(81L, u'Germantown', u'6,215', u'Montgomery County'),
(130L, u'Miamisburg', u'20,181', u'Montgomery County'),
(136L, u'Moraine', u'6,307', u'Montgomery County'),
(157L, u'Oakwood', u'9,202', u'Montgomery County'),
(180L, u'Riverside', u'25,201', u'Montgomery County'),
(210L, u'Trotwood', u'24,431', u'Montgomery County'),
(220L, u'Vandalia', u'15,246', u'Montgomery County'),
(230L, u'West Carrollton', u'13,143', u'Montgomery County')]

Upcoming Big Data and Hadoop for Oracle BI, DW and DI Developers Presentations

Rittman Mead Consulting - Tue, 2014-08-26 03:34

If you’ve been following our postings on the blog over the past year, you’ll probably have seen quite a lot of activity around big data and Hadoop and in particular, what these technologies bring to the world of Oracle Business Intelligence, Oracle Data Warehousing and Oracle Data Integration. For anyone who’s not had a chance to read the posts and articles, the three links below are a great introduction to what we’ve been up to:

In addition, we recently took part in an OTN ArchBeat podcast with Stewart Bryson and Andrew Bond on the updated Oracle Information Management Reference Architecture we co-developed with Oracle’s Enterprise Architecture team, where you can hear me talk with Stewart and Andrew about how the updated architecture came about, the thinking behind it, and how concepts like the data reservoir and data factory can be delivered in an agile way.

I’m also pleased to be delivering a number of presentations and seminars over the next few months, on Oracle and Cloudera’s Hadoop technology and how it applies to Oracle BI, DW and DI developers – if you’re part of a local Oracle user group and you’d like me to deliver one of them for your group, drop me an email at mark.rittman@rittmanmead.com.

Slovenian Oracle User Group / Croatian Oracle User Group Conferences, October 2014

These two events run over consecutive days in Slovenia and Croatia, and I’m delivering the keynote at each on Analytics and Big Data, and a one-day seminar running on the Tuesday in Slovenia, and over the Wednesday and Thursday in Croatia. The theme of the seminar is around applying Hadoop and big data technologies to Oracle BI, DW and data integration, and is made up of four sessions:

Part 1 : Introduction to Hadoop and Big Data Technologies for Oracle BI & DW Developers

“In this session we’ll introduce some key Hadoop concepts including HDFS, MapReduce, Hive and NoSQL/HBase, with the focus on Oracle Big Data Appliance and Cloudera Distribution including Hadoop. We’ll explain how data is stored on a Hadoop system and the high-level ways it is accessed and analysed, and outline Oracle’s products in this area including the Big Data Connectors, Oracle Big Data SQL, and Oracle Business Intelligence (OBI) and Oracle Data Integrator (ODI).”

Part 2 : Hadoop and NoSQL Data Ingestion using Oracle Data Integrator 12c and Hadoop Technologies

“There are many ways to ingest (load) data into a Hadoop cluster, from file copying using the Hadoop Filesystem (FS) shell through to real-time streaming using technologies such as Flume and Hadoop streaming. In this session we’ll take a high-level look at the data ingestion options for Hadoop, and then show how Oracle Data Integrator and Oracle GoldenGate leverage these technologies to load and process data within your Hadoop cluster. We’ll also consider the updated Oracle Information Management Reference Architecture and look at the best places to land and process your enterprise data, using Hadoop’s schema-on-read approach to hold low-value, low-density raw data, and then use the concept of a “data factory” to load and process your data into more traditional Oracle relational storage, where we hold high-density, high-value data.”

Part 3 : Big Data Analysis using Hive, Pig, Spark and Oracle R Enterprise / Oracle R Advanced Analytics for Hadoop

“Data within a Hadoop cluster is typically analysed and processed using technologies such as Pig, Hive and Spark before being made available for wider use using products like Oracle Big Data SQL and Oracle Business Intelligence. In this session, we’ll introduce Pig and Hive as key analysis tools for working with Hadoop data using MapReduce, and then move on to Spark as the next-generation analysis platform typically being used on Hadoop clusters today. We’ll also look at the role of Oracle’s R technologies in this scenario, using Oracle R Enterprise and Oracle R Advanced Analytics for Hadoop to analyse and understand larger datasets than we could normally accommodate with desktop analysis environments.”

Part 4 : Visualizing Hadoop Datasets using Oracle Business Intelligence, Oracle BI Publisher and Oracle Endeca Information Discovery

“Once insights and analysis have been produced within your Hadoop cluster by analysts and technical staff, it’s usually the case that you want to share the output with a wider audience in the organisation. Oracle Business Intelligence has connectivity to Hadoop through Apache Hive compatibility, and other Oracle tools such as Oracle BI Publisher and Oracle Endeca Information Discovery can be used to visualise and publish Hadoop data. In this final session we’ll look at what’s involved in connecting these tools to your Hadoop environment, and also consider where data is optimally located when large amounts of Hadoop data need to be analysed alongside more traditional data warehouse datasets.”

Oracle OpenWorld 2014 (ODTUG Sunday Symposium), September 2014

Along with another session later in the week on the upcoming Oracle BI Cloud Services, I’m doing a session on the User Group Sunday for ODTUG on ODI12c and the Big Data Connectors for ETL on Hadoop:

Deep Dive into Big Data ETL with Oracle Data Integrator 12c and Oracle Big Data Connectors [UGF9481]

“Much of the time required to work with big data sources is spent in the data acquisition, preparation, and transformation stages of a project before your data reaches a state suitable for analysis by your users. Oracle Data Integrator, together with Oracle Big Data Connectors, provides a means to efficiently load and unload data to and from Oracle Database into a Hadoop cluster and perform transformations on the data, either in raw form or in technologies such as Apache Hive or R. This presentation looks at how Oracle Data Integrator can form the centerpiece of your big data ETL strategy, within either a custom-built big data environment or one based on Oracle Big Data Appliance.”

UK Oracle User Group Tech’14 Conference, December 2014

I’m delivering an extended version of my OOW presentation at the UKOUG Tech’14 “Super Sunday” event, this time over 90 minutes rather than the 45 at OOW, giving me a bit more time for demos and discussion:

Deep-Dive into Big Data ETL using ODI12c and Oracle Big Data Connectors

“Much of the time required to work with Big Data sources is spent in the data acquisition, preparation and transformation stages of a project, before your data is in a state suitable for analysis by your users. Oracle Data Integrator, together with Oracle Big Data Connectors, provides a means to efficiently load and unload data from Oracle Database into a Hadoop cluster, and perform transformations on the data either in raw form or in technologies such as Apache Hive or R. In this presentation, we will look at how ODI can form the centrepiece of your Big Data ETL strategy, either within a custom-built Big Data environment or one based on Oracle Big Data Appliance.”

Oracle DW Global Leaders’ Meeting, Dubai, December 2014

The Oracle DW Global Leaders forum is an invite-only group organised by Oracle and attended by select customers and associate partners, one of which is Rittman Mead. I’ll be delivering the technical seminar at the end of the second day, which will run over two sessions and will be based on the main points from the one-day seminars I’m running in Croatia and Slovenia.

From Hadoop to dashboards, via ODI and the BDA – the complete trail : Part 1 and Part 2

“Join Rittman Mead for this afternoon workshop, taking you through data acquisition and transformation in Hadoop using ODI, Cloudera CDH and Oracle Big Data Appliance, through to reporting on that data using OBIEE, Endeca and Oracle Big Data SQL. Hear our project experiences, and tips and techniques based on real-world implementations.”

Keep an eye out for more Hadoop and big data content over the next few weeks, including a look at MongoDB and NoSQL-type databases, and how they can be used alongside Oracle BI, DW and data integration tools.

 

Categories: BI & Warehousing

How to remap tablespaces using Oracle Import/Export Tool

Yann Neuhaus - Tue, 2014-08-26 02:04

Since Oracle 10g, Oracle has provided a great tool to import and export data from databases: Data Pump. This tool offers several helpful options, particularly one that allows you to import data into a different tablespace than the one used in the source database. This parameter is REMAP_TABLESPACE. However, how can you do the same when you cannot use Data Pump to perform Oracle import and export operations?

I was confronted with this issue recently and had to work through different workarounds to accomplish my import successfully. The main case where you will not be able to use Data Pump is when you want to export data from a pre-10g database. And believe me, there are still a lot more of these databases running around the world than we might think! The Oracle Import/Export utility does not provide a built-in way to remap tablespaces like Data Pump. In this blog posting, I will address the different workarounds to import data into a different tablespace with the Oracle Import/Export tool.

I have used an Oracle 11g R2 database for all examples.

My source schema 'MSC' on the Oracle database DB11G uses USERS as its default tablespace. I want to import the data into a different schema 'DBI' that uses the USER_DATA tablespace.
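
For comparison, if Data Pump were usable here, the whole remapping would be a single impdp call along these lines (a minimal sketch, assuming a schema-mode Data Pump export file expdp_MSC.dmp and the default DATA_PUMP_DIR directory object; adjust the names for your own environment):

impdp system/Passw0rd directory=DATA_PUMP_DIR dumpfile=expdp_MSC.dmp logfile=impdp_to_DBI.log \
      remap_schema=MSC:DBI remap_tablespace=USERS:USER_DATA

The rest of this post is about getting as close as possible to that behavior with the classic exp/imp tools.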

The objects contained in the source schema are as follows:

 

SQL> select object_type, object_name from dba_objects where owner='MSC';
OBJECT_TYPE         OBJECT_NAME
------------------- ----------------------------------------------------
SEQUENCE            MSCSEQ
TABLE               USER_DATA
TABLE               USER_DOCUMENTS
INDEX               SYS_IL0000064679C00003$$
LOB                 SYS_LOB0000064679C00003$$
INDEX               PK_DATAID
INDEX               PK_DOCID

 

I will now export the schema MSC using exp:

 

[oracle@srvora01 dpdump]$ exp msc/Passw0rd file=exp_MSC.dmp log=exp_MSC.log consistent=y
Export: Release 11.2.0.3.0 - Production on Tue Aug 12 14:40:44 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user MSC
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user MSC
About to export MSC's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export MSC's tables via Conventional Path ...
. . exporting table                      USER_DATA      99000 rows exported
. . exporting table                 USER_DOCUMENTS         25 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

 

I tried different ways to accomplish the tablespace remapping with imp. They are summarized below.

 

Revoke quota on USERS tablespace for the destination schema

I have read somewhere that a workaround could be to revoke UNLIMITED TABLESPACE (if granted) and any quota on the USERS tablespace from the destination schema, and to grant an unlimited quota on the target tablespace (USER_DATA) only. This way, the imp tool is supposed to import all objects into the schema's default tablespace.
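
For an existing destination schema, the corresponding adjustments would look roughly like this (a sketch, assuming the schema already holds UNLIMITED TABLESPACE and/or a quota on USERS):

-- remove the blanket privilege and any quota on the source tablespace
REVOKE UNLIMITED TABLESPACE FROM DBI;
ALTER USER DBI QUOTA 0 ON USERS;
-- allow space allocation only in the target tablespace
ALTER USER DBI QUOTA UNLIMITED ON USER_DATA;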

Let's try this. I have created the destination schema with the right default tablespace and temporary tablespace and the required privileges:

 

SQL> create user DBI identified by Passw0rd default tablespace USER_DATA temporary tablespace TEMP quota unlimited on USER_DATA;
User created.

 SQL> grant create session, create table to DBI;
Grant succeeded.

 

The UNLIMITED TABLESPACE privilege is not granted to the DBI user. Now, I will try to run the import:

 

[oracle@srvora01 dpdump]$ imp system/Passw0rd file=exp_MSC.dmp log=imp_to_DBI.log fromuser=msc touser=dbi
Import: Release 11.2.0.3.0 - Production on Tue Aug 12 14:53:47 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
Warning: the objects were exported by MSC, not by you
export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
. importing MSC's objects into DBI
. . importing table                    "USER_DATA"      99000 rows imported
IMP-00017: following statement failed with ORACLE error 1950:
 "CREATE TABLE "USER_DOCUMENTS" ("DOC_ID" NUMBER, "DOC_TITLE" VARCHAR2(25), ""
 "DOC_VALUE" BLOB)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INI"
 "TIAL 65536 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_P"
 "OOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS LOB ("DOC_VALUE") STORE "
 "AS BASICFILE  (TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 RETENTIO"
 "N  NOCACHE LOGGING  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 FREELIS"
 "TS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))"
IMP-00003: ORACLE error 1950 encountered
ORA-01950: no privileges on tablespace 'USERS'
Import terminated successfully with warnings.

 

The import has only worked for part of the objects. As we can see, the table USER_DOCUMENTS, which contains a BLOB column, has not been imported, and neither have any of the objects associated with it:

 

SQL> select object_type, object_name from dba_objects where owner='DBI';
OBJECT_TYPE         OBJECT_NAME
------------------- ----------------------------------------------------
TABLE               USER_DATA
SEQUENCE            MSCSEQ
INDEX               PK_DATAID

 

With no quota on the source tablespace, the imp tool imports data into the target schema's default tablespace, but LOBs are not supported.

 

Drop USERS tablespace prior to import data

Another method could be to drop the source tablespace, to be sure that the import tool does not try to import data in the USERS tablespace.

In this example, I will drop the USERS tablespace and try to import the data again with the same command. Note that I dropped the MSC and DBI schemas before dropping the USERS tablespace.

 

SQL> alter database default tablespace SYSTEM;
Database altered.

 

SQL> drop tablespace USERS including contents and datafiles;
Tablespace dropped.

 

I will now recreate the empty DBI schema, as shown in example 1:

 

SQL> create user DBI identified by Passw0rd default tablespace USER_DATA temporary tablespace TEMP quota unlimited on USER_DATA;
User created. 

 

SQL> grant create session, create table to DBI;
Grant succeeded.

 

Now let's try to import the data again from the dump file:

 

oracle@srvora01:/u00/app/oracle/admin/DB11G/dpdump/ [DB11G] imp system/Passw0rd file=exp_MSC.dmp log=imp_to_DBI.log fromuser=msc touser=dbi
Import: Release 11.2.0.3.0 - Production on Tue Aug 12 17:03:50 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
Warning: the objects were exported by MSC, not by you
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
. importing MSC's objects into DBI
. . importing table                    "USER_DATA"      99000 rows imported
IMP-00017: following statement failed with ORACLE error 959:
 "CREATE TABLE "USER_DOCUMENTS" ("DOC_ID" NUMBER, "DOC_TITLE" VARCHAR2(25), ""
 "DOC_VALUE" BLOB)  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INI"
 "TIAL 65536 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_P"
 "OOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS LOB ("DOC_VALUE") STORE "
 "AS BASICFILE  (TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 RETENTIO"
 "N  NOCACHE LOGGING  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 FREELIS"
 "TS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))"
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'USERS' does not exist
Import terminated successfully with warnings.

 

You can see that we still have an error when importing the USER_DOCUMENTS table, but this time the error says "tablespace 'USERS' does not exist". So, whatever we do, the imp tool tries to import LOBs into their original tablespace. The other data, however, is imported into the new default tablespace of the schema.

We can say that imp behaves the same whether we revoke the quota on the source tablespace or drop the tablespace altogether. Clearly, LOBs are not supported by either method. But if your database contains only standard data, these two methods will let you remap the tablespace at import time.

 

Pre-create objects in the new tablespace using the INDEXFILE option

The imp tool provides the INDEXFILE option, which roughly corresponds to the CONTENT=METADATA_ONLY option of impdp. There is one difference: while impdp with CONTENT=METADATA_ONLY directly creates the object structures in the database without any data, imp with the INDEXFILE option just generates a SQL file containing all the CREATE statements (tables, indexes, etc.), and you have to run this file manually with SQL*Plus to create the empty objects.

As you may expect, this SQL file allows you to change the tablespace name used when the objects are created, prior to importing the data. The drawback is that several manual steps are involved in this workaround; the solution is described below.
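
For completeness, the nearest Data Pump counterpart for producing such an editable DDL script is the SQLFILE option of impdp, along these lines (a sketch, assuming a Data Pump dump file and the DATA_PUMP_DIR directory object):

impdp system/Passw0rd directory=DATA_PUMP_DIR dumpfile=expdp_MSC.dmp sqlfile=ddl_MSC.sql

With classic imp, though, INDEXFILE is what we have, so let's walk through the manual steps.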

 

1) Generate the SQL file

 

oracle@srvora01:/u00/app/oracle/admin/DB11G/dpdump/ [DB11G] imp system/Passw0rd file=exp_MSC.dmp log=imp_to_DBI.log fromuser=msc touser=dbi indexfile=imp_to_DBI.sql
Import: Release 11.2.0.3.0 - Production on Tue Aug 12 18:04:13 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
Warning: the objects were exported by MSC, not by you
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
. . skipping table "USER_DATA"
. . skipping table "USER_DOCUMENTS"
Import terminated successfully without warnings.

 

2) Edit the SQL file

 

You have two modifications to make in this file. First, remove the REM keywords from the CREATE statements, which are all generated as comments in the file. Then, change the tablespace name for every object you want to create in a different tablespace.
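
If the file is large, a quick sed pass can at least take care of the tablespace renaming (a rough sketch; un-commenting the CREATE statements is still best done by hand, since some REM lines, such as the row counts, must stay as comments):

# rewrite every tablespace reference in place, keeping a backup of the original file
sed -i.bak 's/TABLESPACE "USERS"/TABLESPACE "USER_DATA"/g' imp_to_DBI.sql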

This is what my SQL file looks like after the modifications:

 

oracle@srvora01:/u00/app/oracle/admin/DB11G/dpdump/ [DB11G] cat imp_to_DBI.sql
CREATE TABLE "DBI"."USER_DATA" ("DATA_ID" NUMBER, "DATA_VALUE"
VARCHAR2(250)) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 16777216 NEXT 1048576 MINEXTENTS 1 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USER_DATA" LOGGING
NOCOMPRESS ;
REM  ... 99000 rows
CONNECT DBI;
CREATE UNIQUE INDEX "DBI"."PK_DATAID" ON "USER_DATA" ("DATA_ID" ) PCTFREE
10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 2097152 NEXT 1048576 MINEXTENTS
1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USER_DATA"
LOGGING ;
ALTER TABLE "DBI"."USER_DATA" ADD CONSTRAINT "PK_DATAID" PRIMARY KEY
("DATA_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 2097152 NEXT 1048576 MINEXTENTS 1 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USER_DATA" LOGGING
ENABLE ;
CREATE TABLE "DBI"."USER_DOCUMENTS" ("DOC_ID" NUMBER, "DOC_TITLE"
VARCHAR2(25), "DOC_VALUE" BLOB) PCTFREE 10 PCTUSED 40 INITRANS 1
MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1
FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USER_DATA"
LOGGING NOCOMPRESS LOB ("DOC_VALUE") STORE AS BASICFILE (TABLESPACE
"USER_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST
GROUPS 1 BUFFER_POOL DEFAULT)) ;
REM  ... 25 rows
CREATE UNIQUE INDEX "DBI"."PK_DOCID" ON "USER_DOCUMENTS" ("DOC_ID" )
PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576
MINEXTENTS 1 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE
"USER_DATA" LOGGING ;
ALTER TABLE "DBI"."USER_DOCUMENTS" ADD CONSTRAINT "PK_DOCID" PRIMARY
KEY ("DOC_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 FREELISTS 1 FREELIST
GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USER_DATA" LOGGING ENABLE ;
ALTER TABLE "DBI"."USER_DOCUMENTS" ADD CONSTRAINT "FK_DOCID" FOREIGN
KEY ("DOC_ID") REFERENCES "USER_DATA" ("DATA_ID") ENABLE NOVALIDATE ;
ALTER TABLE "DBI"."USER_DOCUMENTS" ENABLE CONSTRAINT "FK_DOCID" ;

 

3) Execute the SQL file with SQL Plus

 

Simply use SQL Plus to execute the SQL file (the user DBI must exist prior to running the script):

 

oracle@srvora01:/u00/app/oracle/admin/DB11G/dpdump/ [DB11G] sqlplus / as sysdba @imp_to_DBI.sql
SQL*Plus: Release 11.2.0.3.0 Production on Tue Aug 12 18:13:14 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

Table created.

 

Enter password:
Connected.

 

Index created.

 

Table altered.

 

Table created.

 

Index created.

 

Table altered.

 

Table altered.

 

Table altered.

 

SQL> select object_type, object_name from user_objects;
OBJECT_TYPE         OBJECT_NAME
------------------- -----------------------------------
INDEX               PK_DATAID
TABLE               USER_DOCUMENTS
TABLE               USER_DATA
INDEX               SYS_IL0000064697C00003$$
LOB                 SYS_LOB0000064697C00003$$
INDEX               PK_DOCID

 

4) Disable all constraints

 

When importing data with the imp or impdp tools, constraints are created and enabled at the end of the import. This allows Oracle to import the data without taking any referential integrity constraints into account during the load. Since we have already created the empty objects with their constraints enabled, we must disable the constraints manually before importing the data.

 

SQL> select 'ALTER TABLE ' || OWNER || '.' || TABLE_NAME || ' DISABLE CONSTRAINT ' || CONSTRAINT_NAME || ';' "SQL_CMD" from DBA_CONSTRAINTS WHERE OWNER='DBI';
SQL_CMD
-----------------------------------------------------------
ALTER TABLE DBI.USER_DOCUMENTS DISABLE CONSTRAINT FK_DOCID;
ALTER TABLE DBI.USER_DATA DISABLE CONSTRAINT PK_DATAID;
ALTER TABLE DBI.USER_DOCUMENTS DISABLE CONSTRAINT PK_DOCID;

 

SQL> ALTER TABLE DBI.USER_DOCUMENTS DISABLE CONSTRAINT FK_DOCID;
Table altered.

 

SQL> ALTER TABLE DBI.USER_DATA DISABLE CONSTRAINT PK_DATAID;
Table altered.

 

SQL> ALTER TABLE DBI.USER_DOCUMENTS DISABLE CONSTRAINT PK_DOCID;
Table altered.
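
With only three constraints this is easy enough by hand; for a larger schema, a small PL/SQL block can disable everything in one pass (a sketch for the DBI schema, disabling foreign keys before the keys they reference to avoid ORA-02297):

BEGIN
  FOR c IN (SELECT owner, table_name, constraint_name
              FROM dba_constraints
             WHERE owner = 'DBI'
             ORDER BY DECODE(constraint_type, 'R', 1, 2)) LOOP
    -- build and run the ALTER TABLE ... DISABLE CONSTRAINT statement dynamically
    EXECUTE IMMEDIATE 'ALTER TABLE "' || c.owner || '"."' || c.table_name ||
                      '" DISABLE CONSTRAINT "' || c.constraint_name || '"';
  END LOOP;
END;
/

The same loop, with ENABLE instead of DISABLE and the ordering reversed (keys before foreign keys), can be reused for step 6 below.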

 

5) Import the data with the IGNORE=Y option

 

Now we must import the data from the dump file using the IGNORE=Y option to ignore warnings about already existing objects. This allows the imp tool to load data into the empty tables and indexes. Additionally, I have set the CONSTRAINTS=N option, because imp tried to enable the constraints after the import, which was generating an error...

 

oracle@srvora01:/u00/app/oracle/admin/DB11G/dpdump/ [DB11G] imp system/Passw0rd file=exp_MSC.dmp log=imp_to_DBI.log fromuser=msc touser=dbi ignore=y constraints=n
Import: Release 11.2.0.3.0 - Production on Tue Aug 12 19:43:59 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
Warning: the objects were exported by MSC, not by you
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
. importing MSC's objects into DBI
. . importing table                    "USER_DATA"      99000 rows imported
. . importing table               "USER_DOCUMENTS"         25 rows imported
About to enable constraints...
Import terminated successfully without warnings.

 

All objects have been imported successfully:

 

SQL> select object_type, object_name from dba_objects where owner='DBI';
OBJECT_TYPE         OBJECT_NAME
------------------- ----------------------------------------------------
INDEX               PK_DATAID
TABLE               USER_DATA
TABLE               USER_DOCUMENTS
INDEX               SYS_IL0000064704C00003$$
LOB                 SYS_LOB0000064704C00003$$
INDEX               PK_DOCID
SEQUENCE            MSCSEQ

 

6) Enable constraints after the import

 

Constraints previously disabled must be enabled again to finish the import:

 

SQL> select constraint_name, status from dba_constraints where owner='DBI';
CONSTRAINT_NAME                STATUS
------------------------------ --------
FK_DOCID                       DISABLED
PK_DOCID                       DISABLED
PK_DATAID                      DISABLED

 

SQL> select 'ALTER TABLE ' || OWNER || '.' || TABLE_NAME || ' ENABLE CONSTRAINT ' || CONSTRAINT_NAME || ';' "SQL_CMD" from DBA_CONSTRAINTS WHERE OWNER='DBI';
SQL_CMD
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ALTER TABLE DBI.USER_DOCUMENTS ENABLE CONSTRAINT FK_DOCID;
ALTER TABLE DBI.USER_DOCUMENTS ENABLE CONSTRAINT PK_DOCID;
ALTER TABLE DBI.USER_DATA ENABLE CONSTRAINT PK_DATAID;

 

Enable PRIMARY KEYS:

 

SQL> ALTER TABLE DBI.USER_DOCUMENTS ENABLE CONSTRAINT PK_DOCID;
Table altered.

 

SQL> ALTER TABLE DBI.USER_DATA ENABLE CONSTRAINT PK_DATAID;
Table altered.

 

And then FOREIGN KEY:

 

SQL> ALTER TABLE DBI.USER_DOCUMENTS ENABLE CONSTRAINT FK_DOCID;
Table altered.

 

Constraints are now enabled:

 

SQL> select constraint_name, status from dba_constraints where owner='DBI';
CONSTRAINT_NAME                STATUS
------------------------------ --------
FK_DOCID                       ENABLED
PK_DATAID                      ENABLED
PK_DOCID                       ENABLED

 

We do not have to worry about tablespace quotas here. As you can see, even though I recreated the USERS tablespace before importing the data with the INDEXFILE option, it contains no segments after the import:

 

SQL> select segment_name from dba_segments where tablespace_name='USERS';
no rows selected

 

Conclusion

Workarounds 1 and 2, which are very similar, are simple and fast ways to remap tablespaces when importing into a database without any LOB data. In the presence of LOB data, however, workaround 3 is the right method to successfully move every object into the new database. The major drawback of this workaround is that you have to edit a SQL file manually, which can become very tedious if you have several hundred or thousand objects to migrate...

It is also possible to import the data into the original tablespace first, and then use MOVE statements to relocate all objects into the new tablespace, as sketched below. However, you may not be able to move ALL objects this way. Workaround 3 seems to be the best and "cleanest" way to do it.
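
For reference, the move-after-import approach looks roughly like this for a single table (a sketch; the LOB segment needs its own TABLESPACE clause, and the indexes have to be rebuilt into the new tablespace afterwards):

ALTER TABLE DBI.USER_DOCUMENTS MOVE TABLESPACE USER_DATA
  LOB (DOC_VALUE) STORE AS (TABLESPACE USER_DATA);
ALTER INDEX DBI.PK_DOCID REBUILD TABLESPACE USER_DATA;
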

12.1.0.2 Introduction to Attribute Clustering (The Division Bell)

Richard Foote - Tue, 2014-08-26 00:03
One of the really cool new features introduced in 12.1.0.2 is Attribute Clustering. This new table based attribute allows you to very easily cluster data in close physical proximity based on the content of specific columns. As I’ve discussed many times, indexes love table data that is physically clustered in a similar manner to the index […]
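
To give a flavour of the syntax, the clustering directive is declared as part of the table definition (a minimal illustrative sketch, not taken from the post; the clustering takes effect on direct-path loads and table moves rather than conventional inserts):

-- keep rows with the same customer_id physically close together on disk
CREATE TABLE sales_ac (
  sale_id     NUMBER,
  customer_id NUMBER,
  amount      NUMBER
)
CLUSTERING BY LINEAR ORDER (customer_id);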
Categories: DBA Blogs

Oracle EPM Cloud is Ramping Up; Hear the Latest at Oracle OpenWorld 2014

Linda Fishman Hoyle - Mon, 2014-08-25 18:20

A Guest Post by Jennifer Toomey, Senior Director, Oracle Business Analytics (pictured left)

Oracle Planning and Budgeting Cloud has gained more than 100 customers and 5,000 users since its release six months ago. This is just the beginning of more Enterprise Performance Management Cloud offerings to come! See the latest product features and hear about the near-term roadmap at Oracle OpenWorld 2014 in San Francisco, September 28 - October 2, 2014.

Kick-Off General Session—Moscone West, Room 2008
Oracle’s Balaji Yelamanchili, Senior VP of Product Development, will announce the latest innovations and what’s coming for Oracle EPM on Monday at 1:30 p.m. He will include a demo of our next EPM Cloud offering, Oracle Financial Performance Reporting Cloud.

Conference Sessions and Customer Panel—Moscone West, Room 3008
We have scheduled a number of sessions to present the EPM Cloud roadmap and upcoming offerings. For Oracle Planning and Budgeting Cloud, we will showcase the new FUSE interface and also feature a customer panel with Manhattan Beachwear and Communications Test Design, Inc. (CTDI).

  • Oracle Enterprise Performance Management Cloud—Tuesday at 10:45 a.m. [CON8545]
  • Introduction and Update: Oracle Planning and Budgeting Cloud—Tuesday 5:00 p.m. [CON8556]
  • Introduction to Oracle Financial Performance Reporting Cloud—Wednesday at 10:15 a.m. [CON8359]
  • Customer Success: Oracle Planning and Budgeting Cloud—Wednesday at 11:30 a.m. [CON8552]
  • Developing a Proper EPM Deployment Strategy for Cloud and on-Premises Solutions, featuring partner, Linium—Thursday at 1:15 p.m. [CON4348]

Dedicated Demo Pod #3895—Moscone West
In the Demo Grounds, we plan to highlight the upcoming mobile capabilities and new FUSE interface of Oracle Planning and Budgeting Cloud, as shown here:


Oracle OpenWorld is always an exciting experience so register today. We are looking forward to sharing our latest success and plans for EPM Cloud with you!

Jennifer Toomey
Senior Director, Product Marketing
Oracle Business Analytics


Oracle HCM Cloud Primed for OpenWorld 2014

Linda Fishman Hoyle - Mon, 2014-08-25 17:04

A Guest Post by Mike Vilimek, Director of Product Marketing, Oracle HCM Cloud

At Oracle OpenWorld 2014 all things HCM will again be a dedicated “event within an event” called HCM Central @ OpenWorld. This year’s experience promises to be even better than last year’s and we want to let you know why you should be excited and eager to attend.

Customer Success Stories
Nothing is more convincing than to hear success stories from businesses just like yours. That’s why at this year’s event we will have numerous customers sharing their stories of successfully moving to Oracle HCM Cloud. Here is just a sampling of what is planned.

  • BioMarin Pharmaceutical—Talent Acquisition Cloud for Midsize Companies with Oracle E-Business Suite [CON4527]
  • BlackRock—HR Transformation at BlackRock [CON5836]
  • BMO Financial Group – Implementing Oracle Learn Cloud: The Value of Partnering [CON8691]
  • Chick-fil-A – Oracle HCM Cloud: Integration with Enterprise Identity Management [CON5987]
  • Cox – Integrated Talent Management: Cox’s Journey to the Cloud [CON7473]
  • National Instruments – Oracle Fusion HCM: From RFP to Reality [CON2357]
  • Sierra Club – Jumping the Gap: Moving Recruiting into the Cloud from the Grass Roots [CON7515]

Integration and Unification
A main theme throughout the event will be integration and unification. Session speakers and Oracle product experts will share roadmap details, and will talk about how the Oracle Fusion and Oracle Taleo platforms are being unified across all dimensions including user experience, process, data, tools, and technology within the Oracle HCM Cloud.

Oracle’s HCM Cloud is Personalized, Connected, and Secure
We know that the notion of moving HR systems to the cloud raises questions for customers. Will I be able to set up the applications in a way that meets our unique business needs? Can we still integrate with our other on-premise systems? How secure will my data be in the cloud?

Another major theme throughout this event will be that, unlike other clouds, Oracle offers a modern cloud that is personalized, connected and secure.

Personalized: Oracle’s Cloud provides all the benefits of SaaS (speed, simplicity and lower costs) – as well as the ability to tailor applications to your unique business.

Connected: Oracle’s Cloud provides seamless integration of data and processes across cloud, on-premise, and third party applications.

Secure: Oracle’s Cloud provides top-to-bottom security from the industry leader, including complete data isolation and security at multiple tech layers.

Employee Experience Journey Mapping (EXJM)
We will be holding an EXJM session at OpenWorld. EXJM, based on the very successful customer experience journey mapping (CXJM) methodology, is designed to deliver a better employee experience to improve engagement, productivity, performance, and competitiveness. The highly interactive workshops have been getting rave reviews because they provide valuable insight into improving employee experiences without pitching a product—a refreshing change for many.

Great Venue and Entertainment
HCM Central @ OpenWorld will be held at the beautiful Palace Hotel, and as always, the entertainment will be top notch. The Oracle Appreciation Event will feature Aerosmith and the hip-hop duo Macklemore & Ryan Lewis.

So, what are you waiting for? Register today and we'll see you in San Francisco, September 28 - October 2, 2014!

Mike Vilimek
Director, Product Marketing
Oracle HCM Cloud

Kuali For-Profit: Change is an indicator of bigger issues

Michael Feldstein - Mon, 2014-08-25 14:28

On Friday the Kuali Foundation announced the creation of a new for-profit entity to be led by the former CTO of Instructure, Joel Dehlin. Jeff Young at the Chronicle described the change:

Ten years ago, a group of universities started a collaborative software project touted as an alternative to commercial software companies, which were criticized as too costly. On Friday the project’s leaders made a surprising announcement: that it would essentially become a commercial entity. [snip]

The Kuali Foundation will continue to exist as a non-profit, but it will be an investor in a new commercial entity to back the Kuali software development. Leaders insisted that they would maintain the values of the project despite creating the kind of organization that they once criticized. For one thing, the source software will remain free and open, but the company will sell services, like software hosting. On Friday the group issued an FAQ with details about the change.

As Carl Straumsheim put it at Inside Higher Ed:

The Kuali Foundation, after a decade of fighting commercial software vendors as a community source initiative, will launch a commercial company to better fight… commercial software vendors.

Despite the positioning that this change is about innovating into the next decade, there is much more to this change than might be apparent on the surface. The creation of a for-profit entity to “lead the development and ongoing support” and to enable “an additional path for investment to accelerate existing and create new Kuali products” fundamentally moves Kuali away from the community source model. Member institutions will no longer have voting rights for Kuali projects but will instead be able to “sit on customer councils and will give feedback about design and priority”. Given such a transformative change to the underlying model, there are some big questions to address.

Financial Needs

Kuali, being a non-profit foundation, has its financial records available online, and the tax reporting form 990s are easily obtained through sites such as GuideStar. Furthermore, instructional media + magic (im+m) has a public eLibrary where they have shared Kuali documentation over the years.[1] There does not appear to be a smoking gun in the financials that directly explains the need for such a significant change, but there are hints of issues that provide some insight. In a recent analysis of Kuali’s financials from these public sources, im+m noted how Kuali has reserves to survive between 8 – 34 months with no additional income, depending on the percentage of uncollectible accounts receivables.

[Chart: Months to survive]

In an article in the Chronicle this past spring, Kuali leaders described their apparent financial strength.

The foundation is in the best financial shape it has ever been, its officials say. Membership dues for small colleges start at a few thousand dollars; some big institutions contribute up to seven figures for specific software projects.

“We are about a $30-million net-asset organization,” says Ms. Foutty, the executive director. “There is not a concern that we are going to lack cash flow or net assets to do what we want to do.”

But what comprises these net assets? It turns out that the vast majority consists of accounts receivable, and more specifically, committed in-kind contributions of project resources from member institutions on the various projects. By looking at the financial report from last year (ended June 30, 2013 – see p. 3), we can see that Kuali had net assets of $26.4 million, of which $21.3 million were “contributions receivable”. I would assume that current assets have approximately the same ratios.

[Chart: Kuali assets]

What this means is that a foundation such as Kuali is more dependent on member institutions keeping the faith and honoring contribution commitments than on pure dues and hard cash. Kuali cannot afford for too many institutions to pull out of the consortium and write off their commitments, and that depends on whether Kuali delivers the products those institutions need.

Timing

According to the Kuali web site, the addition of a for-profit entity was based on two community strategy meetings that were held June 25-26 and July 30-31 of this year. Brad Wheeler, chair of the Kuali Foundation and CIO at Indiana University, wrote his summary of the meetings on Aug 1, 2014, including these two prophetic notes:

  • We need to accelerate completion of our full suite of Kuali software applications, and to do so we need access to substantially more capital than we have secured to date to meet this need of colleges and universities.
  • Kuali should consider any applicable insights from a new breed of “professional open source” firms (ex. RedHat, MySQL, Instructure) that are succeeding in blending commercial, open source, and foundation models. This should include consideration of possibly creating a commercial arm of the Kuali community.

There were also direct notes about the need for cloud services and better project coordination and decision-making.

The changes announced on Friday come less than two months after the first community strategy meeting, so I have trouble seeing the meetings as the cause and the Friday changes as the effect. There is reason to believe that the changes have been in the works prior to June of this year.

Change as an Indicator

When Kuali makes this radical of a change (moving away from community source model) within this short of a timeframe (less than two months), I think the best way to view the change is as an indicator that there are bigger issues under the surface. I wrote in a post on Unizin about a key question about the community source model:

Community source has proven its ability to develop viable solutions for known product categories and generally based on existing solutions – consider Sakai as an LMS (heavily based on U Michigan’s CHEF implementation and to a lesser degree on Indiana University’s OnCourse), Kuali Financial System (based directly on IU’s financial system), and Kuali Coeus (based on MIT’s research administration system). When you get rid of a pre-existing solution, the results are less promising. Kuali Student, based on a known product category but designed from the ground up, is currently on track to take almost 8 years from concept to full functionality. Looking further, are there any examples where a new product in an ill-defined product category has successfully been developed in a community source model?

Kent Brooks, CIO of Casper College, wrote a post this morning and called out a critical aspect of why this challenge is so important.

My overall observation is that the 10 year old Kuali project seems to have hit a bit of a lull in new adoptions. Partly is because institutions such as mine provide the next ‘wave of growth’ potential and most are unwilling to listen to the Kuali talk when there is not a Kuali Walk…aka a complete suite of tools with which one can operate the entire institution. It is a deal breaker for the 4000ish small to mid sized institutions in the US alone.

In other words, the vision of Kuali requires the availability of Kuali Student in particular, but also for HR / Payroll. Both of these project are based on future promises. I strongly suspect that the lack of completion of a complete suite of tools that Kent mentions is the real driving issue here for the changes.

Kuali must have new investment in order to complete its suite of applications, and the for-profit entity is the vehicle that the Foundation needs to raise the capital. One model that certainly informs this approach is ANGEL Learning, a for-profit entity which was founded and partially owned by the non-profit Indiana University (IU). ANGEL was able to raise additional investment beyond IU, and when ANGEL was sold for $100 million in 2009, IU made approximately $23 million in proceeds from the sale.

Required Change

Although there is a lot still to learn, my view is that the creation of a for-profit entity is not just a choice for acceleration into the next decade but is a change that the Kuali Foundation feels is required. Kuali can no longer bet that the community source model as currently implemented can successfully complete new products not based on pre-existing university applications, and they cannot rely on the current model to attract sufficient investment to finish the job.

Brad Wheeler was quoted at Inside Higher Education summarizing the changes.

“What we’re really doing is gathering the good things a .com can do: stronger means of making decisions, looking broadly at the needs of higher education and maybe sharpening product offerings a bit more,” Wheeler said. “This is going to be a very values-based organization with patient capital, not venture capital.”

The foundation will fund the launch, Wheeler said. For future funding, the company won’t pursue venture capital or private equity, but money from “values-based investors” such as university foundations. That means Kuali won’t need to be run like a traditional ed-tech startup, he said, as the company won’t be “beholden to Wall Street.”

In a post from this afternoon, Chris Coppola from rSmart (a co-founder of Kuali) provided his summary:

The Kuali mission is unwavering, to drive down the cost of administration for colleges and universities to keep more money focused on the core teaching and research mission. Our (the Kuali community) mission hasn’t changed, but the ability to execute on it has improved dramatically. The former structure made it too difficult for colleges and universities to engage and benefit from Kuali’s work. This new model will simplify how institutions can engage. The former structure breeds a lot of duplicative (and even competitive) work. The new structure will be more efficient.

More to Come

There is a lot of news to unpack here, and Michael and I will report and provide analysis as we learn more. For now, there are some big questions to consider:

  1. If you read the rest of Kent Brooks’ blog, you’ll see that he is now delaying the decision for his school to join the Kuali community. How many other schools will rethink their membership in Kuali based on the new model? The Kuali FAQ acknowledges that they will lose members but also predicts they will gain new membership. Will this prediction prove to be accurate?
  2. More importantly, are there already current member institutions providing significant resources that are threatening to pull out of Kuali?
  3. Given the central need for new, significant investment, will Kuali and the new for-profit entity succeed in bringing in this investment?
  4. Will the new entity directly address the project challenges and complete the full suite of applications that is needed by the Kuali community?
  5. What effect will Kuali’s changes have on other community source initiatives such as Sakai / Apereo and Unizin (if it does get into software development)?

Update 8/26: Clarified language on voting rights from ‘customers’ to ‘member institutions’; added a qualifier to the last question re. Unizin (it would only be community source if it gets into software development).

  1. Disclosure: Jim Farmer from im+m has been a guest blogger at e-Literate for many years.

The post Kuali For-Profit: Change is an indicator of bigger issues appeared first on e-Literate.

Getting Python to play with Oracle using cxOracle on Mint and Ubuntu

The Anti-Kyte - Mon, 2014-08-25 12:57

“We need to go through Tow-ces-ter”, suggested Deb.
“It’s pronounced Toast-er”, I corrected gently.
“Well, that’s just silly”, came the indignant response, “I mean, why can’t they just spell it as it sounds ?”
At this point I resisted the temptation of pointing out that, in her Welsh homeland, placenames are, if anything, even more difficult to pronounce if you’ve only ever seen them written down.
Llanelli is a linguistic trap for the unwary let alone the intriguingly named Betws-Y-Coed.
Instead, I reflected on the fact that, even when you have directions, things can sometimes be a little less than straightforward.

Which brings me to the wonderful world of Python. Having spent some time playing around with this language, I wanted to see how easy it is to plug it into Oracle.
To do this, I needed the cxOracle Python library.
Unfortunately, installation of this library proved somewhat less than straightforward – on Linux Mint at least.
What follows are the gory details of how I got it working, in the hope that it will help anyone else struggling with this particular conundrum.

My Environment

The environment I’m using to execute the steps that follow is Mint 13 (with the Cinnamon desktop).
The database I’m connecting to is Oracle 11gXE.

In Mint, as with most other Linux Distros, Python is part of the base installation.
In this particular distro version, the default version of Python is 2.7.

If you want to check to see which version is currently the default on your system :

which python
/usr/bin/python

This will tell you what file gets executed when you invoke python from the command line.
You should then be able to do something like this :

ls -l /usr/bin/python
lrwxrwxrwx 1 root root 9 Apr 10  2013 python -> python2.7

One other point to note is that, if you haven’t got it already, you’ll probably want to install the Oracle Client.
The steps you follow to do this will depend on whether you're running a 32-bit or 64-bit OS.

To check this, open a Terminal Window and type :

uname -i

If this comes back with x86_64 then you are running 64-bit. If it’s i686 then you are on a 32-bit os.
In either case, you can find the instructions for installation of the Oracle Client on Debian based systems here.

According to cxOracle’s official SourceForge site, the next bit should be simple.
Just by entering the magic words…

pip install cx_Oracle

…you can wire up your Python scripts to the Oracle Database of your choice.
Unfortunately, there are a few steps required on Mint before we can get to that point.

Installing pip

This is simple enough. Open a Terminal and :

sudo apt-get install python-pip

However, if we then run the pip command…

pip install cx_Oracle

cx_Oracle.c:6:20: fatal error: Python.h: No such file or directory

It seems that, in order to run this, there is one further package you need…

sudo apt-get install python-dev

Another point to note is that you need to execute the pip command as sudo.
Even then, we’re not quite there….

sudo pip install cx_Oracle

Downloading/unpacking cx-Oracle
  Running setup.py egg_info for package cx-Oracle
    Traceback (most recent call last):
      File "<string>", line 14, in <module>
      File "/home/mike/build/cx-Oracle/setup.py", line 135, in <module>
        raise DistutilsSetupError("cannot locate an Oracle software " \
    distutils.errors.DistutilsSetupError: cannot locate an Oracle software installation
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):

  File "<string>", line 14, in <module>

  File "/home/mike/build/cx-Oracle/setup.py", line 135, in <module>

    raise DistutilsSetupError("cannot locate an Oracle software " \

distutils.errors.DistutilsSetupError: cannot locate an Oracle software installation

----------------------------------------
Command python setup.py egg_info failed with error code 1
Storing complete log in /home/mike/.pip/pip.log

So, whilst we now have all of the required software, it seems that sudo does not recognize the $ORACLE_HOME environment variable.

You can confirm this as follows. First of all, check that this environment variable is set in your session :

echo $ORACLE_HOME
/usr/lib/oracle/11.2/client64

That looks OK. However….

sudo env |grep ORACLE_HOME

…returns nothing.

Persuading sudo to see $ORACLE_HOME

At this point, the solution presented here comes to the rescue.

In the terminal run…

sudo visudo

Then add the line :

Defaults env_keep += "ORACLE_HOME"

Hit CTRL+X then confirm the change by selecting Y(es).

If you now re-run the visudo command, the text you get should look something like this :

#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:$
Defaults        env_keep += "ORACLE_HOME"
# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
                               [ Read 30 lines ]
^G Get Help  ^O WriteOut  ^R Read File ^Y Prev Page ^K Cut Text  ^C Cur Pos
^X Exit      ^J Justify   ^W Where Is  ^V Next Page ^U UnCut Text^T To Spell

You can confirm that your change has had the desired effect…

sudo env |grep ORACLE_HOME
ORACLE_HOME=/usr/lib/oracle/11.2/client64

Finally installing the library

At last, we can now install the cxOracle library :

sudo pip install cx_Oracle
Downloading/unpacking cx-Oracle
  Running setup.py egg_info for package cx-Oracle
    
Installing collected packages: cx-Oracle
  Running setup.py install for cx-Oracle
    
Successfully installed cx-Oracle
Cleaning up...

To make sure that the module is now installed, you can now run :

python
Python 2.7.3 (default, Feb 27 2014, 19:37:34) 
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> help('modules')

Please wait a moment while I gather a list of all available modules...

If all is well, you should be presented with the following list :

ScrolledText        copy_reg            ntpath              tty
SgiImagePlugin      crypt               nturl2path          turtle
SimpleDialog        csv                 numbers             twisted
SimpleHTTPServer    ctypes              oauth               types
SimpleXMLRPCServer  cups                opcode              ubuntu_sso
SocketServer        cupsext             operator            ufw
SpiderImagePlugin   cupshelpers         optparse            unicodedata
StringIO            curl                os                  unittest
SunImagePlugin      curses              os2emxpath          uno
TYPES               cx_Oracle           ossaudiodev         unohelper
TarIO               datetime            packagekit    

Finally, you can confirm that the library is installed by running a simple test.
What test is that ?, I hear you ask….

Testing the Installation

A successful connection to Oracle from Python results in the instantiation of a connection object. This object has a property called version, which is the version number of Oracle that the database is running on. So, from the command line, you can invoke Python…

python
Python 2.7.3 (default, Feb 27 2014, 19:58:35) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.

… and then run

>>> import cx_Oracle
>>> con = cx_Oracle.connect('someuser/somepwd@the-db-host-machine/instance_name')
>>> print con.version
11.2.0.2.0
>>> 

You’ll need to replace someuser/somepwd with the username and password of an account on the target database.
The db-host-machine is the name of the server that the database is sitting on.
The instance name is the name of the database instance you’re trying to connect to.

Incidentally, things are a bit easier if you have an Oracle client on your machine with the TNS_ADMIN environment variable set. To check this :

env |grep -i oracle
LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
TNS_ADMIN=/usr/lib/oracle/11.2/client64/network/admin
ORACLE_HOME=/usr/lib/oracle/11.2/client64

Assuming that your tnsnames.ora includes an entry for the target database, you can simply use a TNS connect string :

>>> import cx_Oracle
>>> con = cx_Oracle.connect('username/password@database')
>>> print con.version
11.2.0.2.0
>>> 
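
Once connected, querying follows the usual Python DB API pattern of opening a cursor, executing a statement and iterating over the results. Here's a quick sketch against a hypothetical EMPLOYEES table (swap in a table and bind value that actually exist in your schema):

import cx_Oracle

# connect using a tnsnames.ora alias, as above
con = cx_Oracle.connect('username/password@database')
cur = con.cursor()

# bind variables keep the statement shareable and avoid SQL injection
cur.execute('select first_name, last_name from employees where department_id = :dept',
            dept=10)

for first_name, last_name in cur:
    print first_name, last_name   # Python 2.7 print statement, matching the version above

cur.close()
con.close()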
Useful Links

Now you’ve got cxOracle up and running, you may want to check out some rather useful tips on how best to use it :


Filed under: Linux, Oracle Tagged: cxOracle, pip install cxOracle, python, python-dev, uname, visudo, which

Contribution by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2014-08-25 12:53
Contribution by Angela Golla, Infogram Deputy Editor

Oracle E-Business Suite Upgrades and Platform Migration


Customers increasingly face the prospect of upgrading older versions of the Oracle E-Business Suite while also considering hardware and operating system upgrades and possible migrations across platforms. Advances in technology, together with the formalization of lifecycle and support timelines for their hardware and software investments, are prompting increased interest in guidance on how to approach these multiple upgrade scenarios.

Note 1377213.1 outlines guidelines on the mechanisms available for upgrading the Oracle E-Business Suite while considering platform migration. The document is meant as an overview to supplement the existing detailed documentation that describes the specific migration processes.