Feed aggregator

PRECOMPUTE_SUBQUERY hint

XTended Oracle SQL - 9 hours 36 min ago

I’ve just found out that we can specify a query block for PRECOMPUTE_SUBQUERY: /*+ precompute_subquery(@sel$2) */
So we can use it now with SQL profiles, SPM baselines and patches.

SQL> select/*+ precompute_subquery(@sel$2) */ * from dual where dummy in (select chr(level) from dual connect by level<=100);

D
-
X

SQL> @last

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  c437vsqj7c4jy, child number 0
-------------------------------------
select/*+ precompute_subquery(@sel$2) */ * from dual where dummy in
(select chr(level) from dual connect by level<=100)

Plan hash value: 272002086

---------------------------------------------------------------------------
| Id  | Operation         | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| DUAL |      1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / DUAL@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("DUMMY"='' OR "DUMMY"='' OR "DUMMY"='♥' OR "DUMMY"='♦'
              OR "DUMMY"='♣' OR "DUMMY"='♠' OR "DUMMY"='' OR "DUMMY"='' OR
              "DUMMY"=' ' OR "DUMMY"=' ' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=' '
              OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='►' OR "DUMMY"='◄' OR
              "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=''
              OR "DUMMY"='' OR "DUMMY"='↑' OR "DUMMY"='↓' OR "DUMMY"='' OR
              "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=''
              OR "DUMMY"=' ' OR "DUMMY"='!' OR "DUMMY"='"' OR "DUMMY"='#' OR
              "DUMMY"='$' OR "DUMMY"='%' OR "DUMMY"='&' OR "DUMMY"='''' OR
              "DUMMY"='(' OR "DUMMY"=')' OR "DUMMY"='*' OR "DUMMY"='+' OR "DUMMY"=','
              OR "DUMMY"='-' OR "DUMMY"='.' OR "DUMMY"='/' OR "DUMMY"='0' OR
              "DUMMY"='1' OR "DUMMY"='2' OR "DUMMY"='3' OR "DUMMY"='4' OR "DUMMY"='5'
              OR "DUMMY"='6' OR "DUMMY"='7' OR "DUMMY"='8' OR "DUMMY"='9' OR
              "DUMMY"=':' OR "DUMMY"=';' OR "DUMMY"='<' OR "DUMMY"='=' OR "DUMMY"='>'
              OR "DUMMY"='?' OR "DUMMY"='@' OR "DUMMY"='A' OR "DUMMY"='B' OR
              "DUMMY"='C' OR "DUMMY"='D' OR "DUMMY"='E' OR "DUMMY"='F' OR "DUMMY"='G'
              OR "DUMMY"='H' OR "DUMMY"='I' OR "DUMMY"='J' OR "DUMMY"='K' OR
              "DUMMY"='L' OR "DUMMY"='M' OR "DUMMY"='N' OR "DUMMY"='O' OR "DUMMY"='P'
              OR "DUMMY"='Q' OR "DUMMY"='R' OR "DUMMY"='S' OR "DUMMY"='T' OR
              "DUMMY"='U' OR "DUMMY"='V' OR "DUMMY"='W' OR "DUMMY"='X' OR "DUMMY"='Y'
              OR "DUMMY"='Z' OR "DUMMY"='[' OR "DUMMY"='\' OR "DUMMY"=']' OR
              "DUMMY"='^' OR "DUMMY"='_' OR "DUMMY"='`' OR "DUMMY"='a' OR "DUMMY"='b'
              OR "DUMMY"='c' OR "DUMMY"='d'))
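As the filter above shows, the optimizer evaluated the subquery at optimization time and inlined its results as a chain of OR'ed literals. A minimal Python sketch of that rewrite (purely illustrative — it mimics the transformation, not Oracle's internal mechanism):

```python
# Illustrative sketch of PRECOMPUTE_SUBQUERY: the subquery
#   select chr(level) from dual connect by level <= 100
# is evaluated up front and its results become literals.
values = [chr(level) for level in range(1, 101)]

# The IN predicate is then rewritten into a chain of equality filters,
# which is what the Predicate Information section displays.
predicate = " OR ".join('"DUMMY"=\'{}\''.format(v) for v in values)

print(len(values))                    # 100 precomputed literals
print('"DUMMY"=\'X\'' in predicate)   # the DUMMY='X' row matches chr(88)
```

Note that the filter ends at "DUMMY"='d' (chr(100)), matching the connect-by limit in the subquery.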

PS. I’m not sure, but as far as I remember, when I tested it on 10.2 it didn’t work with a query block specified. And I have never seen such usage.

Categories: Development

My Speaking Schedule for Oracle Open World 2014

Galo Balda's Blog - Wed, 2014-08-27 12:22

A quick post to let you know about the two presentations that I’ll be doing at Oracle Open World 2014.

Session ID:        UGF4482
Session Title:     “Getting Started with SQL Pattern Matching in Oracle Database 12c”
Venue / Room:      Moscone South – 301
Date and Time:     9/28/14, 13:30 – 14:15

Session ID:        CON4493
Session Title:     “Regular Expressions in Oracle Database 101”
Venue / Room:      Moscone South – 303
Date and Time:     10/2/14, 13:15 – 14:00

As usual, you might have to check before the session to make sure the room has not changed.

I hope to see you there.


Filed under: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL Tagged: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL
Categories: DBA Blogs

Hands-On Programming with R by Garrett Grolemund

Surachart Opun - Wed, 2014-08-27 02:42
R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and macOS.
The R language is useful for becoming a data scientist as well as a computer scientist. I’d like to mention a book about data science with R: Hands-On Programming with R: Write Your Own Functions and Simulations by Garrett Grolemund. It shows how to solve the logistical problems of data science, and how to write your own functions and simulations in R. Readers work through practical data analysis projects (Weighted Dice, Playing Cards, Slot Machine) and come away with a deeper understanding of R. In addition, Appendixes A–E cover installing and updating R and R packages, as well as loading data and debugging R code.
Garrett Grolemund maintains shiny.rstudio.com, the development center for the Shiny R package.
Free Sampler. Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Tue, 2014-08-26 15:35

My favourite language is hard to pinpoint: is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Tue, 2014-08-26 15:35

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Tue, 2014-08-26 15:35

Yesterday we released the new version 2.0 of our product PFCLObfuscate . This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2014-08-26 15:35

I will be co-chairing/hosting a twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here . The chat is done over twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2014-08-26 15:35

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled " PFCLScan Reseller Program ". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-08-26 15:35

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-08-26 15:35

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-08-26 15:35

It has been a few weeks since my last blog post but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

The Social Spotlight is Going to Shine on #OOW14

Linda Fishman Hoyle - Tue, 2014-08-26 15:21

A Guest Post by Mike Stiles, Senior Content Manager for Oracle Social Cloud (pictured left)

Want to see an example of “busy” and “everywhere”? Then keep an eye on the Oracle Social Cloud team as they head into this year’s Oracle OpenWorld. Famous for their motto of “surely we can tackle even more,” Oracle’s top socializers will be all over Moscone, from the Social Intelligence Center in CX Central to 16+ social track sessions to live demos to comprehensive social coverage. Oracle Social Cloud will be trumpeting the social business imperative with live, interactive displays and inspiring speakers from Oracle, General Motors, Chevrolet, FleishmanHillard, Nestle, Polaris, Banc of California, CMP.LY and more.

Catch as many of these highlights as you can. You know how we social people love for people to follow us:

  • Social Intelligence Center: Swing by the Oracle SRM “Social Intelligence Center” in CX Central in Moscone West. We don’t know if it will literally make you smarter, but it is a real world demonstration of how the Oracle Social Cloud’s Social Relationship Management (SRM) platform serves up big data visualizations. Specifically, we’ll be watching the web and social chatter around #OOW14 using advanced analytics and deeper listening. You can see the new graphical representations of social data and global activity, get some great ideas for establishing a Social Intelligence Center at your brand, or see firsthand how the Oracle SRM platform is a mean, modernizing, social management, streamlining machine. And don’t forget to tweet about what you see.
  • “Financial Services: Why Social Media Makes Cents” with Tom Chernaik of CMP.LY; Kevin Salas, Banc of California; and Angela Wells of Oracle Social. Monday, Sept. 29 @ 1:45p.m. [CON8561]
  • “A Sky-High Overview: Oracle Social Cloud” with Meg Bear, Group Vice President of Oracle Social. Tuesday, Sept. 30 @ 10 and 11:45a.m. [TGS8068]
  • “Show Me the Money: Building the Business Case for Social” with Holly Spaeth of Polaris; Michelle Lapierre of Marriott; Meghan Blauvelt, Nestle; and Angela Wells of Oracle Social. Wednesday, Oct. 1 @ 11:45a.m. [CON8551]
  • “Social Relationship Management: Lessons Learned from the Olympics, Super Bowl, Grammys and More” with Jamie Barbour of Chevrolet; Melissa Schreiber of FleishmanHillard; and Erika Brookes of Oracle Social. Wednesday, Oct. 1 @ 1p.m. [CON8349]
  • “Global Command Centers: Why Social Engagement is Key to Connect with Customers” with Rebecca Harris and Whitney Drake of General Motors; Alan Chumley of FleishmanHillard; and Tara Roberts of Oracle Social. Wednesday, Oct. 1 @ 2:15p.m. [CON8350]
  • “Whose Customer is this Anyway? Rise of the CCO, the CDO and the ‘New’ CMO” with Jeb Dasteel, Oracle’s Chief Customer Officer (CCO); other C-Suite executives; and Erika Brookes of Oracle Social. Wednesday, Oct. 1 @ 3:45p.m. [CON8457]
  • “Leveraging Social Identity to Build Better Customer Relations” with Andrew Jones of the Altimeter Group. Thursday, Oct. 2 @ 11:30a.m. [CON8348]
  • “When Social Data = Opportunity: Leveraging Social Data to Target Custom Audiences” with Michelle Lapierre of Marriott; and Cory Treffiletti of Oracle. Thursday, Oct. 2 @ 12:45p.m. [CON8554]

Want the most thorough coverage of Oracle Social’s OpenWorld activities imaginable? Then by all means make sure you friend and follow us on all our channels, including Twitter, Facebook, Google+, and LinkedIn. And subscribe to our daily Social Spotlight podcast!

We want YOU to contribute to our channels and share the top sights, sounds and takeaways that you’re getting from OpenWorld. Register today and we'll see you in San Francisco September 28 - October 2, 2014!

Functional Overview of the new PeopleSoft Fluid User Experience: The Home Page

PeopleSoft Technology Blog - Tue, 2014-08-26 14:50

PeopleTools 8.54 is a landmark release for Oracle/PeopleSoft, and you are starting to see a lot of information on it, both in this blog and elsewhere.  One of the most important aspects of this release is the new Fluid User Experience.  This is a broad-ranging subject, so you will see posts from a functional perspective (this post), a developer’s perspective, and announcements about Fluid applications that are being delivered by PeopleSoft.

Perhaps you’ve heard about how the Fluid user experience provides support for mobile applications.  While that is certainly true, Fluid offers much more than that.  What the Fluid UX really provides is the ability to access PeopleSoft applications across a variety of form factors from smart phones to tablets to desktops/laptops.  Fluid applications present a common user experience on a variety of devices regardless of screen size.  These applications are efficient and attractive as well, and offer the kind of user experience the modern work force is expecting.  So no matter how your users access PeopleSoft, they will be presented with the same intuitive user experience.  This post is the first of a series covering the main features that you will see in Fluid applications when you install them or if you develop some yourself.  We’ll also cover how to get started with Fluid/PeopleTools 8.54, and how new Fluid application pages will work with your existing applications.

We’ll start today with perhaps the most fundamental feature of the Fluid UX: the Home Page.  This page provides a base or launch pad for users to navigate to their essential work.  Home pages are designed for specific roles, so they contain all the essentials for each role without extraneous menus or content that might distract users.  In this way, Fluid Home Pages are conceptually similar to current home pages or dashboards.

 Let’s look at the main features of the Home page.

The first thing you’ll notice about Home Pages is that they contain a variety of Tiles.  Tiles can serve as navigation mechanisms, but may also convey dynamic information.  Tiles are similar in purpose to pagelets, but are not as interactive.  They are responsive, however, and can automatically change size and position to accommodate different form factors.

Now let’s take a look at the Home Page header, which contains several useful features.  Central is the Home Page drop down menu.  This takes the place of tabs, and enables users to move among all the home pages that they use.  Users may serve in a variety of roles in an enterprise, and they may therefore have more than one Home Page—one for each role they play.  For example, a person may be a manager, but they are also an employee, and as such they have different activities and tasks they perform in both those roles.  They would likely have different home pages for those different roles. 

 Next is the PeopleSoft Search widget.  This provides for a search-centric navigation paradigm, and enables users to search from almost anywhere within their PeopleSoft system.  In addition, with the new PeopleSoft search, users can search across pillars and retrieve results from all their PeopleSoft content.  The search results are even actionable, so users can often complete a task right from the search results page.

Notifications are a handy mechanism that lets users know when there are tasks requiring their attention.  The Notifications widget displays the number of items requiring attention.  When the user clicks the widget a window displays the sum of all notifications from all applications to which the user has access.  The user can act on those items directly from the Notifications window.

Next is the Actions widget.  This menu is configurable, but one of the main actions available is Personalizations. This takes the user to a page where they can add or remove tiles from a Home Page, delete Home Pages, or even create new home pages and configure them.

Finally, and perhaps most importantly, we have the Navigation Bar widget.  This opens the Nav Bar, which enables users to get anywhere in their PeopleSoft system.  The Nav Bar is flexible, configurable, powerful and intuitive.

The Nav Bar is a rich topic in its own right, and will be covered in a separate blog post in the near future.  In fact, the Fluid user experience is a large subject, so we’ll be posting many more articles describing its features in greater depth.  We’ll also provide a taste of developing Fluid applications.

If you would like more information on the Fluid UX (and everything PeopleSoft) please see the PeopleSoft Information Portal.

We will also be covering the Fluid UX in great depth in several sessions at Oracle Open World.  Come see us at the conference!  You’ll not only acquire useful information, but you can talk with us personally and see live demos of these features.  Here are a few sessions in particular that cover the Fluid UX:

  • A Closer Look at the New PeopleSoft Fluid User Experience (CON7567)
  • PeopleSoft Mobility Deep Dive: PeopleSoft Fluid User Interface and More (CON7588)
  • PeopleSoft Fluid User Interface: A Modern User Experience for PeopleSoft HCM on Any Device (CON7667)
  • PeopleSoft PeopleTools 8.54: PeopleSoft Fluid User Interface in Action (CON7595)
These sessions cover a wide variety of subjects from the functional (for end users and SMEs) to the technical (for developers).

Session Schedule Information OpenWorld 2014 San Francisco

Andrejus Baranovski - Tue, 2014-08-26 12:55
I have received my session schedule information for OpenWorld 2014. This year's event is going to be quite busy, with three sessions. Below you can check the session titles along with times. Looking forward to meeting you in San Francisco!


Session ID: CON2623
Session Title: Oracle ADF Development and Deployment to Oracle Cloud
Venue / Room: Moscone South - 270
Date and Time: 10/1/14, 15:30 - 16:15

Session ID: CON3745 (together with Danilo Schmiedel)
Session Title: Oracle Mobile Suite and Oracle Adaptive Case Management: A Strong Combination to Empower People
Venue / Room: Moscone West - 3018
Date and Time: 10/1/14, 16:45 - 17:30

Session ID: CON2495
Session Title: Data Caching Strategies for Oracle Mobile Application Framework
Venue / Room: Marriott Marquis - Nob Hill C/D
Date and Time: 10/2/14, 12:00 - 12:45

Rittman Mead in the OTN TOUR Latin America 2014

Rittman Mead Consulting - Tue, 2014-08-26 11:28

Another OTN Tour Latin America has come and gone. This is the most important technical event in the region visiting 12 countries and with more than 2500 attendees in two weeks.

This year Rittman Mead was part of the OTN Tour in Buenos Aires (Argentina) and Montevideo (Uruguay) presenting about ODI and OGG.

We started in Buenos Aires on August 11, the first day of the OTN Tour in Argentina. I talked about the integration of ODI and OGG 12c, explaining all the technical details of configuring and implementing it. Most of the attendees hadn't worked with these tools (but were curious about them), so I personalised the presentation a little, first giving them an introduction to ODI and OGG.

As the vice-president of the UYOUG (Uruguayan Oracle User Group) I'm part of the organisation of the OTN Tour in my country, so we needed to come back that same Monday to sort out some last details and have everything ready for the event in Uruguay.

Most of the speakers came on Wednesday, and we spent a great day with Michelle Malcher, Kamran Agayev, Hans Forbrich and Mike Dietrich. First, we went to lunch at the Mercado del Puerto, an emblematic place with lots of “parrillas” (a kind of barbecue), and then we gave them a little city tour, which included a visit to El Cerro de Montevideo. Finally we visited one of the most important wineries in Uruguay, Bodega Bouza, where we had a wine tour followed by an amazing tasting of a variety of wines, including Tannat, our insignia grape. You know… it is important to be relaxed before a conference :-)

The first day of the event in Uruguay was dedicated exclusively to technical sessions and in the second one we had the hands-on labs. The conference covered a wide range of topics from BI Mobile, e-Business Suite to how to upgrade to Oracle Database12c, Oracle Virtualization and Oracle RAC. All the sessions were packed with attendees.

The next day we had labs, with PCs with the software already installed, though attendees could come with their own laptops and install everything needed for the hands-on. We had the famous RAC Attack!, led by Kamran with the help of the ninjas Michelle, Hans and Nelson Calero, and an Oracle Virtualization lab by Hernan Petitti that ran for 7 hours!

It was a great event. You can see more pictures here and download the presentations here. The attendees as well as all the speakers were really happy with the result. And so were we.

This is only the beginning for Rittman Mead in Latin America. There are a lot of things to come, so stay tuned!

Categories: BI & Warehousing

Community Source Is Dead

Michael Feldstein - Tue, 2014-08-26 11:21

As Phil noted in yesterday’s post, Kuali is moving to a for-profit model, and it looks like it is motivated more by sustainability pressures than by some grand affirmative vision for the organization. There has been a long-term debate in higher education about the value of “community source,” which is a particular governance and funding model for open source projects. This debate is arguably one of the reasons why Indiana University left the Sakai Foundation (as I will get into later in this post). At the moment, Kuali is easily the most high-profile and well-funded project that still identifies itself as Community Source. The fact that this project, led by the single most vocal proponent for the Community Source model, is moving to a different model strongly suggests that Community Source has failed.

It’s worth taking some time to talk about why it has failed, because the story has implications for a wide range of open-licensed educational projects. For example, it is very relevant to my recent post on business models for Open Educational Resources (OER).

What Is Community Source?

The term “Community Source” has a specific meaning and history within higher education. It was first (and possibly only) applied to a series of open source software projects funded by the Mellon Foundation, including Sakai, Kuali, Fedora, and DSpace (the latter two of which have merged). As originally conceived, Community Source was an approach that was intended to solve a perceived resource allocation problem in open source. As then-Mellon Foundation Associate Program Officer Chris Mackie put it,

For all that the OSS movement has produced some runaway successes, including projects like Perl, Linux, and Mozilla Firefox, there appear to be certain types of challenges that are difficult for OSS to tackle. Most notably, voluntaristic OSS projects struggle to launch products whose primary customers are institutions rather than individuals: financial or HR systems rather than Web servers or browsers; or uniform, manageable desktop environments rather than programming languages or operating systems. This limitation may trace to any of several factors: the number of programmers having the special expertise required to deliver an enterprise information system may be too small to sustain a community; the software may be inherently too unglamorous or uninteresting to attract volunteers; the benefits of the software may be too diffuse to encourage beneficiaries to collaborate to produce it; the software may be too complex for its development to be coordinated on a purely volunteer basis; the software may require the active, committed participation of specific firms or institutions having strong disincentives to participate in OSS; and so on. Any of these factors might be enough to prevent the successful formation of an OSS project, and there are many useful types of enterprise software—including much of the enterprise software needed by higher education institutions—to which several of them apply. In short, however well a standard OSS approach may work for many projects, there is little reason to believe that the same model can work for every conceivable software project.

This is not very different from the argument I made recently about OER:

In the early days of open source, projects were typically supported through individual volunteers or small collections of volunteers, which limited the kinds and size of open source software projects that could be created. This is also largely the state of OER today. Much of it is built by volunteers. Sometimes it is grant funded, but there typically is not grant money to maintain and update it. Under these circumstances, if the project is of the type that can be adequately well maintained through committed volunteer efforts, then it can survive and potentially thrive. If not, then it will languish and potentially die.

The Mellon Foundation’s answer to this problem was Community Source, again as described by Chris Mackie:

Under this new model, several institutions contract together to build software for a common need, with the intent of releasing that software as open source. The institutions form a virtual development organization consisting of employees seconded from each of the partners. This entity is governed cooperatively by the partners and managed as if it were an enterprise software development organization, with project and team leads, architects, developers, and usability specialists, and all the trappings of organizational life, including reporting relationships and formal incentive structures. During and after the initial construction phase, the consortial partners open the project and invite in anyone who cares to contribute; over time the project evolves into a more ordinary OSS project, albeit one in which institutions rather than individual volunteers usually continue to play a major role.

A good friend of mine who has been involved in Mellon-funded projects since the early days describes Community Source more succinctly as a consortium with a license. Consortial development is a longstanding and well understood method of getting things done in higher education. If I say to you, “Kuali is a consortium of universities trying to build an ERP system together,” you will probably have some fairly well-developed notions of what the pros and cons of that approach might be. The primary innovation of Community Source is that it adds an open source license to the product that the consortium develops, thus enabling another (outer) circle of schools to adopt and contribute to the project. But make no mistake: Community Source functions primarily like a traditional institutional consortium. This can be best encapsulated by what Community Source proponents refer to as the Golden Rule: “If you bring the gold then you make the rules.”[1]

Proponents of Community Source suggested even from the early days that Community Source is different from open source. Technically, that’s not true, since Community Source projects produce open source software. But it is fair to say that Community Source borrows the innovation of the open source license while maintaining traditional consortial governance and enterprise software management techniques. Indiana University CIO and Community Source proponent Brad Wheeler sometimes refers to Community Source as “the pub between the Cathedral and the Bazaar (a reference to Eric Raymond’s seminal essay on open source development).” More recently, Brad and University of Michigan’s Dean of Libraries James Hilton codified what they consider to be the contrasts between open source and Community Source in their essay “The Marketecture of Community,” and which Brad elaborates on in his piece “Speeding Up On Curves.” They represent different models of procuring software in a two-by-two matrix, where the dimensions are “authority” and “influence”:

Note that both of these dimensions are about the degree of control that the purchaser has in deciding what goes into the software. It is fundamentally a procurement perspective. However, procuring software and developing software are very different processes.

A Case Study in Failure and Success

The Sakai community and the projects under its umbrella provide an interesting historical example to see how Community Source has worked and where it has broken down. In its early days, Indiana University and the University of Michigan were primary contributors to Sakai and very much promoted the idea of Community Source. I remember a former colleague returning from a Sakai conference in the summer of 2005 commenting, “That was the strangest open source conference I have ever been to. I have never seen an open source project use the number of dollars they have raised as their primary measure of success.” The model was very heavily consortial in those days, and the development of the project reflected that model. Different schools built different modules, which were then integrated into a portal. As Conway’s Law predicts, this organizational decision led to a number of technical decisions. Modules developed by different schools were of differing quality and often integrated with each other poorly. The portal framework created serious usability problems like breaking the “back” button on the browser. Some of the architectural consequences of this approach took many years to remediate. Nevertheless, Sakai did achieve a small but significant minority of U.S. higher education market share, particularly at its peak a few years ago. Here’s a graph showing the growth of non-Blackboard LMSs in the US as of 2010, courtesy of data from the Campus Computing Project:

Meanwhile, around 2009, Cambridge University built the first prototype of what was then called “Sakai 3.” It was intended to be a ground-up rewrite of a next-generation system. Cambridge began developing it themselves as an experiment out of their Centre for Applied Research in Educational Technologies, but it was quickly seized upon by NYU and several other schools in the Sakai community as interesting and “the future.” A consortial model was spun up around it, and then spun up some more. Under pressure from Indiana University and University of Michigan, the project group created multiple layers of governance, the highest of which eventually required a $500K institutional commitment in order to participate. Numbers of feature requirements and deadlines proliferated, while project velocity slowed. The project hit technical hurdles, principally around scalability, that it was unable to resolve, particularly given ambitious deadlines for new functionality. In mid-2012, Indiana University and University of Michigan “paused investment” in the project. Shortly thereafter, they left the project altogether, taking with them monies that they had previously committed to invest under a Memorandum of Understanding. The project quickly collapsed after that, with several other major investors leaving. (Reread Phil’s post from yesterday with this in mind and you’ll see the implications for measuring Kuali’s financial health.)

Interestingly, the project didn’t die. Greatly diminished in resources but freed from governance and management constraints of the consortial approach, the remaining team not only finally re-architected the platform to solve the scalability problems but also have managed seven major releases since that implosion in 2012. The project, now called Apereo OAE, has returned to its roots as an academic (including learning) collaboration platform and is not trying to be a direct LMS replacement. It has even begun to pick up significant numbers of new adoptees—a subject that I will return to in a future post.

It’s hard to look at the trajectory of this project and not conclude that the Community Source model was a fairly direct and significant cause of its troubles. Part of the problem was the complex negotiations that come along with any consortium. But a bigger part, in my opinion, was the set of largely obsolete enterprise software management attitudes and techniques that come along as a not-so-hidden part of the Community Source philosophy. In practice, Community Source is essentially a project management approach focused on maximizing the control and influence of the IT managers whose budgets are paying for the projects. But those people are often not the right people to make decisions about software development, and the waterfall processes that they often demand in order to exert that influence and control (particularly in a consortial setting) are antithetical to current best practices in software engineering. In my opinion, Community Source is dead primarily because the Gantt Chart is dead.

Not One Problem but Two

Community Source was originally developed to address one problem, which was the challenge of marshalling development resources for complex (and sometimes boring) software development projects that benefit higher education. It is important to understand that, in the 20 years since the Mellon Foundation began promoting the approach, a lot has changed in the world of software development. To begin with, there are many more open source frameworks and better tools for developing good software more quickly. As a result, the number of people needed for software products (including voluntaristic open source projects) has shrunk dramatically—in some cases by as much as an order of magnitude. Instructure is a great example of a software platform that reached first release with probably less than a tenth of the money that Sakai took to reach its first release. But also, we can reconsider that “voluntaristic” requirement in a variety of ways. I have seen a lot of skepticism about the notion of Kuali moving to a commercial model. Kent Brooks’ recent post is a good example. The funny thing about it, though, is that he waxes poetic about Moodle, which has a particularly rich network of for-profit companies upon which it depends for development, including Martin Dougiamas’ company at the center. In fact, in his graphic of his ideal world of all open source, almost every project listed has one or more commercial companies behind it without which it would either not exist or would be struggling to improve:

BigBlueButton is developed entirely by a commercial entity. The Apache web server gets roughly 80% of its contributions from commercial entities, many of which (like IBM) get direct financial benefit from the project. And Google Apps aren’t even open source. They’re just free. Some of these projects have strong methods for incorporating voluntaristic user contributions and taking community input on requirements, while others have weak ones. But across that spectrum of practices, community models, and sustainability models, they manage to deliver value. There is no one magic formula that is obviously superior to the others in all cases. This is not to say that shifting Kuali’s sustainability model to a commercial entity is inevitably a fine idea that will succeed in enabling the software to thrive while preserving the community’s values. It’s simply to say that moving to a commercially-driven sustainability model isn’t inherently bad or evil. The value (or lack thereof) will all depend on how the shift is done and what the Kuali-adopting schools see as their primary goals.

But there is also a second problem we must consider—one that we’ve learned to worry about in the last couple of decades of progress in the craft of software engineering (or possibly a lot earlier, if you want to go back as far as the publication of The Mythical Man Month). What is the best way to plan and execute software development projects in light of the high degree of uncertainty inherent in developing any software with non-trivial complexity and a non-trivial set of potential users? If Community Source failed primarily because consortia are hard to coordinate, then moving to corporate management should solve that problem. But if it failed primarily because it reproduces failed IT management practices, then moving to a more centralized decision-making model could exacerbate the problem. Shifting the main stakeholders in the project from consortium partners to company investors and board members does not require a change in this mindset. No matter who the CEO of the new entity is, I personally don’t see Kuali succeeding unless it can throw off its legacy of Community Source IT consortium mentality and the obsolete, 1990s-era IT management practices that undergird it.

  1. No, I did not make that up. See, for example, https://chronicle.com/article/Business-Software-Built-by/49147

The post Community Source Is Dead appeared first on e-Literate.

Accelerate your Transformation to Digital

WebCenter Team - Tue, 2014-08-26 09:07
by Dave Gray, Entrepreneur, Author & Consultant

Digital Transformation – The re-alignment of, or new investment in, technology and business models to more effectively engage consumers or employees, digitally

We are in the process of building a global digital infrastructure that quantifies, connects, analyzes and maps the physical and social world we live in. This has already had massive impact on the way we live and work, and it will continue to do so, in ways which are impossible to predict or imagine.

If you work in a company that was imagined, designed and created before this digital infrastructure was developed, there is no question that your business will be impacted. If it hasn’t happened yet, it is simply a matter of time.

This digital transformation is a global phenomenon that is affecting every individual, and every business, on the planet. Everything that can be digital, will be digital. That means every bit of information your company owns, processes or touches. There is no product or service you provide that won’t be affected. The question is, what does it mean to you, and what should you be doing about it?

When technology advances, strategies and business models shift. It’s not simply a matter of taking what you do today and “making it digital.” That’s the difference between translation and transformation. Your products and services don’t need to be “translated” into the digital world. They must be transformed.

Take Kodak, for example. As long as there have been photographs, people have used them to store and share memories of people and events. That hasn’t changed since Eastman Kodak was founded by George Eastman in 1888.

But when technology advances, strategies and business models shift. Kodak was at the leading edge of the research that doomed its own business model. In 1975, Kodak invented the digital camera (although it was kept secret at the time). Kodak engineers even predicted, with startling accuracy, when the digital camera would become a ubiquitous consumer technology. So Kodak had a major advantage over every other company in the world. They were able to predict the death of film and they had about a 15-year head start over everyone else.

Unfortunately, the business of film was so profitable, and the reluctance (or fear) of disrupting its own business was so great, that Kodak had difficulty focusing on anything else.

In 2010, two software engineers, Kevin Systrom and Mike Krieger, founded a company called Instagram, which they sold to Facebook two years later for approximately $1 billion.

When technology advances, strategies and business models shift.

Let’s take another example, Nokia, who commanded 40 percent of the mobile phone market as recently as 2007. Nokia was the clear and unequivocal market leader at that time. People were talking on phones in 2007 and they are still doing that today. But what happened? The phone became a mobile computer, a platform for digital services. And in a digital world, a phone is only as good as the digital services you can get on that phone. So Nokia, who used to compete only with other phone manufacturers, suddenly found itself in competition with computer manufacturers, who already had a strongly-developed ecosystem, strong relationships with application developers, and a compelling business model to offer them. Nokia was unable to make the leap from phone maker to computer maker.

When technology advances, strategies and business models shift.

It doesn’t matter what you make or what service you provide. If it’s a product, it will more and more come to resemble a computer. If it’s a service, it will increasingly become a digital service.

The shift doesn’t happen everywhere all at once. It has happened with music and books; it is happening with phones, TV and film, and the hospitality and transportation industries; it is just starting to happen with cars; and it will soon happen in health care and other systems that tend to resist change because of bureaucracy and legal regulation.

So what can you do? How can you avoid being the next Kodak or the next Nokia? 

You need to take a different approach to innovation.

How do you manage your innovation initiatives?

Many companies use something called an innovation funnel. 

The idea of the funnel is that you solicit ideas from all over the company, and sometimes even from outside the company. Ideas come in at the top of the funnel and you have several gates that they have to go through, and they get funded at varying levels as they pass the criteria at each gate. If you do it right, the theory is that the best ideas come out and these are the ones that we implement.

The problem with the funnel approach is that nobody puts ideas in at the top. Nothing comes in. Why? Because people look at that funnel and what they see is a sophisticated machine for killing their ideas. The design of the funnel does not encourage people to generate ideas, because basically it’s a suggestion box with a shredder inside. It’s an idea killer. It doesn’t excite people. It doesn’t encourage creativity.

People think: I'm going to have to write a business plan. I'm going to have to do a bunch of market research. I'm going to have to make a lot of spreadsheets and projections. And this is on top of my regular job. Only to have my idea most probably killed in the end. You are saying to people “We welcome your ideas” but what people are thinking is “You don't really welcome my ideas. You welcome my ideas so you can kill them.”

So what happens? Nothing comes into the top of the funnel, and the funnel manager goes, “Why? Why is nobody giving me their ideas?” This is why it's a problem. Because if anyone really has an idea, what are they going to do? They are going to leave, because in most cases, it’s actually easier to do your innovation out in the world today, even with no funding, than it is to do it inside a modern industrial company.

So what's an alternative to the funnel? Connected companies, like Amazon and Google, do it differently. They create a level playing field where anyone in the company can generate ideas and they actually are allowed to spend time working on them, without having to prove anything. They are trusted to innovate.

At Google they call it 20 percent time. For one day a week, or the equivalent, you get the opportunity  to play with your ideas. This kind of approach has to be supported by a culture where people actually really do have passion, energy and things they want to do. And it has to be recognized and supported throughout the organization as something that’s important. 

If you want to do this, efficiency has to take a hit. You can't optimize everything for efficiency and also do experiments. You just can't. Experiments by their very nature are inefficient, because you don’t really know what you’re doing. If you know in advance what the outcome will be, by definition it’s not an experiment.

This approach is less like a funnel and more like a terraced garden. It’s actually similar to a funnel, in a way, but it’s flipped.

Think of it this way. You've got to make space for these ideas to start.

There's no learning in the funnel approach. You don't learn much from making a business plan. You learn when you do experiments, when you put things into the world and start interacting with customers. You learn when you do things. What do you learn from making a business plan? Business plans are science fiction.

As a leader you have to make space for these experiments to happen. 

And some of these experiments – maybe even a lot of them – will yield interesting results.

And you want to have a way to distribute budget and resources to the most promising experiments.

So you set a certain percentage of the budgets throughout the organization, and you say, this money is for funding promising experiments. You can't spend it on operations or improving efficiency. You have to spend it on new ideas that you think are promising. That’s the second level in the terraced garden.

Layer one gives everybody a little bit of elbow room to innovate and experiment. Layer two offers a way for management to pick the plants that look the most interesting and give them a little bit of care and feeding. It might be that you just give someone some extra time. 

The third and fourth layers are where the most promising ideas, the ones that might be worth making big bets on, emerge. You might have a few of these that you think might actually generate the next major stage of growth for the company.

The good thing is that these big bets are all based on things that are already working. This is how venture capitalists invest. They usually don't invest in a really good business plan, because people who make good business plans often don't make good entrepreneurs. People who make good entrepreneurs are people who are out there doing things already.

Amazon's recommendation engine started out as a little weed at the bottom of the garden. Gmail at Google started out as a small 20-percent-time project; somebody was doing it in their spare time.

Venture capitalists invest in companies that are already working. They may not be profitable yet, but they have customers, they have promise, they have figured something out, they have learned something. This is also how companies like Google and Amazon do it.

They don't invest in business plans. They don't say it's got to be a billion-dollar opportunity or it's not worth our time. They say, let's try it. Get out there and try things with customers, because the billion-dollar opportunity may start as a $10,000 opportunity. It may start small.

What are Google’s big bets right now? Google Glass. The self-driving car.

What are the big bets for Amazon? Kindle. Kindle's a big bet. There are lots of dollars going into that one. Amazon web services: the project that says, we're going to take our own infrastructure and we're going to sell it. Even to competitors. We'll sell to anybody.

What are some of the experiments that failed? Amazon tried auctions. And it seems to make sense. “eBay does it. We've got a big audience. We can try it.” They tried it. They tried and they failed.

But what really happened with Amazon auctions? They learned. They realized something about their customers. Amazon is very customer focused. They realized that their customers don't want to wait. They don't want to wait a week to see if they bought something. Amazon customers want it now.

One of the interesting experiments is something called “unique phrases inside this book.” Someone had a hypothesis: “I think some books have not just unique words but unique phrases. If a phrase shows up in one book, you might want to know other books that have that phrase in it. It might be interesting.” Someone's working on that as a little experiment.

What happens if that experiment fails? Those people don't get fired. They are extremely valuable because of what they have learned. They find other teams. They get recruited. Think of a swarm of startups where people are recruiting each other all the time. It's like Silicon Valley inside of Amazon.

This kind of change is not simple or easy. But it’s clear that the future will not be simply more of the past. It will require bold thinking, creativity, and a new approach to innovation. Innovation can’t be the job of an R&D department. It has to be everyone’s job. And if you’re in senior management, it’s your job to create the conditions that will make innovation possible.

You can hear more from Dave on how to transform your business to digital in our Digital Business Thought Leaders webcast "The Digital Experience: A Connected Company’s Sixth Sense".

SQL Server Replication Quick Tips

Pythian Group - Tue, 2014-08-26 07:56

There comes a time in every SQL Server DBA's career when an email arrives with a “replication is not working, please check” message. This article provides quick tips on how to handle common replication errors and performance problems in a one-way transactional replication topology.

Oh boy, there is a data problem:

You check Replication Monitor and get:

“Transaction sequence number: 0x0003BB0E000001DF000600000000, Command ID: 1”

The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)

Note that the sequence number will be used in the following scripts. The command ID is also important, because the issue is not necessarily with the whole sequence number; it might be tied to just one command.

Go to the distributor database and run the following query to get the list of articles involved in this issue:

select * from dbo.MSarticles
where article_id in (
select article_id from MSrepl_commands
where xact_seqno = 0x0003BB0E000001DF000600000000)

To get the whole list of commands, you can run the query below:

exec sp_browsereplcmds
@xact_seqno_start = '0x0003BB0E000001DF000600000000',
@xact_seqno_end = '0x0003BB0E000001DF000600000000'

With this last query you can get to the exact command that is failing (search for the command number in the commandID column).

You will notice that a transactional replication will typically (depending on setup) use insert, delete, and update stored procedures to replicate the data, so the command you see here will look something like:

{CALL [sp_MSdel_dboMyArticle] (118)}

That is the stored procedure generated to process delete statements over the dbo.MyArticle table; in this case it is trying to delete ID 118. Based on the reported error, you can now see the issue: replication is trying to delete row 118 of dbo.MyArticle, but that row does not exist at the subscriber, so it is trying to delete a non-existent record.

Options:

  1. Check the publisher for this record and manually insert it at the subscriber; this will cause the replication command to succeed and will fix the issue.
  2. Skip the command. For this specific example you can skip it, as there is no need to delete something that has already been deleted, by removing the command from the MSrepl_commands table. (Beware: only do this when you know what you are doing; manually removing transactions can result in an unstable replication.) In this example you would use something like
    Delete from MSrepl_commands
    where xact_seqno = 0x0003BB0E000001DF000600000000 and commandID=1
  3. Reinitialize. This option is the least popular, and you should try to fix the issue before resorting to it. However, if after skipping the command you still get new errors everywhere, something definitely went wrong and there is no easy way to guarantee that your subscription is up to date and stable. This can indicate that someone or something messed with the data: some modification was made at the subscription, and it is causing issues with the replication. Remember, a one-way transactional replication is usually intended to maintain a copy of the data so it can be queried; no modification should be made to that copy, as changes won't replicate back to the publisher.
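As a rough sketch of the decision between options 1 and 2, the command retrieved with sp_browsereplcmds can be triaged on its generated procedure name. This helper is hypothetical (names are mine), and it assumes the conventional sp_MSdel_/sp_MSins_/sp_MSupd_ prefixes of the generated replication procedures:

```python
def triage_20598(command):
    """Suggest a fix for a 'row was not found' (20598) error, given the
    replicated command text retrieved with sp_browsereplcmds (sketch)."""
    if "sp_MSdel_" in command:
        # The row is already gone at the subscriber; skipping the delete
        # command leaves both sides consistent (option 2 above).
        return "skip"
    # For inserts/updates, copy the missing row from the publisher to the
    # subscriber so the command can succeed (option 1 above).
    return "backfill"
```

For the example above, `triage_20598("{CALL [sp_MSdel_dboMyArticle] (118)}")` suggests skipping, since the record is already deleted.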

Query time outs:

After checking the replication monitor you get a message like:

Query timeout expired
The process is running and is waiting for a response from the server
Initializing…

and then terminating with this error…
Agent ‘MyAgent’ is retrying after an error, YY retries attempted

This can be due to several reasons:

  • Your transaction is taking a long time and needs some tuning. If your transaction touches too much data or uses a bad query plan, it can become a long-running query; check your T-SQL and see if the execution plan is optimal
  • There is a problem with the network. If you normally don't have this issue and it just happened out of the blue, check the network; a network failure or a saturated endpoint can slow transfer rates, affecting your replication
  • Server performance. Excessive CPU or memory usage on either the publisher or the subscriber can eventually impact a replication transaction, causing it to time out
  • The query just needs some more time to complete. If this is the case, you can tweak the timeout setting to give the transaction more time to process properly. To do this:
  1. Right click the Replication folder
  2. Click Distributor Properties and select General
  3. Click ‘Profile Defaults’
  4. Choose ‘Distribution Agents’ on left
  5. Click ‘New’ to create a new default agent profile
  6. Choose ‘Default Agent Profile’ from the list displayed, (to copy this)
  7. Pick a name for your new profile and update the QueryTimeout value in the right column
  8. Save
  9. Choose to use this profile across all your replication sets; however, I would recommend applying it only to the agent that requires this change
  10. To individually assign the profile, open Replication Monitor and then in the left pane click your replication set
  11. In the right pane, select your desired agent, right click and change the profile to the new one you just created

Mini Hack on expired subscriptions

When a replication is marked as expired, it will tell you that you need to reinitialize.

To activate it “under the hood”, check the last error in Replication Monitor; it will show you the last sequence number that tried to process. Then run this command (using the corresponding seq_no):

update MSsubscriptions
set status=2
where subscription_seqno=0x0002AADE00005030000100000002

The status column means:

0 = Inactive.

1 = Subscribed.

2 = Active.

You can change it to Active and it will try to process again. Why would you use this? If the subscription expired but your distribution cleanup job hasn't run yet, the agent can try to reprocess everything; if the issue was a network timeout and your network is back up, this lets the agent resume from the last sequence number. You can also use it to reproduce the last reported error: the subscription will fail and eventually expire again, but you will have a better idea of why it failed in the first place.
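As a small sketch, the status codes and the reactivation statement above can be wrapped in a helper (the names here are mine; the actual fix is just the UPDATE shown above):

```python
# Meaning of MSsubscriptions.status, per the list above
SUBSCRIPTION_STATUS = {0: "Inactive", 1: "Subscribed", 2: "Active"}

def reactivate_sql(subscription_seqno):
    """Build the UPDATE that flips an expired subscription back to
    Active (status 2) for a given subscription sequence number."""
    return ("update MSsubscriptions set status = 2 "
            "where subscription_seqno = " + subscription_seqno)
```

Calling `reactivate_sql("0x0002AADE00005030000100000002")` reproduces the statement from the example.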

Multi threading or “Streams”

By a slow replication I mean one where you know your commands are experiencing a delay going from the distributor to the subscriber. You can check this with performance counters or by quickly inserting a tracer token (http://technet.microsoft.com/en-us/library/ms151846%28v=sql.105%29.aspx).

You can improve performance by adding streams. By default, the distribution agent writes replication transactions sequentially, one by one; with streams you can add more threads. Say you specify 4 streams: you will process 4 transactions at a time, meaning a faster turnaround. This can work beautifully, but it can also generate deadlocks and inconsistencies. I would recommend starting low, adding one stream at a time, and stopping when you start seeing problems. Do not go crazy and treat this as a turbo button by adding 30 streams. And, like most features, test it in QA first!

To Enable this option follow these steps:

  1. Open Replication Monitor, expand the Publisher and select the Publication in the left pane.
  2. On the right pane window , under “All Subscriptions” , you will see a list of all the Subscribers.
  3. Right Click the Subscriber you want to modify and click on “View Details”. A new Window will appear with the distribution agent session details.
  4. Now click on “Action” in the menu bar at the top and select “Distribution Agent Job Properties”; this will open the corresponding job properties.
  5. Go to “Steps” in the left pane window, highlight “Run Agent” in the right pane window, and click Edit.
  6. A new window will pop up; scroll to the right end of the command section and append the parameter “-SubscriptionStreams 2”
  7. Save the settings and restart the Distribution Agent job.

You might encounter some issues when implementing this, you can read this KB for further info:

http://support.microsoft.com/kb/953199

Conclusion

There are many ways to fix a replication. Sometimes it is easier to just reinitialize, but that is not always an option, for instance when critical systems depend on the subscription being up to date, or when your database is so huge that reinitializing would take days. When possible, troubleshoot instead of restarting the replication from scratch; it will give you much more insight into what is going on.

Categories: DBA Blogs

Starting out with MAF?

Angelo Santagata - Tue, 2014-08-26 07:45
If you're starting out with MAF, Oracle's Mobile Application Framework, then you MUST read this blog entry and make sure everything is right... even if you're a seasoned ADF Mobile developer like me, you'll want to check this out; it got me a couple of times!

https://blogs.oracle.com/mobile/entry/10_tips_for_getting_started

%sql: To Pandas and Back

Catherine Devlin - Tue, 2014-08-26 05:03

A Pandas DataFrame has a nice to_sql(table_name, sqlalchemy_engine) method that saves itself to a database.

The only trouble is that coming up with the SQLAlchemy Engine object is a little bit of a pain, and if you're using the IPython %sql magic, your %sql session already has an SQLAlchemy engine anyway. So I created a bogus PERSIST pseudo-SQL command that simply calls to_sql with the open database connection:

%sql PERSIST mydataframe

The result is that your data can make a very convenient round-trip from your database, to Pandas and whatever transformations you want to apply there, and back to your database:



In [1]: %load_ext sql

In [2]: %sql postgresql://@localhost/
Out[2]: u'Connected: @'

In [3]: ohio = %sql select * from cities_of_ohio;
246 rows affected.

In [4]: df = ohio.DataFrame()

In [5]: montgomery = df[df['county']=='Montgomery County']

In [6]: %sql PERSIST montgomery
Out[6]: u'Persisted montgomery'

In [7]: %sql SELECT * FROM montgomery
11 rows affected.
Out[7]:
[(27L, u'Brookville', u'5,884', u'Montgomery County'),
(54L, u'Dayton', u'141,527', u'Montgomery County'),
(66L, u'Englewood', u'13,465', u'Montgomery County'),
(81L, u'Germantown', u'6,215', u'Montgomery County'),
(130L, u'Miamisburg', u'20,181', u'Montgomery County'),
(136L, u'Moraine', u'6,307', u'Montgomery County'),
(157L, u'Oakwood', u'9,202', u'Montgomery County'),
(180L, u'Riverside', u'25,201', u'Montgomery County'),
(210L, u'Trotwood', u'24,431', u'Montgomery County'),
(220L, u'Vandalia', u'15,246', u'Montgomery County'),
(230L, u'West Carrollton', u'13,143', u'Montgomery County')]
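Under the hood, the PERSIST hook can be sketched in a few lines. The function name and shape below are my own illustration, not the actual implementation: the idea is to intercept the statement before normal SQL execution, and, for a PERSIST command, hand the named DataFrame to to_sql with the session's existing engine.

```python
def intercept_persist(statement, user_namespace):
    """Detect the bogus 'PERSIST <name>' pseudo-SQL command (sketch).

    For a PERSIST statement, return the table name and the object bound
    to that name in the caller's namespace, so the magic can then call
    df.to_sql(name, engine) on the open connection's engine. Ordinary
    SQL returns None and falls through to normal execution.
    """
    parts = statement.strip().split()
    if len(parts) == 2 and parts[0].upper() == "PERSIST":
        return parts[1], user_namespace[parts[1]]
    return None
```

So `%sql PERSIST montgomery` resolves the `montgomery` DataFrame from the IPython namespace, while `%sql select * from cities_of_ohio` is passed through untouched.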