
Feed aggregator

APEX 5 - Opening and Closing Modal Window

Denes Kubicek - Wed, 2015-07-15 04:56
This example shows how to open a Modal Page from any element in your application. It is easy to get this working with standard components such as a button or a link in a report. However, it is not 100% clear how to do it with elements which don't have the redirect functionality built in (an item, a region title, custom links, etc.). This example also shows how to display the success message on the parent page after the Modal Page closes.

Categories: Development

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Wed, 2015-07-15 03:05

Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation, and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

PQ Index anomaly

Jonathan Lewis - Wed, 2015-07-15 01:42

Here’s an oddity prompted by a question that appeared on Oracle-L last night. The question was basically – “Why can’t I build an index in parallel when it’s a single column with most of the rows set to null and only a couple of values for the non-null entries?”

That’s an interesting question, since the description of the index doesn’t suggest any reason for anything to go wrong, so I spent a few minutes trying to emulate the problem. I created a table with 10M rows and a column that was 3% ‘Y’ and 0.1% ‘N’, then created and dropped an index in parallel a few times. The report I used to prove that the index build had run as a parallel build showed an interesting waste of resources. Here’s the code to build the table and index:


create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        case
                when mod(rownum,100) < 3 then 'Y'
                when mod(rownum,1000) = 7 then 'N'
        end                     flag,
        rownum                  id,
        rpad('x',30)            padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7
;

-- gather stats here

explain plan for
create index t1_i1 on t1(flag) parallel 4 nologging
;

select * from table(dbms_xplan.display);

create index t1_i1 on t1(flag) parallel 4 nologging;

select index_name, degree, leaf_blocks, num_rows from user_indexes;
alter index t1_i1 noparallel;
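
The “gather stats here” comment stands in for a call something like the following – a minimal sketch, where the method_opt is a plausible choice rather than a known quantity:

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for all columns size 1'  -- basic stats, no histograms
        );
end;
/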

As you can see, I’ve used explain plan to get Oracle’s prediction of the cost and size, then I’ve created the index, then checked its size (and set it back to serial from its parallel setting). Here are the results of the various queries (from 11.2.0.4) – it’s interesting to note that Oracle thinks there will be 10M index entries when we know that “completely null entries don’t go into the index”:

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------
|   0 | CREATE INDEX STATEMENT   |          |    10M|    19M|  3073   (3)| 00:00:16 |        |      |            |
|   1 |  PX COORDINATOR          |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (ORDER)     | :TQ10001 |    10M|    19M|            |          |  Q1,01 | P->S | QC (ORDER) |
|   3 |    INDEX BUILD NON UNIQUE| T1_I1    |       |       |            |          |  Q1,01 | PCWP |            |
|   4 |     SORT CREATE INDEX    |          |    10M|    19M|            |          |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE          |          |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,01 | PCWP |            |
|   6 |       PX SEND RANGE      | :TQ10000 |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | P->P | RANGE      |
|   7 |        PX BLOCK ITERATOR |          |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS FULL| T1       |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------

Note
-----
   - estimated index size: 243M bytes

INDEX_NAME           DEGREE                                   LEAF_BLOCKS   NUM_ROWS
-------------------- ---------------------------------------- ----------- ----------
T1_I1                4                                                562     310000

Although the plan says it’s going to run parallel, and even though the index says it’s a parallel index, we don’t have to believe that the creation ran as a parallel task – so let’s check v$pq_tqstat, the “parallel query table queue” statistics – and this is the result I got:
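
The report itself needs nothing more than a simple query against v$pq_tqstat, along these lines (a sketch – column formatting left to taste):

select
        dfo_number, tq_id, server_type, instance, process,
        num_rows, bytes, waits, timeouts, avg_latency
from
        v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, process
;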


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Ranger                 1 QC                      12        528          4          0           0
                      Producer               1 P004               2786931   39161903          9          1           0
                                             1 P005               2422798   34045157         11          1           0
                                             1 P006               2359251   33152158         12          1           0
                                             1 P007               2431032   34160854         14          2           0
                      Consumer               1 P000               3153167   44520722          3          0           0
                                             1 P001               1364146   19126604          4          1           0
                                             1 P002               2000281   28045742          3          0           0
                                             1 P003               3482406   48826476          3          0           0

                    1 Producer               1 P000                     1        298          0          0           0
                                             1 P001                     1        298          0          0           0
                                             1 P002                     1        298          0          0           0
                                             1 P003                     1         48          0          0           0
                      Consumer               1 QC                       4       1192          2          0           0

Check the num_rows column – the first set of slaves distributed 10M rows and roughly 140MB of data to the second set of slaves – and we know that most of those rows will hold (null, rowid) which are not going to go into the index. 97% of the data that went through the message queues would have been thrown away by the second set of slaves, and “should” have been discarded by the first set of slaves.

As for the original question about the index not being built in parallel – maybe it was, but not very parallel. You’ll notice that the parallel distribution at operation 6 in the plan is “RANGE”. If 97% of your data is null and only 3% of your data is going to end up in the index then you’d need to run at higher than parallel 33 to see any long-lasting executions – because at parallel 33 just one slave in the second set will get all the real data and do all the work of sorting and building the index, while the other slaves will (or ought to) be just throwing their data away as it arrives. When you’ve got 500M rows with only 17M non-null entries (as the OP had) to deal with, maybe the only thing still happening by the time you get to look is the one slave that’s building a 17M row index.

Of course, one of the reasons I wanted to look at the row distribution in v$pq_tqstat was to check whether I was going to see all the data going to one slave, or a spread across 2 slaves (Noes to the left, Ayes to the right – as they used to say in the UK House of Commons), or whether Oracle had been very clever and decided to distribute the rows by key value combined with rowid to get a nearly even spread. I’ll have to set up a different test case to check whether that last option is possible.

Footnote

There was another little oddity that might be a simpler explanation of why the OP’s index creation might actually have run serially. I dropped and recreated the index in my test case several times and at one point I noticed (from view v$pq_slave) that I had 16 slave processes live (though, at that point, IDLE). Since I was the only user of the instance my session should probably have been re-using the same set of slaves each time I ran the test; instead, at some point, one of my test runs had started up a new set of slaves. Possibly something similar had happened to the OP, and over the course of building several indexes one after the other his session had reached the stage where it tried to start “yet another” set of slaves, failed, and decided to run serially rather than reuse any of the slaves that were nominally available and IDLE.
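
For reference, the check for live slaves is just a query against v$pq_slave – a minimal sketch:

select
        slave_name, status, sessions
from
        v$pq_slave
order by
        slave_name
;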

Update

It gets worse. I decided to query v$px_sesstat (joined to v$statname) while the query was running, and caught some statistics just before the build completed. Here are a few critical numbers taken from the 4 sessions that received the 10M rows and built the final index:
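
The query was along the following lines – a sketch, filtering down to the handful of statistics of interest:

select
        st.qcsid, st.server_group, st.server_set, st.server#, st.sid,
        sn.name, st.value
from
        v$px_sesstat    st,
        v$statname      sn
where
        sn.statistic# = st.statistic#
and     sn.name in (
                'physical writes direct',
                'physical writes direct temporary tablespace',
                'sorts (memory)', 'sorts (disk)', 'sorts (rows)'
        )
and     st.value != 0
order by
        st.sid, sn.name
;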

Coord   Grp Deg    Set  Sno   SID
264/1     1 4/4      1    1   265
---------------------------------
            physical writes direct                            558
            sorts (memory)                                      1
            sorts (rows)                                2,541,146

264/1     1 4/4      1    2    30
---------------------------------
            sorts (memory)                                      1
            sorts (rows)                                2,218,809

264/1     1 4/4      1    3    35
---------------------------------
            physical writes direct                          7,110
            physical writes direct temporary tablespace     7,110
            sorts (disk)                                        1
            sorts (rows)                                2,886,184

264/1     1 4/4      1    4   270
---------------------------------
            sorts (memory)                                      1
            sorts (rows)                                2,353,861

Not only did Oracle pass 10M rows from one slave set to the other, the receiving slave set sorted those rows before discarding them. One of the slaves even ran short of memory and had to spill its sort to disc. And we can see (physical writes direct = 558) that just one slave was responsible for handling all the “real” data for that index.

 

Update 2

A couple of follow-ups on the thread have introduced some other material that’s worth reading.  An item from Mohamed Houri about what happens when a parallel slave is still assigned to an executing statement but isn’t given any work to do for a long time; and an item from Stefan Koehler about _px_trace and tracking down why the degree of parallelism of a statement was downgraded.


Security patches released for OBIEE 11.1.1.7/11.1.1.9, and ODI DQ 11.1.1.3

Rittman Mead Consulting - Wed, 2015-07-15 00:14

Oracle issued their quarterly Critical Patch Update yesterday, and with it came notice of several security issues of note:

  • The most serious for OBIEE (CVE-2013-2186) rates 7.5 (out of 10) on the CVSS scale, affecting the OBIEE Security Platform on both 11.1.1.7 and 11.1.1.9. The access vector is by the network, there’s no authentication required, and it can partially affect confidentiality, integrity, and availability.
    • The patch for users of OBIEE 11.1.1.7 is to install the latest patchset, 11.1.1.7.150714 (3GB, released – by no coincidence I’m sure – just yesterday too).
    • For OBIEE 11.1.1.9 there is a small patch (64Kb), number 21235195.
  • There’s also an issue affecting BI Mobile on the iPad prior to 11.1.1.7, with a partial impact on integrity.
  • For users of ODI DQ 11.1.1.3 there’s a whole slew of issues, fixed in CPU patch 21418574.
  • Exalytics users who are on ILOM versions earlier than 3.2.6 are also affected by two issues (one of which is 10/10 on the CVSS scale).

The CPU document also notes that it is the final patch date for 10.1.3.4.2. If you are still on 10g, now really is the time to upgrade!

Full details of the issues can be found in the Critical Patch Update document, and information about patches on My Oracle Support, DocID 2005667.1.

Categories: BI & Warehousing

Shift Command in Shell Script in AIX and Linux

Pakistan's First Oracle Blog - Tue, 2015-07-14 21:42
The shell in Unix never ceases to surprise. I stumbled upon the 'shift 2' command in AIX a few hours ago and it's very useful.

The 'shift n' command shifts the parameters passed to a shell script by 'n' positions to the left.

For example:

if you have a shell script which takes 3 parameters like:

./mytest.sh arg1 arg2 arg3

and you use shift 2 in your shell script, then the values of arg1 and arg2 will be lost and the value of arg3 will move into position $1.

For example:

if you have a shell script which takes 2 parameters like:

./mytest.sh arg1 arg2

and you use shift 2, then the values of both arg1 and arg2 will be lost.

Following is a working example of shift command in AIX:

testsrv>touch shifttest.sh

testsrv>chmod a+x shifttest.sh

testsrv>vi shifttest.sh

testsrv>cat shifttest.sh
#!/bin/ksh
SID=$1
BACKUP_TYPE=$2
echo "Before Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"
shift 2
echo "After Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"


testsrv>./shifttest.sh orc daily

Before Shift: orc and daily => SID=orc and BACKUPTYPE=daily
After Shift:  and  => SID=orc and BACKUPTYPE=daily


Note that the values of the arguments passed have been shifted to the left, but the values of the variables have remained intact.
Categories: DBA Blogs

On Oracle Corporate Citizenship

Oracle AppsLab - Tue, 2015-07-14 19:39

Yesterday, our entire organization, Oracle Applications User Experience (@usableapps), got a treat. We learned about Oracle’s corporate citizenship from Colleen Cassity, Executive Director of the Oracle Education Foundation (OEF).


I’m familiar with Oracle’s philanthropic endeavors, but only vaguely so. I’ve used the corporate giving match, but beyond that, this was all new information.

During her presentation, we learned about several of Oracle’s efforts, which I’m happy to share here, in video form.

First, there’s the OEF Wearable Technology Workshop for Girls, which several of our team members supported.

Next, Colleen talked about Oracle’s efforts to support and promote the Raspberry Pi, which is near and dear to our hearts. We’ve done a lot of Raspi projects here; expect that to continue.

Next up was Wecyclers, an excellent program to promote recycling in Nigeria.

And finally, we learned about Oracle’s 26-year-old, ongoing commitment to the Dian Fossey Gorilla Fund.

This was an eye-opening session for me. Other than the Wearable Technology Workshop for Girls, I hadn’t heard about Oracle’s involvement in these other charitable causes, and  I’m honored that we were able to help with one.

I hope we’ll be able to assist with similar, charitable events in the future.

Anyway, food for thought and possibly new information. Enjoy.

This Is Not Glossy Marketing But You Still Won’t Believe Your Eyes. EMC XtremIO 4.0 Snapshot Refresh For Agile Test / Dev Storage Provisioning in Oracle Database Environments.

Kevin Closson - Tue, 2015-07-14 18:18

This is just a quick blog post to direct readers to a YouTube video I recently created to help explain to someone how flexible EMC XtremIO Snapshots are. The power of this array capability is probably most appreciated in the realm of provisioning storage for Test and Development environments.

Although this is a silent motion picture I think it will speak volumes–or at least 1,000 words.

Please note: This is just a video demonstration to show the base mechanisms and how they relate to Oracle Database with Automatic Storage Management. This is not a scale demonstration. XtremIO snapshots are supported into the thousands, and extremely powerful “sibling trees” are fully supported.

Not Your Father’s Snapshot Technology

No storage array on the market is as flexible as XtremIO in the area of writable snapshots. This video demonstration shows how snapshots allow the administrator of a “DEV” host–using Oracle ASM–to quickly refresh to current or past versions of ASM disk group contents from the “PROD” environment.

The principles involved in this demonstration are:

  1. XtremIO snapshots are crash consistent.
  2. XtremIO snapshots are immediately created, writeable and space efficient. There is no fixed “donor” relationship. Snapshots can be created from other snapshots and refreshes can go in any direction.
  3. XtremIO snapshot refresh does not involve the host operating system. Snapshot and volume contents can be immediately “swapped” (refreshed) at the array level without any action on the host.

Regarding number 3 on that list, I’ll point out that while the operating system does not play a role in the snapshot operations per se, applications will be sensitive to contents of storage immediately changing. It is only for this reason that there are any host actions at all.

Are Host Operations Involved? Crash Consistent Does Not Mean Application-Coherent

The act of refreshing XtremIO snapshots does not change the SCSI WWN information so hosts do not have any way of knowing the contents of a LUN have changed. In the Oracle Database use case the following must be considered:

  1. With a file system based database one must unmount the file systems before refreshing a snapshot otherwise the file system will be corrupted. This should not alarm anyone. A snapshot refresh is an instantaneous content replacement at the array level. Operationally speaking, file system based databases only require database instance shutdown and the unmounting of the file system in preparation for application-coherent snapshot refresh.
  2. With an ASM based database one must dismount the ASM disk group in preparation for snapshot refresh (a sketch of the commands appears after this list). Beyond that, ASM database snapshot restore does not involve system administration in any way.
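
For the ASM case, the host-side preparation is a one-line command in each direction – a minimal sketch, assuming a disk group named DATA (the disk group names used in the video may differ):

-- on the DEV host (ASM instance), before the array-side snapshot refresh
alter diskgroup DATA dismount;

-- and once the refresh has completed
alter diskgroup DATA mount;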

The video is 5 minutes long and it will show you the following happenings along a timeline:

  1. “PROD” and “DEV” database hosts (one physical and one virtual) each showing the same Oracle database (identical DBID) and database creation time as per dictionary views. This establishes the “donor”<->clone relationship. DEV is a snapshot of PROD. It is begat of a snapshot of a PROD consistency group
  2. A single-row token table called “test” in the PROD database has value “1.” The DEV database does not even have the token table (DEV is independent of PROD…it’s been changing…but its origins are rooted in PROD as per point #1)
  3. At approximately 41 seconds into the video I take a snapshot of the PROD consistency group with “value 1” in the token table. This step prepares for “time travel” later in the demonstration
  4. I then update the PROD token table to contain the value “42”
  5. At ~2:02 into the video I have already dismounted DEV ASM disk groups and started clobbering DEV with the current state of PROD via a snapshot refresh. This is “catching up to PROD”
    1. Please note: No action at all was needed on the PROD side. The refresh of DEV from PROD is a logical, crash-consistent point in time image
  6. At ~2:53 into the video you’ll see that the DEV database instance has already been booted and that it has value “42” (step #4). This means DEV has “caught up to PROD”
  7. At ~3:32 you’ll see that I use dd(1) to copy the redo LUN over the data LUN on the DEV host to introduce ASM-level corruption
  8. At 3:57 the DEV database is shown as corrupted. In actuality, the ASM disk group holding the DEV database is corrupted
  9. In order to demonstrate traveling back in time, and to recover from the dd(1) corruption of the ASM disk group, you’ll see that at 4:31 I chose to refresh from the snapshot I took at step #3
  10. At 5:11 you’ll see that DEV has healed from the dd(1) destruction of the ASM disk group, the database instance is booted, and the value in the token table is reverted to 1 (step #3) thus DEV has traveled back in time

Please note: In the YouTube box you can click to view full screen, or watch on youtube.com if the video quality is a problem.

More Information

For information on the fundamentals of EMC XtremIO snapshot technology please refer to the following EMC paper: The fundamentals of XtremIO snapshot technology

For independent validation of XtremIO snapshot technology in a highly-virtualized environment with Oracle Database 12c please click on the following link: Principled Technologies, Inc Whitepaper

For a proven solution whitepaper showing massive scale data sharing with XtremIO snapshots please click on the following link: EMC Whitepaper on massive scale database consolidation via XtremIO


Filed under: oracle

Unizin Offering “Associate” Membership For Annual $100k Fee

Michael Feldstein - Tue, 2015-07-14 16:33

By Phil Hill

Alert unnamed readers prompted me after the last post on the Unizin contract to pursue the rumored secondary method of joining for $100k. You know who you are – thanks.

While researching this question, I came across a presentation by the University of Florida provost to the State University System of Florida (SUSFL) seeking to get the system to join Unizin under these new terms. The meeting was March 19, 2015, and the video archive is here (first 15 minutes), and the slide deck is here. The key section (full transcript below):

Associate Membership FLSUS

Joe Glover: One of the things that Unizin has done – as I’ve said it consists of those 10 large research universities – is that the Unizin board decided that member institutions may nominate their system – in this case the state university system of Florida – for Associate Membership for an annual fee of $100,000 per system.

For $100,000 the entire state university system of Florida (SUSFL) could become an associate member of Unizin and enjoy all the benefits that Unizin brings forward, whether it’s reduced pricing of products that it’s licensing, or whether it’s products that Unizin actually produces. Associate Membership does not qualify for board representation, but as I mentioned you do enjoy the benefits of Unizin products and services.

This section reminded me of one item I should have highlighted in the contract. In appendix B:

The annual membership fees are waived for Founding Investors through June 30, 2017.

Does this mean that founding institutions that “invested” $1.050 million over three years will have to start paying annual fees of $100,000 starting in June 2017? That’s my assumption, but I’m checking to see what this clause means and will share at e-Literate.

Update (7/17): I talked to Amin Qazi today (CEO of Unizin) who let me know that the annual membership fee for institutional members (currently the 11 schools paying $1.050 million) has not been determined yet.

What is clear is that Unizin considers the board seat – therefore input on the future direction and operations of Unizin – to be worth $700,000.[1]

Full Transcript

The presentation is fascinating in its entirety, so I’m sharing it below. There are many points that should be analyzed, but I’ll save that for other posts and for other people to explore.

Joe Glover: I’d like to begin by explaining the problem that Unizin was created to try and avoid, and I’m going to do it by analogy with the publishing problem with scientific journals. About 30 years ago there was a plethora of publishing companies that would take the intellectual property being produced by universities in the form of journal articles, and they would print them and publish them. There was a lot of competition, prices were relatively low to do that.

Then in the ensuing 30 years there was tremendous consolidation in that industry to the point that there are only three or four major publishers of scientific articles. As a consequence they have a de facto monopoly, and they’re in the position of now taking what we produce, packaging it, and selling it back to the libraries of universities basically at whatever price they want to charge. This is a national problem. It is not a problem that is unique to Florida, and I think that every state in the nation is trying to figure out how to resolve this problem because we can’t afford to continue to pay exorbitant prices for journals.

That is a situation that we got ourselves in by not looking ahead to the future. We believe we are in a similar position with respect to distance learning at this point.

We have a plethora of universities and commercial firms, all trying to get into the digital space. Most of us believe that over the next 10 – 15 – 20 years there will be tremendous consolidation in this industry, and it is likely that there will emerge a relatively small number of players who control the digital space.

This consortium of universities wanted to make sure that the universities were not cut out of this process or this industry in much the same way that they had been cut out of scholarly publishing.

Every university in some sense runs a mom & pop operation in distance learning at this point, at least in comparison with large organizations like IBM and Pearson Learning that can bring hundreds of millions of dollars to the table. No university can afford to do that.

So a consortium of major research universities in the country, in an effort to look down the road and to avoid this problem, and to secure our foothold in the digital industry, formed a consortium called Unizin. I’m going to go briefly through this to tell you what this is, and then to lay before you an opportunity that the state university system can consider for membership in this consortium to enjoy the advantages that we expect it to bring.

Slide 1

This consortium is very new – it was launched in 2014. Its current membership is by invitation only. You cannot apply to become a member of this consortium, it is by invitation. As I mentioned, its objective is to promote greater control and influence over the digital learning ecosystem.

Its governance is fairly standard. It has a board of directors that is drawn from the founding members. It has a CEO. It has a staff and it’s acquiring more staff. As a legal entity it is a not-for-profit service operation which is hosted by Internet2.

Slide 2

Its current members include the universities that you see listed on this screen. These are 10 major universities in the nation – they’re all large research universities. There are other research universities that are considering joining. Unizin actually started out with four universities and quickly acquired the other six that are on this list.

Associate Membership FLSUS

The primary goals for Unizin as defined by its board of directors are the following. To acquire a learning management system that will serve as the foundation for what Unizin produces and performs. Secondly, to acquire or create a repository for digital learning objects. At the moment we are all producing all sorts of things, ranging from videos to little scientific clips, demonstrations, to illustrations, to lectures, notes, in all sorts of different formats – some retrievable, some not retrievable, some shareable, some not shareable. None of which is indexed, none of which I can see outside the University of Florida.

We believe there needs to be a repository that all of the members of Unizin can place the objects that they create to promote digital learning into, with an index. And in principle there will be developed a notion of sharing of these objects. It could be free sharing, it could be licensing, it could be selling. That’s something to be discussed in the future.

The third goal for Unizin is to acquire, create, or develop learning analytics. Some of the learning management systems have a rather primitive form of learning analytics. Unizin will build on what they have, and this will go from very mechanical types of learning analytics in terms of monitoring student progress and enabling intrusive advising and tutoring; all the way up to personalized learning, which is something that really does not exist yet but is one of the objectives of Unizin.

Those are the three primary goals for Unizin. If you believe that those are three important elements of infrastructure then you are probably interested in Unizin.

I have alluded to the possibility of a club, or of sharing content. We could think about sharing content. We could think about sharing courses. We could think about sharing degree programs. That is not really Unizin’s objective at this point. I will tell you that the universities that form the board for Unizin are in conversation about that, and we expect that to be one of the things that Unizin enables us to do as we create this repository, as we develop learning analytics we expect to be able to begin to collaborate with these universities. There are a lot of interesting questions as you approach that frontier, and by no means have these been resolved, but we believe it is inevitable and important for universities to begin sharing what they do in the digital learning space, and so Unizin would form the foundation for that.

One of the things that Unizin has done – as I’ve said it consists of those 10 large research universities – is that the Unizin board decided that member institutions may nominate their system – in this case the state university system of Florida – for Associate Membership for an annual fee of $100,000 per system.

For $100,000 the entire state university system of Florida (SUSFL) could become an associate member of Unizin and enjoy all the benefits that Unizin brings forward, whether it’s reduced pricing of products that it’s licensing, or whether it’s products that Unizin actually produces. Associate Membership does not qualify for board representation, but as I mentioned you do enjoy the benefits of Unizin products and services.

Slide 4

The potential benefits to the state university system I believe are the following. Unizin has settled on Canvas as the learning management system which would underlie the Unizin projects of building a repository and learning analytics. If you did not use Canvas you would still enjoy the benefits of Unizin and their products, but the use of them would not be as seamless as if you were on Canvas. You would have to build a crosswalk from the Unizin products to whatever LMS you are using. If you happen to be using Canvas you would enjoy the benefits of the Unizin products in a seamless fashion.

Unizin has negotiated a discount with Canvas. And so actually the University of Florida had signed the contract with Canvas before Unizin even existed. As soon as Unizin was created and negotiated a contract with Canvas, we actually received a discount from the price that we had negotiated. Because there were 10 large universities working on this, and there is some power in purchasing.

The second benefit, or second potential benefit which I think the system could enjoy is access to the tools which are under development as I’ve mentioned, including a digital repository and learning analytics.

Third, the system would enjoy membership in a consortium of large public universities that intends to secure its niche in the evolving digital ecosystem. As I have mentioned, we do see some potential risk as the industry consolidates, that we could be cut out of this industry if we don’t take the proper precautions.

Finally, as I’ve mentioned, there is the potential for cooperative relationships within the consortium to share digital instruction and to share digital objects and courses and degrees. That is really at the beginning conversation stage, that is not a goal of the Unizin organization itself but is a goal of the universities that underpin Unizin.

Q. I guess the real question is, tell me to what extent you can, how this will benefit each of the other universities who are not members at this time. And number two, could some of our other universities eventually become members?

A. Thank you for that question because I didn’t clarify one point that the question gives me the opportunity to clarify. Additional universities could be members of Unizin, and there are some universities in conversation with Unizin at this point. However, there is a larger charge for universities to become full board members of Unizin. University of Florida committed a million dollars over three years as part of the capitalization of Unizin. Every board member has done exactly the same. If a university in the system were interested in joining Unizin as a board member to help direct Unizin’s goals and operations, we could talk about that, but it would involve that level of investment.

At the lower level of investment, the $100,000 level which would be for the whole system – let’s say you join tomorrow – then an individual university would immediately have access to the preferred pricing for the Canvas learning management system. That would be a benefit to individual universities in the system who already are on Canvas or are considering going on Canvas. As the other tools or products are either acquired or developed by Unizin, the individual campuses would have access to those as well.

Q. I’d like to hear from John Hitt [president of UCF]. How does your university look at this proposal as it relates to online?

JH. I think the group membership for the system makes sense. I don’t think that it would make a lot of sense to have multiple institutions paying in a million bucks apiece. We would probably be interested in the $100,000 share. I doubt we would go for the full membership.

Q. Do you see the benefits they’re offering to benefit to UCF at this point, or would you use it?

JH. Yes, I think we would use some of it. We have more enthusiasm for some aspects of the membership than others. Yes, I think it would be useful.

There were no further questions, but it was apparent that some board members were not sure if they were being asked to pay $1 million for each campus or $100,000. Despite this short questioning, the motion passed as shared in the meeting minutes.

Chair Hosseini recognized Mr. Lautenbach for the Innovation and Online Committee report. Mr. Lautenbach reported the Committee heard an update from Provost Joe Glover on the Unizin Consortium and the Committee directed Chancellor Criser to work with university leadership in pursuing membership for the State University System in the consortium.

  1. The $1.050 million investment over three years minus alternate cost of $100,000 for these same three years.

The post Unizin Offering “Associate” Membership For Annual $100k Fee appeared first on e-Literate.

Coming Soon - PeopleTools Customer Beta Program

PeopleSoft Technology Blog - Tue, 2015-07-14 14:07
The PeopleTools team continues to push forward, ever improving the features and capabilities of PeopleTools.  Recently, you may have seen some of the planned enhancements for PeopleTools 8.55 discussed on MyOracleSupport in the Planned Features and Enhancements area.  This document has replaced the Release Value Proposition that has been used previously to highlight features to look for in the upcoming PeopleTools release. 

There are a number of cool features that we’re working on, including the Cloud Deployment Architecture (CDA), which will provide greater flexibility in the installation and patching of environments.  Additional planned features include Analytics for PeopleSoft Update Manager (PUM), Fluid dashboards/homepages, and Simplified Analytics… just to name a few.

We plan to kick off the PeopleTools 8.55 Beta Program in the relatively near future, and have an opening for a customer who’s willing to closely partner with us.  If you are looking to get your hands on the next release so that you can thoroughly test out some of these features in your own environment to see the benefits, perhaps you are the one we’re looking for.  Does your team have the skills and desire to take beta code and run with it?  Can your organization get a standard beta trial license agreement signed promptly?  We want to work with a customer that’s going to dive in and really exercise the new features. If that’s you, email me (mark.hoernemann@oracle.com) and let’s talk.  Please keep in mind that this is a small beta – I’ve only got room for one, maybe two customers.

July 2015 Critical Patch Update Released

Oracle Security Team - Tue, 2015-07-14 13:59

Hello, this is Eric Maurice.

Oracle today released the July 2015 Critical Patch Update. The Critical Patch Update program is Oracle’s primary mechanism for the release of security fixes across all Oracle products, including security fixes intended to address vulnerabilities in third-party components included in Oracle’s product distributions.

The July 2015 Critical Patch Update provides fixes for 193 new security vulnerabilities across a wide range of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Enterprise Manager, Oracle E-Business Suite, Oracle Supply Chain Suite, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Communications Applications, Oracle Java SE, Oracle Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL.

Out of these 193 fixes, 44 are for third-party components included in Oracle product distributions (e.g., Qemu, Glibc, etc.).

This Critical Patch Update provides 10 fixes for the Oracle Database, and 2 of the Database vulnerabilities fixed in today’s Critical Patch Update are remotely exploitable without authentication. The most severe of these database vulnerabilities has received a CVSS Base Score of 9.0 for the Windows platform and 6.5 for Linux and Unix platforms. This vulnerability (CVE-2015-2629) reflects the availability of new Java fixes for the Java VM in the database.

With this Critical Patch Update, Oracle Fusion Middleware receives 39 new security fixes, 36 of which are for vulnerabilities which are remotely exploitable without authentication. The highest CVSS Base Score for these Fusion Middleware vulnerabilities is 7.5.

This Critical Patch Update also includes a number of fixes for Oracle applications. Oracle E-Business Suite gets 13 fixes, Oracle Supply Chain Suite gets 7, PeopleSoft Enterprise gets 8, and Siebel gets 5 fixes. Rounding up this list are 2 fixes for the Oracle Commerce Platform.

The Oracle Communications Applications receive 2 new security fixes. The highest CVSS Base Score for these vulnerabilities is 10.0; this score is for vulnerability CVE-2015-0235, which affects Glibc, a component used in the Oracle Communications Session Border Controller. Note that this same Glibc vulnerability is also addressed in a number of Oracle Sun Systems products.

Also included in this Critical Patch Update are 25 fixes for Oracle Java SE. 23 of these Java SE vulnerabilities are remotely exploitable without authentication. 16 of these Java SE fixes are for Java client-only, including one fix for the client installation of Java SE. 5 of the Java fixes are for client and server deployment. One fix is specific to the Mac platform. And 4 fixes are for JSSE client and server deployments. Please note that this Critical Patch Update also addresses a recently announced 0-day vulnerability (CVE-2015-2590), which was being reported as actively exploited in the wild.

This Critical Patch Update addresses 25 vulnerabilities in Oracle Berkeley DB, and none of these vulnerabilities are remotely exploitable without authentication. The highest CVSS Base score reported for these vulnerabilities is 6.9.

Note that the CVSS standard was recently updated to version 3.0. In a previous blog entry, Darius Wiles highlighted some of the enhancements introduced by this new version. Darius will soon publish another blog entry to discuss this updated CVSS standard and its implication for Oracle’s future security advisories. Note that the CVSS Base Score reported in the risk matrices in today’s Critical Patch Update were based on CVSS v2.0.

For More Information:

The July 2015 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpujul2015-2367936.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance

Publish data over REST with Node.js

Kris Rice - Tue, 2015-07-14 09:47
Of course the best way to expose database data over REST is with Oracle REST Data Services.  If you haven't read over the Statement of Direction, it's worth the couple minutes it takes.  The auto table enablement and filtering is quite nice. For anyone interested in node.js and oracle, this is a very quick example of publishing the emp table over REST for use by anyone that would prefer REST

APEX Listener supported App Servers

Kris Rice - Tue, 2015-07-14 09:47
With the latest news on Glassfish, I thought it may be a good time to review the deployment options for the APEX Listener.  The huge caveat is that this is as of today, 11/6/2013; the future can change anything, however there’s nothing planned. The Licenses I'm just putting the important parts here for reference.  They are linked to the entire license. OTN License  The APEX Listener is

How to use RESTful to avoid DB Links with ā'pěks

Kris Rice - Tue, 2015-07-14 09:47
So the question came up of avoiding a db link by using the APEX Listener's RESTful services to get at the same data.  This is all in the context of an Apex application so apex_collections is the obvious place to stuff transient data that could be used over a session. Step 1:  Make the RESTful Service. The only catch is to turn pagination off ( make it zero ).  I didn't need it for now so this

RESTful Cursor support for JSON

Kris Rice - Tue, 2015-07-14 09:47
Just a real quick blog before I forget.  In the latest APEX Listener 2.0.4 patch, there's support for nested cursors.  There are two gotchas.  First, make sure to disable (make 0) the pagination of the REST definition.  The second is that this only works at the top level, so not nested nests of nests. This is a very quick example of tables and nested in each table the columns and indexes that

Chunked File loading with APEX Listener + HTML5

Kris Rice - Tue, 2015-07-14 09:47
I just found the HTML5 File API the other day so I had to see what I could do with the APEX Listener's RESTful services. There's a bunch of blogs on what can be done, such as on HTML5rocks.  The end result is that the new File API lets javascript get details of the file and slice it up into parts. Then I made a pretty simple REST end point to receive the chunks and put them back together

ORDS - Auto REST table feature

Kris Rice - Tue, 2015-07-14 09:47
Got a question on how easy it is to use ORDS to perform insert | update | delete on a table.  Here are the steps. 1) Install ORDS ( cmd line or there's a new wizard in sqldev ) 2) Enable the schema and table, in this case klrice.emp ( again there's a wizard in sqldev ): BEGIN ORDS.ENABLE_SCHEMA(p_enabled => TRUE, p_schema => 'KLRICE',
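
The feed truncates the code mid-call; a complete enablement block along the same lines, sketched from the standard ORDS PL/SQL API (the URL mapping and auth settings here are illustrative assumptions, not necessarily the author's exact values), would be:

BEGIN
  -- expose the KLRICE schema over REST under the /klrice/ base path
  ORDS.ENABLE_SCHEMA(p_enabled             => TRUE,
                     p_schema              => 'KLRICE',
                     p_url_mapping_type    => 'BASE_PATH',
                     p_url_mapping_pattern => 'klrice',
                     p_auto_rest_auth      => FALSE);

  -- auto-REST enable the EMP table (GET/POST/PUT/DELETE endpoints)
  ORDS.ENABLE_OBJECT(p_enabled      => TRUE,
                     p_schema       => 'KLRICE',
                     p_object       => 'EMP',
                     p_object_type  => 'TABLE',
                     p_object_alias => 'emp');

  COMMIT;
END;
/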

REST Data Services and SQL Developer

Kris Rice - Tue, 2015-07-14 09:47
The database tools team shipped 3 new GA releases and an update to our SQLCL. Official Releases are here:    SQL Developer, Modeler, and Data Miner:        https://blogs.oracle.com/otn/entry/news_oracle_updates_development_tools        https://blogs.oracle.com/datamining/entry/oracle_data_miner_4_1   REST Data Services now with SODA        https://blogs.oracle.com/otn/entry/

Instructure Is Truly Anomalous

Michael Feldstein - Tue, 2015-07-14 08:54

By Michael Feldstein

Phil started his last post with the following:

I’m not sure which is more surprising – Instructure’s continued growth with no major hiccups or their competitors’ inability after a half-decade to understand and accept what is at its core a very simple strategy.

Personally, I vote for Door #1. As surprising as the competition’s seeming sense of denial is, Instructure’s performance is truly shocking. After five years, I continue to be surprised by it. It’s not just how well they are executing. It’s that they seem to defy the laws of physics in the LMS market. We had no reason to believe that any LMS company could rack up the numbers they are showing—in several different areas—no matter how well they execute.

Back in late 2010, I wrote a two-part series on LMS market share. For context, this was a year after Blackboard acquired ANGEL, a month before Instructure recorded its first clients on the growth graph in Phil’s previous post, six months before we wrote our first post about Instructure on e-Literate, and two years before WebCT was officially killed off. At that time, Blackboard still had dominant market share—over 50%—but it was starting to become clear for the first time that their dominance might not last forever. The posts were my attempt to figure out what might happen next. Here’s what the non-Blackboard LMS market looked like then:

Here’s what the market share looked like when the then-present trends were projected out to 2014:

What we see here is a steady decline of Blackboard’s market share getting spread out among multiple platforms. It’s worth calling out a problem with the data that we had at that time. Campus Computing, the source of the market share information in this graph, tracks market share by company, not by platform. So we had no way of knowing how much of their market share was from their Learn platform and how much of it was from WebCT. This was crucial (or, at least, it seemed crucial at the time) because Blackboard was force-migrating WebCT customers to their Learn product. The rate at which Blackboard’s market share got distributed to other platforms depended on how much of the attrition was from WebCT CE customers, how many were WebCT Vista customers, and how many were Blackboard Learn customers. The CE customers tended to be small schools with small contracts, and Blackboard wasn’t making much of an effort to keep them. To the degree that Blackboard’s losses were confined to CE going forward, the company would do just fine. On the other hand, to the degree that Blackboard lost customers from its core Learn platform, it would be a sign of impending catastrophe. LMS migrations were so hard and painful that very few schools migrated unless they felt that they absolutely had to. CE customers left Blackboard in part because it was clear that Blackboard didn’t care about them and that they therefore would never get the quality of product and service (and pricing) that they needed. Blackboard was making a real effort to keep Vista customers, but it was an open question as to whether the forced migration would cause Vista schools to look around at other options, or whether Blackboard could keep the pain of migration low enough that it would be easier to just roll over to Learn than to move to something else. If, on the other hand, Blackboard started losing Learn contracts, it would mean that customers on their core platform felt that the pain of staying was worse than the pain of leaving. At the time, there was strong anecdotal evidence that CE customers were leaving in droves, moderate anecdotal evidence that Vista customers were preparing to leave, and little evidence that Learn customers were leaving. My sense at the time was that Blackboard would probably lose a bunch of customers through the WebCT sunset in 2012 and then the market would more or less settle back into stasis.

That’s not what happened. To begin with, Instructure roared onto the scene in 2011 and ended up stealing the lion’s share of the market share that Blackboard was leaking. But that’s not all. Take a look at the graph Josh Coates presented at the most recent Instructurecon:

As Phil wrote,

There appear to be three periods of growth here:

  • From introduction (roughly Jan 2011) until May 2012: Average growth of ~65 clients per year;
  • From May 2012 until May 2014: Average growth of ~140 clients per year;
  • From May 2014 until present: Average growth of ~190 clients per year.

So Instructure’s growth has accelerated since the end of 2012, which is the opposite of what I would have expected. Where is that growth coming from? It’s hard to tell. Unfortunately, the data we have on LMS market share is not as good as one would hope. The best indications we have right now are that they are primarily coming from former Blackboard Learn and ANGEL customers. Switching data sources from Campus Computing to Edutechnica, here’s Phil’s September 2014 analysis:

  • Blackboard’s BbLearn and ANGEL continue to lose market share in US –[1] Using the 2013 to 2014 tables (> 2000 enrollments), BbLearn has dropped from 848 to 817 institutions and ANGEL has dropped from 162 to 123. Using the revised methodology, Blackboard market share for > 800 enrollments now stands at 33.5% of institutions and 43.5% of total enrollments.
  • Moodle, D2L, and Sakai have no changes in US – Using the 2013 to 2014 tables (> 2000 enrollments), D2L has added only 2 schools, Moodle none, and Sakai 2 schools.
  • Canvas is the fastest growing LMS and has overtaken D2L – Using the 2013 to 2014 tables (> 2000 enrollments), Canvas grew ~40% in one year (from 166 to 232 institutions). For the first time, Canvas appears to have larger US market share than D2L (13.7% to 12.2% of total enrollments using table above).

But even if you assume that Instructure picked up 100% of the Learn and ANGEL customers—which is plausible, given these numbers—that’s still only 70 new customers. That’s half the ~140 new customers that Instructure is reporting. Could the rest be international? Maybe, although we have little reason to believe that to be the case. In the Edutechnica post that Phil references for the market share information, George Kroner does provide a little bit of information about Instructure’s international growth in the form of a graph of LMS market share in a few different countries:

We would need to see fully 50% of Instructure’s growth reflected in non-US markets to make the numbers square. We don’t see anything like that here. Of course, there are many other non-US markets. Maybe Canvas is all the rage in Turkmenistan. But it’s hard to square the circle. I just don’t know how to account for the company’s growth. I don’t doubt Instructure’s numbers. It’s just that there’s no way I can find to make sense of them with our current data about the market.

Beyond the numerical mystery, there seems to have been a change in market attitudes about LMS migration. Schools seem to be willing to look at alternatives even when they don’t have to. Nobody likes to migrate, of course, but a variety of factors, ranging from improved standards that make moving content easier to more technology maturity and experience among university faculty and staff, have reduced vendor lock-in. It’s a more fluid market now. I had hoped that would be the case someday but, in my heart of hearts, I really didn’t expect it. And at the moment, pretty much all of that new fluidity is flowing into Instructure—at least in US higher education.

Overall, Instructure’s growth is hard to explain. But there’s also another number that I can’t account for. I am in the process of writing an update to my post on the Glassdoor ratings of ed tech companies. At the moment, Instructure’s rating is 4.7. Out of 5. For reference, LinkedIn, which I used as context in last year’s post because it had one of the highest employee ratings on Glassdoor, currently rates only a 4.5. I have been to both Instructure’s and LinkedIn’s offices. LinkedIn’s is nicer. A lot nicer. I’m sure that their salaries are a lot higher as well. Instructure may be buoyed at the moment by the likelihood that they will have an IPO in the next year or two. But still. Instructure may be the highest rated company on Glassdoor right now – not just in ed tech, but of any company.

Also weird is the fact that we don’t hear any major complaints about them from anywhere. People tell us stuff. Customers, former employees, and current employees come to us often to dish dirt. What we end up publishing is only the tip of the iceberg because we don’t publish anything unless we feel we have strong confirmation (which usually means multiple sources), we can protect our sources by preserving their anonymity, we believe the information is truly newsworthy, and so on. We hear a lot of dirt. But we hear very little about Instructure. When we poke around, we can get people to tell us things that they’re not happy with, but it’s all normal stuff—I really wish they had this feature, that feature doesn’t work as well as it could, the sales rep was a little annoying or a little unresponsive, and so on. And almost always, the person reporting the problem takes pains to tell us that he or she is generally happy with the company. As Phil wrote,

Companies change as they grow, and I have covered when the company lost both founders and a high-profile CTO. The company moves on, however, and I cannot find customers complaining (at least yet) that the company has changed and is ticking them off. They do have customer challenges, but so far these have been manageable challenges.

Pop quiz: Name the highest profile customer disaster (outage during exams or first week, broken implementation, major bugs, etc.) for Canvas.

It’s not normal. And it can’t last forever. Sooner or later, gravity will assert itself and the company will start screwing up. They all do, eventually. But right now, Instructure’s performance is so good by multiple measures that it is almost literally unbelievable.

The post Instructure Is Truly Anomalous appeared first on e-Literate.

Automate Order Receipt & Processing with Oracle WebCenter

WebCenter Team - Tue, 2015-07-14 07:39


Have paper-based methods brought aspects of your business to a grinding halt?  Are manual processes leading to slip-ups, mix-ups and missteps? Not too long ago, my company was in the same jam. In our pursuit of becoming the leading provider of education products to schools across the country, we ran into an issue: Our ability to process such a growing number of orders had created a bottleneck. We were actually printing orders captured electronically and routing them manually for fulfillment. Inefficient to say the least, this led to orders being lost or delayed. View this video created by Oracle partner Redstone Content Solutions (originally posted on their blog) to see how you can Automate Order Receipt & Processing with Oracle WebCenter. 

Instructure: Accelerating growth in 3 parallel markets

Michael Feldstein - Mon, 2015-07-13 18:50

By Phil Hill

I’m not sure which is more surprising – Instructure’s continued growth with no major hiccups or their competitors’ inability after a half-decade to understand and accept what is at its core a very simple strategy. Despite Canvas LMS winning far more new higher ed and K-12 customers than any other vendor, I still hear competitors claim that schools select Canvas because of rigged RFPs or because it is the shiny new tool, despite having no depth or substance. When listening to the market (institutions – including faculty, students, IT staff, academic technology staff, and admin), however, I hear the opposite. Canvas is winning LMS selections despite, not because of, RFP processes, and there are material and substantive reasons for this success.

The only competitor I see that seems to understand the depth of the challenge they face is Blackboard. Other LMS solutions are adding “cloud” options or making incremental improvements to usability, but only Blackboard is going for wholesale changes to both its User Experience (UX) and cloud hosting architecture. Unfortunately, I question whether Blackboard will be able to execute this strategy, but that is a story for another post.

As in last year’s post about InstructureCon, I believe that the company growth chart[1] gives a lot more information than just “gosh, we’re doing well”.

InstructureCon 2015 Growth Slide

Education Market Growth – Canvas

The use of Canvas in higher ed (shown in blue above) has grown steadily, but not exponentially, since the product’s introduction more than four years ago. There appear to be three periods of growth here:

  • From introduction (roughly Jan 2011) until May 2012: Average growth of ~65 clients per year;
  • From May 2012 until May 2014: Average growth of ~140 clients per year;
  • From May 2014 until present: Average growth of ~190 clients per year.

The use of Canvas in K-12 (shown in red above) has grown much faster; in fact, Instructure now has more K-12 clients than higher ed clients and more sales people in K-12 than in higher ed. Let that sink in for a moment – it is a point that is not well understood by the market. Over the same three periods (a short sketch after this list shows how such per-year rates can be derived from the chart):

  • From introduction (roughly Jan 2011) until May 2012: Average growth of ~20 clients per year (much lower than higher ed);
  • From May 2012 until May 2014: Average growth of ~135 clients per year (almost the same as higher ed);
  • From May 2014 until present: Average growth of ~340 clients per year (far exceeds higher ed).
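
To make the per-year arithmetic concrete, here is a minimal sketch of how rates like these fall out of two points on the growth chart. The client counts and dates below are illustrative values eyeballed from the slide, not company-reported figures.

from datetime import date

def avg_growth_per_year(clients_start, clients_end, start, end):
    # Average number of new clients per year between two chart points.
    years = (end - start).days / 365.25
    return (clients_end - clients_start) / years

# Illustrative (assumed) chart points: K-12 grows from ~300 clients in
# May 2014 to ~695 clients by July 2015.
rate = avg_growth_per_year(300, 695, date(2014, 5, 1), date(2015, 7, 1))
print(f"~{rate:.0f} new K-12 clients per year")  # prints ~339, near the ~340 cited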

It should be noted, however, that K-12 clients tend to have fewer students per contract and to spend far less per student. I don’t have exact numbers, but we could assume the following:[2]

  • Instructure has more than 50% of its clients in K-12;
  • Instructure has 30 – 40% of its student counts in K-12; and
  • Instructure makes 25 – 33% of its revenue in K-12.

Corporate Market Growth – Bridge

Actually, the client numbers (shown in green above) do not show significant growth in corporate markets yet – just slow growth of ~30 clients per year. I wrote about the recent product introduction of Bridge (their LMS for corporate markets) here and here. This is a different strategy from that of other LMS providers with higher ed origins, where Blackboard, D2L, and Moodle all use the same LMS for both education and corporate markets.

In discussions at the conference, however, the company certainly believes they are about to experience real growth in the corporate market with the new product, and they are hiring the sales force to lead this effort. It will be interesting to watch over the next year whether the company achieves levels of growth similar to those in higher ed and K-12.

Product Announcements

There were two main product announcements at the conference:

  • After a half-decade on the market, Canvas is gradually moving to a new UX design. I’ll cover that more in a second post.
  • Instructure introduced Canvas Data, a hosted data solution that addresses the biggest weakness in Canvas (a move aimed not at leapfrogging the competition but at closing a gap by removing a known weakness).

At its core, Canvas Data is an easily accessible, cloud-native service delivered on Amazon Web Services through Redshift. It gives clients access to their data, including course design features, course activity, assessment and evaluation, user and device characteristics, and more.
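
Since Redshift speaks the PostgreSQL wire protocol, clients should in principle be able to query a Canvas Data warehouse with any standard PostgreSQL driver. The sketch below assumes exactly that; the host, credentials, and the “requests” table with its columns are illustrative placeholders, not documented Canvas Data schema.

import psycopg2  # standard PostgreSQL driver; Redshift is wire-compatible with it

# Connection details are placeholders; real host and credentials would come from the vendor.
conn = psycopg2.connect(
    host="example-cluster.redshift.amazonaws.com",
    port=5439,
    dbname="canvas_data",
    user="analyst",
    password="secret",
)

# Hypothetical activity query; the table and column names are invented for illustration.
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT course_id, COUNT(*) AS page_views
        FROM requests
        GROUP BY course_id
        ORDER BY page_views DESC
        LIMIT 10
    """)
    for course_id, page_views in cur.fetchall():
        print(course_id, page_views)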

Both announcements are interesting, but mostly as they further illuminate the company’s strategy.

Market Strategy

Taken together, what we see is a company with a fairly straightforward strategy. Pick a market where the company can introduce a learning platform that is far simpler and more elegant than the status quo, then just deliver and go for happy customers.  Don’t expand beyond your core competency, don’t add parallel product lines, don’t over-complicate the product, don’t rely on corporate M&A. Where you have problems, address the gap. Rinse. Repeat.

Instructure has now solidified their dominance in US higher ed (having the most new client wins), they have hit their stride with K-12, and they are just starting with corporate learning. What’s next? I would assume international education markets, where Instructure has already started to make inroads in the UK and a few other locations.

The other pattern we see is that the company focuses on the mainstream from a technology adoption perspective. That doesn’t mean they don’t want to serve early adopters with Canvas or Bridge, but Instructure, more than any other LMS company, knows how to say ‘No’. They don’t add features or change designs unless the result will help mainstream adopters – who, for course tools, are primarily instructors. Of course students care, but they don’t choose whether to use an LMS for their course – faculty and teachers do. For education markets, the ability to satisfy early adopters rests heavily on Canvas’s LTI-enabled integrations and its acceptance of external applications, in contrast to relying primarily on having all the features in one system.
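
The post doesn’t spell out what those LTI integrations look like under the hood, but the mechanics of the era are standard: an LTI 1.x launch is a form POST signed with OAuth 1.0a. Here is a rough sketch assuming the Python oauthlib package; the key, secret, URL, and IDs are all placeholder values, not anything Canvas-specific.

from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

# Placeholder values; a real launch uses the key/secret shared between
# the platform and the external tool.
launch_url = "https://tool.example.com/lti/launch"
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "demo-link-1",
    "user_id": "student-42",
    "roles": "Learner",
}

client = Client("consumer-key", client_secret="shared-secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    launch_url,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# body now carries the original params plus the oauth_* signature fields;
# POSTing it to launch_url completes the launch handshake.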

Avoid Problems

From the beginning Instructure designed its products from the ground up to fully utilize a cloud architecture, and this extends to product management and support services. Instructure has run essentially one software version of each product[3] from the start, and unlike most other higher ed LMS providers, it reaps the benefits of simpler release management and bug fixing. Cloud is not just a matter of cost-effective scaling; it is also a matter of getting the software out of the way – just have it work.
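
Footnote 3 hints at how a single version can still accommodate per-client differences: everyone runs the same deployed code, and new behavior is gated per account. What follows is a generic sketch of that common single-version SaaS pattern; all names are invented for illustration and are not Instructure’s actual code.

# Generic per-account feature gating; names are invented for illustration.
ACCOUNT_FLAGS = {
    "example-college": {"new_ux"},   # opted in to a redesign early
    "example-district": set(),       # still on the current behavior
}

def flag_enabled(account_id: str, flag: str) -> bool:
    return flag in ACCOUNT_FLAGS.get(account_id, set())

def render_dashboard(account_id: str) -> str:
    # The same deployed code runs everywhere; behavior differs per account.
    if flag_enabled(account_id, "new_ux"):
        return "new-UX dashboard"
    return "classic dashboard"

print(render_dashboard("example-college"))   # new-UX dashboard
print(render_dashboard("example-district"))  # classic dashboard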

Companies change as they grow, and I have covered when the company lost both founders and a high-profile CTO. The company moves on, however, and I cannot find customers complaining (at least yet) that the company has changed and is ticking them off. They do have customer challenges, but so far these have been manageable challenges.

Pop quiz: Name the highest-profile customer disaster (outage during exams or the first week of classes, broken implementation, major bugs, etc.) for Canvas.

It’s Not Complicated

I suspect that everything covered in this blog post has been said before, including at e-Literate. There is nothing complex or even nuanced here.

My biggest criticism at this year’s conference is that the keynotes were unfocused and didn’t share enough information about product roadmaps. It’s fine to not focus everything on technology and products, but come on, if you’re going to talk about empathy then tie it explicitly to how that concept affects your company’s approach to student-centered learning.

But despite the weak keynote and despite Josh Coates’ reputation as a jerk (he even referenced this in the keynote), consider the observation Michael made to me that Instructure is one of the very few companies whose employee reviews at Glassdoor rival (or even exceed) LinkedIn’s reviews. Trust me, this is not true for other ed tech companies.

Screenshot: Instructure reviews on Glassdoor

I typically don’t write blog posts this positive about ed tech companies, but at this point I think the market needs to realize just how well-managed Instructure is and how positive schools are as they adopt and use its LMS. So far Instructure has been a net positive for higher ed and K-12, but change has come too slowly to the rest of the ed tech market in response to Canvas. Competition is good.

  1. The chart shows the number of clients, which is essentially the number of contracts signed with institutions, school districts, or statewide systems adopting either Canvas or Bridge LMS products.
  2. Note: these figures include some personal bar-napkin estimates; student counts and revenue are not reported by the company.
  3. It’s a little more complicated than just one software version, given test servers and client acceptance of changes, but the general idea holds in terms of understanding the strategy.

The post Instructure: Accelerating growth in 3 parallel markets appeared first on e-Literate.