I just finished the last program for a computer science class on edX and I urge you to try it.
I took this class:
MITx: 6.00.1x Introduction to Computer Science and Programming Using Python
I was more interested in how MIT taught the class than in the material itself because I already know the subjects covered.
The class taught the basics of programming – expressions, variables, loops, if statements, and functions.
It also had a large focus on bisection or binary search and the performance benefits of this type of search over sequentially reading through a list.
It also covered lists, hash tables, trees, stacks, and queues.
It discussed object-oriented programming.
The class concluded with the professor stating that the programming and computer science skills taught in this class are key to advancing your career, even if you do not work in a computer related job.
I interacted with a number of students in the class and found some who were in other fields and were having success with the class. Others were in business computing or IT but did not have a computer science background, so they were good programmers learning new concepts. Many struggled with the class, but it is free and is offered often. The class starts up again August 26th. Nothing stops you from taking it multiple times.
I tried to think about whether I should recommend this class to the people I work with as a way of helping develop coworkers who do not have experience in these areas. At first I thought that the subject is too academic and has no connection to their jobs. But, after thinking about it for a while, I now believe that just the opposite is true.
Searching for practical applications of the class, I first remembered the programs that we wrote that compared searching sequentially through a list to using binary search. In one test case the sequential method took 15 seconds but the binary search took less than one second. This reminded me so much of tuning Oracle SQL queries. The sequential scan of the list was like a full table scan in Oracle. The binary search was like looking up a single row using an index scan. As I tune Oracle queries my computer science knowledge of binary search and binary trees makes it easy to understand index and full table scans.
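To make the comparison concrete, here is a minimal Python sketch (Python being the course's language) of the two approaches. The function names, list size and counting of comparisons are my own illustration, not taken from the course material:

```python
def sequential_search(items, target):
    """Scan every element in turn -- the list equivalent of a full table scan."""
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            return True, comparisons
    return False, comparisons

def binary_search(sorted_items, target):
    """Halve the search space each step -- the equivalent of an index lookup."""
    comparisons = 0
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return True, comparisons
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comparisons

data = list(range(1_000_000))
print(sequential_search(data, 999_999))  # (True, 1000000)
print(binary_search(data, 999_999))      # found after roughly 20 comparisons
```

A million comparisons versus about twenty is exactly the full-table-scan versus index-lookup gap, just expressed in Python lists instead of Oracle blocks.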
In another example, we recently had slowness on a Weblogic portal server. CPU was getting maxed out and the CPU spent most of its time in a Java ConcurrentHashMap object. I don’t know the internals of Weblogic and I have never used a ConcurrentHashMap but I know how hashing works. I know that hashing is very fast until your hash table fills up or if the hash function distributes the items in an unequal way. My knowledge of hashing helped me grasp why our portal server was using a lot of CPU despite my lack of access to its internals.
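The effect of a badly distributed hash function is easy to demonstrate. In this Python sketch (my own illustration, not the Weblogic or ConcurrentHashMap internals), a key class whose hash sends every instance to the same bucket turns an even spread into a linear pile-up:

```python
class BadKey:
    """A key whose __hash__ puts every instance in the same bucket,
    mimicking a hash function that distributes items unequally."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        return 42                  # every key collides
    def __eq__(self, other):
        return isinstance(other, BadKey) and self.n == other.n

def bucket_counts(keys, n_buckets=8):
    """Distribute keys across buckets the way a hash table would."""
    counts = [0] * n_buckets
    for k in keys:
        counts[hash(k) % n_buckets] += 1
    return counts

print(bucket_counts(range(100)))                       # small ints spread evenly
print(bucket_counts([BadKey(i) for i in range(100)]))  # all 100 land in a single bucket
```

Every lookup in that single overloaded bucket degenerates into a sequential scan, which is the kind of behaviour that shows up as mysteriously high CPU.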
So, contrary to my original fear that the edX class was too academic and not practical, I believe that the concepts covered are very practical. If you do not know how binary search works or what a binary tree is, you will benefit from 6.00.1x on edX. If you cannot explain how a hash table works and what causes hashing to slow down, you can learn from 6.00.1x. And, if you have never written a computer program, although you may find the class difficult and have to take it more than once, you will benefit from 6.00.1x on edX.
By Michael Feldstein
Phil and I spent much of this past week at BbWorld trying to understand what is going on there. The fact that their next-generation Ultra user experience is a year behind is deservedly getting a lot of attention, so one of our goals going into the conference was to understand why this happened, where the development is now, and how confident we could be in the company’s development promises going forward. Blackboard, to their credit, gave us tons of access to their top executives and technical folks. Despite the impression that a casual observer might have, there is actually a ton going on at the company. I’m going to try to break down much of the major news at a high level in this post.

The News
Ultra is a year late: Let’s start with the obvious. The company showed off some cool demos at last year’s BbWorld, promising that the new experience would be Coming Soon to a Campus Near You. Since then, we haven’t really heard anything. So it wasn’t surprising to get confirmation that it is indeed behind schedule. What was more surprising was to see CEO Jay Bhatt state bluntly in the keynote that yes, Ultra is behind schedule because it was harder than they thought it would be. We don’t see that kind of no-spin honesty from ed tech vendors all that often.
Ultra isn’t finished yet: The product has been in use by a couple of dozen early adopter schools. (Phil and I haven’t spoken with any of the early adopters yet, but we intend to.) It will be available to all customers this summer. But Blackboard is calling it a “technical preview,” largely because there are large swathes of important functionality that have not yet been added to the Ultra experience–things like tests and groups. It’s probably fine to use it for simple (and fairly common) on-campus use cases, but there are still some open manholes here.
Ultra is only available in SaaS at the moment and will not be available for on-premise installations any time soon: This was a surprise both to us and to a number of Blackboard customers we spoke to. It’s available now for SaaS customers and will be available for managed hosting customers, but the company is making no promises about self-hosted. The main reason is that they have added some pretty bleeding edge new components to the architecture that are hard to wrap up into an easily installable and maintainable bundle. The technical team believes this situation may change as the technologies they are using mature (to be clear, we’re talking about third-party technologies like server containers rather than homegrown Blackboard technologies), at which point it may become practical for schools to self-host Ultra if they still want to. But don’t expect to see this happen in the next two years.
Ultra is much more than a usability makeover and much more ambitious than is commonly understood: There is a sense in the market that Ultra is Blackboard’s attempt to catch up with Instructure’s ease of use. While there is some truth to that, it would be a mistake to think of Ultra as just that. In fact, it is a very ambitious re-architecture that, for example, has the ability to capture a rich array of real-time learning analytics data. These substantial and ambitious under-the-hood changes, which Phil and I were briefed on extensively and which were also shared publicly at Blackboard’s Devcon, are the reason why Ultra is late and the reason why it can’t be locally installed at the moment. I’m not going to have room to go into the details here, but I may write more about it in a future post.
Blackboard “Classic” 9.x is continuing under active development: If you’re self-hosted, you will not be left behind. Blackboard claims that the 9.x code line will continue to be under active development for some time to come, and Phil and I found their claims to be fairly convincing. To begin with, Jay got burned at Autodesk when he tried to push customers onto a next-generation platform and they didn’t want to go. So he has a personal conviction that it’s a bad idea to try that again. But also, Blackboard gets close to a quarter of its revenue and most of its growth from international markets now, and for a variety of reasons, Ultra is not yet a good fit for those markets and probably won’t be any time soon. So self-hosted customers on Learn 9.x will likely get some love. This doesn’t mean development will be as fast as they would like; the company is pushing hard in a number of directions, and we get the definite sense that there is a strain on developer resources. But 9.x will not be abandoned or put into maintenance mode in the near future.
If you want to get a sense of what Ultra feels like, try out the Blackboard Student mobile app: The way Blackboard uses the term “Ultra” is confusing, because sometimes it means the user experience but sometimes it means the next generation architecture for Learn. If you want to try Ultra the user experience, then play with the Student mobile app, which is in production today and which will work with Learn 9.x as well as Learn Ultra. Personally, I think it represents some really solid thinking about designing for students.
Moodle may make a comeback: One of the reasons that Moodle adoption has suffered in the United States the past few years is that it has lacked an advocate with a loud voice. Moodlerooms used to be the biggest promoter of the platform, and when Blackboard acquired them, they went quiet in the US. But, as I already mentioned, the international market is hugely important for Blackboard now, and Moodle is the cornerstone of the company’s international strategy. They have been quietly investing in the platform, making significant code contributions and acquisitions. There are signs that Blackboard may unleash Moodlerooms to compete robustly in the US market again. This would entail taking the risk that Moodle, a cheaper and lower-margin product, would cannibalize their Learn business, so file this under “we’ll believe it when we see it,” but Apple has killed the taboo of self-cannibalization when the circumstances are right, and they seem like they may be right in this situation.
Collaborate Ultra is more mature than Learn Ultra but still not mature: This is another case where thinking about Ultra as a usability facelift would be hugely underestimating the ambition of what Blackboard is trying to do. The new version of Collaborate is built on a new standard called WebRTC, which enables webconferencing over naked HTML rather than through Flash or Java. This is extremely hard stuff that big companies like Google, Microsoft, and Apple are still in the process of working out right now. It is just this side of crazy for a company the size of Blackboard to try to release a collaboration product based heavily on this technology. (And the only reason it’s not on the other side of crazy is because Blackboard acquired a company that has one of the world’s foremost experts on WebRTC.) Phil and I have used Collaborate Ultra a little bit. It’s very cool but a little buggy. And, like Learn Ultra, it’s still missing some features. At the moment, the sweet spot for the app appears to be online office hours.
My Quick Take
I’m trying to restrain myself from writing a 10,000-word epic; there is just a ton to say here. I’ll give a high-level framework here and come back to some aspects in later posts. Bottom line: If you think that Ultra is all about playing catch-up with Instructure on usability, then the company’s late delivery, functionality gaps, and weird restrictions on where the product can and cannot be run look pretty terrible. But that’s probably not the right way to think about Ultra. The best analogy I can come up with is Apple’s Mac OS X. In both cases, we have a company that is trying to bring a large installed base of customers onto a substantially new architecture and new user experience without sending them running for the hills (or the competitors). This is a really hard challenge. Hardcore OS X early adopters will remember that 10.0 was essentially an unusable technology preview, 10.1 was usable but painful, 10.2 was starting to feel pretty good, and 10.3 was when we really began to see why the new world was going to be so much better than the old one. If I am right, Ultra will go through the same sort of evolution. I don’t know that these stages will each be a year long; I suspect that they may be shorter than that. But right now we are probably partway through the 10.0 era for Ultra. As I mentioned earlier in the post, Phil and I still need to talk to some Ultra customers to get a sense of real usage and, of course, since it will be generally available to SaaS customers for use in the fall semester, we’ll have more folks to talk to soon. We will be watching closely to see how big the gaps are and how quickly they are filled. For example, how long will it take Blackboard to get to the items labeled as “In Development” on their slides? Does that mean in a few months? More? And what about the “Research” column? 
Based on these slides and our conversations, I think the best case scenario is that we reach the 10.2 era—where the platform is reasonably feature-complete, usable, and feeling pretty good overall—by BbWorld 2016, and with some 10.3-type new and strongly differentiating features starting to creep into the picture. Or they could fall flat and utterly fail to deliver. Or something in between. I’m pretty excited by the scope of the company’s ambition and am willing to cut them some slack, partly because they persuaded me that what they are trying to do is pretty big and partly because they persuaded me that they probably know what they are doing. But they have had their Mulligan. As the saying goes (when properly remembered), the proof of the pudding is in the eating. We’ll see what they deliver to customers in the next 6-12 months.
Watch this space.
The post Blackboard Ultra and Other Product and Company Updates appeared first on e-Literate.
When I started writing this post, it was going to be about something else.
This happens occasionally: I have an idea in my head and set to work.
Then I do some research – don’t look so surprised, I do look at the docs occasionally – and, as in this case, I find out that there’s rather more to the topic at hand than I first thought.
What follows is a re-visiting of some of the tools available in Oracle to help with error logging.
It includes stuff that either I’d forgotten or had never considered about some fairly common functions.
Before I dive in, I’d just like to say thanks to William Robertson, who first pointed out to me the similarity between PL/SQL error logging and Quantum Theory. If you’re still unclear of the connection between the two then consider, if you will, the Schrodinger’s Cat Thought Experiment.
It involves locking a cat in a box and possibly poisoning it.
Schrodinger postulates that the cat is both alive and dead…until you open the box to check.
The conclusions we can draw from this experiment are :
- According to Quantum Theory, the act of observation changes the nature of the thing being observed
- Schrodinger wasn’t a Cat person
Before going any further, I should point out that most of the stuff I know about Physics comes from watching Star Trek.
Moving on, I now invite you to consider…
As with cats – according to Schrodinger at least – the act of “observing” (well, handling) a PL/SQL exception changes the error, or at least the location from which it appears to originate.
declare
  l_cat number;
begin
  l_cat := 'GREEBO';
end;
/

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 4
No problem with this. It shows us that the error happened at line 4 in the code, which is correct.
declare
  l_cat number;
begin
  l_cat := 'GREEBO';
exception
  when others then
    -- do some logging stuff....
    raise;
end;
/

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 8
Here, the exception originated on line 4. However, the error is reported at line 8 – inside the exception handler.
This then, is the problem with which we need to wrestle.
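As a point of comparison (a Python sketch of my own, nothing to do with the Oracle toolkit), some languages dodge this problem by default: in Python, a bare raise inside a handler re-throws the original exception with its original traceback, so the true origin survives the logging step:

```python
import sys
import traceback

def convert():
    return int('GREEBO')        # the real error happens here

def handler():
    try:
        convert()
    except ValueError:
        # ...do some logging stuff...
        raise                    # a bare raise keeps the original traceback

try:
    handler()
except ValueError:
    frames = traceback.extract_tb(sys.exc_info()[2])
    print(frames[-1].name)       # 'convert' -- the true origin, not the handler
```

PL/SQL, as we are about to see, makes us work a little harder for the same information.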
Time to take a closer look at the tools that Oracle provides us for the purpose of error handling, starting with the most venerable…
SQLCODE and SQLERRM have been around for as long as I can remember.
If you’ve worked on PL/SQL applications for any length of time, you will almost certainly have seen a variation of one of the following in an exception handler :
sqlerrm(sqlcode);
sqlcode||' : '||sqlerrm;
substr(sqlerrm(sqlcode),1,500));
If any of the above ring any bells then it illustrates the point that these functions (for that is what they are) are not especially well understood.

SQLCODE
SQLCODE returns the number of the last error encountered. Not the message, just the error number :
set serveroutput on

declare
  l_cat_lives number;
begin
  l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception
  when others then
    dbms_output.put_line(sqlcode);
end;
/
-6502

PL/SQL procedure successfully completed.
SQLCODE returns 0 on successful completion…
declare
  l_cat_lives number;
begin
  l_cat_lives := 9;
  dbms_output.put_line(sqlcode);
end;
/
0

PL/SQL procedure successfully completed.
For user-defined errors, it returns 1 by default…
declare
  e_no_cat exception;
begin
  raise e_no_cat;
exception
  when e_no_cat then
    dbms_output.put_line(sqlcode);
end;
/
1

PL/SQL procedure successfully completed.
…unless you associate the exception with an error number using the EXCEPTION_INIT pragma…
declare
  e_no_cat exception;
  pragma exception_init(e_no_cat, -20000);
begin
  raise e_no_cat;
exception
  when e_no_cat then
    dbms_output.put_line(sqlcode);
end;
/
-20000

PL/SQL procedure successfully completed.
It will also return the relevant error code if you use RAISE_APPLICATION_ERROR…
begin
  raise_application_error(-20001, 'The cat has run off');
exception
  when others then
    dbms_output.put_line(sqlcode);
end;
/
-20001

PL/SQL procedure successfully completed.
On its own then, SQLCODE is not much help in terms of working out what went wrong unless you happen to have memorized all of the Oracle error messages.
Fortunately we also have…

SQLERRM
This function takes in an error number and returns the relevant message :
begin
  dbms_output.put_line(sqlerrm(-6502));
end;
/
ORA-06502: PL/SQL: numeric or value error

PL/SQL procedure successfully completed.
Because of this SQLERRM can be used to create the equivalent of the oerr utility in PL/SQL.
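The idea of turning a bare error number into its message is not unique to PL/SQL. By way of a hedged analogy of mine (not from the Oracle docs), Python's standard library has a direct counterpart for OS error codes:

```python
import errno
import os

# os.strerror maps an OS error number to its text, much as
# sqlerrm maps an ORA- error code to its message.
print(errno.ENOENT, ':', os.strerror(errno.ENOENT))
```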
Better still, it takes SQLCODE as its default parameter…
declare
  l_cat_lives number;
begin
  l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception
  when others then
    dbms_output.put_line(sqlerrm);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error

PL/SQL procedure successfully completed.
The maximum length of a varchar returned by SQLERRM is, according to the documentation, “the maximum length of an Oracle Database error message” – 512.
Whilst we’re on the subject, the 11gR2 documentation includes a note recommending that, generally, DBMS_UTILITY.FORMAT_ERROR_STACK be used instead…

DBMS_UTILITY.FORMAT_ERROR_STACK
So, let’s see what this function gives us when used as a drop-in replacement for SQLERRM….
declare
  l_cat_lives number;
begin
  l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception
  when others then
    dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error

PL/SQL procedure successfully completed.
Not much then based on this example. However, there are a couple of differences.
The first is that this function returns up to 2000 characters of the error stack.
The second is that it does not take any arguments. Fair enough I suppose. From the name you’d infer that this function returns the entire error stack rather than the single message that SQLERRM does.
Let’s put that to the test…
create or replace package transporter
as
  function find_target return varchar2;
  procedure beam_me_up_scotty;
end transporter;
/

create or replace package body transporter
as
  function find_target return varchar2
  is
  begin
    raise_application_error(-20003, 'Location or velocity unknown');
  end find_target;
  procedure beam_me_up_scotty
  is
    l_target varchar2(30);
  begin
    -- engage the heisenburg compensator...
    l_target := find_target;
    dbms_output.put_line('Energize !');
  end beam_me_up_scotty;
end transporter;
/
This package is an analog of what Star Fleet Engineers would have been working with before they came up with the Heisenberg Compensator.
If we call this without any error handling, we’ll get a “stack” of errors…
begin
  transporter.beam_me_up_scotty;
end;
/
*
ERROR at line 1:
ORA-20003: Location or velocity unknown
ORA-06512: at "MIKE.TRANSPORTER", line 6
ORA-06512: at "MIKE.TRANSPORTER", line 13
ORA-06512: at line 1
You’d expect something fairly similar if you used the FORMAT_ERROR_STACK function…
set serveroutput on size unlimited

begin
  transporter.beam_me_up_scotty;
exception
  when others then
    dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-20003: Location or velocity unknown

PL/SQL procedure successfully completed.
So, not similar at all then.
NOTE – if you’ve already spotted the deliberate mistake here, bear with me for a bit.
If we change the package body so that errors are raised at multiple levels…
create or replace package body transporter
as
  function find_target return varchar2
  is
  begin
    raise_application_error(-20003, 'Location or velocity unknown');
  end find_target;
  procedure beam_me_up_scotty
  is
    l_target varchar2(30);
  begin
    -- engage the heisenburg compensator...
    l_target := find_target;
    dbms_output.put_line('Energize !');
  exception
    when others then
      raise_application_error(-20004, 'I canna change the laws o physics!');
  end beam_me_up_scotty;
end transporter;
/
… we simply get the last error passed…
begin
  transporter.beam_me_up_scotty;
exception
  when others then
    dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-20004: I canna change the laws o physics!

PL/SQL procedure successfully completed.
From all of this, it would appear that DBMS_UTILITY.FORMAT_ERROR_STACK doesn’t really give us much (if anything) over SQLERRM. This is especially true if the documentation is correct and no single Oracle Error Message will exceed 512 bytes.
All of which is rather odd, until you consider…
RAISE_APPLICATION_ERROR is usually invoked as in the above example. However, it actually accepts three arguments :
- NUM – an error code in the range -20000 to -20999
- MSG – an error message up to 1024 characters ( including the error code)
- KEEPERRORSTACK – if TRUE, the error is placed at the top of the error stack; otherwise it replaces the error stack. The default is FALSE.
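Python's exception chaining makes a useful mental model for KEEPERRORSTACK (again, an analogy of mine rather than anything Oracle documents): raising with from keeps the earlier error attached, much as KEEPERRORSTACK => TRUE keeps the earlier ORA- error on the stack:

```python
def find_target():
    raise RuntimeError('Location or velocity unknown')

def beam_me_up_scotty():
    try:
        find_target()
    except RuntimeError as original:
        # Like KEEPERRORSTACK => TRUE: the new error carries the old one along.
        raise RuntimeError('I canna change the laws o physics!') from original

try:
    beam_me_up_scotty()
except RuntimeError as e:
    top, cause = e, e.__cause__
    print(top)            # the outermost error...
    print(cause)          # ...with the original still attached underneath
```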
The first point to note here is that, unlike SQLERRM, FORMAT_ERROR_STACK can accommodate the full length of a message from RAISE_APPLICATION_ERROR.
More relevant to the issue at hand, however, is the KEEPERRORSTACK parameter. If we tweak the package once more to set this parameter to true…
create or replace package body transporter
as

  function find_target return varchar2
  is
  begin
    raise_application_error(-20003, 'Location or velocity unknown', true);
  end find_target;
  procedure beam_me_up_scotty
  is
    l_target varchar2(30);
  begin
    -- engage the heisenburg compensator...
    l_target := find_target;
    dbms_output.put_line('Energize !');
  exception when others then raise_application_error(-20004, 'I canna change the laws o physics!', true);
  end beam_me_up_scotty;
end transporter;
/
…and re-run our test…
begin
  transporter.beam_me_up_scotty;
exception
  when others then
    dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-20004: I canna change the laws o physics!
ORA-06512: at "MIKE.TRANSPORTER", line 16
ORA-20003: Location or velocity unknown

PL/SQL procedure successfully completed.
…we now get a stack. However, we’re still stuck without the line number from where the error originated.
Fortunately, they’ve been burning the candle at both ends over at Star Fleet, or possibly at Redwood Shores…
Let’s try re-executing our package, this time using FORMAT_ERROR_BACKTRACE…
begin
  transporter.beam_me_up_scotty;
exception
  when others then
    dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
ORA-06512: at "MIKE.TRANSPORTER", line 7
ORA-06512: at "MIKE.TRANSPORTER", line 14
ORA-06512: at line 2

PL/SQL procedure successfully completed.
Well, that’s different. We get a stack, together with the line number at which the error originated. Unfortunately it doesn’t include the originating error message itself. Let’s try that again, but this time in combination with SQLERRM…
begin
  transporter.beam_me_up_scotty;
exception
  when others then
    dbms_output.put_line(sqlerrm);
    dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 7
ORA-06512: at "MIKE.TRANSPORTER", line 14
ORA-06512: at line 2

PL/SQL procedure successfully completed.
Now we have the original error. We also have the line at which it happened. At last, we have our Heisenberg compensator.
Well, in most circumstances. Just before we test it on Admiral Archer’s prize beagle …
create or replace package body transporter
as
  function find_target return varchar2
  is
    l_silly number;
  begin
    l_silly := 'Location or velocity unknown';
  exception when others then
    -- do some logging and...
    raise;
  end find_target;
  procedure beam_me_up_scotty
  is
    l_target varchar2(30);
  begin
    -- engage the heisenburg compensator...
    l_target := find_target;
    dbms_output.put_line('Energize !');
  end beam_me_up_scotty;
end transporter;
/
Now we’ve added an error handler to the innermost package member…
begin
  transporter.beam_me_up_scotty;
exception
  when others then
    dbms_output.put_line(sqlerrm);
    dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 10
ORA-06512: at "MIKE.TRANSPORTER", line 17
ORA-06512: at line 2

PL/SQL procedure successfully completed.
…once again the handled error has “moved” to the exception block of the function.
In terms of retrieving the error stack, it would appear that a combination of SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE offer the most comprehensive and reliable information.
However, in order to further pin down where those pesky errors are originating we may need to turn to a time-honoured technique – albeit with a comparatively modern twist…
The $$PLSQL_LINE variable simply returns the line number of the stored program unit or anonymous block you’re currently in…
begin
  dbms_output.put_line('At line : '||$$plsql_line);
end;
/
At line : 2

PL/SQL procedure successfully completed.
By sprinkling a few dollars through our code, we should get a better (although still not necessarily exact) idea of where our error is originating.
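For comparison (another Python aside of mine, not part of the PL/SQL technique), languages with first-class frame objects expose the current line number on demand, which saves the sprinkling altogether:

```python
import inspect

def current_line():
    """Return the caller's line number -- a rough Python analogue
    of PL/SQL's $$PLSQL_LINE."""
    return inspect.currentframe().f_back.f_lineno

first = current_line()
second = current_line()      # one source line further down
print(first, second)         # second is first + 1
```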
I’m going to persevere with this transporter code. After all, they managed to get it working in the original Star Trek and that was way back in the 60’s…
create or replace package body transporter
as
  function find_target return varchar2
  is
    l_loc   pls_integer;
    l_silly number;
  begin
    l_loc := $$plsql_line;
    l_silly := 'Location or velocity unknown';
  exception when others then
    dbms_output.put_line('Error originating after line '||l_loc);
    raise;
  end find_target;
  procedure beam_me_up_scotty
  is
    l_target varchar2(30);
  begin
    -- engage the heisenburg compensator...
    l_target := find_target;
    dbms_output.put_line('Energize !');
  end beam_me_up_scotty;
end transporter;
/
Now, we should get a bit more information…
begin
  transporter.beam_me_up_scotty;
exception
  when others then
    dbms_output.put_line(sqlerrm);
    dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
Error originating after line 8
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 12
ORA-06512: at "MIKE.TRANSPORTER", line 19
ORA-06512: at line 2

PL/SQL procedure successfully completed.
The error actually originates from line 9 so that’s a pretty good approximation.
The downside is the aforementioned sprinkling: you need to assign $$PLSQL_LINE to a variable immediately before each action you want to track.
Well, I’ve probably managed to annoy any Physics experts and Star Trek fans who happen to be reading. And that’s before you even start thinking about PL/SQL developers.
On the plus side I can say, hand-on-heart, that no cats were harmed in the writing of this post.
Filed under: Oracle, PL/SQL Tagged: $$plsql_line, dbms_utility.format_error_backtrace, dbms_utility.format_error_stack, pl/sql exceptions, pragma exception_init, raise_application_error, sqlcode, sqlerrm
This Log Buffer Edition covers Oracle, SQL Server and MySQL blogs from the past week.
- While checking the sources of the Cassandra/NetBeans integration into GitHub yesterday, something went very badly wrong and ALL the source files in my Maven project disappeared!
- AWR Reports, Performance Hub, historisches SQL Monitoring in 12c
- Oracle Database Mobile Server 12c: Advanced data synchronization engine
- ORA-39001, ORA-39000 and ORA-39142
- ORA-15410: Disks in disk group do not have equal size
- SAN and NAS protocols and how they impact SQL Server
- SQL Style Habits: Attack of the Skeuomorphs
- Is It Worth Writing Unit Tests?
- Large SQL Server Database Backup on an Azure VM and Archiving
- Reporting Services: Drawing a Buffer on a Map
- MySQL Tcpdump system : use percona-toolkit to analyze network packages
- Replication in real-time from Oracle and MySQL into data warehouses and analytics
- Altering tablespace of table – new in MySQL 5.7
- MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh
- MySQL upgrade 5.6 with innodb_fast_checksum=1
Learn more about Pythian’s expertise in Oracle , SQL Server & MySQL, as well as the author Fahd Mirza.
The post Log Buffer #433: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.
During the patching I noticed we were getting some issues with supposed misconfiguration of static files. After clearing my browser cache, the message went away, so I tweeted this.
“Regarding APEX 5.0.1 patch. Clear your browser cache, or you may get that static files message, even though config is correct. #orclapex”
Practically before I hit return, Patrick Wolf came back with this.
“@oraclebase Thanks for letting us know. I have filed bug# 21463521 and fixed it for 5.0.2″
We’re a week on now and all our APEX installations are happily running 5.0.1. We had no issues with the upgrades and no problems with the apps.
We are small fry where APEX is concerned, so don’t take this as the green light to upgrade everything yourself. I’m just saying we had no problems with it, which is pretty cool. If APEX is a strategic environment for you, you will probably need to do a lot more testing than us.
Tim…APEX 5.0.1 : We’re all patched up! was first posted on July 24, 2015 at 4:32 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
I just noticed MobaXterm 8.0 was released a few days ago.
Tim…MobaXterm 8.0 was first posted on July 24, 2015 at 3:19 pm.
A small but remarkable change in Oracle Database 12c: the default value of AUDIT_SYS_OPERATIONS is now TRUE. In other words, all actions done by the superuser SYS are now audited by default!
[oracle@uhesse ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jul 24 15:23:10 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select name,value from v$spparameter where isspecified='TRUE';

NAME                                     VALUE
---------------------------------------- --------------------------------------------------
memory_target                            1073741824
control_files                            /u01/app/oracle/oradata/prima/control01.ctl
db_block_size                            8192
compatible                               12.1.0.2
db_recovery_file_dest                    /u02/fra
db_recovery_file_dest_size               2147483648
undo_management                          auto
undo_tablespace                          undotbs1
remote_login_passwordfile                exclusive
db_name                                  prima
diagnostic_dest                          /u01/app/oracle

11 rows selected.

SQL> show parameter sys_oper

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_sys_operations                 boolean     TRUE

SQL> select count(*) from scott.dept;

  COUNT(*)
----------
         4

SQL> show parameter audit_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/product/12.1.0
                                                 /dbhome_1/rdbms/audit

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@uhesse ~]$ cd /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/audit
[oracle@uhesse audit]$ cat prima_ora_6204_20150724152310753136143795.aud

Audit file /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/audit/prima_ora_6204_20150724152310753136143795.aud
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1
System name:    Linux
Node name:      uhesse
Release:        3.8.13-68.2.2.el7uek.x86_64
Version:        #2 SMP Tue May 12 14:38:58 PDT 2015
Machine:        x86_64
Instance name: prima
Redo thread mounted by this instance: 1
Oracle process number: 41
Unix process pid: 6204, image: oracle@uhesse (TNS V1-V3)

Fri Jul 24 15:23:10 2015 +02:00
LENGTH : '160'
ACTION : 'CONNECT'
DATABASE USER: '/'
PRIVILEGE : 'SYSDBA'
CLIENT USER: 'oracle'
CLIENT TERMINAL: 'pts/1'
STATUS: '0'
DBID: '2113606181'

[Output shortened...]

Fri Jul 24 15:23:56 2015 +02:00
LENGTH : '185'
ACTION : 'select count(*) from scott.dept'
DATABASE USER: '/'
PRIVILEGE : 'SYSDBA'
CLIENT USER: 'oracle'
CLIENT TERMINAL: 'pts/1'
STATUS: '0'
DBID: '2113606181'
Something you might need to know as a DBA, don’t you think? :-)
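If you prefer the pre-12c behaviour, the parameter can still be switched off explicitly. A minimal sketch (AUDIT_SYS_OPERATIONS is a static parameter, so this assumes an spfile and requires an instance restart):

```sql
-- Confirm the current value
show parameter audit_sys_operations

-- Revert to the pre-12c default; takes effect only after a restart
alter system set audit_sys_operations = false scope = spfile;
shutdown immediate
startup
```

Whether that is a good idea from a security standpoint is, of course, another question.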
Tagged: 12c New Features, security
Several years ago (eight to be precise) I wrote a note suggesting that Oracle will not materialize a factored subquery unless it is used at least twice in the main query. I based this conclusion on a logical argument about the cost of creating and using a factored subquery and, at the time, I left it at that. A couple of years ago I came across an example where, even with two uses of a factored subquery, Oracle still didn’t materialize even though doing so would reduce the cost of the query – but I never got around to writing up the example, so here it is:
create table t1
as
select
        object_id, data_object_id, created, object_name,
        rpad('x',1000)          padding
from
        all_objects
where
        rownum <= 10000
;

exec dbms_stats.gather_table_stats(user,'T1')

explain plan for
with gen as (
        select /*+ materialize */ object_id, object_name
        from   t1
)
select
        g1.object_name, g2.object_name
from
        gen g1, gen g2
where
        g2.object_id = g1.object_id
;

select * from table(dbms_xplan.display);
You’ll notice that my original table has very wide rows, but my factored subquery selects a “narrow” subset of those rows. My target is to have an example where doing a tablescan is very expensive but the temporary table holding the extracted data is much smaller and cheaper to scan.
I’ve included a materialize hint in the SQL above, but you need to run the code twice, once with, and once without the hint. Here are the two plans – unhinted first:
============================
Unhinted - won't materialize
============================

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   468K|   428   (2)| 00:00:03 |
|*  1 |  HASH JOIN         |      | 10000 |   468K|   428   (2)| 00:00:03 |
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   234K|   214   (2)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   234K|   214   (2)| 00:00:02 |
---------------------------------------------------------------------------

==================================
Hinted to materialize - lower cost
==================================

---------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                            | 10000 |   585K|   227   (2)| 00:00:02 |
|   1 |  TEMP TABLE TRANSFORMATION |                            |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D6664_9DAAEB7 |       |       |            |          |
|   3 |    TABLE ACCESS FULL       | T1                         | 10000 |   234K|   214   (2)| 00:00:02 |
|*  4 |   HASH JOIN                |                            | 10000 |   585K|    13   (8)| 00:00:01 |
|   5 |    VIEW                    |                            | 10000 |   292K|     6   (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6664_9DAAEB7 | 10000 |   234K|     6   (0)| 00:00:01 |
|   7 |    VIEW                    |                            | 10000 |   292K|     6   (0)| 00:00:01 |
|   8 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6664_9DAAEB7 | 10000 |   234K|     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------
Clearly the optimizer isn’t considering the costs involved. If I add the predicate “where object_id > 0” (which identifies ALL the rows in the table), materialization occurs unhinted (with the same costs reported as for the hinted plan above). My tentative conclusion is that the transformation is a heuristic one that follows the rule “two or more appearances of the subquery, and some indication of row selection in the subquery rowsource”. (In fact if the rowsource is “select * from pipeline_function” the requirement for subsetting doesn’t seem to apply.)
The plans above came from 11.2.0.4, but I got the same result, with a slight difference in costs, in 12.1.0.2. It’s worth pointing out that despite Oracle apparently ignoring the costs when deciding whether or not to materialize, it still seems to report self-consistent values after materialization: the 227 for the plan above is the 214 for creating the temporary table plus the 13 for deriving the hash join of the two copies of the temporary table.
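As a practical aside, if you don’t want to rely on the heuristic you can make the optimizer’s choice explicit in either direction. A sketch using the materialize and inline hints (both undocumented, though widely used, so test on your own version):

```sql
-- Force materialization of the factored subquery (temp table transformation):
with gen as (
        select /*+ materialize */ object_id, object_name from t1
)
select g1.object_name, g2.object_name
from   gen g1, gen g2
where  g2.object_id = g1.object_id;

-- Force in-line expansion (no temp table), even with multiple references:
with gen as (
        select /*+ inline */ object_id, object_name from t1
)
select g1.object_name, g2.object_name
from   gen g1, gen g2
where  g2.object_id = g1.object_id;
```

Being explicit also protects you if the heuristic changes between releases.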
By Phil Hill
Michael and I have made several specific criticisms of D2L’s marketing claims lately, culminating in this blog post about examples based on work at the University of Wisconsin-Milwaukee (UWM) and California State University at Long Beach (CSULB).
I understand that other ed tech vendors make marketing claims that cannot always be tied to reality, but these examples cross a line. They misuse and misrepresent academic outcomes data – whether from public research or internal research – and essentially take credit for their technology “delivering results”.
This week brought welcome updates from D2L that go a long way towards addressing the issues we raised. As of Monday, I noticed that the ‘Why Brightspace? Results’ page now has links to supporting material for each claim, and the UWM claim has been reworded. Today, D2L released a blog post explaining these changes and admitting the mistakes. D2L even changed the web page to allow text selection for copy / paste. From the blog post:
Everyone wants more from education and training programs—so it’s critical that our customers are part of the process of measurement and constant improvement.
At Fusion, our customers came together to share new ideas and practices to push education forward. They like to hear about the amazing results, like U-Pace, which we post on our website. In our excitement to share the great results our customers are seeing through their programs, we didn’t always provide the details around the results. When we make mistakes, it’s our job to fix it—as we are doing now.
U-Pace is the specific program at UWM (course redesign from large lecture to self-paced / mastery approach), and D2L now links to a documented case study and quotes this case study in the blog post.
We have a Customer Success Program in place where approvals from our clients are acquired before we post anything about them. Stories are revisited every six months to make sure that they’re still valid and accurate. However, a recent customer success story was mistakenly posted on our website without their permission or knowledge. We will be doubling down on our efforts to help ensure that this doesn’t happen again, and we will work harder to provide citations for all the facts.
This “without their permission or knowledge” paragraph refers to a claim about CSULB.
Make no mistake, we’re extremely proud of what our clients are accomplishing. Our customers’ innovation, dedication, and just plain awesomeness is making a huge difference—and we’re proud to be a part of it. We will continue to measure and improve our offerings, listen to our community for suggestions, and when warranted, share their results. Here’s to them!
Kudos to D2L for these admissions and changes. Well done.

Notes and Caveats
While the overall change is very positive, I do have a few additional notes and caveats to consider.
- The blog post today should have come from Renny Monaghan (Chief Marketing Officer) or John Baker (CEO). The blog post was written by Barry Dahl, and unless I misunderstand, he is their lead for community engagement – building a user community that is mostly behind-login and not public-facing. The “mistakes” were made in official marketing and company communications. The leader of the department in charge of official messaging (Renny) or the company leader (John) should have taken ownership of what happened in the past and the corrections they are making.
- In the blog post section describing the U-Pace program at the UWM, I would have included the description of moving from large lecture to self-paced / mastery approach. That change should not be embedded as one of “many factors that came together for UWM to achieve the results that they did, and that the increases in student success are not all attributed to their use of Brightspace.” That change to self-paced / mastery was the intervention, and all other factors are secondary. The case study describes the program quite well, but such an omission in the blog post is misleading.
- The blog post only references UWM and CSULB examples, yet the ‘Why Brightspace? Results’ page added links to all claims. Changing them all was the right move.
- Apparently, specific criticisms do not carry a CC-BY license.
These are welcome changes.
- For what it’s worth, Barry does great work for the company.
ORA-54033 and the Hidden Virtual Column Mystery, from All Things SQL.
Oracle Database In-Memory Bundle Patch 10 Released, from Oracle Database In-Memory.
Oracle 12 and latches, part 2, from Frits Hoogland Weblog.
Oracle VM 3.3.3 Officially Released!!!!, from the Simon Coter Blog.
Search and Replace in Oracle SQL Developer Data Modeler, from that JEFF SMITH.
WebLogic Partner Community Newsletter July 2015, from WebLogic Partner Community EMEA.
Performance Study – REST vs SOAP for Mobile Applications, from SOA & BPM Partner Community Blog.
And from the same source:
Purging and partitioned schemas
Auto Refresh for ADF BC Cached LOV, from Andrejus Baranovskis Blog.
How to Build a Hadoop Cluster Using OpenStack, from The art of virtualization.
Patch Set Update: Hyperion Profitability and Cost Management 11.1.2.4.100, from SOA & BPM Partner Community Blog.
And another update announcement:
Patch Set Update: Hyperion Profitability and Cost Management 11.1.2.4.100, from Business Analytics - Proactive Support.
New Knowledge Article: OBIEE 11g: Problem invoking WLST - 'Traceback' when Installing OBIEE 11.1.1.9.0 on Windows 2012 R2, from Business Analytics - Proactive Support.
Kernel Zones support in 12.3, from the Ops Center blog.
What's New With Oracle Certification This Week?, from Oracle Certification.
From the Oracle E-Business Suite Support blog:
Oracle E-Business Suite Releases 11i & 12 Critical Patch Update
Have you Discovered the Power of the iProcurement Change Request Analyzer?
Keep Those Sales Orders Flowing
How to run the new XML version of the APList Invoice Data Collection Diagnostic
July 2015 WEBCAST – BI Publisher Overview
From the Oracle E-Business Suite Technology blog:
Can You Use IE 11 Enterprise Mode or Compatibility View With EBS?
Oracle Internet Directory 11gR1 11.1.1.9 Certified with E-Business Suite
Oracle Access Manager 11.1.2.3 Certified with E-Business Suite
Two really interesting items this week. For those of you who always wanted to have a tricorder, the first iteration is on its way to your phone. It starts out at $249, but I’m betting this will be a standard item on smart phones before long. Then you can analyze your pizza for alien life forms: Pocket Spectrometer is the Star Trek Tricorder you Always Wanted, from Epic Times.
And another article fraught with Star Trek implications (data crystals here we come): Scientists have finally discovered massless particles, and they could revolutionise electronics, from ScienceAlert.
- Help save time and improve your effectiveness in finding patches that you need.
- Learn about product specific best practices for patching and upgrading.
- Avoid issues by staying current on your Oracle products and applying critical fixes.
1. Review Oracle Support Lifecycle Advisors for patching and upgrading best practices
View Oracle Support Lifecycle Advisors (Document ID 250.2) to learn about best practices for patching and upgrading the Oracle products you own.
2. Use the Patch Simple Search when you know the Patch ID
3. Use the Patch Advanced Search to search based on a product and release
Include additional filters to find Oracle recommended patches and to filter by platform or other attributes.
4. Use the Patch Related Activity region to find recently viewed or downloaded patches
5. Download the latest Critical Patch Updates and Security Alerts
- Oracle strongly recommends staying current with the latest Critical Patch Updates and Security Alerts.
- Critical Patch Updates are collections of security fixes for Oracle products and are released quarterly.
- Security Alerts are issued for vulnerability fixes deemed too critical to wait for distribution in the next Critical Patch Update.
- The latest information is always available from the "Critical Patch Updates and Security Alerts" link on the Patching Quick Links region.
6. For more information on searching for patches
- View My Oracle Support Patches and Updates How To (Document ID 1966784.2) for more information.
- Visit the My Oracle Support Community for a simple, easy way to ask a question without having to initiate a service request.
The article is behind a paywall, so I cannot reproduce it here, but if you are a subscriber, here you go.
I find it generally hard to read SciAm these days, as well as many other scientific sources, because of the pervasive species-ism (humans unique, more important than all others) found sadly among scientists.
But this article was, I thought, a real disappointment, coming from SciAm. I sent this letter to the author:
I read your SciAm article with the above title, and I found it scientifically sloppy and offensively tone deaf, given the state of our planet today (specifically the threat of climate change and human-caused extinctions and species degradation).
You might not read past that initial paragraph but if you do:
I am all for interesting “thought experiments”, but they should have a reasonable amount of logical consistency. I think your experiment fails in this regard.
Specifically, you talk about the growth of order on earth from production of cultural information.
This implies a clear net positive change in order due to our intensely “ordered” products.
Yet previously, you recognized that there is order (lots of it) in living things.
And that's where I see a very deep (specie-ist-driven) fallacy: to create our products humans destroy a vast amount of living things and therefore wipe out corresponding enormous amounts of order.
Vast parts of the rainforest, extinction of entire species, degradation of the ocean, etc., etc., etc. - do you really think that if you even attempted to conceptualize the volume of order sacrificed to build iPhones, you could come out with a net positive growth in order?
I suppose it might be remotely possible - but you don’t even address this trade-off, making your argument incomplete and sloppy. I am very surprised that SciAm did not insist on a more rigorous treatment.
Sadly, you seem to blithely accept the destruction of life on our planet in order to manifest our culture-as-thought as products.
Which brings me to…
Offensively Tone Deaf
Your insistence on seeing the entire world through a human filter and imposing human paradigms onto the rest of the natural world is shocking, given the growing awareness (especially among the most rational of us, like many scientists).

“A tree, for example, is a computer”
“Objects of this kind [manufactured products] are particularly special.”
“Biological cells are finite computers”
“People are also limited, and we transcend our finite computational capacities by forming social and professional networks.”

“Special.” “Transcend.”
You use words that impute relentlessly positive values to human activity.
Yet if you do not place humans “above” all others, you could at least say (my changes in bold):
“People are also limited, and we augment our finite computational capacities by forming social and professional networks. A necessary consequence of this augmentation is the destruction of the computational capacities of billions of other living creatures.”
At the very end of your muddled thought experiment, you finally hint at a bigger picture:

“The resulting hyperconnected society will present our species with some of the most challenging ethical problems in human history.”

Ah, ethics! Finally! Professor Hidalgo will now point out the grave price paid by our planet and co-inhabitants for humans’ desire for comfort and convenience, but…
No, no. For you, like way too many other humans, all that matters is the human species.

“We could lose aspects of our humanity that some of us consider essential: for example, we might cheat death.”

Now that would be a real ethical disaster (cheating death), precisely because it would mean accelerated devastation of our planet and non-humans.
But that doesn’t seem to even register in your thinking.
As you know, Oracle Documents Cloud Service is not just an Enterprise File Sync and Share (EFSS) solution in the Cloud but more than that, it offers a cloud platform for documents management, storage and collaboration to drive business processes hooked to business applications.
Today we are going to look at how Siebel CRM users can extend their documents, embed attachments and drive collaboration in the Cloud using Oracle Documents Cloud Service.
Integration between Documents Cloud and Siebel CRM allows Siebel users to store file attachments in the cloud, letting you share this content with your team, both internally and externally, by defining granular permission privileges. Storing it in the cloud also means the content is available to you anytime, anywhere and on any device – on the web, desktop or even your mobile devices – giving mobility to your content like never before. With inherent granular security controls, leveraging Oracle Documents Cloud for document storage, access and sharing also means security assurance, with your information protected at rest, in motion and at access points. Your documents are available to you 24x7, in context, and are always up to date. Once you have set up Documents Cloud Service with Siebel, you can simply access your relevant files seamlessly from within your application like so:
What's more, Documents Cloud gives you the ability to share these documents amongst other Oracle Cloud solutions such as Sales Cloud and in the future Service Cloud and Oracle Social Network, etc. so that you have a seamless work experience through the business process and that contextual documents are available for your use anytime from within your application of choice. Version control ensures there are not multiple versions floating around leading to erroneous decision making.
A simple drag-and-drop interface makes document collaboration easy and intuitive. You can store attachments for any Siebel entity using Documents Cloud. With the scalable, trusted Oracle Cloud architecture, you significantly reduce your CapEx as well as maintenance and upgrade costs down the line. The service is available as soon as you “switch it on”, giving your documents the mobility you need and even offline access.
Implementing the solution into a Siebel CRM deployment is easy with step by step instructions listed in this whitepaper. Leveraging Oracle Documents Cloud with Oracle Siebel means lower TCO, proven integration model, same foundational architecture and one solution partner.
Here's a brief video explaining it all. Take a look and make your Siebel documents mobile with Documents Cloud Service today.
Have some questions? Our colleagues, John Bedford on the Siebel side along with the Documents Cloud product management team, have compiled this FAQ for you. For additional questions and follow up, please do leave us a comment here or contact your Oracle account manager.
First start your WebLogic server in debug mode. The startup scripts generated for a domain provide an option to start the server in debug mode.
To run in debug mode, set the environment variable "debugFlag" to a value of "true" and start the server.
$ export debugFlag="true"
This launches the server with a command line such as that shown below, which sets the standard Java debug properties:
The console will display a message confirming the debug settings the Java VM is using:

Starting WLS with line:
/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/bin/java -server -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8453,server=y,suspend=n -Djava.compiler=NONE -Xmx512m -Dweblogic.Name=myserver -Djava.security.policy=/tmp/dev_update2/wls12130/wlserver/server/lib/weblogic.policy -Xverify:none -Djava.endorsed.dirs=/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/jre/lib/endorsed:/tmp/dev_update2/wls12130/wlserver/../oracle_common/modules/endorsed -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole... -Dwls.home=/tmp/dev_update2/wls12130/wlserver/server -Dweblogic.home=/tmp/dev_update2/wls12130/wlserver/server -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true weblogic.Server
Listening for transport dt_socket at address: 8453
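The setup above can be condensed into a short shell sketch. This only illustrates how the debug flag maps onto the standard JDWP options the start script adds (the port 8453 matches the default shown above; the startWebLogic.sh path is the usual domain layout and may differ in your installation):

```shell
# Illustrative sketch: debugFlag="true" makes the start script add JDWP options
export debugFlag="true"
DEBUG_PORT=8453

# These are the options the server ends up running with (see the command line above)
JAVA_DEBUG="-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=${DEBUG_PORT},server=y,suspend=n"
echo "Debug options: ${JAVA_DEBUG}"

# From the domain directory, the standard start script picks up debugFlag:
# ./bin/startWebLogic.sh
```

With suspend=n the server starts normally and simply listens for a debugger to attach; change it to suspend=y if you need to debug code that runs during server startup.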
Now on the NetBeans side, create a new Server entry for your WebLogic instance using the new Remote Domain option:
Check the Server debug mode enabled option and specify the port (8453 by default):
Now specify the new server as the destination to use to run the application:
Now to debug your application, simply set a break point in your code and select the Debug option for the project:
This will build and deploy the application to the remote WebLogic server, then open the Debugger in the IDE and connect it automatically to the WebLogic debug port.
From here you can use all the standard debug facilities to step through your code, view stacks, instance data and so forth.
WordPress 4.2.3 has been released.
It contains fixes for some pretty nasty stuff. Usually, the updates have to be manually triggered for a day or so before the auto-update feature picks them up. I was on the blog this morning and there were no “manually triggered auto-updates” available, so it looks like this one has been pushed straight out, which probably makes sense.
By the time you’ve read this you are probably up to date already, but if not, get on your blog and give it a nudge.
You can see the changelog here.
Tim…

WordPress 4.2.3 : Hurry up and … let it fix itself… was first posted on July 23, 2015 at 1:28 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
Emergency Monitoring, Real-Time ADDM, Compare Period ADDM, and Active Session History (ASH) Analytics
My recent dalliance with YouTube (141 subscribers and growing!) has left the blog feeling a little bit unloved of late, but then why write when you can waffle in the car?
Anyway, the 12c learning train keeps on rolling. I’ve recently put the following articles live.
- Emergency Monitoring in Oracle Enterprise Manager Cloud Control 12c
- Active Session History (ASH) Analytics in Oracle Enterprise Manager Cloud Control 12c
- Real-Time Automatic Database Diagnostics Monitor (ADDM) in Oracle Database 12c Release 1 (12.1)
- Automatic Database Diagnostic Monitor (ADDM) Compare Period in Oracle Database 12c Release 1 (12.1)
These are all listed as 12c new features in the 1Z0-060 “Upgrade to Oracle Database 12c” OCP upgrade exam, which I find a bit odd. Two of them are EM12c features, not database features. The other two are existing EM12c features that are now available directly from the database, but I can’t see myself ever using them on the server when it is much easier to do it from Cloud Control. Whatever!
Getting close to the end of the OCP syllabus now… I’ll soon have to consider sitting the exam…
Tim…

Emergency Monitoring, Real-Time ADDM, Compare Period ADDM, and Active Session History (ASH) Analytics was first posted on July 23, 2015 at 12:00 pm.
By Phil Hill
I’m not sure when e-Literate was awarded the exclusive rights for non-PR Unizin coverage, but there were two announcements this week to cover.

State University System of Florida Joins
The first announcement is an update and confirmation of my recent post about the new associate membership option. If a member institution (one of the 11 members paying $1.050 million) sponsors their statewide system, that system can join Unizin as an “associate member” for $100 thousand per year, but without a board seat and vote on product direction. This week the State University System of Florida (SUSFL) announced it is joining Unizin.
Building on its growing record of collaboration, the State University System of Florida, comprised of Florida’s 12 public universities, has joined Unizin, a group with a mission to have more control and influence over the digital learning ecosystem.
The decision helps secure Florida’s leadership in the realm of digital learning and gives access to tools under development, including a Digital Objects Repository and Learning Analytics. Florida is the first State University System to join the collaborative organization, which is a consortium of major research universities. The University of Florida is a founding member, alongside other top universities such as Pennsylvania State University, Ohio State University and the University of Michigan.

The organization is a not-for-profit service operation and its membership is by invitation only.
It is not clear which of the 12 public universities beyond the University of Florida are actually planning to participate in Unizin. If you want details on the SUSFL plans and what associate membership means, go read the earlier post.

Courseload Acquisition And Content Relay
The second update is that Unizin acquired the IP, trademark, and remains of Courseload, a provider of an e-reader platform for digital textbooks. From the announcement:
Unizin announced the acquisition of the Courseload software today. Courseload includes an eText reader platform and collaborative learning tools for the delivery of digital learning materials including Open Educational Resources, faculty-authored course packs, and publisher content. The addition of Courseload is a vital component for connecting content to learners in Unizin’s digital learning ecosystem.
This move now determines the second component of Unizin, as the plan is for the acquired Courseload employees to modify and develop a portion of their software to become the basis for the Content Relay. Previously Unizin had been planning to license or contract with another organization to provide the Content Relay.
This acquisition means that Unizin will now be in the software development business, not just integrating various products. This changes what had previously been a plan not to develop product, as Unizin co-founder and co-chairman of the board Brad Wheeler shared with me last year.
Unizin is not a Community Source effort in the way that I understand Community Source as we started applying the label 10+ years ago. Unizin is better understood, as you have reported, as a cloud-scale service operator somewhat like I2 [Internet2]. It does not plan to do lots of software development other than as needed for integrations. No biggie, just a nuanced observation from the end of the story.
When I asked Brad if this means that Unizin is ruling out product development, he replied:
Unizin is working on its roadmap for each area. If we do need to head down some development approach that is more than integration, we’ll give thought to the full range of options for best achieving that, but there is no plan to begin an open/community source effort at this time.
Courseload is based in Indianapolis, IN while Unizin is based in Austin, TX. This creates an interesting situation where a new organization will be managing a remote development team that likely outnumbers the pre-existing Unizin employees.

Common Origins
The Chronicle described the origins of Courseload in 2010.
Courseload, the e-book broker, started in 2000, when a co-founder, Mickey Levitan, a former Apple employee inspired by the company’s transformative role in the music industry, devised the idea and teamed up with a professor at Indiana University at Bloomington to try it. But the company failed to find enough takers, and it all but shut down after a brief run.
Then last year an official at Indiana, Bradley C. Wheeler, called Mr. Levitan and talked him into trying again.
Update (7/23): The following paragraph has been revised based on private communication from source which pointed out that Crunchbase data is wrong in this case.
Based on that company revival, in 2012 the Innovate Indiana Fund, an organization that represents Indiana University’s push for economic development, joined other investment groups in helping to fund the new Courseload. The IIF investment was in the lower single digits as a percentage of the total raised. The tight relationship with IU was further described in the Innovate Indiana end-of-year 2012 report.
In 2000, Mickey Levitan and IU Professor Alan Dennis had an idea that was ahead of its time. Through Courseload, the start-up learning platform company they cofounded, the two endeavored to make college course materials accessible online.
A decade later, Indiana University became the first customer, implementing the Courseload platform across all its campuses. Now with 50 clients and 32 employees, Courseload is leading the online course text revolution—lowering costs for students and providing capabilities that can improve educational outcomes, while offering professors the discretion to use the platform on a course-by-course basis. [snip]
Levitan is grateful for the company’s broad-reaching partnership with IU. Early support from [VP of IT Brad] Wheeler was critical to the company’s success, Levitan says. “He’s a wonderful partner and an extraordinary leader—a visionary who is ready to go out and shape the world rather than be shaped by it.”
Levitan is also grateful for the company’s early and ongoing relationship with the IU Research and Technology Corporation (IURTC). Tony Armstrong, president and CEO of the IURTC, identified an early funding opportunity for Courseload through the Innovate Indiana Fund. Kenneth Green, manager of the Innovate Indiana Fund, sits on Courseload’s board of directors.
This Inside Higher Ed article from 2012 highlights the common origins of both Unizin and Courseload – in terms of founder, Internet2, and common justification. As a reminder, Unizin technically operates as part of Internet2.
In a session at the 2011 Educause conference in October, Bradley Wheeler, the chief information officer at Indiana University, issued a challenge to his colleagues. Unless universities assert their power as customers, the vendors that sell them products and services will continue squeezing those institutions for cash while dictating the terms under which they go digital.
That conversation revolved around expensive, institution-level investments such as learning-management platforms and enterprise resource planning software. Now Wheeler and his colleagues are looking to apply the same principles of “aggregated demand” to help students save money on electronic textbooks.
Internet2, a consortium of 221 colleges and universities, which last year brokered landmark deals with Box.com and Hewlett-Packard that gave its members discounts on cloud computing services, announced today that it had entered into a contract with McGraw-Hill, a major textbook publisher, aimed at creating similar discounts for students on digital course materials.

Moving Ahead
Unizin is now up to 11 full member institutions and 1 state-wide system associate member. Despite or because of the tangled paths of Unizin and Courseload, we finally have some clarity on the second component (the Content Relay) of the consortium’s services. It’s not what I would have guessed ahead of time, but I have to admit that there seems to be a willing list of schools ready to join.
The post Unizin Updates on Florida State System and Acquisition of Courseload appeared first on e-Literate.
A Guest Post by Louvaine Thomson, Senior Director, Product Management, Oracle Cloud Applications
Last month we announced these Release 10 preview materials:
- Spotlight Videos: Hosted by senior development staff, these webcast-delivered presentations highlight top-level messages and product themes, and are reinforced with a product demo
- Release Content Documents (RCDs): This content includes a summary-level description of each new feature and product
We are now pleased to announce the next significant wave of readiness content. Specifically, the following content types are now available on the Release 10 Readiness page:
- What's New: Learn about what's new in the upcoming release by reviewing expanded discussions of each new feature and product, including capability overviews, business benefits, setup considerations, usage tips, and more
- Release Training: Created by product management, these self-paced, interactive training sessions are deep dives into key new enhancements and products. Also referred to as Transfers of Information (TOIs)
- Product Documentation: Oracle's online documentation includes detailed product guides and training tutorials to ensure your successful implementation and use of the Oracle Applications Cloud
Access is simple: From the Cloud Site, click on Resources > Release Readiness to access the Release Readiness page.
After wearing the Fitbit Surge for seven weeks, I developed an ugly skin rash. So, I took a break and let my skin breathe for a while.
I’m all better now, thanks for asking.
For most of the year, I’ve been test-driving various fitness bands and super watches and journaling my impressions here as one man’s research. After all, wearables are, and have been, a thing for a while now. So, I need to know as much as possible.
If you’re scoring at home, that’s 18 weeks with something on my wrist, a lot for me after 23 years, give or take, of nothing on my wrist; I’m not really a watch guy.
Physical bands aside, I was also tracking and quantifying myself: my fitness, my general activity, and my sleep data. I’m a fan of the quantified self and better living through statistics and math. Looking at raw numbers forces introspection that can be very revealing, in good and bad ways.
Anyway, before I put on another device, I decided to capture the pros and cons of not wearing one, at least in terms of what was missing when I had a naked wrist.
Not having something on my wrist all the time is pro enough. I generally don’t like encumbrances, and having my wrist free again is nice.
Typing on a keyboard is another plus. I still don’t know how people with watches do it. A guy I used to work with wore a watch, and his MacBook Pro showed the scratch damage it did to the unibody aluminum.
Being free of data collection is liberating, but it cuts both ways. On the plus side, I don’t obsess about my step count. Wearing a fitness tracker has made it painfully obvious that my life is dangerously sedentary.
If it weren’t for running on a treadmill, there are many days when I wouldn’t reach the 10,000 steps magic number.
Why is this a pro? Now that I know, I can adjust accordingly, without a tracker, and I have a general idea of how much activity generates 10,000 steps.
Taking a break from testing has given me time to reflect on the four devices I’ve used without being too close to the one I’m currently testing. When I finish this research experiment, I should take a similar break to reflect.
On the downside, I really got used to having the time on my wrist, which is something I missed when I wore the Jawbone UP24 as well.
Even though I did find myself checking the time as a nervous habit, the utility outweighed the nervous tic.
I really miss the phone and text notifications that the two super watches, the Basis Peak and the Fitbit Surge, provide.
On the data collection side, I find myself needing to be pushed by numbers. It’s weird, I know; I’ll recognize something that generates more activity, like walking vs. driving, but I need the extra push to do it.
I also miss my morning data review. It became routine for me to review my night’s sleep and browse through my data each morning, my own personal, daily report.
Now that Google has Your Timeline for Maps, you can begin to see the value of aggregating data summaries; yes, it’s creepy, especially the implications of kismet or whatever the opposite of that is, but I remain in the optimistic camp that hopes to correlate and improve based on personal data sets.
Anyway, I figured that since I’d been sharing my wearables observations, I might as well share my lack-of-wearable observations.
Sometime in the next few weeks, I’ll get started on a new one. Stay tuned.

Possibly Related Posts:
- Seven Weeks with the Fitbit Surge
- Four Weeks with the Basis Peak
- The Fitbit Surge: Watching Where the Super Watch Puck Goes
- Three Weeks with the Nike+ Fuelband SE
- The Misfit Shine