
Feed aggregator

Automatic Diagnostics Repository (ADR) in Oracle Database 12c

Tim Hall - Wed, 2014-06-25 08:08

There’s a neat little change to the Automatic Diagnostics Repository (ADR) in Oracle 12c. You can now track DDL operations, and some of the messages that formerly went to the alert log and trace files are now written to the debug log. This should hopefully thin out some of the crap from the alert log. Not surprisingly, ADRCI has had a minor tweak so you can report this stuff.
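
To see it in action, DDL logging is controlled by the ENABLE_DDL_LOGGING parameter. The following is just a minimal sketch (not taken from the linked articles), run from a suitably privileged session, with a throwaway table name purely for illustration:

-- Switch on DDL logging (in 12c this writes to the DDL log under the ADR).
ALTER SYSTEM SET ENABLE_DDL_LOGGING=TRUE;

-- Any subsequent DDL, like this, is then captured in the ADR DDL log.
CREATE TABLE ddl_log_demo (id NUMBER);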

You can see what I wrote about it here:

Of course, the day-to-day usage remains the same, as discussed here:

Cheers

Tim…


Oracle Database: Query to List All Statistics Tables

Pythian Group - Wed, 2014-06-25 08:00

If you were a big fan of manual database upgrade steps, perhaps you would have come across this step many times in your life while reading MOS notes, upgrade guides, etc.

Upgrade Statistics Tables Created by the DBMS_STATS Package
If you created statistics tables using the DBMS_STATS.CREATE_STAT_TABLE procedure, then upgrade these tables by executing the following procedure:
EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('SYS','dictstattab');

In my experience, statistics tables can be created from Oracle RDBMS version 8i onwards, so this step has been part of the database upgrade documents ever since. I also noticed that the structure of the statistics table stayed the same up to 10gR2, but Oracle modified it marginally in the 11g and 12c versions.

I have been using this single query to list all statistics tables that exist in a database; it still works despite the changes to the table structure.

SQL> select owner,table_name from dba_tab_columns where COLUMN_NAME='STATID' AND DATA_TYPE='VARCHAR2';

Though this is not a critical step, it is required as part of the post-upgrade tasks. Here is a small action plan to run the required command to upgrade all statistics tables.

Connect as SYS database user and run these steps:
SQL> set pages 1000
SQL> set head off
SQL> set feedback off
SQL> spool /home/oracle/stattab_upg.sql
SQL> select 'EXEC DBMS_STATS.UPGRADE_STAT_TABLE('''||owner||''','''||table_name||''');' from dba_tab_columns where COLUMN_NAME='STATID' AND DATA_TYPE='VARCHAR2';
SQL> spool off
SQL> @/home/oracle/stattab_upg.sql
SQL> exit

Categories: DBA Blogs

check jdbc version

Laurent Schneider - Wed, 2014-06-25 05:12

There are two versions to check when using JDBC.

The first one is in the name of the file: classes12.zip works with JDK 1.2 and later, ojdbc7.jar works with Java 7 and later.

Even if classes12.zip works fine with Java 8, it is not supported.

Be sure you check the support matrix on the Oracle JDBC FAQ.

According to the support note 401934.1, only Oracle JDBC driver 11.2.0.3 (and greater) versions support JDK 1.7.

To check your version of the JDBC Driver, there are two methods.

One is with the jar (or zip) utility.


$ jar -xvf ojdbc7.jar META-INF/MANIFEST.MF
 inflated: META-INF/MANIFEST.MF
$ grep Implementation META-INF/MANIFEST.MF
Implementation-Vendor: Oracle Corporation
Implementation-Title: JDBC
Implementation-Version: 12.1.0.1.0
$ unzip classes12.zip META-INF/MANIFEST.MF
Archive:  classes12.zip
  inflating: META-INF/MANIFEST.MF
$ grep Implementation META-INF/MANIFEST.MF
Implementation-Title:   classes12.jar
Implementation-Version: Oracle JDBC Driver 
  version - "10.2.0.1.0"
Implementation-Vendor:  Oracle Corporation
Implementation-Time:  Jun 22 18:51:56 2005

The last digit is often related to the Java version, so if you have ojdbc6 and use Java 6, you’re pretty safe. If you have Java 8, you won’t find any ojdbc8 available at the time of writing; a safer bet is to use the latest version and wait for a support note. The latest notes about ojdbc7.jar do not currently mention Java 8 certification, so we will probably have to wait for a more recent version of ojdbc7.jar.

Another means of finding the driver version is to call DatabaseMetaData.getDriverVersion():


public class Metadata {
  public static void main(String argv[])
    throws java.sql.SQLException {
    // Register the Oracle JDBC driver explicitly
    java.sql.DriverManager.registerDriver(
      new oracle.jdbc.driver.OracleDriver());
    // Connect and print the driver version reported by the metadata
    System.out.println(
      java.sql.DriverManager.
        getConnection(
          "jdbc:oracle:thin:@SRV01.EXAMPLE.COM:1521:DB01",
          "scott", "tiger").
            getMetaData().getDriverVersion());
  }
}


$ javac -classpath ojdbc6.jar Metadata.java
$ java -classpath ojdbc6.jar:. Metadata
11.2.0.3.0

Conditional uniqueness

Dominic Brooks - Wed, 2014-06-25 02:50

A quick fly through the options for conditional uniqueness.

Requirement #1: I want uniqueness on a column but only under certain conditions.

For example, I have an active flag and I want to make sure there is only one active record for a particular attribute but there can be many inactive rows.

Initial setup:

create table t1
(col1      number       not null
,col2      varchar2(24) not null
,is_active number(1)    not null
,constraint pk_t1 primary key (col1)
,constraint ck_t1_is_active check (is_active in (1,0)));

Solution #1: A unique index on an expression which evaluates to null when the condition is not met.

create unique index i_t1 on t1 (case when is_active = 1 then col2 end);

unique index I_T1 created.

insert into t1 values(1,'SHAGGY',1);

1 rows inserted.

insert into t1 values(2,'SHAGGY',1);

SQL Error: ORA-00001: unique constraint (I_T1) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

Only one active SHAGGY allowed.
But multiple inactives allowed:

insert into t1 values(2,'SHAGGY',0);

1 rows inserted.

insert into t1 values(3,'SHAGGY',0);

1 rows inserted.

Solution #2: A virtual column with a unique constraint

drop index i_t1;

index I_T1 dropped.

alter table t1 add (vc_col2 varchar2(24) generated always as (case when is_active = 1 then col2 end));

table T1 altered.

alter table t1 add constraint uk_t1 unique (vc_col2);

table T1 altered.

Note that now that we have a virtual column, we have to be very aware of insert statements with no explicit column list:

insert into t1 values(4,'SCOOBY',1);

SQL Error: ORA-00947: not enough values
00947. 00000 -  "not enough values"

Unless we’re lucky enough to be on 12c and use the INVISIBLE syntax:

alter table t1 add (vc_col2 varchar2(24) invisible generated always as (case when is_active = 1 then col2 end));

But as this example is on 11.2.0.3:

insert into t1 (col1, col2, is_active) values(4,'SCOOBY',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(5,'SCOOBY',1);

SQL Error: ORA-00001: unique constraint (UK_T1) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

insert into t1 (col1, col2, is_active) values(5,'SCOOBY',0);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(6,'SCOOBY',0);

1 rows inserted.

Requirement #2: Sorry, we forgot to tell you that we insert the new row first and then update the old one to be inactive, so we need a deferred constraint (hmmm!)

In which case, you can’t have deferred uniqueness on an index, so the only option is the virtual column.

alter table t1 drop constraint uk_t1;

table T1 altered.

alter table t1 add constraint uk_t1 unique (vc_col2) deferrable initially deferred;

table T1 altered.

insert into t1 (col1, col2, is_active) values(7,'FRED',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(8,'FRED',1);

1 rows inserted.

commit;

SQL Error: ORA-02091: transaction rolled back
ORA-00001: unique constraint (.UK_T1) violated
02091. 00000 -  "transaction rolled back"
*Cause:    Also see error 2092. If the transaction is aborted at a remote
           site then you will only see 2091; if aborted at host then you will
           see 2092 and 2091.
*Action:   Add rollback segment and retry the transaction.

insert into t1 (col1, col2, is_active) values(7,'FRED',1);

1 rows inserted.

insert into t1 (col1, col2, is_active) values(8,'FRED',1);

1 rows inserted.

update t1 set is_active = 0 where col1 = 7;

1 rows updated.

commit;

committed.

See my previous post on a similar approach for a conditional foreign key.


About User Groups

Floyd Teter - Tue, 2014-06-24 17:37
I'm hanging out in the middle of nowhere this week...Fort Riley, Kansas.  Here to visit my granddaughters.  Which means I'm missing ODTUG's KScope14 conference.  Missed the OAUG/Quest/IOUG Collaborate14 this year as well.  Will also be absent at OAUG's ConnectionPoint14 in Pittsburgh.  Will be missing a few others that are usually on my calendar as well (But I made it to UTOUG Training Days, Alliance14, and the MidAtlantic HEUG conference - will also make it to the ECOAUG later this year).

With all the user conferences missed in 2014, I've had some folks asking if I still believe in Oracle user groups.  The short answer is yes.  The longer answer is yes, but I do believe the user group model needs to change a bit.

Attend a user group conference this year (sorry, Oracle OpenWorld does not count - it is NOT a user group conference).  Look around at the faces.  Other than those working the partner sales booths, the vast majority of those faces will be middle-aged and older.  See, when user groups were first formed, the model was built to appeal to Baby Boomers and Echo Boomers.  And the big thrill was face-to-face networking.  Now that the Baby Boomers and Echo Boomers are riding off into the enterprise technology sunset, the user group model can only flourish by changing the model for those who take our places.

Face-to-face networking is still important, but just doesn't seem to hold the same level of importance for these younger workers.  Easily accessed on-demand education sessions on the web (for free), virtual gatherings on GoogleTalk, facilitating group chats on focused subjects, information in short snippets...simple, quick and virtual channels of information delivery seem to gain more traction with the rising generation than annual, huge national or international conferences when it comes to enterprise apps.

So, yeah, I still believe in user groups.  But, as long as you're asking, I think the model will need changing in order to flourish into the future.

I'm going back to the grandkids now...

WWW-based online education turns 20 this summer

Michael Feldstein - Tue, 2014-06-24 17:01

I’m a little surprised that this hasn’t gotten any press, but Internet-based online education turns 20 this summer. There were previous distance education programs that used networks of one form or another as the medium (e.g. University of Phoenix established its “online campus” in 1989), but the real breakthrough is the use of the world wide web (WWW), effectively creating what people most commonly know as “the Internet”.

To the best of my knowledge (correct me in comments if there are earlier examples), the first accredited school to offer a course over the WWW was the Open University in a pilot Virtual Summer School project in the summer of 1994. The first course was in Cognitive Psychology, offered to 12 students, as described in this paper by Marc Eisenstadt and others involved in the project (the HTML no longer renders):

In August and September 1994, a Virtual Summer School (VSS) for Open University undergraduate course D309 Cognitive Psychology enabled students to attend an experimental version of summer school ‘electronically’, i.e. from their own homes using a computer and a modem. VSS students were able to participate in group discussions, run experiments, obtain one-to-one tuition, listen to lectures, ask questions, participate as subjects in experiments, conduct literature searches, browse original journal publications, work in project teams, undertake statistical analyses, prepare and submit nicely formatted individual or joint written work, prepare plenary session presentations, and even socialize and chit-chat, all without ever leaving their homes. The term ‘Virtual Summer School’ was used to mean that the software packages supplied to students emulate many aspects of a residential summer school, but without requiring physical attendance. As with many other Open University activities, we feel that face-to-face tuition and peer group interaction would still be preferable if it could be achieved. However, there are sometimes circumstances which preclude physical attendance, so we want to provide the best possible alternative. Virtual Summer School was a first step in this direction. This year, it was only an experimental option for a dozen already-excused students, which gave us a low-risk entry in order to assess the viability of the approach.

There is even a concept video put together by the Open University at the end of 1994 that includes excerpts of the VSS course.

And now for your trip down memory lane, I have taken the paper, cleaned up the formatting, and fixed / updated / removed the links that no longer work. The modified paper is below for easier reading:

*************

Virtual Summer School Project, 1994

(source: http://faculty.education.ufl.edu/Melzie/Distance/Virtual%20Summer%20School%20Project)

Background

One of the great strengths of the UK’s Open University is its extensive infrastructure, which provides face-to-face tuition through a network of more than 7000 part-time tutors throughout the UK and Europe. This support network, combined with in-house production of high-quality text and BBC-produced videos, provides students with much more than is commonly implied by the phrase ‘distance teaching’! Moreover, students on many courses must attend residential schools (e.g. a one-week summer school to gain experience conducting Biology experiments), providing an additional layer of support. About 10% of students have genuine difficulty attending such residential schools, and increasingly we have started to think about addressing the needs of students at a greater distance from our base in the UK. This is where the Virtual Summer School comes in.

The Cognitive Psychology Virtual Summer School

In August and September 1994, a Virtual Summer School (VSS) for Open University undergraduate course D309 Cognitive Psychology enabled students to attend an experimental version of summer school ‘electronically’, i.e. from their own homes using a computer and a modem. VSS students were able to participate in group discussions, run experiments, obtain one-to-one tuition, listen to lectures, ask questions, participate as subjects in experiments, conduct literature searches, browse original journal publications, work in project teams, undertake statistical analyses, prepare and submit nicely formatted individual or joint written work, prepare plenary session presentations, and even socialize and chit-chat, all without ever leaving their homes. The term ‘Virtual Summer School’ was used to mean that the software packages supplied to students emulate many aspects of a residential summer school, but without requiring physical attendance. As with many other Open University activities, we feel that face-to-face tuition and peer group interaction would still be preferable if it could be achieved. However, there are sometimes circumstances which preclude physical attendance, so we want to provide the best possible alternative. Virtual Summer School was a first step in this direction. This year, it was only an experimental option for a dozen already-excused students, which gave us a low-risk entry in order to assess the viability of the approach.

Below we describe the technology involved, evaluation studies, and thoughts about the future.

The Technology

Three main categories of technology were required: communications & groupwork tools, support & infrastructure software/hardware, and academic project software.

Communications and Groupwork
  • Email, Usenet newsgroups, live chat lines and low-bandwidth (keyboard) conferencing: this technology was provided by FirstClass v. 2.5 from SoftArc in Toronto, and gave students a nice-looking veneer for many of their day-to-day interactions. A ‘Virtual Campus’ map appeared on their desktops, and folder navigation relied on a ‘room’ metaphor to describe crucial meeting places and bulletin boards.
  • WWW access: NCSA Mosaic 1.0.3 for Macintosh was provided for this purpose [in the days before Netscape was released] . Students had customized Hotlists which pointed them to academically-relevant places (such as Cognitive & Psychological Sciences on The Internet), as well as some fun places.
  • Internet videoconferencing: Using Cornell University’s CU-SeeMe, students with ordinary Macs or Windows PCs (even over dial-up lines from home) were able to watch multiple participants around the world. Video transmission from slightly higher-spec Macs & PCs was used for several Virtual Summer School events, including a Virtual Guest Lecture by Donald A. Norman, formerly Professor of Psychology at the University of California at San Diego (founder of its Cognitive Science Programme), and now an Apple Fellow.
  • Remote presentation software: we used a product called ‘The Virtual Meeting’ (from RTZ in Cupertino), which allowed synchronized slide & movie presentations on remote Macs & PCs distributed across local, wide, or global (including dial-in) networks, displayed images of all remote ‘participants’, and facilitated moderated turn-taking, ‘hand-raising’, interactive whiteboard drawing & question/answer sessions.
  • Mobile telephone support and voice conferencing: every VSS student was supplied with an NEC P100 cellular phone, so that they could use it while their domestic phone was busy with their modem (some day they’ll have ISDN or fibre optic lines, but not this year). Audio discussions were facilitated by group telephone conference calls, run concurrently with CU-SeeMe and other items shown above. Our largest telephone conference involved 17 participants, and worked fine given that basic politeness constraints were obeyed.
  • Remote diagnostic support and groupwork: Timbuktu Pro from Farallon, running over TCP/IP, enabled us to ‘cruise in’ to our students’ screens while chatting to them on their mobile phones, and to help them sort out specific problems. Students could also work in small self-moderated groups this way, connecting as observers to one user’s Macintosh.
Support and infrastructure software/hardware
  • Comms Infrastructure: TCP/IP support was provided by a combination of MacTCP, MacPPP, VersaTerm Telnet Tool on each student’s machine, plus an Annex box at The Open University connecting to a Mac Quadra 950 running a FirstClass Server and 3 Suns running cross-linked CU-SeeMe reflectors.
  • Tutorial Infrastructure: each student was supplied with HyperCard, MoviePlay, and SuperCard 1.7 to run pre-packaged tutorial and demonstration programs, some of which were controlled remotely by us during group presentations. Pre-packaged ‘guided tour’ demos of all the software were also provided (prepared with a combination of MacroMind Director and CameraMan). To help any computer-naive participants ‘bootstrap’ to the point where they can at least send us an email plea for help, we also supplied a short video showing them how to unpack and connect all of their equipment, and how to run some of the demos and FirstClass.
  • Hardware: one of our aims was to foreshadow the day in the near future when we can presuppose that (a) most students will be computer-literate, (b) students will have their own reasonable-specification hardware, (c) bandwidth limitations will not be so severe, and (d) all of our software will be cross-platform (e.g. Mac or Windows). We could only approximate that in 1994, so we supplied each VSS student with a Macintosh LC-II with 8MB of RAM, a 14.4Kbps modem, a StyleWriter-II printer, 13″ colour monitor, mobile phone and extra mobile phone battery. Students were given a conventional video cassette showing how to set up all the equipment (see tutorial infrastructure above).
Academic project software

Our students had four main support packages to help them in their Cognitive Psychology studies:

  • a custom-built ‘Word Presentation Program’, which allowed them to create stimuli for presentation to other students and automatically record data such as reaction times and button presses (they could create a turnkey experiment package for emailing to fellow students, and then have results emailed back);
  • a HyperCard-based statistics package, for analysing their data;
  • MacProlog from Logic Programming Associates in the UK, for writing simple Artificial Intelligence and Cognitive Simulation programs;
  • ClarisWorks, for preparing reports and presentations, reading articles that we emailed to them as attachments, and doing richer data analyses.
Timetable and evaluation

Students had a three-week warmup period in order to become familiar with their new equipment and run some trial (fun) activities with every piece of software, and formal academic activities took place from August 27th – Sept. 9th, 1994, mostly in the evenings. Thus, the conventional one-week residential summer school was stretched out for two weeks to allow for part-time working. During week one the students concentrated on experimental projects in the area of “Language & Memory” (typically demonstrating inferences that “go beyond the information given”). During week two the students wrote simple AI programs in Prolog that illustrate various aspects of cognitive processing (e.g. simulating children’s arithmetic errors). They were supplied with Paul Mulholland’s version of our own Prolog trace package (see descriptions of our work on Program Visualization) to facilitate their Prolog debugging activities.

A detailed questionnaire was supplied both to the Virtual Summer School students and to conventional summer school students taking the same course. We looked at how students spent their time, which activities were beneficial for them, and many other facets of their Virtual Summer School experience.

[removed reference to Kim Isikoff's paper and student interviews, as all links were broken]

The future

The Virtual Summer School finished on 9th September 1994 (following our Virtual Disco on 8th September 1994, incidentally…. we told students about music available on the World Wide Web for private use). What happens next? Here are several issues of importance to us:

  • We must lobby for ever-increasing ‘bandwidth’ [i.e. channel capacity, reflected directly in the amount and quality of full-colour full-screen moving images and quality sound that can be handled]. This is necessary not only for Open University students, but also for the whole of the UK, and indeed for the whole world. As capacity and technology improve, so does the public expectation and need [analagous to the way the M25 motorway was overfull with cars the first day it opened-- the technology itself helps stimulate demand]. Whatever the current ‘Information SuperHighway’ plans are [just like Motorway construction plans], there is a concern that they don’t go far enough.
  • We must RADICALLY improve both (i) the user interfaces and (ii) the underlying layers of communications tools. Even with the excellent software and vendor support that we had at our disposal, all the layers of tools needed (TCP/IP, PPP, Communications Toolbox, etc.) made a veritable house of cards. The layers of tools were (i) non-trivial to configure optimally in the first place (for us, not the students); (ii) non-trivial to mass-install as ‘turnkey’-ready systems for distribution to students; (iii) non-trivial for students to use straight ‘out of the box’ (naturally almost everything in the detailed infrastructure is hidden from the students, but one or two items must of necessity rear their ugly heads, and that gets tricky); and (iv) ‘temperamental’ (students could get interrupted or kicked off when using particular combinations of software). We were fully prepared for (iv), because that’s understandible in the current era of communicating via computers, but (i), (ii), and (iii) were more surprising. [If anyone doubts the nature of these difficulties, I hereby challenge them to use Timbuktu Pro, a wonderful software product, with 4 remotely-sited computer-naive students using TCP/IP over a dial-up PPP connection.] We can do better, and indeed we MUST do better in the future. Many vendors and academic institutions are working on these issues, and they need urgent attention.
  • We must obtain a better understanding of the nature of remote groupwork. Our students worked in groups of size 2, 3, or 4 (depending on various project selection circumstances). Yet even with pre-arranged group discussions by synchronous on-line chat or telephone conference calls, a lot of fast-paced activity would suddenly happen, involving just one student and one tutor. For example, student A might post a project idea to a communal reading area accessible only to fellow project-group students B and C and also tutor T. Tutor T might post a reply with some feedback, and A might read it and react to it before B and C had logged in again. Thus, A and T would have inadvertently created their own ‘shared reality’– a mini-dialogue INTENDED for B and C to participate in as well, yet B and C would get left behind just because of unlucky timing. The end result in this case would be that students A, B, and C would end up doing mostly individual projects, rather than a group project. Tutors could in future ‘hold back’, but this is probably an artificial solution. The ‘shared reality’ between A and T in the above scenario is no different from what would happen if A cornered T in the bar after the day’s activities had finished at a conventional Summer School. However, in that situation T could more easily ensure that B and C were brought up to date the next day. We may ultimately have to settle for project groups of size 2, but not before doing some more studies to try to make larger groups (e.g. size 4) much more cohesive and effective.
  • We need to improve ‘tutor leverage’ (ability to reach and influence more people). Let’s suppose that we have thoroughly researched and developed radical improvements for the three items above (more bandwidth, nice user interfaces with smooth computer/communications infrasture [sic], happy cohesive workgroups of size 4). It would be a shame if, after all that effort and achievement, each tutor could only deal with, say, 3 groups of 4 students anywhere in the world. The sensory overload for tutors at the existing Virtual Summer School was considerable… many simultaneous conversations and many pieces of software and technology running at once. The 1994 Virtual Summer School was (of necessity) run by a self-selecting group of tutors who were competent in both the subject matter and the technology infrastructure. Less technologically-capable tutors need to be able to deal with larger numbers of students in a comfortable fashion, or Virtual Summer School will remain quite a ‘niche’ activity.

The four areas above (more bandwidth, better computer/comms interfaces, larger workgroups, increased tutor leverage) are active areas of research for us…. stay tuned (and see what we’re now doing in KMi Stadium)!

Who made it work?
  • Marc Eisenstadt: VSS Course Director, Slave Driver, and Fusspot
  • Mike Brayshaw: VSS Tutor & Content Wizard
  • Tony Hasemer: VSS Tutor & FirstClass Wizard
  • Ches Lincoln: VSS Counsellor and FirstClass Guru
  • Simon Masterton: VSS Academic Assistant, Mosaic Webmaster, and Mobile Phone Guru
  • Stuart Watt: VSS Mac Wizard
  • Martin Le Voi: VSS Memory/Stats Advisor & Unix Guru
  • Kim Issroff: VSS Evaluation and Report
  • Richard Ross: VSS Talking Head Guided Tour
  • Donald A. Norman (Apple, Inc.): VSS Virtual Guest Lecturer
  • Blaine Price: Unix & Internet Guru & Catalyst
  • Adam Freeman: Comms & Networking Guru
  • Ian Terrell: Network Infrastructure Wizard
  • Mark L. Miller (Apple, Inc.): Crucial Guidance
  • Christine Peyton (Apple UK): Support-against-all-odds
  • Ortenz Rose: Admin & Sanity Preservation
  • Elaine Sharkey: Warehousing/Shipping Logistics

Update: Changed title and Internet vs. WWW language to avoid post-hoc flunking of Dr. Chuck’s IHTS MOOC.

The post WWW-based online education turns 20 this summer appeared first on e-Literate.

New Pastures

Duncan Davies - Tue, 2014-06-24 16:58

I try to keep the content on here focused on the products and implementation tips however I hope you’ll indulge me with one personal post.

After six and a half enjoyable years I have left Succeed Consultancy. I’m leaving behind a lot of talented colleagues and great friends, however for reasons that I don’t want to bore you with it’s time to move on.

As of yesterday I’ve started work for Cedar Consulting, one of the largest of the ‘tier 2’ PeopleSoft consultancies in EMEA.

Cedar have been running – in one form or other – for nearly 20 years and have an impressive list of PeopleSoft implementations, upgrades and support/hosting clients. There are few UK PeopleSoft clients who haven’t engaged Cedar at one point or other. As well as their large team of UK consultants they have a number of offices spread globally and a solution centre in India.

Importantly for me, Cedar also have a strong focus on Fusion and already have both a live Fusion client under their belt and the UKOUG Fusion Partner of the Year gold award.

This career move also means that the branding of the PeopleSoft and Fusion Weeklies will change. I’d like to thank Succeed for sponsoring the newsletters up to this point, and I’m grateful to Cedar for agreeing to sponsor them going forwards. You should notice a rebrand in this week’s editions.


Query result cache in oracle 11g

Adrian Billington - Tue, 2014-06-24 16:58
Oracle adds a new cache for storing the results of queries. December 2007 (updated June 2014)
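
As a quick flavour of the feature (a minimal sketch, not taken from the linked article, and assuming an EMPLOYEES table exists), a query can request server-side result caching with the RESULT_CACHE hint; repeat executions can then be answered from the result cache, and cached objects are visible in V$RESULT_CACHE_OBJECTS:

-- Ask Oracle to cache the result set of this aggregate query.
SELECT /*+ RESULT_CACHE */ department_id, COUNT(*)
FROM   employees
GROUP BY department_id;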

Virtual CPUs with Amazon Web Services

Pythian Group - Tue, 2014-06-24 15:41

Some months ago, Amazon Web Services changed the way they measure CPU capacity on their EC2 compute platform. In addition to the old ECUs, there is a new unit to measure compute capacity: vCPUs. The instance type page defines a vCPU as “a hyperthreaded core for M3, C3, R3, HS1, G2, and I2.” The description seems a bit confusing: is it a dedicated CPU core (which has two hyperthreads in the E5-2670 v2 CPU platform being used), or is it a half-core, single hyperthread?

I decided to test this out for myself by setting up one of the new-generation m3.xlarge instances (with thanks to Christo for technical assistance). It is stated to have 4 vCPUs running on an E5-2670 v2 processor at 2.5GHz on the Ivy Bridge-EP microarchitecture (or sometimes 2.6GHz in the case of xlarge instances).

Investigating for ourselves

I’m going to use paravirtualized Amazon Linux 64-bit for simplicity:

$ ec2-describe-images ami-fb8e9292 -H
Type    ImageID Name    Owner   State   Accessibility   ProductCodes    Architecture    ImageType       KernelId        RamdiskId Platform        RootDeviceType  VirtualizationType      Hypervisor
IMAGE   ami-fb8e9292    amazon/amzn-ami-pv-2014.03.1.x86_64-ebs amazon  available       public          x86_64  machine aki-919dcaf8                      ebs     paravirtual     xen
BLOCKDEVICEMAPPING      /dev/sda1               snap-b047276d   8

Launching the instance:

$ ec2-run-instances ami-fb8e9292 -k marc-aws --instance-type m3.xlarge --availability-zone us-east-1d
RESERVATION     r-cde66bb3      462281317311    default
INSTANCE        i-b5f5a2e6      ami-fb8e9292                    pending marc-aws        0               m3.xlarge       2014-06-16T20:23:48+0000  us-east-1d      aki-919dcaf8                    monitoring-disabled                              ebs                                      paravirtual     xen             sg-5fc61437     default

The instance is up and running within a few minutes:

$ ec2-describe-instances i-b5f5a2e6 -H
Type    ReservationID   Owner   Groups  Platform
RESERVATION     r-cde66bb3      462281317311    default
INSTANCE        i-b5f5a2e6      ami-fb8e9292    ec2-54-242-182-88.compute-1.amazonaws.com       ip-10-145-209-67.ec2.internal     running marc-aws        0               m3.xlarge       2014-06-16T20:23:48+0000        us-east-1d      aki-919dcaf8                      monitoring-disabled     54.242.182.88   10.145.209.67                   ebs                      paravirtual      xen             sg-5fc61437     default
BLOCKDEVICE     /dev/sda1       vol-1633ed53    2014-06-16T20:23:52.000Z        true

Logging in as ec2-user. First of all, let’s see what /proc/cpuinfo says:

[ec2-user@ip-10-7-160-199 ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2599.998
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 1
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2599.998
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 1
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2599.998
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 1
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
cpu MHz         : 2599.998
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 1

Looks like I got some of the slightly faster 2.6GHz CPUs. /proc/cpuinfo shows four processors, each with physical id 0 and core id 0. Or in other words, one single-core processor with 4 threads. We know that the E5-2670 v2 processor is actually a 10-core processor, so the information we see at the OS level is not quite corresponding.

Nevertheless, we’ll proceed with a few simple tests. I’m going to run “gzip”, an integer-compute-intensive compression test, on 2.2GB of zeroes from /dev/zero. By using synthetic input and discarding output, we can avoid the effects of disk I/O. I’m going to combine this test with taskset commands to impose processor affinity on the process.

A simple test

The simplest case: a single thread, on processor 0:

[ec2-user@ip-10-7-160-199 ~]$ taskset -pc 0 $$
pid 1531's current affinity list: 0-3
pid 1531's new affinity list: 0
[ec2-user@ip-10-7-160-199 ~]$ dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 17.8837 s, 121 MB/s

With the single processor, we can process 121 MB/sec. Let’s try running two gzips at once. Sharing a single processor, we should see half the throughput.

[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 35.8279 s, 60.6 MB/s
2170552320 bytes (2.2 GB) copied, 35.8666 s, 60.5 MB/s

Sharing those cores

Now, let’s make things more interesting: two threads, on adjacent processors. If they are truly dedicated CPU cores, we should get a full 121 MB/s each. If our processors are in fact hyperthreads, we’ll see throughput drop.

[ec2-user@ip-10-7-160-199 ~]$ taskset -pc 0,1 $$
pid 1531's current affinity list: 0
pid 1531's new affinity list: 0,1
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 27.1704 s, 79.9 MB/s
2170552320 bytes (2.2 GB) copied, 27.1687 s, 79.9 MB/s

We have our answer: throughput has dropped by a third, to 79.9 MB/sec, showing that processors 0 and 1 are threads sharing a single core. (But note that Hyperthreading is giving performance benefits here: 79.9 MB/s on a shared core is higher than the 60.5 MB/s we see when sharing a single hyperthread.)

Trying the exact same test, but this time, non-adjacent processors 0 and 2:

[ec2-user@ip-10-7-160-199 ~]$ taskset -pc 0,2 $$
pid 1531's current affinity list: 0,1
pid 1531's new affinity list: 0,2
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 17.8967 s, 121 MB/s
2170552320 bytes (2.2 GB) copied, 17.8982 s, 121 MB/s

All the way up to full-speed, showing dedicated cores.

What does this all mean? Let’s go back to Amazon’s vCPU definition:

Each vCPU is a hyperthreaded core

As our tests have shown, a vCPU is most definitely not a core. It’s half of a shared core, or one hyperthread.

A side effect: inconsistent performance

There’s another issue at play here too: the shared-core behavior is hidden from the operating system. Going back to /proc/cpuinfo:

[ec2-user@ip-10-7-160-199 ~]$ grep 'core id' /proc/cpuinfo
core id         : 0
core id         : 0
core id         : 0
core id         : 0

This means that the OS scheduler has no way of knowing which processors have shared cores, and cannot schedule tasks around it. Let’s go back to our two-thread test, but instead of restricting it to two specific processors, we’ll let it run on any of them.

[ec2-user@ip-10-7-160-199 ~]$ taskset -pc 0-3 $$
pid 1531's current affinity list: 0,2
pid 1531's new affinity list: 0-3
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 18.041 s, 120 MB/s
2170552320 bytes (2.2 GB) copied, 18.0451 s, 120 MB/s
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 21.2189 s, 102 MB/s
2170552320 bytes (2.2 GB) copied, 21.2215 s, 102 MB/s
[ec2-user@ip-10-7-160-199 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 26.2199 s, 82.8 MB/s
2170552320 bytes (2.2 GB) copied, 26.22 s, 82.8 MB/s

We see throughput varying between 82 MB/sec and 120 MB/sec, for the exact same workload. To get some more performance information, we’ll configure top to take ten 3-second samples with per-processor usage information:

[ec2-user@ip-10-7-160-199 ~]$ cat > ~/.toprc <<-EOF
RCfile for "top with windows"           # shameless braggin'
Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=3.000, Curwin=0
Def     fieldscur=AEHIOQTWKNMbcdfgjplrsuvyzX
        winflags=25913, sortindx=10, maxtasks=2
        summclr=1, msgsclr=1, headclr=3, taskclr=1
Job     fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
        winflags=62777, sortindx=0, maxtasks=0
        summclr=6, msgsclr=6, headclr=7, taskclr=6
Mem     fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
        winflags=62777, sortindx=13, maxtasks=0
        summclr=5, msgsclr=5, headclr=4, taskclr=5
Usr     fieldscur=ABDECGfhijlopqrstuvyzMKNWX
        winflags=62777, sortindx=4, maxtasks=0
        summclr=3, msgsclr=3, headclr=2, taskclr=3
EOF
[ec2-user@ip-10-7-160-199 ~]$ top -b -n10 -U ec2-user
top - 21:07:50 up 43 min,  2 users,  load average: 0.55, 0.45, 0.36
Tasks:  86 total,   4 running,  82 sleeping,   0 stopped,   0 zombie
Cpu0  : 96.7%us,  3.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  1.4%sy,  0.0%ni, 97.9%id,  0.0%wa,  0.3%hi,  0.0%si,  0.3%st
Cpu2  : 96.0%us,  4.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.0%us,  1.0%sy,  0.0%ni, 97.9%id,  0.0%wa,  0.7%hi,  0.0%si,  0.3%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1766 ec2-user  20   0  4444  608  400 R 99.7  0.0   0:06.08 gzip
 1768 ec2-user  20   0  4444  608  400 R 99.7  0.0   0:06.08 gzip

Here two non-adjacent CPUs are in use. But 3 seconds later, the processes are running on adjacent CPUs:

top - 21:07:53 up 43 min,  2 users,  load average: 0.55, 0.45, 0.36
Tasks:  86 total,   4 running,  82 sleeping,   0 stopped,   0 zombie
Cpu0  : 96.3%us,  3.7%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  : 96.0%us,  3.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.3%hi,  0.0%si,  0.0%st
Cpu2  :  0.0%us,  0.0%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.3%hi,  0.0%si,  0.3%st
Cpu3  :  0.3%us,  0.0%sy,  0.0%ni, 99.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1766 ec2-user  20   0  4444  608  400 R 99.7  0.0   0:09.08 gzip
 1768 ec2-user  20   0  4444  608  400 R 99.7  0.0   0:09.08 gzip

Although usage percentages are similar, we’ve seen earlier that throughput drops by a third when cores are shared, and we see varied throughput as the processes are context-switched between processors.

This type of situation arises where compute-intensive workloads are running, and when there are fewer processes than total CPU threads. And if only AWS would report correct core IDs to the system, this problem wouldn’t happen: the OS scheduler would make sure processes did not share cores unless necessary.

Here’s a chart summarizing the results:

[Chart: aws-cpu test results summary]

Summing up

Over the course of the testing I’ve learned two things:

  • A vCPU in an AWS environment actually represents only half a physical core. So if you’re looking for equivalent compute capacity to, say, an 8-core server, you would need a so-called 4xlarge EC2 instance with 16 vCPUs. So take it into account in your costing models!
  • The mislabeling of the CPU threads as separate single-core processors can result in performance variability as processes are switched between threads. This is something the AWS and/or Xen teams should be able to fix in the kernel.

Readers: what has been your experience with CPU performance in AWS? If any of you has access to a physical machine running E5-2670 processors, it would be interesting to see how the simple gzip test runs.

Categories: DBA Blogs

External table enhancements in 11g

Adrian Billington - Tue, 2014-06-24 13:55
Encryption, compression and preprocessing for external tables in Oracle 11g. September 2009 (updated June 2014)
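
To give a flavour of the preprocessing enhancement (a minimal sketch, not taken from the article; the DATA_DIR and EXEC_DIR directory objects, the emp.csv.gz file and the zcat executable placed in EXEC_DIR are all assumptions for illustration), the 11g PREPROCESSOR clause lets an external table decompress its source file on the fly:

-- External table that runs zcat over the gzipped source file before loading it.
CREATE TABLE emp_ext (
  empno NUMBER,
  ename VARCHAR2(20)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'zcat'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv.gz')
)
REJECT LIMIT UNLIMITED;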

Hacking Oracle 12c COMMON Users

Pete Finnigan - Tue, 2014-06-24 13:20

The main new feature of Oracle 12cR1 has to be the multitenant architecture that allows tenant databases to be added or plugged into a container database. I am interested in the security of this of course and one element that....[Read More]
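
For context (a minimal sketch, not from Pete's post; the user name and password are purely illustrative), common users in 12c are created in the root container with the C## prefix and can be granted privileges across all containers:

-- Run while connected to the root container (CDB$ROOT) as a privileged user.
CREATE USER c##demo_common IDENTIFIED BY demo_password CONTAINER=ALL;
GRANT CREATE SESSION TO c##demo_common CONTAINER=ALL;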

Posted by Pete On 23/07/13 At 02:52 PM

Categories: Security Blogs

OBIEE Training Site

Abhinav Agarwal - Tue, 2014-06-24 12:33
I was contacted by Seth Williams, who pointed me to this OBIEE training site - http://www.fireboxtraining.com/obiee-training, and asked if I would link to it. There is an online tutorial, as well as a video, on how to create KPIs using OBIEE - How To Use KPIs | OBIEE Online Training Tutorial
I think this is useful, so am posting it to my blog - which, by the way, you would have seen is not being updated regularly. Feel free to browse to the site. Do let Seth and the people at Firebox know what you think of the site and the tutorial.
Disclaimer: I am not endorsing the site or the trainings. But you know that.

OTN DBA/DEV Watercooler - NEW Blog!

OTN TechBlog - Tue, 2014-06-24 11:35

Laura Ramsey, OTN Database Community Manager, has just launched the OTN DBA/DEV Watercooler.  This blog is your official source of news covering Oracle Database technology topics and community activities from throughout the OTN Database and Developer Community. Find tips and in-depth technology information you need to master Oracle Database Administration or Application Development here. This Blog is compiled by @oracledbdev, the Oracle Database Community Manager for OTN, and features insights, tech tips and news from throughout the OTN Database Community.

Find out more about what you might hear around the OTN DBA/DEV Watercooler in Laura's inaugural post. 

Happy Reading!

Ongoing Database Security Services Provide Greater Visibility: Database Activity Monitoring Series pt. 3 [VIDEO]

Chris Foot - Tue, 2014-06-24 08:38

Hi and welcome back to the RDX blog, where we’re deep in a series about our Database Activity Monitoring services and how these services allow our customers to gain full visibility into their database activity.

We’ve previously touched on how we integrated the advanced features of McAfee’s security products to provide our customers with a 24×7 customizable Database Activity Monitoring solution that alerts customers to threats in real time.

In addition to all of that, we also provide ongoing services, such as new threat analyses, vulnerability scans, database and OS patching services and database activity monitoring reports.

Vulnerability assessments help us give you detailed information you can put into action immediately, helping you prioritize and remediate security gaps, and we schedule them on an ongoing basis to prevent future vulnerabilities. You will be notified about any unprivileged users or programs, and they will be quarantined in real time, preventing any further access into the database.

These assessments make demonstrating compliance to auditors much easier, and we’ll touch on this in our next video, the last part of our Database Activity Monitoring series. Thanks for watching, and stay tuned!

The post Ongoing Database Security Services Provide Greater Visibility: Database Activity Monitoring Series pt. 3 [VIDEO] appeared first on Remote DBA Experts.

SQL vs. NoSQL: Which is best?

Chris Foot - Tue, 2014-06-24 01:33

The manner in which information is accessed – as well as how fast it's procured – depends on the day-to-day needs of organizations. Database administration services often help businesses decide whether Not Only Structured Query Language (NoSQL) or conventional Structured Query Language is needed to optimize data-related operations. 

SQL 
SQL servers, also known as relational database management systems (RDBMS), have been around for the longest time, with companies such as Oracle and Microsoft developing the structures. The Geek Stuff acknowledged a few key components of the technology: 

  • RDBMS are table-based structures, representing data in columns and rows
  • They possess an underlying pattern or protocol to access and read the information
  • Scaled vertically: SQL databases handle increased load by adding hardware power to a single server
  • Good for intricate, extensive queries
  • Vendors typically offer more support for RDBMS, as it is a popular, familiar solution. 

NoSQL 
Relatively new to the sector, NoSQL runs off of unstructured query language. MongoDB, the most popular provider of NoSQL databases, explained that they were developed to better handle large sets of different data types. Primary functions of the technology are dictated below:

  • Can consist of four primary types: document, graph stores, key-value (in which every item in the database is stored with a name and its value), or wide column
  • Do not subscribe to schemas or preset rules
  • Scaled by combining the computational power of other machines to reduce load stress – also known as "scaling out" 
  • Outside experts are hard to come by, but database support services can provide users with efficient knowledge. 

As they stand in the market 
Visual Studio Magazine referenced a survey of 500 North American software developers by Database-as-a-Service (DBaaS) company Tesora, which discovered that 79 percent of respondents were using SQL database language. The study itself focused on how the two technologies were utilized by those working with private or public cloud environments. 

"Going forward, this gap can be expected to close since NoSQL databases have only been on the market for a few years or less, as opposed decades for some of the incumbents," acknowledged the report, as quoted by VSM. 

One better than the other? 
For those handling a mix of unstructured, structured and semi-structured data, NoSQL is most likely the way to go. Those managing number-based information should see major benefits from using SQL. 

However, it's important to remember that the processing power of tangible servers is increasing at a slower rate than it was ten years ago. Because NoSQL optimizes the use of these machines by pooling computing power, it may be the better choice for those worried about the future. 

The post SQL vs. NoSQL: Which is best? appeared first on Remote DBA Experts.

OOW: Session Catalog

Jean-Philippe Pinte - Mon, 2014-06-23 18:30
The OOW2014 session catalog is online.

Coursera shifts focus from ‘impact on learners’ to ‘reach of universities’

Michael Feldstein - Mon, 2014-06-23 17:15

Richard Levin, the new CEO of Coursera, is getting quite clear about the new goals for the company. At first glance the changes might seem semantic in nature, but I believe the semantics are revealing. Consider this interview published today in the Washington Post [emphasis added in both cases below]:

Richard C. Levin, the new chief executive of Coursera, the most widely used MOOC platform, wants to steer the conversation back to what grabbed public attention in the first place: the wow factor.

Sure, Levin said, the emerging technology will help professors stimulate students on campus who are tired of old-school lectures. The talk of “flipped classrooms” and “blended learning” — weaving MOOCs into classroom experiences — is not mere hype.

“But that is not the big picture,” Levin said in a visit last week to The Washington Post. “The big picture is this magnifies the reach of universities by two or three orders of magnitude.”

Contrast this interview with Daphne Koller’s December article at EdSurge:

Among our priorities in the coming year, we hope to shift the conversation around these two dimensions of the learning experience, redefine what it means to be successful, and lay the groundwork for products, offerings, and features that can help students navigate this new medium of learning to meet their own goals, whether that means completing dozens of courses or simply checking out a new subject. [snip]

Still, we are deeply committed to expanding our impact on populations that have been traditionally underserved by higher education, and are actively working to broaden access for students in less-developed countries through a range of initiatives

There are valid criticisms of how well Coursera has delivered on its goal of helping students meet their own learning goals, but now it is apparent that the focus of their efforts is shifting away from the learner and towards the institution. Below are a few notes based on these recent interviews.

Changing Direction From Founders’ Vision

This is the second interview where Levin contradicts the two Coursera founders. In the quote above, Levin presents the point of Coursera as not primarily impact on learners but the reach of great universities. In a New York Times interview from April he made similar points in contrast to Andrew Ng.

In a recent interview, Mr. Levin predicted that the company would be “financially viable” within five years. He began by disagreeing with Andrew Ng, Coursera’s co-founder, who described Coursera as “a technology company.”

Q. Why is the former president of Yale going to a technology company?

A. We may differ in our views. The technology is obviously incredibly important, but what really makes this interesting for me is this capacity to expand the mission of our great universities, both in the United States and abroad, to reach audiences that don’t have access to higher education otherwise.

Levin is signifying a change at Coursera, and he is not just a new CEO to manage the same business. Andrew Ng no longer has an operational role in the company, but he remains as Chairman of the Board (I’m not claiming a correlation here, but just noting the change in roles).

Reach Is Not Impact

@PhilOnEdTech Is "reach" the same as "impact"?

— Russell Poulin (@RussPoulin) June 23, 2014

The answer in my opinion is only ‘yes’ if the object of the phrase is the universities. Impact on learners is not the end goal. In Levin’s world there is a class of universities that are already “great”, and the end goal is to help these universities reach more people. This is about A) having more people understand the value of each university (branding, eyeballs) and B) getting those universities to help more people. I’m sure that B) is altruistic in nature, but Levin does not seem to focus on what that help actually comprises. Instead we get abstract concepts as we see in the Washington Post:

“That’s why I decided to do it,” Levin said. “Make the great universities have an even bigger impact on the world.”

Levin seems enamored of the scale of Coursera (8.2 million registered students, etc), but I can find no concrete statements in his recent interviews that focus on actual learning results or improvements to the learning process (correct me in the comments if I have missed some key interview). This view is very different from the vision Koller was offering in December. In her vision, Koller attempts to improve impact on learners (the end) by using instruction from great universities (the means).

Other People’s Money

Given this view of expanding the reach of great universities, the candor about a lack of revenue model is interesting.

“Nobody’s breathing down our necks to start to turn a profit,” he said. Eventually that will change.

Levin said, however, that “a couple” universities are covering their costs through shared revenue. He declined to identify them.

This lack of priority on generating a viable revenue model is consistent with the pre-Levin era, but what if you take it to its logical end with the new focus of the company? What we now have is a consistent story with AllLearn and Open Yale Courses – spending other people’s money to expand the reach of great universities. Have we now reached the point where universities that often have billion-dollar endowments are using venture capital money to fund part of their branding activities? There’s a certain irony in that situation.

It is possible that Levin’s focus will indirectly improve the learning potential of Coursera’s products and services, but it is worth noting a significant change in focus from the largest MOOC provider.

The post Coursera shifts focus from ‘impact on learners’ to ‘reach of universities’ appeared first on e-Literate.

Extended Support for PeopleSoft Interaction Hub Ending--Time to Upgrade Soon!

PeopleSoft Technology Blog - Mon, 2014-06-23 16:58

Extended support for the PeopleSoft Interaction Hub will be ending in October 2014.  Sustaining support will still be available, but if you are an Interaction Hub (aka Portal) customer, you should consider upgrading to the latest release before that time.  The 9.1/Revision 3 release will be available soon after PeopleTools 8.54 is released, so customers considering an upgrade may wish to move to Revision 3 when it becomes available.

See this document for general information on Oracle/PeopleSoft's Lifetime support program.

Parallel Execution Skew - Addressing Skew Using Manual Rewrites

Randolf Geist - Mon, 2014-06-23 13:33
This is just a short note that the next part of the mini series about Parallel Execution skew has been published at AllThingsOracle.com.

After having shown in the previous instalment of the series that Oracle 12c added a new feature that can deal with Parallel Execution skew (at present in a limited number of scenarios), I now demonstrate in that part how the problem can be addressed using manual query rewrites, in particular the probably not so commonly known technique of redistributing popular values using an additional re-mapping table.
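
The general shape of such a rewrite (a hypothetical, simplified sketch with invented table names, not the example from the article) is to fan a popular join value out over N artificial buckets via a small re-mapping table, so that the (key, bucket) combination distributes across many PX servers instead of one:

-- Assume FACT.join_id skews heavily towards the value 1 when joined to DIM.
-- Fan that popular value out over 16 buckets.
CREATE TABLE remap AS
SELECT 1 AS popular_id, LEVEL - 1 AS bucket
FROM   dual
CONNECT BY LEVEL <= 16;

-- Duplicate the popular dimension row once per bucket; everything else keeps
-- bucket 0. Fact rows pick a pseudo-random bucket for the popular value only.
WITH dim_expanded AS (
  SELECT d.*, NVL(r.bucket, 0) AS bucket
  FROM   dim d
  LEFT JOIN remap r ON r.popular_id = d.id
)
SELECT /*+ PARALLEL(8) */ COUNT(*)
FROM   fact f
JOIN   dim_expanded d
ON     d.id     = f.join_id
AND    d.bucket = CASE WHEN f.join_id = 1
                       THEN MOD(ORA_HASH(f.ROWID), 16)
                       ELSE 0
                  END;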

PeopleSoft Interaction Hub Release Value Proposition for Revision 3

PeopleSoft Technology Blog - Mon, 2014-06-23 12:40

We've just published the Release Value Proposition for the PeopleSoft Interaction Hub 9.1/Revision 3.  The RVP provides an overview of the new features and enhancements planned for the upcoming release, which is aligned with PeopleTools 8.54. The release value proposition is intended to help you assess the business benefits of upgrading to the latest release and to plan your IT projects and investments.

The highlights of the RVP cover the following subjects in the upcoming release:

  • Branding
  • PeopleSoft Fluid User Interface updates
  • Interaction hub cluster setup improvements
  • Simplified content creation and publication
  • WCAG 2.0 adoption

Look for the availability of this release in the near future.