Feed aggregator

Latest Oracle Service Cloud Product Release Powers Impactful Community Self-Service Experiences

Linda Fishman Hoyle - Tue, 2015-08-25 16:20

A Guest Post by David Vap, Group Vice President, Product Development, Oracle

Today more than one in three customers prefers to contact brands through social channels rather than by phone or email (Nielsen), and the distinction between social and traditional channels is eroding. To deliver the best possible customer experience across traditional and digital channels, customer care organizations need to provide a positive and unified experience where and when customers want, whether they are on Twitter, Facebook, peer-to-peer communities, or other social networks.

Following Oracle’s recent Twitter-enriched social customer service announcement, the latest release of Oracle Service Cloud and Oracle Social Cloud continues to power positive and differentiated customer experiences. The new functionality includes:

New Community Self-Service solution to help streamline the customer journey

  • New approach to web self-service brings community functionality directly into core Service Cloud multi-channel web experience
  • Service Cloud now enables organizations to deliver a seamless experience between web self-service and community interactions, leveraging the power of customer knowledge to improve service operations
  • A customer no longer needs to separately navigate self-service and community sites to find an answer, instead discovering and interacting with both formal knowledge (knowledge base) and informal knowledge (community answers) in a single experience

Enhanced social service and incident routing

  • New workflow capabilities between Social Cloud and Service Cloud enable businesses to leverage the power of social insights and engagements
  • Business users can now attach contextual attributes and notes from posts or incidents identified by Social Cloud directly to Service Cloud to improve service quality and efficiency by providing more customer information and context

Extended social listening and analytics capabilities to private data sources

  • Enhanced connectivity between Social Cloud and Service Cloud has also extended social listening and analytics to enterprise private-data sources, such as the new community self-service capability, survey data, and chat and call logs.
  • Organizations can now listen and analyze unstructured data and gain insights with terms, themes, sentiment, and customer metrics, and view private and public data side by side in the Oracle SRM.

According to Gartner, investment in peer-to-peer communities drives support costs down and boosts profits. In fact, in a December 2014 Gartner research note entitled “Nine CRM Projects to Do Right Now for Customer Service,” Michael Maoz, Vice President, Distinguished Analyst, Gartner, writes, “Gartner clients who are successful in this space are still seeing on average of 20% reduction in the creation of support tickets following the introduction of peer-to-peer communities.” Maoz goes on to say, “Clients are seeing other business benefits as well. By enabling community-based support, clients have been able to recognize new sales opportunities and increase existing customer satisfaction, resulting in increased revenue in several of these cases.”

For more information about this leading social customer service product, read the news release and check out the VentureBeat profile!

Truncate – 2

Jonathan Lewis - Tue, 2015-08-25 11:25

Following on from my earlier comments about how a truncate works in Oracle, the second oldest question about truncate (and other DDL) appeared on the OTN database forum: “Why isn’t a commit required for DDL?”

Sometimes the answer to “Why” is simply “that’s just the way it is” – and that’s what it is in this case, I think.  There may have been some historic reason why Oracle Corp. implemented DDL the way they did (commit any existing transaction the session is running, then auto-commit when complete), but once the code has been around for a few years – and accumulated lots of variations – it can be very difficult to change a historic decision, no matter how silly it may now seem.
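For anyone who hasn’t seen the auto-commit effect in action, here’s a quick sketch (the table names are mine, invented for the demo):

create table t_demo (n1 number);

insert into t_demo values (1);          -- not yet committed
create table t_demo2 (n1 number);       -- DDL: commits the insert above
rollback;                               -- too late, there is nothing to roll back

select * from t_demo;                   -- the row is still there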

This posting isn’t about answering the question “why”, though; it’s about a little script I wrote in 2003 in response to a complaint from someone who wanted to truncate a table in the middle of a transaction without committing the transaction. Don’t ask why – you really shouldn’t be executing DDL as part of a transactional process (though tasks like dropping and recreating indexes as part of a batch process is a reasonable strategy).

So if DDL always commits the current transaction how do you truncate a table without committing? Easy – use an autonomous transaction. First a couple of tables with a little data, then a little procedure to do my truncate:


create table t1 (n1 number);
insert into t1 values(1);

create table t2 (n1 number);
insert into t2 values(1);

create or replace procedure truncate_t1
as
        pragma autonomous_transaction;
begin
        execute immediate 'truncate table t1';
end;
/

Then the code to demonstrate the effect:


prompt  ======================================
prompt  In this example we end up with no rows
prompt  in t1 and only the original row in t2,
prompt  the truncate didn't commit the insert.
prompt  ======================================

insert into t2 values(2);

execute truncate_t1;
rollback;

select * from t1;
select * from t2;


According to my notes, the last time I ran this code was on 9.2.0.3 but I’ve just tested it on 12.1.0.2 and it behaves in exactly the same way.

I’ve apparently only tested the approach with “truncate” and “create table”, and I haven’t made any attempt to see if it’s possible to cause major disruption with cunningly timed concurrent activity; but if you want to experiment you have a mechanism which Oracle could have used to avoid committing the current transaction – and you may be able to find out why it doesn’t, and why DDL is best “auto-committed”.
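For completeness, here’s a sketch of what the “create table” variant might look like – my own illustration of the same pattern, not code from the original tests:

create or replace procedure create_t3
as
        pragma autonomous_transaction;
begin
        -- the DDL commits only this autonomous transaction;
        -- the caller's transaction stays open and can still roll back
        execute immediate 'create table t3 (n1 number)';
end;
/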


Autonomous transaction to the rescue

Patrick Barel - Tue, 2015-08-25 10:10

Today, at my current project, I came across an issue where autonomous transactions came in handy.

The situation: I need to create a query to perform an export. A couple of the fields to be selected come from a global temporary table – nothing fancy so far, except that this global temporary table is filled by a (rather complex) procedure. Another complication is that the table is emptied for every row, i.e. it contains only one row at a time. ‘Just build a wrapper table function for this procedure and have that function call the procedure’ was my first idea.

I created a script that shows the situation:

CREATE GLOBAL TEMPORARY TABLE empdate
(
  empno NUMBER(4)
, hiredate DATE
)
ON COMMIT DELETE ROWS
/
CREATE OR REPLACE PROCEDURE getthehiredate(empno_in IN NUMBER) IS
BEGIN
  DELETE FROM empdate;
  INSERT INTO empdate
    (empno
    ,hiredate)
    (SELECT empno
           ,hiredate
       FROM emp
      WHERE empno = empno_in);
END getthehiredate;
/

Then I set out to build a pipelined table function that accepts a cursor as one of its parameters. This function loops over all the values in the cursor, calls the procedure, reads the data from the global temporary table, and pipes out the resulting record – nothing really fancy so far.

CREATE TYPE empdate_t AS OBJECT
(
  empno    NUMBER(4),
  hiredate DATE
)
/
CREATE TYPE empdate_tab IS TABLE OF empdate_t
/
CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  RETURN;
END getallhiredates;
/

But when I ran a query against this function:

SELECT *
  FROM TABLE(getallhiredates(CURSOR (SELECT empno FROM emp)))
/

I ran into an error:

ORA-14551: cannot perform a DML operation inside a query 

So, all the work I had done so far was for nothing? Time wasted? I don’t think so. If there is anything I have learned over the years, it is that Oracle tries to stop you from doing certain things, but at the same time supplies you with the tools to create a work-around.

An autonomous transaction might help me in this case, so I changed the code for the function a bit:


CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  COMMIT;
  RETURN;
END getallhiredates;
/

But when I ran the query:

SELECT *
  FROM TABLE(getallhiredates(CURSOR (SELECT empno FROM emp)))
/

I ran into a different error:

ORA-06519: active autonomous transaction detected and rolled back

So this doesn’t work – or does it? Pipelined table functions ‘exit’ the function multiple times: every time a row is piped out. So, I tried to put the COMMIT just before the PIPE ROW command:


CREATE OR REPLACE FUNCTION getallhiredates(empnos_in IN SYS_REFCURSOR) RETURN empdate_tab
  PIPELINED IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_empno       NUMBER(4);
  l_returnvalue empdate_t;
BEGIN
  FETCH empnos_in
    INTO l_empno;
  WHILE empnos_in%FOUND LOOP
    getthehiredate(empno_in => l_empno);
    SELECT empdate_t(ed.empno, ed.hiredate)
      INTO l_returnvalue
      FROM empdate ed
     WHERE 1 = 1
       AND ed.empno = l_empno;
    COMMIT;
    PIPE ROW(l_returnvalue);
    FETCH empnos_in
      INTO l_empno;
  END LOOP;
  RETURN;
END getallhiredates;
/

And when I ran my statement again:

SELECT *
  FROM TABLE(getallhiredates(CURSOR (SELECT empno FROM emp)))
/

It worked as I had hoped.

As you can see, I have tried to mimic the situation using the EMP and DEPT tables. I think this is a nice little trick, but it should be used with caution. Oracle prevents you from running DML inside a query for good reason, but in this case I can bypass the restriction.




Fedora 22/23 and Oracle 11gR2/12cR1

Tim Hall - Tue, 2015-08-25 08:53

As always, installations of Oracle server products on Fedora are not a great idea, as explained here.

I was reading some stuff about the Fedora 23 Alpha and realised Fedora 22 had passed me by. Not sure how I missed that. :)

Anyway, I did a run through of the usual play stuff.

While I was at it, I thought I would get the heads-up on Fedora 23 Alpha.

The F23 stuff will have to be revised once the final version is out, but I’m less likely to forget now. :)

I guess the only change in F22 upward that really affects me is the deprecation of YUM in F22 in favour of the DNF fork. For the most part, you just switch the command.

#This:
yum install my-package -y
yum groupinstall my-package-group -y
yum update -y

#Becomes:
dnf install my-package -y
dnf groupinstall my-package-group -y
dnf group install my-package-group -y
dnf update -y

This did cause one really annoying problem in F23 though. The “MATE Desktop” group had a single documentation package that was causing a problem. Usually I would use the following.

yum groupinstall "MATE Desktop" -y --skip-broken

Unfortunately, DNF doesn’t support “--skip-broken”, so I was left to either manually install the pieces, or give up. I chose the latter and used LXDE instead. :) F23 is an Alpha, so you expect issues, but DNF has been in since F22 and there is still no “--skip-broken”, which I find myself using a lot. Pity.
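If you need an approximation of “--skip-broken”, one possible workaround is to install the group’s packages one at a time and carry on past any failures. This is just a rough sketch I haven’t verified, and the parsing of the “dnf group info” output may need adjusting for your DNF version:

for pkg in $(dnf -q group info "MATE Desktop" | sed -n 's/^   //p'); do
  dnf install -y "$pkg" || echo "skipped broken package: $pkg"
done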

Cheers

Tim…


Yet another CSV -> Table but with pipelined function

Kris Rice - Tue, 2015-08-25 08:11
Here's just one more variation on how to get a CSV into a table format. It could have been done before, but my google-fu couldn't find it anywhere. First, get some sample data using the /*csv*/ hint in SQL Developer. Then the results of putting it back into a table. The inline PL/SQL is just to convert the text into a CLOB. Now the details. The CSV parsing is completely borrowed (stolen) from …
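The full code doesn't survive in this feed excerpt, but the general shape of such a function is easy to sketch. Here's a minimal, hypothetical version (all names are mine) that just splits a CLOB into lines and pipes each one out – real CSV field parsing would sit on top of this:

create or replace type csv_line_tab as table of varchar2(4000);
/

create or replace function csv_to_lines(p_csv in clob)
  return csv_line_tab pipelined
as
  l_pos  pls_integer := 1;
  l_next pls_integer;
  l_len  pls_integer := dbms_lob.getlength(p_csv);
begin
  loop
    l_next := dbms_lob.instr(p_csv, chr(10), l_pos);
    exit when l_next is null or l_next = 0;
    pipe row (dbms_lob.substr(p_csv, l_next - l_pos, l_pos));
    l_pos := l_next + 1;
  end loop;
  -- pipe the final line when the CLOB doesn't end with a newline
  if l_pos <= l_len then
    pipe row (dbms_lob.substr(p_csv, l_len - l_pos + 1, l_pos));
  end if;
  return;
end;
/

-- usage: SELECT column_value FROM TABLE(csv_to_lines(my_clob));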

Oracle Midlands : Event #11

Tim Hall - Tue, 2015-08-25 07:36

Just a quick note to say Oracle Midlands Event #11 is nearly here.


Cheers

Tim…


Truncate

Jonathan Lewis - Tue, 2015-08-25 01:39

The old question about truncate and undo (“does a truncate generate undo or not”) appeared on the OTN database forum over the weekend, then devolved into “what really happens on a truncate”, and then carried on.

The quick answer to the traditional question is essentially this: the actual truncate activity typically generates very little undo (and redo) compared to a full delete of all the data because all it does is tidy up any space management blocks and update the data dictionary; the undo and redo generated is only about the metadata, not about the data itself.

Of course, a reasonable response to the quick answer is: “how do you prove that?” – so I suggested that all you had to do was “switch logfile, truncate a table, dump logfile”. Unfortunately I realised that I had never bothered to do this myself and, despite having far more useful things to do, I couldn’t resist wasting some of my evening doing it. Here’s the little script I wrote to help.


create table t2 (v1 varchar2(32));
insert into t2 values (rpad('A',32));
commit;

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id, 
        rpad('x',100)           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

create index t1_i1 on t1(id);
alter system flush buffer_cache;
execute dbms_lock.sleep(3)

alter system switch logfile;

insert into t2 values(rpad('X',32));

truncate table t1;

insert into t2 values(rpad('Y',32));
commit;

execute dump_log

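The script calls a helper procedure, dump_log, whose source isn’t reproduced here. A minimal sketch of what it might look like – my reconstruction, not necessarily the original, and note that it needs select privileges granted directly on the v$ views – is:

create or replace procedure dump_log
as
        m_log_name      varchar2(512);
begin
        -- find one member of the current online redo log group
        select  f.member
        into    m_log_name
        from    v$log           l,
                v$logfile       f
        where   l.status = 'CURRENT'
        and     f.group# = l.group#
        and     rownum   = 1
        ;

        execute immediate
                'alter system dump logfile ''' || m_log_name || '''';
end;
/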
Procedure dump_log simply dumps the current log file. The call to switch logfile keeps the dumped log file as small as possible; and I’ve flushed the buffer cache with a three second sleep to minimise the number of misleading “Block Written Record” entries that might otherwise appear in the log file after the truncate. There were all sorts of interesting little details in the resulting activity when I tested this on 12.1.0.2 – here’s one that’s easy to spot before you even look at the trace file:


SQL> select object_id, data_object_id, object_name from user_objects where object_name like 'T1%';

 OBJECT_ID DATA_OBJECT_ID OBJECT_NAME
---------- -------------- --------------------
    108705         108706 T1_I1
    108704         108707 T1

Notice how the data_object_id of the index is smaller than that of the table after the truncate ? Oracle truncates (and renumbers) the index before truncating the table.

The truncate activity was pretty much as I had assumed it would be – with one significant variation. The total number of change vectors reported was 272, in 183 redo records (your numbers may vary slightly if you try to reproduce the example), and here’s a summary of the redo OP codes that showed up in those change vectors in order of frequency:


Change operations
=================
  1 OP:10.25    Format root block
  1 OP:11.11    Insert multiple rows (table)
  1 OP:24.1     DDL
  1 OP:4.1      Block cleanout record
  2 OP:10.4     Delete leaf row
  2 OP:13.28    HWM on segment header block
  3 OP:10.2     Insert leaf row
  3 OP:17.28    standby metadata cache invalidation
  4 OP:11.19    Array update (index)
  4 OP:11.5     Update row (index)
 10 OP:13.24    Bitmap Block state change (Level 2)
 11 OP:23.1     Block written record
 12 OP:14.1     redo: clear extent control lock
 12 OP:22.5     File BitMap Block Redo
 14 OP:14.2     redo - lock extent (map)
 14 OP:14.4     redo - redo operation on extent map
 14 OP:5.4      Commit / Rollback
 15 OP:18.3     Reuse record (object or range)
 15 OP:22.16    File Property Map Block (FPM)
 22 OP:13.22    State on Level 1 bitmap block
 24 OP:22.2     File Space Header Redo
 29 OP:5.2      Get undo header
 58 OP:5.1      Update undo block

The line that surprised me was the 14 commit/rollback codes – a single truncate appears to have operated as 14 separate (recursive) transactions. I did start to walk through the trace file to work out the exact order of operation, but it’s really messy, and a tedious task, so I just did a quick scan to get the picture. I may have made a couple of mistakes in the following, but I think the steps were:

  • Start transaction
  • Lock the extent map for the index — no undo needed
  • Lock each bitmap (space management) block  — no undo needed
  • Reset each bitmap block — undo needed to preserve space management information
  • Reset highwater marks where relevant on bitmap and segment header block — undo needed
  • Clear segment header block — undo needed
  • Write all the updated space management blocks to disc (local write waits)
    • Log file records “Block Written Record”.
  • For each space management block in turn
    • Update space management blocks with new data object_id — undo needed
    • Write the updated block to disc (local write wait)
    • Log file records one “Block Written Record” for each block
  • Repeat all the above for the TABLE segment.
  • Start a recursive transaction
    • Insert a row into mon_mod$ — undo needed
    • recursive commit
  • Set DDL marker in redo log (possibly holding the text of the DDL statement, but it’s not visible in the dump)
  • Set object reuse markers in the redo log
  • update tab$  — needs undo, it’s just DML
  • update ind$ — needs undo, it’s just DML
  • update seg$  — needs undo, it’s just DML (twice – once for table once for index)
  • update obj$ — needs undo, it’s just DML (twice – ditto)
  • COMMIT — at last, with a change vector for a “Standby metadata cache invalidation” marker

The remaining 12 transactions look like things that could be delayed to tidy up things like space management blocks for the files and tablespaces and releasing “block locks”.

This first, long transaction is the thing that has to happen as an atomic event to truncate the table – and you can imagine that if the database crashed (or you crashed the session) in the middle of a very slow truncate then there seems to be enough information recorded in the undo to allow the database to roll forward an incomplete truncate, and then roll back to before the truncate.

It would be possible to test whether or not this would actually work – but I wouldn’t want to do it on a database that anyone else was using.


Ed Tech Evaluation Plan: More problems than I initially thought

Michael Feldstein - Mon, 2015-08-24 14:21

By Phil Hill

Late last week I described the new plan from the US Department of Education (ED) and their Office of Educational Technology (OET) to “call for better methods for evaluating educational apps”. Essentially the ED is seeking proposals for new ed tech evaluation methods so that they can share the results with schools – helping them evaluate specific applications. My argument [updated DOE to be ED]:

Ed tech apps by themselves do not “work” in terms of improving academic performance. What “works” are pedagogical innovations and/or student support structures that are often enabled by ed tech apps. Asking if an app works is looking at the question inside out. The real question should be “Do pedagogical innovations or student support structures work, under which conditions, and which technology or apps support these innovations?”. [snip]

I could see that for certain studies, you could use the ED template and accomplish the same goal inside out (define the conditions as specific pedagogical usage or student support structures), thus giving valuable information. What I fear is that the pervasive assumption embedded in the program setup, asking over and over “does this app work”, will prove fatal. You cannot put technology as the center of understanding academic performance.

Upon further thought, as well as prompting from the comments and private notes, this ED plan has even more problems than I initially thought.

Advocate or Objective Evaluator

There is a real problem with this plan coming out of the Office of Educational Technology due to their mission.

The mission of the Office of Educational Technology (OET) is to provide leadership for transforming education through the power of technology. OET develops national educational technology policy and establishes the vision for how technology can be used to support learning.

The OET strongly advocates for the use of ed tech applications, which I think is a primary cause of their inside-out, technology first view of the world. They are not an objective organization in terms of whether and when technology should be used, but rather an advocate assuming that technology should be used, but please make it effective. Consider these two statements, the first from the National Technology Plan and the second from the paper “Learning Technology Effectiveness” [emphasis added]:

  • The plan calls for applying the advanced technologies used in our daily personal and professional lives to our entire education system to improve student learning, accelerate and scale up the adoption of effective practices, and use data and information for continuous improvement.
  • While this fundamental right to technology access for learning is nonnegotiable, it is also just the first step to equitable learning opportunities.

I have no problem with these goals, per se, but it would be far more useful to not have advocates in charge of evaluations.

A Better View of Evaluation

Richard Hershman from the National Association of College Stores (NACS) shared with me an article that contained a fascinating section on just this subject.

Why Keep Asking the Same Questions When They Are Not the Right Questions?

There are no definitive answers to questions about the effectiveness of technology in boosting student learning, student readiness for workforce skills, teacher productivity, and cost effectiveness. True, some examples of technology have shown strong and consistent positive results. But even powerful programs might show no effects due to myriad methodological flaws. It would be most unfortunate to reject these because standardized tests showed no significant differences. Instead, measures should evaluate individual technologies against specific learning, collaboration, and communication goals.

The source of this excellent perspective on evaluating ed tech? An article called “Plugging In: Choosing and Using Educational Technology” from the North Central Regional Educational Laboratory and commissioned by the US Department of Education in 1995.

As Richard Parent commented in my recent post:

You’re exactly right to reframe this question. It’s distressing when the public demands to know “what works” as if there are a set of practices or tools that simply “are” good education. It’s downright depressing when those who should be in the know do so, too.

Update: This is not fully to the level of response, but Rolin Moe got Richard Culatta to respond to his tweet about the initial article.

Rolin Moe: Most important thing I have read all year – @philonedtech points out technocentric assumptions of US ED initiative

Richard Culatta (@rec54): it’s true. I believe research has to adapt to pace of tech or we will continue to make decisions about edu apps with no evidence


Reminder: Great free computer science and Python programming class starts Wednesday

Bobby Durrett's DBA Blog - Mon, 2015-08-24 13:01

I mentioned this class earlier in a blog post, but I wanted to remind people who read this blog that the class is starting again on Wednesday. Here is the URL for the class: link

The class is completely free and taught at a very high level of quality.

It teaches computer science concepts that apply in any programming language but also teaches Python programming.

It is valuable information in an increasingly computer-oriented world and economy, and the class is free, which is remarkable given its quality.

Here is the class name:

MITx: 6.00.1x Introduction to Computer Science and Programming Using Python

Bobby

Categories: DBA Blogs

Multi-model database managers

DBMS2 - Mon, 2015-08-24 02:07

I’d say:

  • Multi-model database management has been around for decades. Marketers who say otherwise are being ridiculous.
  • Thus, “multi-model”-centric marketing is the last refuge of the incompetent. Vendors who say “We have a great DBMS, and by the way it’s multi-model (now/too)” are being smart. Vendors who say “You need a multi-model DBMS, and that’s the reason you should buy from us” are being pathetic.
  • Multi-logical-model data management and multi-latency-assumption data management are greatly intertwined.

Before supporting my claims directly, let me note that this is one of those posts that grew out of a Twitter conversation. The first round went:

Merv Adrian: 2 kinds of multimodel from DBMS vendors: multi-model DBMSs and multimodel portfolios. The latter create more complexity, not less.

Me: “Owned by the same vendor” does not imply “well integrated”. Indeed, not a single example is coming to mind.

Merv: We are clearly in violent agreement on that one.

Around the same time I suggested that InterSystems Caché was the last significant object-oriented DBMS, only to get the pushback that they were “multi-model” as well. That led to some reasonable-sounding justification — although the buzzwords of course aren’t from me — namely:

Caché supports #SQL, #NoSQL. Interchange across tables, hierarchical, document storage.

Along the way, I was reminded that some of the marketing claims around “multi-model” are absurd. For example, at the time I am writing this, the Wikipedia article on “multi-model database” claims that “The first multi-model database was OrientDB, created in 2010…” In fact, however, by the definitions used in that article, multi-model DBMS date back to the 1980s, when relational functionality was grafted onto pre-relational systems such as TOTAL and IDMS.

What’s more, since the 1990s, multi-model functionality has been downright common, specifically in major products such as Oracle, DB2 and Informix, not to mention PostgreSQL. (But not so much Microsoft or Sybase.) Indeed, there was significant SQL standards work done around datatype extensions, especially in the contexts of SQL/MM and SQL3.

I tackled this all in 2013, when I argued:

Developments since then have been in line with my thoughts. For example, Spark added DataFrames, which promise substantial data model flexibility for Spark use cases, but more mature products have progressed in a more deliberate way.

What’s new in all this is a growing desire to re-integrate short-request and analytic processing — hence Gartner’s new-ish buzzword of HTAP (Hybrid Transactional/Analytic Processing). The more sensible reasons for this trend are:

  • Operational applications have always needed to accept immediate writes. (Losing data is bad.)
  • Operational applications have always needed to serve small query result sets based on the freshest data. (If you write something into a database, you might need to retrieve it immediately to finish the business operation.)
  • It is increasingly common for predictive decisions to be made at similar speeds. (That’s what recommenders and personalizers do.) Ideally, such decisions can be based on fresh and historical data alike.
  • The long-standing desire for business intelligence to operate on super-fresh data is, increasingly, making sense, as we get ever more stuff to monitor. However …
  • … most such analysis should look at historical data as well.
  • Streaming technology is supplying ever more fresh data.

But here’s the catch — the best models for writing data are the worst for reading it, and vice-versa, because you want to write data as a lightly-structured document or log, but read it from a Ted-Codd-approved RDBMS or MOLAP system. And if you don’t have the time to move data among multiple stores, then you want one store to do a decent job of imitating both kinds of architecture. The interesting new developments in multi-model data management will largely be focused on that need.

Related links

  • The two-policemen joke seems ever more relevant.
  • My April, 2015 post on indexing technology reminds us that one DBMS can do multiple things.
  • Back in 2009 integrating OLTP and data warehousing was clearly a bad idea.
Categories: Other

Issue with Perl in $ORACLE_HOME during installs

DBASolved - Mon, 2015-08-24 01:14

I’ve been doing a few more Enterprise Manager installs lately. At the same time, I’ve been working on Data Integration items such as GoldenGate and ODI. What these products have in common is that they require an Oracle Database for a repository. Needless to say, I’ve been installing a lot of 12.1.0.2 databases in test and production environments. The one thing that has been consistent is the issue I keep seeing with the Perl that is packaged with the Grid Infrastructure and/or Database.

Tip: This may not be happening to everyone, and I may have a bad set of binaries. When I discussed this with a co-worker, though, the md5sums were the same for my set of binaries as they were for his, so I couldn’t say whether this issue was bad binaries or something else.

As I was doing installs of Grid Infrastructure or Database on Oracle Enterprise Linux 6.6, I would get the following issue when trying to run the root.sh scripts, either from the OUI or from the command line.

[root@rac1 grid]# ./root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER=oracle
    ORACLE_HOME=/u01/app/grid/12.1.0/grid

Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
/u01/app/grid/12.1.0/grid/crs/config/rootconfig.sh: line 131: 20862 Segmentation fault  (core dumped) $ROOTSCRIPT $ROOTSCRIPT_ARGS
The command '/u01/app/grid/12.1.0/grid/perl/bin/perl -I/u01/app/grid/12.1.0/grid/perl/lib -I/u01/app/grid/12.1.0/grid/crs/install /u01/app/grid/12.1.0/grid/crs/install/rootcrs.pl ' execution failed

You will notice that the execution failed with a “Segmentation fault”. Looking at the command, I noticed that it runs the perl from the $ORACLE_HOME/perl/bin directory. When I did a “which perl”, the perl that the operating system uses came from /usr/bin/perl, which is not the one the root.sh script uses. A “perl -v” from the command line also reported that the operating system’s perl is version 5.10.
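For reference, the operating-system checks looked something like this (a reconstructed sketch; the exact version string is from memory, so treat it as illustrative):

$ which perl
/usr/bin/perl
$ perl -v
This is perl, v5.10.1 ...    # the OS perl responds normally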

Now that it was established that the operating system’s perl was fine, I took a look at the perl in $ORACLE_HOME/perl/bin. When I navigated to the $ORACLE_HOME/perl/bin directory and executed “perl -v”, I was met with the “Segmentation fault” issue. Knowing that the problem was within the Oracle binaries, how could this be resolved?

To resolve this “Segmentation fault” issue, I had to download the Perl source and recompile the binaries that Oracle uses under the $ORACLE_HOME path.

$ cd ~/Downloads
$ wget http://www.cpan.org/src/5.0/perl-5.14.4.tar.gz
$ tar -xzf perl-5.14.4.tar.gz
$ cd perl-5.14.4
$ ./Configure -des -Dprefix=$GI_HOME/perl
$ make
$ make test
$ make install

With the binaries recompiled, I was now able to run a “perl -v” from the $ORACLE_HOME and get a successful result.

[oracle@rac1 ~]$ cd /u01/app/grid/12.1.0.2/grid
[oracle@rac1 grid]$ cd perl/bin
[oracle@rac1 bin]$ ./perl -v

This is perl 5, version 14, subversion 4 (v5.14.4) built for x86_64-linux

Copyright 1987-2013, Larry Wall

Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.

Complete documentation for Perl, including FAQ lists, should be found on
this system using "man perl" or "perldoc perl".  If you have access to the
Internet, point your browser at http://www.perl.org/, the Perl Home Page.

This process can be done while the OUI is still running, and the step that hung can then be retried. If you have closed out of the OUI, the root.sh scripts will now run successfully from the $ORACLE_HOME directories.

Enjoy!


Filed under: Database, General
Categories: DBA Blogs

Worrying about the 4th Bullet?

Bradley Brown - Sun, 2015-08-23 20:12
I love analogies that are very meaningful and get the point across.  I often hear businesses talking about different subjects that concern them.  You could say this is similar to the days of the old west where people carried their weapons on their belt and used them at will.  Back then, did people worry about the 4th bullet in the gun?  Of course not.  They worried about the 1st bullet.

So why is it that in business we're often worried about the 4th bullet that's going to kill us?  For example, let's say your business needs revenue, has expenses, is not making a profit.  Should you worry about your legal contracts?  Should you worry about employee retention?  Employee onboarding?  Other topics?

Of course those things are important, but...what's most important?  How do you get to profitability?  More revenue?  Fewer expenses?  More customers?  More from your existing customers?  The 1st bullet that will kill a business is the lack of revenue.

In real estate, the 3 most important things are location, location, and location.  In business, the 3 most important things are focus, focus, and focus (on the 1st bullet).

Focus on the 1st bullet every day, not the 4th bullet!

What Deborah Tannen taught me about conversation & interruption

Sean Hull - Sun, 2015-08-23 12:44
I was recently invited to attend a charity event in Washington DC. Dinner was a catered affair of 300 with a few senators & Muhammad Yunus there to talk about micro financing. After dinner we broke up into some smaller groups, and had great conversations into the night. It was interesting to me as I … Continue reading What Deborah Tannen taught me about conversation & interruption →

Monitoring RMAN Operations

Michael Dinh - Sat, 2015-08-22 23:41

Just a reference to source and my version of the script.

This is for restore since there are OUTPUTS.

Script to monitor RMAN Backup and Restore Operations (Doc ID 1487262.1)

$ sqlplus / as sysdba @mon_rman_restore.sql

SQL*Plus: Release 10.2.0.4.0 - Production on Sun Aug 23 01:14:31 2015

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


Session altered.


  SID SERIAL# USERNAME	 LOGON_TIME	 OSUSER     PROCESS	   SPID 	MACHINE        ST PROGRAM
----- ------- ---------- --------------- ---------- -------------- ------------ -------------- -- --------------------------------
 3290	   12 SYS	 22-08-15 20:36  oracle     31267	   31298	prod2      I  rman@prod2 (TNS V1-V3)
 3292	    9 SYS	 22-08-15 20:36  oracle     31267	   31297	prod2      I  rman@prod2 (TNS V1-V3)
 3289	   11 SYS	 22-08-15 20:36  oracle     31267	   31299	prod2      A  rman@prod2 (TNS V1-V3)
 3279	    1 SYS	 22-08-15 20:36  oracle     31267	   31301	prod2      A  rman@prod2 (TNS V1-V3)
 3285	   14 SYS	 22-08-15 20:36  oracle     31267	   31300	prod2      A  rman@prod2 (TNS V1-V3)
 3278	    1 SYS	 22-08-15 20:36  oracle     31267	   31302	prod2      A  rman@prod2 (TNS V1-V3)
 3277	    1 SYS	 22-08-15 20:36  oracle     31267	   31303	prod2      A  rman@prod2 (TNS V1-V3)
 3275	    1 SYS	 22-08-15 20:36  oracle     31267	   31305	prod2      A  rman@prod2 (TNS V1-V3)
 3276	    1 SYS	 22-08-15 20:36  oracle     31267	   31304	prod2      A  rman@prod2 (TNS V1-V3)
 3274	    1 SYS	 22-08-15 20:36  oracle     31267	   31306	prod2      A  rman@prod2 (TNS V1-V3)
 3273	    1 SYS	 22-08-15 20:36  oracle     31267	   31307	prod2      A  rman@prod2 (TNS V1-V3)
 3272	    1 SYS	 22-08-15 20:37  oracle     31267	   31308	prod2      A  rman@prod2 (TNS V1-V3)
 3270	    1 SYS	 22-08-15 20:37  oracle     31267	   31310	prod2      A  rman@prod2 (TNS V1-V3)
 3271	    1 SYS	 22-08-15 20:37  oracle     31267	   31309	prod2      A  rman@prod2 (TNS V1-V3)

14 rows selected.


  SID SERIAL# CHANNEL			 SEQ# EVENT			     STATE		SECS	  SOFAR  TOTALWORK % COMPLETE
----- ------- -------------------- ---------- ------------------------------ ------------ ---------- ---------- ---------- ----------
 3274	    1 rman channel=d08		54992 RMAN backup & recovery I/O     WAITING		   0	 342523    6815742	 5.03
 3275	    1 rman channel=d07		18384 RMAN backup & recovery I/O     WAITING		   0	 501503    7340030	 6.83
 3278	    1 rman channel=d04		48839 RMAN backup & recovery I/O     WAITING		   3	 502704    7340030	 6.85
 3272	    1 rman channel=d10		13502 RMAN backup & recovery I/O     WAITING		   3	 495473    6815742	 7.27
 3270	    1 rman channel=d12		39023 RMAN backup & recovery I/O     WAITING		   0	 535039    7340030	 7.29
 3271	    1 rman channel=d11		51018 RMAN backup & recovery I/O     WAITING		   0	 536703    7340030	 7.31
 3276	    1 rman channel=d06		  121 RMAN backup & recovery I/O     WAITING		   0	 503423    6815742	 7.39
 3277	    1 rman channel=d05		  276 RMAN backup & recovery I/O     WAITING		   3	 553855    7389182	  7.5
 3285	   14 rman channel=d02		56444 RMAN backup & recovery I/O     WAITING		   3	 611128    7340030	 8.33
 3289	   11 rman channel=d01		 2482 RMAN backup & recovery I/O     WAITING		   3	 846732    7340030	11.54
 3279	    1 rman channel=d03		 5065 RMAN backup & recovery I/O     WAITING		   3	 882685    7340030	12.03
 3273	    1 rman channel=d09		49115 RMAN backup & recovery I/O     WAITING		   3	1004287    7340030	13.68

12 rows selected.


  SID CHANNEL		   STATUS		OPEN_TIME	       SOFAR_MB   TOTAL_MB % COMPLETE TYPE
----- -------------------- -------------------- -------------------- ---------- ---------- ---------- ---------
FILENAME
----------------------------------------------------------------------------------------------------
 3270 rman channel=d12	   IN PROGRESS		23-AUG-2015 01:06:36	4180.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH1_9qqf6d01_49466_1.bus

 3275 rman channel=d07	   IN PROGRESS		23-AUG-2015 01:06:59	3918.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH3_a0qf6hcg_49472_1.bus

 3289 rman channel=d01	   IN PROGRESS		23-AUG-2015 01:02:00	6615.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH8_9pqf6crq_49465_1.bus

 3285 rman channel=d02	   IN PROGRESS		23-AUG-2015 01:05:46	4647.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH7_9rqf6d1e_49467_1.bus

 3279 rman channel=d03	   IN PROGRESS		23-AUG-2015 01:01:26	6895.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH6_9uqf6d3c_49470_1.bus

 3278 rman channel=d04	   IN PROGRESS		23-AUG-2015 01:07:02	3922.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH2_9sqf6d1t_49468_1.bus

 3277 rman channel=d05	   IN PROGRESS		23-AUG-2015 01:06:20	4327.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH5_9oqf6coh_49464_1.bus

 3276 rman channel=d06	   IN PROGRESS		23-AUG-2015 01:07:00	3933.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH8_a2qf6i9i_49474_1.bus

 3274 rman channel=d08	   IN PROGRESS		23-AUG-2015 01:09:24	2674.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH7_a3qf6ic7_49475_1.bus

 3273 rman channel=d09	   IN PROGRESS		23-AUG-2015 00:59:40	7846.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH3_9vqf6d3d_49471_1.bus

 3272 rman channel=d10	   IN PROGRESS		23-AUG-2015 01:07:07	3869.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH5_a4qf6idl_49476_1.bus

 3271 rman channel=d11	   IN PROGRESS		23-AUG-2015 01:06:35	4193.99 		      INPUT
/shares/dd/prod1/rman/PROD/rman_PROD_DB_level0_CH4_9tqf6d1v_49469_1.bus

 3273 rman channel=d09	   IN PROGRESS		23-AUG-2015 00:59:42	   3923      24576	15.96 OUTPUT
+DATA01/prod2/datafile/xxxdata01.305.888454781

 3279 rman channel=d03	   IN PROGRESS		23-AUG-2015 01:01:28	3447.88      24576	14.03 OUTPUT
+DATA01/prod2/datafile/xxxdata01.307.888454887

 3289 rman channel=d01	   IN PROGRESS		23-AUG-2015 01:02:02	   3308      24576	13.46 OUTPUT
+DATA01/prod2/datafile/xxxdata01.309.888454921

 3273 rman channel=d09	   IN PROGRESS		23-AUG-2015 00:59:41	3923.88   32767.98	11.97 OUTPUT
+DATA01/prod2/datafile/xxxidx01.304.888454781

 3279 rman channel=d03	   IN PROGRESS		23-AUG-2015 01:01:27	3448.88   32767.98	10.53 OUTPUT
+DATA01/prod2/datafile/xxxidx01.306.888454887

 3289 rman channel=d01	   IN PROGRESS		23-AUG-2015 01:02:01	   3308   32767.98	 10.1 OUTPUT
+DATA01/prod2/datafile/xxxidx01.308.888454921

 3285 rman channel=d02	   IN PROGRESS		23-AUG-2015 01:05:47	2387.38      24576	 9.71 OUTPUT
+DATA01/prod2/datafile/xxxdata01.311.888455147

 3276 rman channel=d06	   IN PROGRESS		23-AUG-2015 01:07:03	1966.88      20480	  9.6 OUTPUT
+DATA01/prod2/datafile/xxxdata01.449.867145931.tts

 3272 rman channel=d10	   IN PROGRESS		23-AUG-2015 01:07:08	1935.88      20480	 9.45 OUTPUT
+DATA01/prod2/datafile/xxxidx01.325.888455227

 3277 rman channel=d05	   IN PROGRESS		23-AUG-2015 01:06:22	2163.88      24960	 8.67 OUTPUT
+DATA01/prod2/datafile/xxxdata01.313.888455181

 3271 rman channel=d11	   IN PROGRESS		23-AUG-2015 01:06:36	2096.88      24576	 8.53 OUTPUT
+DATA01/prod2/datafile/xxxdata01.315.888455195

 3270 rman channel=d12	   IN PROGRESS		23-AUG-2015 01:06:38	   2090      24576	  8.5 OUTPUT
+DATA01/prod2/datafile/xxxidx01.317.888455197

 3278 rman channel=d04	   IN PROGRESS		23-AUG-2015 01:07:03	   1964      24576	 7.99 OUTPUT
+DATA01/prod2/datafile/xxxdata01.323.888455223

 3275 rman channel=d07	   IN PROGRESS		23-AUG-2015 01:07:01	1958.88      24576	 7.97 OUTPUT
+DATA01/prod2/datafile/xxxidx01.319.888455221

 3285 rman channel=d02	   IN PROGRESS		23-AUG-2015 01:05:47	   2388   32767.98	 7.29 OUTPUT
+DATA01/prod2/datafile/xxxdata01.310.888455147

 3277 rman channel=d05	   IN PROGRESS		23-AUG-2015 01:06:21	   2164   32767.98	  6.6 OUTPUT
+DATA01/prod2/datafile/xxxidx01.312.888455181

 3274 rman channel=d08	   IN PROGRESS		23-AUG-2015 01:09:25	1337.88      20480	 6.53 OUTPUT
+DATA01/prod2/datafile/xxxidx01.327.888455365

 3271 rman channel=d11	   IN PROGRESS		23-AUG-2015 01:06:35	   2097   32767.98	  6.4 OUTPUT
+DATA01/prod2/datafile/xxxdata01.314.888455195

 3270 rman channel=d12	   IN PROGRESS		23-AUG-2015 01:06:37	2090.88   32767.98	 6.38 OUTPUT
+DATA01/prod2/datafile/xxxidx01.316.888455197

 3276 rman channel=d06	   IN PROGRESS		23-AUG-2015 01:07:02	   1967   32767.98	    6 OUTPUT
+DATA01/prod2/datafile/xxxdata01.320.888455221

 3278 rman channel=d04	   IN PROGRESS		23-AUG-2015 01:07:03	1964.38   32767.98	 5.99 OUTPUT
+DATA01/prod2/datafile/xxxidx01.321.888455223

 3275 rman channel=d07	   IN PROGRESS		23-AUG-2015 01:07:00	   1960   32767.98	 5.98 OUTPUT
+DATA01/prod2/datafile/xxxidx01.318.888455219

 3272 rman channel=d10	   IN PROGRESS		23-AUG-2015 01:07:07	   1936   32767.98	 5.91 OUTPUT
+DATA01/prod2/datafile/xxxidx01.324.888455227

 3274 rman channel=d08	   IN PROGRESS		23-AUG-2015 01:09:25	1338.88   32767.98	 4.09 OUTPUT
+DATA01/prod2/datafile/xxxdata01.326.888455365


36 rows selected.

Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$ 
SET linesize 160 trimspool ON pages 1000 
ALTER session SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
col sid FOR 9999 
col serial# FOR 99999 
col spid FOR 9999 
col username FOR a10 
col osuser FOR a10 
col status FOR a2 
col program FOR a32 
col logon_time FOR a15 
col module FOR a30 
col action FOR a35 
col process FOR a14 
col machine FOR a14
SELECT s.sid,
  s.serial#,
  s.username,
  TO_CHAR(s.logon_time,'DD-MM-RR hh24:mi') logon_time,
  s.osuser,
  s.process,
  p.spid,
  s.machine,
  SUBSTR(s.status,1,1) status,
  s.program
FROM v$session s, v$process p
WHERE s.program LIKE '%rman%'
AND s.paddr = p.addr (+)
ORDER BY s.logon_time, s.sid
;
col event FOR a30 
col channel FOR a20 
col state FOR a12
SELECT o.sid,
  o.serial#,
  client_info channel,
  seq#,
  event,
  state,
  seconds_in_wait secs,
  sofar,
  totalwork,
  ROUND(sofar/totalwork*100,2) "%COMPLETE"
FROM v$session_longops o, v$session s
WHERE program LIKE '%rman%'
AND opname NOT LIKE '%aggregate%'
AND o.sid       =s.sid
AND totalwork  != 0
AND sofar      <> totalwork
AND wait_time   = 0
AND NOT action IS NULL
ORDER BY 10
;
col filename FOR a110 
col status FOR a20
SELECT a.sid,
  client_info channel,
  a.status,
  open_time,
  ROUND(BYTES      /1024/1024,2) SOFAR_MB,
  ROUND(total_bytes/1024/1024,2) TOTAL_MB,
  ROUND(BYTES      /TOTAL_BYTES*100,2) "%COMPLETE",
  a.type,
  filename
FROM v$backup_async_io a, v$session s
WHERE NOT a.STATUS IN ('UNKNOWN')
AND a.sid           =s.sid
AND a.status      <> 'FINISHED'
ORDER BY 8, 7 DESC
;
EXIT

Windows 10 Again

Tim Hall - Sat, 2015-08-22 11:55

I wrote a few months ago about having a play with Windows 10 (here).

I’m visiting family today, catching up on all the Windows desktop (and mobile phone) support that I missed while I was away.

I purposely postponed the Windows 10 update on the desktops before I went away, but now I’m back I did the first of them.

The update itself was fine, but it did take a long time. Nothing really to write home about.

I’ve installed the latest version of Classic Shell on the machine, so the experience is similar to what they had before: Windows 8.1 with Classic Shell, which felt like Windows 7. :)

I’ve also switched out their shortcuts from Edge (Spartan) to Internet Explorer 11. They already use a combination of IE, Firefox and Chrome, so I didn’t want to add another thing into the mix. Also, the nephews use the Java plugin for some web-based games, so it is easier to leave them with IE for the time being. Maybe I will introduce Edge later…

So all in all, the user experience is pretty much unchanged compared to what they had before. I guess I will see how many calls Captain Support gets over the coming weeks! :)

Cheers

Tim…


Bucharest's Oracle EPC Ambassadors Show 'n' Wow with Oracle Applications Cloud UX

Usable Apps - Sat, 2015-08-22 09:14

The Oracle EMEA Presales Center (EPC) team (@OracleEPC), based in Bucharest, Romania, has delivered an awesome Oracle Applications Cloud User Experience (UX) day.

UX Team in Readers' Cafe Bucharest

The team carries the message: Passion and enthusiasm for UX. In style.

The event was for local customers and partners to find out more about the Oracle Applications Cloud UX strategy, to see and hear how we innovate with UX, and to explore the Oracle Applications Cloud in a personal, hands-on way. I was honored to kick off the proceedings, being keen to gauge the local market reaction to the cloud and innovation, and to answer any questions.


Look mum, no UI! But there's still a UX! IoT and web services are part of our Cloud UX story.

An eager and curious audience in Bucharest's Metropolis Centre was treated to an immersive UX show about strategy, science, and storytelling: What's UX? What does UX mean for users and the business? Simplicity, Mobility, Extensibility, Glance, Scan, Commit, the Oracle Cloud as platform, wearables, IoT and web services, and PaaS4SaaS, it was all covered.

The Oracle EPC team members were the real enablers. Upstairs in the very funky Readers Café, these UX ambassadors brought the Oracle Applications Cloud UX message to life for customers in style, demoing “by walking around” and staffing stations for deeper discussions about the Oracle HCM Cloud, Oracle Sales Cloud, Oracle ERP Cloud, and PaaS4SaaS.

Oracle EPC team styling the Simplicity, Mobility, Extensibility UX message

The new wearables: Simplicity, Mobility, Extensibility.  

The Oracle EPC team let the UX do the talking by putting the Oracle Applications Cloud into the hands of customers, answering any questions as users enthusiastically swiped and tapped on Apple iPads to explore for themselves.

Oracle ERP Cloud demo in Readers Cafe Bucharest

Oracle Applications Cloud UX orchestration: Music to customer and partner ears.

Later, I was given a walking and video tour of the Oracle EPC operation in the fab Oracle building in Bucharest, co-ordinated by Oracle HCM Cloud and UX champ Vlad Babu (@vladbabu). I learned about the central work that EPC do so passionately across EMEA and APAC in providing content, context, and services to enable the Oracle sales effort: bid management, cloud and technology learning, making web solutions, demos and POC creation, video storytelling, rainmaking with insight, building mobile and PaaS4SaaS integration demos, and more.

I was blown away. To echo Oracle CEO Mark Hurd's (@markvhurd) words, "I didn’t know you did that. I didn’t know you had that."

I do now. And so do our customers.

Our Commitment to UX 

Be clear about what this event meant: it's a practical demonstration of Oracle's tremendous investment in user experience with great design, people, and technology, and a testament to global success through bringing it all together. It's a clear message about the UX team's commitment to putting boots on the ground in EMEA and other regions to listen, watch, and enable. That's why I'm here in EMEA.

Listening to the people who matter. And responding. That's UX.

UX is about listening to customers, partners, and users. It's about empathy. It's about being there.

The Bucharest event is just the beginning of great things to come and even greater things to happen for Oracle Applications Cloud customers and partners in EMEA and APAC. I'll be back. See you soon!

Be Prepared 

If you missed the event, check out our free Oracle Applications Cloud UX eBook, and find out how you can participate in the Oracle Cloud UX and future events in your area from the Usable Apps website. Keep up to date by following along on Twitter (@usableapps). 

Shout-outs

Thanks to Vlad Babu and Monica Costea for making it all happen, the co-ordination of the Oracle Applications UX team in the U.S., to Oracle EPC management for their support, and to Marcel Comendant for the images used on this page and on Twitter.

Presenting in Perth on 9 September and Adelaide on 11 September (Stage)

Richard Foote - Sat, 2015-08-22 05:54
For those of you lucky enough to live on the western half of Australia, I’ll be presenting at a couple of events in both Perth and Adelaide in the coming weeks. On Wednesday, 9th September 2015, I’ll be presenting on Oracle Database 12c New Features For DBAs (and Developers) at a “Let’s Talk Oracle” event […]
Categories: DBA Blogs

US Department of Education: Almost a good idea on ed tech evaluation

Michael Feldstein - Fri, 2015-08-21 16:53

By Phil Hill

Richard Culatta from the US Department of Education (DOE, ED, never sure of proper acronym) wrote a Medium post today describing a new ED initiative to evaluate ed tech app effectiveness.

As increasingly more apps and digital tools for education become available, families and teachers are rightly asking how they can know if an app actually lives up to the claims made by its creators. The field of educational technology changes rapidly with apps launched daily; app creators often claim that their technologies are effective when there is no high-quality evidence to support these claims. Every app sounds world-changing in its app store description, but how do we know if an app really makes a difference for teaching and learning?

He then describes the traditional one-shot studies of the past (control group, control variables, year or so of studies, get results) and notes:

This traditional approach is appropriate in many circumstances, but just does not work well in the rapidly changing world of educational technology for a variety of reasons.

The reasons?

  • Takes too long
  • Costs too much and can’t keep up
  • Not iterative
  • Different purpose

This last one is worth calling out in detail, as it underlies the assumptions behind this initiative.

Traditional research approaches are useful in demonstrating causal connections. Rapid cycle tech evaluations have a different purpose. Most school leaders, for example, don’t require absolute certainty that an app is the key factor for improving student achievement. Instead, they want to know if an app is likely to work with their students and teachers. If a tool’s use is limited to an after-school program, for example, the evaluation could be adjusted to meet this more targeted need in these cases. The collection of some evidence is better than no evidence and definitely better than an over-reliance on the opinions of a small group of peers or well-designed marketing materials.

The ED plans are good in terms of improving the ability to evaluate effectiveness in such a manner that accounts for rapid technology evolution. The general idea of ED investing in the ability to provide better decision-making information is a good one. It’s also very useful to see ED recognize context of effectiveness claims.

The problem I see, and it could be a fatal one, is that ED is asking the wrong question for any technology or apps related to teaching and learning. [emphasis added]

The important questions to be asked of an app or tool are: does it work? with whom? and in what circumstances? Some tools work better with different populations; educators want to know if a study included students and schools similar to their own to know if the tool will likely work in their situations.

Ed tech apps by themselves do not “work” in terms of improving academic performance[1]. What “works” are pedagogical innovations and/or student support structures that are often enabled by ed tech apps. Asking if an app works is looking at the question inside out. The real question should be “Do pedagogical innovations or student support structures work, under which conditions, and which technology or apps support these innovations?”.

Consider our e-Literate TV coverage of Middlebury College and one professor’s independent discovery of flipped classroom methods.

How do you get valuable information if you ask the question “Does YouTube work” to increase academic performance? You can’t. YouTube is a tool that the professor used. Now you could get valuable information if you ask the question “Does the flipped classroom work for science courses, and which tools work in this context?” You could even ask “For the tools that support this flipped classroom usage, does the choice of tool (YouTube, Vimeo, etc.) correlate with changes in student success in the course?”.

I could see that for certain studies, you could use the ED template and accomplish the same goal inside out (define the conditions as specific pedagogical usage or student support structures), thus giving valuable information. What I fear is that the pervasive assumption embedded in the program setup, asking over and over “does this app work”, will prove fatal. You cannot put technology as the center of understanding academic performance.

I’ll post this as a comment to Richard’s Medium post as well. With a small change in the framing of the problem, this could be a valuable initiative from DOE.

Update: Changed DOE to ED for accuracy.

Update: This is not fully to the level of response, but Rolin Moe got Richard Culatta to respond to his tweet about this article.

Rolin Moe: Most important thing I have read all year – @philonedtech points out technocentric assumptions of US ED initiative

Richard Culatta (@rec54): it’s true. I believe research has to adapt to pace of tech or we will continue to make decisions about edu apps with no evidence

  1. And yes, they throw in a line that it is not just about academic performance but also administrative claims. But the whole setup is on teaching and learning usage, which is the primary focus of my comments.


Oracle Applications Customer Connect Has a New Look

Linda Fishman Hoyle - Fri, 2015-08-21 15:23

A Guest Post by Katrine Haugerud, Senior Director, Oracle Product Management

We are pleased to announce the new, more modern look for our Customer Connect Community. This is based on the Oracle User Interface design paradigm.

Here are some of the enhancements you may have already noticed.

Landing page (pre-login)

On the landing page you can access information that does not require you to log in. This includes Release Readiness resources, Help content, and more.

If you are an existing member, you can use the Sign In link at the top of the page or the Sign In button on the Welcome banner to log in. If you are not yet a community member, use the Register button to find out how you can request an account.


Homepage (post-login)

After logging in and getting to your homepage, you will notice that the overall navigation and structure of our Community have not changed much, but we have revitalized it with the new Oracle UI look and feel.

The banners are bigger and better to help you stay on top of important conferences, events, announcements, and other resources. We have also improved the Events Calendar so you can see at a glance what events are coming up, and when, without having to navigate to the Events page. The Tab navigation is also streamlined to make it easier to find the forums or content areas you are looking for!

We hope you’ll find this new look refreshing – and don’t forget to give us feedback by posting on the Site Feedback and Questions forum.

Remember this is Your Community!