
Feed aggregator

Upgrade Cloud Control 12cR4 to 12cR5

Tim Hall - Mon, 2015-07-06 00:32

A couple of weeks ago I wrote a post about doing a Cloud Control 12cR5 installation and said I would be testing the upgrade from 12cR4. I’ve now done that.

The upgrade documentation is quite extensive and the prerequisites are going to be different depending on the database and cloud control versions you are starting with, so this is in no way a “recommended” way to do the upgrade. Each one will need to be approached on a case-by-case basis. It’s just meant to give a flavour of what you have to do.

Suffice to say, it worked fine for me. :)

Cheers

Tim…


Avoiding the Inbound Monkey

Anthony Shorten - Sun, 2015-07-05 23:32

This is the first in a series of articles on implementation advice to optimize the designs and configuration of Oracle Utilities products.

Early in my career my mentor at the time suggested that I expand my knowledge outside the technical area. The idea is that non-technical techniques would augment my technical knowledge. He suggested a series of books and articles that would expand my thinking. Today I treasure those books and articles and regularly reread them to reinforce my skills.

Recently I was chatting to customers about optimizing their interface designs using a technique typically called "Responsibility Led Design". The principle is basically that each participant in an interface has distinct responsibilities for the data interchanged, and it is important to make sure designs take this into account. This reminded me of one of my favorite books, "The One Minute Manager Meets The Monkey" by Ken Blanchard, William Oncken Jr. and Hal Burrows. I even have a copy of the audio version, which is both informative and very entertaining. The book was based on a very popular Harvard Business Review article entitled "Management Time: Who's Got The Monkey?" and expands on that original idea.

To paraphrase the article, a monkey is a task that is not your responsibility but is somehow assigned to you. The term for this is the "monkey jumping on your back" or simply a "monkey on your back". This epitomizes the concept of responsibility.

So what has this got to do with design, or even Oracle Utilities products, you might ask?

One of the key designs for all implementations is sending data INTO the Oracle Utilities products. These are inbound interfaces, for obvious reasons. In every interface there is a source application and a target application. The responsibility of the source application is to send valid data to the target application for processing. Now, one of the problems I see with implementations is when the source application sends invalid data to the target. There are two choices in this case:

  • Send back the invalid request - This means that if the data transferred from the source is invalid for the target, then the target should reject the data and ask the source to resend. Most implementations use various techniques to achieve this. This keeps the target clean of invalid data and ensures the source corrects its data before sending it off again. This is what I call correct behavior (a minimal sketch of this idea follows after this list).
  • Accept the invalid request (usually in a staging area) and correct it within the target for reprocessing - This means the data is accepted by the target, regardless of the error, and corrected within the target application to complete the processing.
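To illustrate the first option, here is a minimal PL/SQL sketch of the idea (the payload format and the validation rule are purely hypothetical and not part of any Oracle Utilities product): the target validates the inbound record and hands an invalid request straight back to the source instead of staging it.

-- Hypothetical sketch: reject an invalid inbound record back to the source
-- rather than landing it in a staging area for later correction in the target.
declare
  l_payload varchar2(100) := 'ACCT=;AMT=10';  -- sample inbound record (hypothetical format)
  l_valid   boolean;
begin
  l_valid := instr(l_payload, 'ACCT=;') = 0;  -- trivial stand-in for real validation rules
  if l_valid then
    dbms_output.put_line('apply to target application');
  else
    dbms_output.put_line('reject back to source: missing account reference');
  end if;
end;
/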

More and more I am seeing implementations taking the latter design philosophy. This is not efficient, as the responsibility for data cleansing (the monkey in this context) has jumped on the back of the target application. At this point, the source application has no responsibility for cleaning its own data and no real incentive to ever send clean data to the target, as the target now has the monkey firmly on its back. This has consequences for the target application, as it can increase resource usage (human and non-human) to correct data errors from the source application. Some of the customers I chatted to found that, while the volume of these types of transactions was initially low, the same errors kept being sent, and over time the cumulative effect of the data cleansing on the target started to get out of control. Typically, at this point, customers ask for advice to try and reduce the impact.

In an Oracle Utilities product world, this exhibits itself as a large number of interface To Do's to manage, as well as staging records and additional storage to manage. The latter is quite important, as implementations typically forget to remove completed transactions from the staging area once they have been corrected and applied. The products ship special purge jobs to remove completed staged transactions, and we recently added support for ILM (Information Lifecycle Management) on staging records.

My advice to these customers is:

  • Make sure that you assign invalid transactions back to the source application. This will ensure the source application maximizes the quality of its data and also, hopefully, prevents common transaction errors from recurring. In other words, the monkey does not jump from the source to the target.
  • If you choose to let the monkey jump on the target's back, for any reason, then use staging tables and make sure they are cleaned up to minimize the impact. Monitor the error rates and trends, and ensure the source application is informed so it can correct them.

In short, avoid the monkey in your inbound transactions. This will make sure that responsibilities stay where they belong and that the resources you allocate to both your source and target applications are used efficiently.

Installation steps of JDK 7 for Linux for Oracle Fusion Middleware

Online Apps DBA - Sun, 2015-07-05 14:40

This post covers the procedure to install the Java Development Kit (JDK) for 64-bit RPM-based Linux platforms, such as Red Hat and SuSE, using an RPM. This post is from our Oracle Fusion Middleware (FMW) and Oracle Access Manager (OAM) training, where we provide a dedicated machine for trainees to practice on, but you can follow it if you need to install a similar setup on your local machine (we use Oracle VirtualBox with Oracle Linux 5.5).

You must be logged in as the root user to perform this installation (the assumption is that you are installing the JDK on 64-bit Linux).

1. Download the JDK software from here (jdk-7u60-linux-x64.rpm)


2. The installation process should be carried out with the “root” user.

su - root

When prompted for the password, enter the root password.

3. Navigate to the directory where your JDK software is downloaded

cd /stage/oracle/jdk

4. Install the package using the command: rpm -ivh <package_name>

rpm -ivh  jdk-7u60-linux-x64.rpm

Note: This step will install JDK 1.7 under /usr/java/jdk1.7.0_60

5. To verify the version of java, navigate to the directory /usr/java/jdk1.7.0_60/bin and check javac and java versions. The version should be the latest installed JDK version.

Note: The JDK is installed by default under the directory /usr/java/jdk1.7.0_60

cd /usr/java/jdk1.7.0_60/bin

./java -version


 

6. Delete the .rpm file if you want to save disk space.

7. Exit the root shell. No need to reboot.

 

Reference:

 

If you are part of our training program and have not yet registered for the closed Facebook Group, then send a request and post any technical queries there.

 

 

 



Categories: APPS Blogs

RESULT_CACHE hint expiration options

Yann Neuhaus - Sun, 2015-07-05 14:22

The result cache is very nice when base tables are static: get the result without any buffer get, without any join or sorts. But as soon as any DML occurs on one of the tables the result is based on, the result cache is invalidated. And cache misses are quite expensive. You can think of it as a materialized view in memory. But with materialized views, you can accept seeing stale results in order to avoid frequent refreshes. Let's see how we can control stale results with undocumented options.
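Before looking at the undocumented options, it is worth checking how the result cache is configured on your instance. Here is a minimal sketch (the values you will see depend on your settings; this only shows where to look):

-- Result cache configuration: MANUAL means only hinted queries are cached
show parameter result_cache_mode
show parameter result_cache_max_size

-- Detailed memory report of the result cache
set serveroutput on
exec dbms_result_cache.memory_report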

Test case

I'm creating a basic table.

22:30:44 SQL> create table DEMO as select rownum id from xmltable('1 to 100000');

Table created.

22:30:44 SQL> exec if dbms_result_cache.flush then dbms_output.put_line('Flushed.'); end if;

PL/SQL procedure successfully completed.

RESULT_CACHE

On that table, I'll do a simple select with the RESULT_CACHE hint.

22:30:44 SQL> set autotrace on
22:30:44 SQL> select /*+ result_cache */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row); name="select /*+ result_cache */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          5  recursive calls
          0  db block gets
        159  consistent gets
        153  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

We have read the 150 blocks of the DEMO table. The result cache has been loaded. Here are the objects and dependency:
22:30:44 SQL> set autotrace off
22:30:44 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         1          0     112463          0

22:30:44 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Published select /*+ result_cache */ cou 2x5f91pfn5p6882f6szxj50jwf     0

We can read it as: the query result (id=1) depends on the table (id=0).

If I run it another time:

22:30:45 SQL> select /*+ result_cache */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row); name="select /*+ result_cache */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

No block get at all. The result was in cache.

dependencies

If we do any kind of DML on the tables the result cache depends on:

22:30:45 SQL> delete from DEMO where null is not null;

0 rows deleted.

22:30:45 SQL> commit;

Commit complete.

Then the cache is invalidated:
22:30:45 SQL> select * from v$result_cache_dependency;

no rows selected

22:30:45 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      1
  1 Result     Invalid   select /*+ result_cache */ cou 2x5f91pfn5p6882f6szxj50jwf     0

and another run will have to load the cache again:
22:30:45 SQL> set autotrace on
22:30:45 SQL> select /*+ result_cache */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row); name="select /*+ result_cache */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        157  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

22:30:46 SQL> set autotrace off
22:30:46 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         2          0     112463          0

22:30:46 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      1
  1 Result     Invalid   select /*+ result_cache */ cou 2x5f91pfn5p6882f6szxj50jwf     0
  2 Result     Published select /*+ result_cache */ cou 2x5f91pfn5p6882f6szxj50jwf     0

This is the only documented behaviour: the cache is invalidated if and only if there has been some DML on the tables it depends on.

RESULT_CACHE(SHELFLIFE=seconds)


I flush the cache and run the same query with the undocumented result cache hint option SHELFLIFE=10 seconds:

22:30:46 SQL> exec if dbms_result_cache.flush then dbms_output.put_line('Flushed.'); end if;

PL/SQL procedure successfully completed.

22:30:46 SQL>
22:30:46 SQL> set autotrace on
22:30:46 SQL> select /*+ result_cache(shelflife=10) */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row, shelflife=10); name="select /*+ result_cache(shelflife=10) */ count(*) from DEMO"

We see the 'shelflife' attribute, but the dependency is the same as without the option:
22:30:46 SQL> set autotrace off
22:30:46 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         1          0     112463          0

22:30:46 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Published select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0
I run the query again 5 seconds later:
22:30:51 SQL> set autotrace on
22:30:51 SQL> select /*+ result_cache(shelflife=10) */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row, shelflife=10); name="select /*+ result_cache(shelflife=10) */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

22:30:51 SQL> set autotrace off
22:30:51 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         1          0     112463          0

22:30:51 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Published select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0

Nothing special here. No DML occurred, so the result is still valid ('Published').

expiration


But let's wait 5 more seconds and run it again:

22:30:56 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         1          0     112463          0

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Expired   select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0

The status has changed: it is now EXPIRED because the 10 second shelflife has passed since the cache was loaded.
22:30:56 SQL>
22:30:56 SQL> set autotrace on
22:30:56 SQL> select /*+ result_cache(shelflife=10) */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; dependencies=(DEMO.DEMO); attributes=(single-row, shelflife=10); name="select /*+ result_cache(shelflife=10) */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        157  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed


Here I have a cache miss even though no DML occurred on the tables it depends on.
22:30:56 SQL> select * from v$result_cache_dependency;

 RESULT_ID  DEPEND_ID  OBJECT_NO     CON_ID
---------- ---------- ---------- ----------
         2          0     112463          0

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      0
  1 Result     Invalid   select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0
  2 Result     Published select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0

When I run a query using an expired result cache, that cache becomes invalid, the query is fully run (about 150 block gets) and a new result cache entry is populated.

Note that in addition to the shelflife expiration, the dependency on DML is still invalidating the result cache:

22:30:56 SQL> delete from DEMO where null is not null;

0 rows deleted.

22:30:56 SQL> commit;

Commit complete.

22:30:56 SQL> select * from v$result_cache_dependency;

no rows selected

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Dependency Published DEMO.DEMO                      DEMO.DEMO                      1
  1 Result     Invalid   select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0
  2 Result     Invalid   select /*+ result_cache(shelfl 2x5f91pfn5p6882f6szxj50jwf     0
So with SHELFLIFE we have two reasons for invalidations: dependency and expiration.

RESULT_CACHE(SNAPSHOT=seconds)


There is another undocumented option for the result cache hint: SNAPSHOT, which also takes a number of seconds. Let's try it:

22:30:56 SQL> exec if dbms_result_cache.flush then dbms_output.put_line('Flushed.'); end if;

PL/SQL procedure successfully completed.

22:30:56 SQL>
22:30:56 SQL> set autotrace on
22:30:56 SQL> select /*+ result_cache(snapshot=10) */ count(*) from DEMO;

  COUNT(*)
----------
    100000


Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005

------------------------------------------------------------------------------------------
| Id  | Operation           | Name                       | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |                            |     1 |    46   (0)| 00:00:01 |
|   1 |  RESULT CACHE       | 2x5f91pfn5p6882f6szxj50jwf |       |            |          |
|   2 |   SORT AGGREGATE    |                            |     1 |            |          |
|   3 |    TABLE ACCESS FULL| DEMO                       |   100K|    46   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; attributes=(single-row, snapshot=10); name="select /*+ result_cache(snapshot=10) */ count(*) from DEMO"


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        157  consistent gets
          0  physical reads
          0  redo size
        358  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

22:30:56 SQL> set autotrace off
22:30:56 SQL> select * from v$result_cache_dependency;

no rows selected

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Result     Published select /*+ result_cache(snapsh 2x5f91pfn5p6882f6szxj50jwf     0

The cache has been populated here, but there is no dependency. Let's try DML on the base table:
22:30:56 SQL> delete from DEMO where null is not null;

0 rows deleted.

22:30:56 SQL> commit;

Commit complete.

22:30:56 SQL> select * from v$result_cache_dependency;

no rows selected

22:30:56 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Result     Published select /*+ result_cache(snapsh 2x5f91pfn5p6882f6szxj50jwf     0

A SNAPSHOT result cache is not invalidated by DML on base tables.

But if we wait 10 seconds:

22:31:06 SQL> select id,type,status,name,cache_id,invalidations from v$result_cache_objects order by id;

 ID TYPE       STATUS    NAME                           CACHE_ID                     INV
--- ---------- --------- ------------------------------ --------------------------- ----
  0 Result     Expired   select /*+ result_cache(snapsh 2x5f91pfn5p6882f6szxj50jwf     0

The status is expired, as with the shelflife. Next execution will invalidate it.

Conclusion

There are two reasons to invalidate a result cache: DML on a dependency, or expiration after a number of seconds. And we can use any combination of them with undocumented hints. Oracle itself uses them internally. SNAPSHOT is used by Active Dynamic Sampling: stale results are acceptable for one hour. SHELFLIFE is used on dictionary views based on X$ tables, where dependencies cannot be tracked.

I hope it will get documented in future releases. There are some cases where we can accept stale results in order to get better performance. We already do that with materialized views, so why not do it with the result cache?
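If you want to catch Oracle using these options itself, you can look into V$RESULT_CACHE_OBJECTS on a 12c instance. A small sketch, assuming the Adaptive Dynamic Sampling queries are the ones tagged with the DS_SVC comment:

-- Look for internal Active Dynamic Sampling results in the result cache
-- (assumption: those recursive queries carry the /* DS_SVC */ comment)
select id, status, name
from v$result_cache_objects
where type = 'Result'
and name like '%DS_SVC%';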

RESULT_CACHE: run-time dependency tracking

XTended Oracle SQL - Sun, 2015-07-05 13:20

As you know, since 11.2 the “relies_on” clause has been deprecated and Oracle tracks dependencies at run time now.
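For reference, the old syntax still compiles but has no effect any more. A minimal sketch (table T is created here only to have something to name in the clause):

-- The pre-11.2 RELIES_ON clause is still accepted for backward compatibility,
-- but it is ignored: dependencies are tracked at run time instead.
create table t as select 1 x from dual;

create or replace function f_old_style return number result_cache relies_on (t)
as
begin
  return 1;
end;
/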
Test function and tables

create or replace function f_without_deps(p_tab varchar2) return varchar2
as
   res varchar2(30);
begin
   execute immediate 'select '''||p_tab||''' from '||p_tab||' where rownum=1' into res;
   return res;
end;
/
create table a as select 'a' a from dual;
create table b as select 'b' b from dual;
create view v_ab as select a,b from a,b;

And it works fine with normal tables:
v_ab

SQL> exec :p_tab:='v_ab';

PL/SQL procedure successfully completed.

SQL> call DBMS_RESULT_CACHE.flush();

Call completed.

SQL> select/*+ result_cache */ f_without_deps(:p_tab) result from dual;

RESULT
----------
v_ab

1 row selected.

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o order by type,id;

NAME                                                               BUCKET_NO  ID TYPE       STATUS     PIN_COUNT SCAN_COUNT INVALID
----------------------------------------------------------------- ---------- --- ---------- --------- ---------- ---------- -------
XTENDER.F_WITHOUT_DEPS                                                  1579   0 Dependency Published          0          0       0
XTENDER.V_AB                                                            3127   2 Dependency Published          0          0       0
XTENDER.B                                                                778   3 Dependency Published          0          0       0
XTENDER.A                                                                464   4 Dependency Published          0          0       0
select/*+ result_cache */ f_without_deps(:p_tab) result from dual       1749   1 Result     Published          0          0       0

But don’t forget that the result_cache also caches such functions against objects that usually should not be cached, and those objects will not be listed in the result cache dependencies list:
v$database

SQL> exec :p_tab:='v$database';

PL/SQL procedure successfully completed.

SQL> call DBMS_RESULT_CACHE.flush();

Call completed.

SQL> select/*+ result_cache */ f_without_deps(:p_tab) result from dual;

RESULT
----------
v$database

1 row selected.

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o order by type,id;

NAME                                                               BUCKET_NO  ID TYPE       STATUS     PIN_COUNT SCAN_COUNT INVALID
----------------------------------------------------------------- ---------- --- ---------- --------- ---------- ---------- -------
XTENDER.F_WITHOUT_DEPS                                                   772   0 Dependency Published          0          0       0
PUBLIC.V$DATABASE                                                       1363   2 Dependency Published          0          0       0
select/*+ result_cache */ f_without_deps(:p_tab) result from dual       2283   1 Result     Published          0          0       0

3 rows selected.

As you can see, there is a dependency only on the public synonym V$DATABASE, but not on the real underlying fixed X$ tables.
SYS.OBJ$

SQL> exec :p_tab:='sys.obj$';

PL/SQL procedure successfully completed.

SQL> call DBMS_RESULT_CACHE.flush();

Call completed.

SQL> select/*+ result_cache */ f_without_deps(:p_tab) result from dual;

RESULT
----------
sys.obj$

1 row selected.

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o order by type,id;

NAME                                                               BUCKET_NO ID TYPE       STATUS     PIN_COUNT SCAN_COUNT INVALID
----------------------------------------------------------------- ---------- -- ---------- --------- ---------- ---------- -------
XTENDER.F_WITHOUT_DEPS                                                  3922  0 Dependency Published          0          0       0
select/*+ result_cache */ f_without_deps(:p_tab) result from dual       3753  1 Result     Published          0          0       0

2 rows selected.

The results were cached and the dependencies do not include system objects.
We can easily check that queries on any table in the SYS schema, or using sysdate, systimestamp, current_date, current_timestamp or dbms_random, will not be cached:
SYS tables

SQL> select/*+ result_cache */ current_scn result from v$database;

    RESULT
----------
##########

1 row selected.

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o order by type,id;

no rows selected

SQL> explain plan for select/*+ result_cache */ * from sys.obj$;

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
Plan hash value: 2311451600

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 87256 |  7328K|   296   (1)| 00:00:04 |
|   1 |  TABLE ACCESS FULL| OBJ$ | 87256 |  7328K|   296   (1)| 00:00:04 |
--------------------------------------------------------------------------

Note that there is no “RESULT CACHE” line.
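The same quick check can be run for sysdate; a sketch (the expectation, not shown here, is that the RESULT CACHE operation is again missing from the plan):

explain plan for select/*+ result_cache */ sysdate from dual;
select * from table(dbms_xplan.display);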
And even if you create your own tables in the SYS schema (don't do it :)), they will not be cached:
SYS.V_AB

SYS> create table a as select 'a' a from dual;
SYS> create table b as select 'b' b from dual;
SYS> create view v_ab as select a,b from a,b;
SYS> grant select on v_ab to xtender;

XTENDER> explain plan for select/*+ result_cache */ * from sys.v_ab;
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 215283502

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |     6 |     4   (0)| 00:00:01 |
|   1 |  MERGE JOIN CARTESIAN|      |     1 |     6 |     4   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | A    |     1 |     3 |     2   (0)| 00:00:01 |
|   3 |   BUFFER SORT        |      |     1 |     3 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | B    |     1 |     3 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)

But sys_context and userenv will be cached successfully:
sys_context

SQL> explain plan for select/*+ result_cache */ sys_context('userenv','os_user')  from dual;

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 1388734953

---------------------------------------------------------------------------------------
| Id  | Operation        | Name                       | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                            |     1 |     2   (0)| 00:00:01 |
|   1 |  RESULT CACHE    | 267m2hcwj08nq5kwxcb0nb2ka8 |       |            |          |
|   2 |   FAST DUAL      |                            |     1 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; attributes=(single-row); parameters=(sys_context);
        name="select/*+ result_cache */ sys_context('userenv','os_user')  from dual"

14 rows selected.

userenv

SQL> explain plan for select/*+ result_cache */ userenv('instance')  from dual;

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 1388734953

---------------------------------------------------------------------------------------
| Id  | Operation        | Name                       | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                            |     1 |     2   (0)| 00:00:01 |
|   1 |  RESULT CACHE    | dxzj3fks1sqfy35shbbst4332h |       |            |          |
|   2 |   FAST DUAL      |                            |     1 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=1; attributes=(single-row); parameters=(sys_context); 
       name="select/*+ result_cache */ userenv('instance')  from dual"


Categories: Development

A function gets called twice if the result_cache is used

XTended Oracle SQL - Sun, 2015-07-05 07:37

Recently I showed a simple example of how result_cache works with non-deterministic functions and observed strange behaviour: a function gets fired once in a normal query, but twice with the result_cache hint.
Moreover, only the third and subsequent query executions return the same cached result as the second execution.
I didn’t want to investigate such behavior, just because 1) we should not cache the results of non-deterministic functions and 2) it doesn’t matter if we use deterministic functions.
But later I was asked to explain this, so this post is just a short description with a test case.

Look at the simple function that just returns random values:

create or replace function f_nondeterministic(p int:=100) return int
as
  res number;
begin
   res:=round(dbms_random.value(0,p));
   return res;
end;
/
SQL> exec dbms_result_cache.flush;
PL/SQL procedure successfully completed.

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       481

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       689

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       689

with result_cache_statistics

SQL> exec dbms_result_cache.flush;

PL/SQL procedure successfully completed.

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       481

SQL> select name,value from v$result_cache_statistics where name in ( 'Create Count Success', 'Find Count');

NAME                                               VALUE
-------------------------------------------------- ----------
Create Count Success                               1
Find Count                                         0

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       689

SQL> select name,value from v$result_cache_statistics where name in ( 'Create Count Success', 'Find Count');

NAME                                               VALUE
-------------------------------------------------- ----------
Create Count Success                               1
Find Count                                         1

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       689

SQL> select name,value from v$result_cache_statistics where name in ( 'Create Count Success', 'Find Count');

NAME                                               VALUE
-------------------------------------------------- ----------
Create Count Success                               1
Find Count                                         2

SQL> select name,bucket_no, id,type,status,pin_count,scan_count,invalidations from v$result_cache_objects o;

NAME                                                BUCKET_NO         ID TYPE       STATUS     PIN_COUNT SCAN_COUNT INVALIDATIONS
-------------------------------------------------- ---------- ---------- ---------- --------- ---------- ---------- -------------
XTENDER.F_NONDETERMINISTIC                                552          0 Dependency Published          0          0             0
select/*+ result_cache */ f_nondeterministic(1000)       2102          1 Result     Published          0          2             0
 nondeter from dual

As you can see, the second execution returns a different result than the first one.
If we change this function:

create or replace function f_nondeterministic(p int:=100) return int
as
  res number;
begin
   res:=round(dbms_random.value(0,p));
   dbms_output.put_line('fired! ('||res||')');
   return res;
end;
/

and repeat this test-case:

SQL> select/*+ result_cache */ f_nondeterministic(1000) nondeter from dual;

  NONDETER
----------
       943    -- << (2)

1 row selected.

fired! (607)    -- << (1)
fired! (943)    -- << (2)
SQL> /

  NONDETER
----------
       607    -- << (1)

1 row selected.

SQL> /

  NONDETER
----------
       607    -- << (1)

1 row selected.

SQL> /

  NONDETER
----------
       607    -- << (1)

1 row selected.

we will see that there were two function executions: the first result (607) was the one that got cached, while the second (943) was the one fetched and returned by that first query execution!

Categories: Development

Flipkart, ecommerce, machine learning, and free advice

Abhinav Agarwal - Sat, 2015-07-04 12:08
I wrote about the obsession of Flipkart (and Myntra) with "mobile-only" without even having an iPad-optimized app! I also talked about the stunning advances being made in voice-search by using machine learning, cognitive learning, natural language processing, even as voice-based search capabilities of e-commerce companies - including Amazon - remain abysmal. Finally, I also included several use-cases that these companies need to work on incorporating into their capabilities.

That piece, Flipkart, Focus and Free Advice, appeared in DNA on June 27th, 2015.


My earlier pieces on the same topic:
  1. Flipkart vs Amazon: Beware the Whispering Death - 20th April '15 (blog, dna)
  2. Mobile Apps: There’s Something (Profitable) About Your Privacy - 18th April '15  (blog, dna)
  3. Mobile advertising and how the numbers game can be misleading - 14th April '15  (blog, dna)
  4. Is Flipkart losing focus - 12th April '15  (blog, dna)
Flipkart, Focus, and Free Advice – Shipping Charges Also Waived!
What is one to make of a statement like this - “India is not mobile-first, but mobile-only country[1]”? Especially so if it is from the co-founder of the largest ecommerce company in India, and it turns out the company does not even have an app for the Apple iPad?

I have written at length on the distractions that seem to have been plaguing Flipkart and why it cannot afford to drop its guard in this fiercely contested space[2] - especially in light of all the noise surrounding its mobile ambitions. Somewhat paradoxically, this post is about offering advice to Flipkart that calls for some diversification!

As a logical next step, I wanted to take a look at Flipkart’s mobile apps – both on the iOS and Android platforms – to see how well they were executing on their very bold ambitions. As an aside, I also wanted to see if these (and competitive) mobile apps were leveraging all the computing power now available on tap inside these tiny devices. After all, apart from the recent – and amazing – advances Google has made in its voice-based search capabilities[3], there was this stunning demo from Hound[4] that gave a glimpse into the huge advances that voice-recognition, search, and machine-learning technologies have made in the last decade.
The results were, to put it mildly, massively disappointing – which I will describe in some detail.
It should be clear that Amazon and Flipkart and SnapDeal are going to be at each other’s throats in the Indian online retail market. This is one battle from which neither player can walk away. Amazon has lost the China market to Alibaba (“In the first quarter of 2014, Alibaba's e-tailing site had a 48.4 per cent market share against Amazon China's less than 3 per cent.”[5] If that was not enough, Alibaba and Foxconn are in talks with SnapDeal for a rumoured $500 million investment![6]).

Amazon cannot afford to now lose the India market to a local upstart. Flipkart, on the other hand, has even less choice. It plays only in the Indian market. It cannot walk away either; there is no other market for it to walk towards. Its valuations – expected to rise to $15 billion after its next round of funding[7] make it way too costly for it to be acquired – at least profitably so for those funders who have put in hundreds of millions of dollars at these later and higher valuations. Amazon and Flipkart have deep pockets; Flipkart can afford to bleed hundreds of millions of dollars a year even as it grows, while Amazon has conditioned Wall Street to grant it the currency of ultra-high valuations even as it operates on razor-thin margins. It is unlikely that either will be able to deliver a knockout punch to the other anytime soon. This is a fifteen-round slugfest that will be decided by who can keep soaking in the blows and keep standing at the end of the fifteenth round; while they fight, the customer continues to win. Amazon has more diversity in its portfolio of business divisions than does Flipkart – ecommerce, cloud computing services, streaming audio and video, MRO and industrial supplies, smartphones, tablets, and more. While these divisions may at times face off against each other in expectedly healthy and sometimes unhealthy rivalry, they still form a formidable front against the competition. To quote these immortal lines from the Mahabharata, “we may be five against a hundred, but against a common enemy we are a hundred and five.”

So what does Flipkart do? Three things, to begin with.

First, it needs to get serious about software.
When you have a web site that offers millions of products from tens of thousands of resellers to millions of customers that reside in tens of thousands of cities and towns and villages, you need to make sure that your customers are seeing the products that are of most relevance to them, and which they are most likely to buy. If that problem looks like a nail to you, specifically a large-scale optimization problem with a huge number of decision variables, then large-scale computing and regression modelling are the hammer. You need to be applying this hammer to the almost infinite number of nails in front of you, all day and all night long. This is what enables you to present an ever-relevant basket of products to your customers, which keeps them engaged when on your site, and which hopefully makes them buy more often than not. Flipkart needs to take a close, long, hard look at its search capabilities – about which I will talk later in this post – and its suggestions engine, because both are very subpar at this point. If it’s any consolation, while Amazon is certainly better in the search department, its capabilities in this area are nothing great either, yet. Where Amazon scores over its competitors – every single one of them – is its huge and ever-growing corpus of customer reviews. Flipkart probably recognizes the importance of this corpus of customer reviews, but has run into rough weather over the expected problem of fake reviews[8].

For inspiration on where the trifecta of search, machine learning, and e-commerce could venture – with Big Data in tow - one can turn to the story of how the popular American TV game show “Jeopardy” became the battleground for IBM researchers to build upon their experience with Deep Blue (the computer that had beaten world chess champion Gary Kasparov in 1997[9]) and to build a computer that would defeat the reigning champion of Jeopardy. That happened in February 2011, after four years of work led by IBM researcher David Ferucci and “about twenty researchers”[10].
This required advances in machine learning and other esoteric concepts like LAT (Lexical Answer Type), IDF (Inverse Document Frequency), temporal and even geospatial reasoning.[11] A new suite of software and platforms, built on a concept called genetic programming (“a technique inspired by biological evolution”) has started to make its way into mainstream commercial applications.  The algorithm here “begins by randomly combining various mathematical building blocks into equations and then testing to see how well the equations fit the data. Equations that fail the test are discarded, while those that show promise are retained and recombined in new ways so that the system ultimately converges on an accurate mathematical model.”[12] What this essentially means is going beyond keyword search-based correlations and moving to more semantic-oriented searches that combine machine learning with natural language processing. This in turn requires serious software brains (smart programmers using and refining the right algorithms and models) and muscle (massive learning and training sets in the hundreds of gigabytes running on clusters of tens of thousands of nodes).
If Flipkart is serious about the mobile ad business (about which I have expressed my reservations), even then it needs to get to the holy grail of deep-learning in ad-tech – “Inferring Without Interfering” the customer’s intent.”[13] In any event, this policy will only stand Flipkart in good stead. If they are already doing so, then good, but the proof is not in the pudding as much as in the eating of the pudding.

A critical differentiator in the coming times is not, I repeat, not, going to be driven by slick UIs or gimmicks on mobile apps like “shake to see offers”, but by offering truly intelligent and immersive experiences that are feasible even today. Advances in machine learning, and capabilities such as voice, video, location, and more, when used in tandem, will power the next set of innovations. Rather than stick to the tried and tested and old way of making users search using simple keywords and correlations and prior history, e-tailers need to make the shopping experience more intelligent.

Appendix 2 and 3 outline possible use-cases. It should be clear that both Flipkart and Amazon have a long, long way to go before realizing anything close to the vision outlined, but without such advances, competitors like Google will find the wedge they need to prise open this market for themselves.

Second, Flipkart (or even Amazon for that matter, or SnapDeal, or whichever competitor you happen to care about, though in this case the admonition is more targeted at Flipkart in light of its mobile-only pronouncements) needs to get serious about the mobile platform.

Browse to either Flipkart or Myntra’s websites from a browser on an iPad and you are asked to use their app instead. Would you believe it if I told you Flipkart does not have an iPad app (as of 15th June 2015)? No? Go check for yourself – I did! Ditto for Myntra (the online fashion retailer Flipkart acquired in 2014)! See Appendix 1 for what I found when I downloaded their apps on my iPad tablet. This would be comedically farcical if serious money weren’t riding on such decisions.

Third, Flipkart needs to get into the cloud business.

Yes, I am serious.

Let’s look at the competition – Amazon. It is the 800 pound gorilla in the cloud computing industry, where its offering goes by the umbrella of AWS (Amazon Web Services) and offers almost everything you could think of under the cloud – platform, infrastructure, software, database, email, storage, even machine learning, and much more. How gorilla-ish? “AWS offers five times the utilized compute capacity of the other 14 cloud providers in the Gartner Magic Quadrant. Combined.[14]” Since 2005, Amazon has spent “roughly $12 billion” on its infrastructure[15]. It competes with the likes of Microsoft and Google in this space. Yet, Amazon’s cloud revenues are estimated to be “30 times bigger than Microsoft’s.[16]

And yet I argue that Flipkart should get into the cloud business. As I wrote last year[17], Flipkart had to invest substantially (per my estimates, more than one hundred crore rupees, or somewhere in the vicinity of $15-$20 million dollars – which is not chump change) to build its capacity to stand up to the traffic it expected for its “Big Billion Day”. This is in addition to the regular additions it must be making to its computing infrastructure. All this is not surprising, given that the retail business is prone to lumpiness in traffic – a disproportionate amount of traffic is concentrated around sale events, or holidays.

For example, while Amazon reportedly had ten million Prime subscribers in March 2013, it reported that over 10 million “tried Prime for the first time” over the holidays in 2014 (traditionally the period between Thanksgiving and Christmas).[18] To prevent web sites from keeling over under the crush of holiday traffic, companies invest substantially, in advance, to make sure the web site keeps chugging along. The flip side is that for those periods when traffic is more average and a fraction of peak traffic, all those thousands of computers, the hundreds of gigabytes of memory, terabytes of disk space, and gobs of network bandwidth capacity are lying idle – depreciating away, obsolescing away.

Amazon realized this a decade ago and started building a rental model around its excess capacity – this was the genesis behind Amazon Web Services. There is no reason for Flipkart to not do the same. What works for Amazon has worked quite well for Flipkart[19]. If it spins off its entire e-commerce infrastructure into a separate entity, it can palm off much of the capital costs of its computing infrastructure to the cloud computing subsidiary, substantially improving its balance sheet in the process. You could argue this is nothing but an accounting gimmick, and I am not going to argue with that aspect of the decision - there would be undeniable and real benefits to this decision, and it’s childish to expect a business to be run on utopian principles. As things stand, the state government of Telangana is already assiduously wooing Amazon to invest in an AWS centre in the state[20]. Once operating on Indian soil, Amazon will be able to meet legal requirements that require certain categories of data to remain within the national borders.

Any industry so heavily influenced and shaped by technology as the e-commerce industry would do well to listen to the winds of change. If unheard and unheeded, these winds of change turn into gale storms of disruption that blow away incumbents faster than you can imagine. “Mobile-only” is a useful-enough mantra, but translating that into an “app-only” sermon hints at myopic thinking – a troubling sign for sure. It turns out that Google “secretly” acquired a company that specializes in “streaming native mobile apps”. Is this a shape of the things to come? How will this transform the world of mobile apps, or even the mobile landscape in general? Time will tell, but “lock-in” may well be a wise strategy for your customers, yet a terrible one to apply to yourself.[21]

Appendix 1 - App-solutely Serious about Apps?
Fire up your favourite mobile browser on an Apple iPad and browse to Myntra’s website (that would be www.myntra.com). You are greeted with a message to vamoose to their mobile app, because after all, Myntra is all about mobility – social mobility in fashion, and mobile devices when speaking more literally.
Figure 1 - Myntra web site on tablet browser
Incredulity hits you in the face when you realize that (on the Apple App Store) the Myntra app is “optimized for iPhone 5, iPhone 6 and iPhone 6 Plus”, but not the iPad. Yes, you read that right – the web site that tells you that you have to use its mobile app, and its mobile app only, on an iPad does not have an app optimized for the iPad.
Figure 2 - Myntra app details on the Apple App Store
I am, however, somewhat of a cynical person. I tried searching for the keyword “myntra” on the Apple App Store. The only filter applied was to look for “iPad Only” apps. Here are the beatific search results. India gave the world the concept of zero, and the search results page gave one practical application of that elegant mathematical concept.
Figure 3 - Search results for "iPad Only" apps on the Apple AppStore for "myntra"
So where was that Myntra app hiding? I changed the filter to “iPhone Only”, and true-enough, there was that Myntra app.
Figure 4 - Myntra app on the Apple App Store
In case you are wondering how that was even possible, know that most apps created for the iPhone (or iPod Touch) can run on an iPad without any modifications – all that is required is for you to keep this in mind when compiling the app. Apple calls this a “Universal app”[22].

Now that can’t be so bad, right? After all, the app is available on the iPhone and the iPad, so where and what is the grouse? I will come to that in just a bit, but take a look at what the Myntra app looks like when run on the iPad.
Figure 5 - Myntra app running on an iPad
This is how the app runs inside an iPad. You have the option of tapping the “2x” button, after which the app uses the full screen, but by scaling everything to twice its size. There is no other intelligence here being applied – like changing the icons, or the text, or adding more features. This is iOS doing what little work you see.
Why this arouses incredulity is due to the stunning dissonance one experiences – between the statements of the Myntra (and Flipkart) executives going to town about a “mobile-only” world[23] on the one hand and the reality of a missing-in-action iPad-optimized app on the other. Yes, one could make the argument that Apple commanded a stunningly low single-digit share of 7% of the tablet market in India[24], but to make this argument is to negate your very philosophy of a “mobile-only” world. Mobile includes smartphones, tablets, phablets, wearables (for which Flipkart does have an app![25]), smart-TVs, and even embedded devices.
Flipkart’s mobile web site works - at least for now - on the iPad (though it does not on a smartphone – you have no option but to use their app), but the story is not much different there. No iPad-optimized app, but a smartphone app that does duty on the iPad by virtue of it being a “Universal” app.
Figure 6 - Flipkart shopping app in the Apple App Store
Figure 7 - Flipkart shopping app on the Apple iPad
It’s not as if Amazon’s iPad app is much better. Yes, they do have an iPad app, but it looks more like a hybrid app – a native shell with an embedded browser snuck in, and very little by way of any tablet optimizations.
Figure 8 - Amazon app for the iPad
Appendix 2 – Natural Speech Searches
Mobile shopping apps like Flipkart and Amazon provide you the option of inputting your search query via voice (more because of the support the underlying mobile OS provides), but that forces you to say out aloud what you have typed – keywords, and nothing more.
Unlike the stunning Hound demo or the capabilities of Google Now[26], e-tailers have yet to leave the stone age in search capabilities. While Hound can understand and answer (correctly) queries like “Show me hotels in Seattle for Friday, staying one night” and then support refinements to the query like “Show only the ones costing less than $300” or “Show only the ones that have three or four or five stars that are pet friendly, that have a gym and a pool, within 4.5 miles of the Space Needle”[27], and Google Now can understand foreign accents (like my Indian English accent) and parse phrases like “ghat”, “jyotirling” and more, a relatively simple phrase like - “What are the best sellers in fiction” – leads to disappointment on both Amazon and Flipkart’s mobile apps.
Figure 9 - Search results in the Amazon app
And to be clear, what was presented was not the bestsellers list, because the bestseller list looked like this:
Figure 10 - Non-fiction bestsellers in books as shown on the Amazon app
I tried another search – “Suggest books for children”. I don’t know what to call the search results, but one with “*divorce* your Child” as the first result is surreal.
Figure 11 - Search results on Amazon app
To complete my brief experiment on Amazon, I tried “Show me best sellers in electronics”. That also did not yield any relevant results.
Figure 12 - Search results in the Amazon app
Flipkart is not much better, and at this point we are really looking at rock-bottom as the baseline. Even a marginal improvement would be welcome here. Sadly, not the case. Though, Flipkart does separate each word out and allow you to delete any one word to refine your search. Given the abysmal quality of search results, it is somewhat of a zero-divide-by-zero case, resulting in only infinite misery trying to find the right combination of keywords that will yield the desired results.
Figure 13 - Search results on the Flipkart app
Does the Myntra app fare any better? Predictably, it doesn’t. If semantic search in the e-commerce space was a problem that had been cracked by either Flipkart or Myntra, it would have been shared across both platforms by now.
Figure 14 - Search results in the Myntra app
Even Google, with its oft-stated e-commerce ambitions[28],[29] , and the eye-popping advances that it has made with its voice-based search (Siri from Apple and lately Cortana from Microsoft also deserve to be included, but neither company seems to be quite interested in e-commerce at the scale of Amazon, yet) left me disappointed with a simple search like – “what are the fiction best sellers in India”.
Figure 15 - Search results in the Google app
Appendix 3
What do I have in mind with respect to the kinds of queries that Flipkart (or Amazon) should be trying to enable? Without any further context, I present the following examples:
One:
(this is a comparatively simpler form of the semantic search capabilities I propose)
Me: show me the best sellers in non-fiction
App: [displays a list of book best sellers in non-fiction] [Optionally, excludes or places the ones I have bought at the bottom of the list; or marks them differently and provides me with an option of reading them online – assuming I had purchased an e-book version]
Me: show me only those books that have been published in the last three months;
App: [filters the previous set of search results to show only those non-fiction best sellers that have been published in the last three months]
Me: also include books that were on the bestseller list this year
App: [adds books that were in the top 10/20 bestsellers list in 2015 but have now dropped out of the rankings]
Me: cancel the last search, and show me those books that are also available as e-books, and then sort them by price
App: [displays a list of book best sellers in non-fiction, filtered by those available on the Kindle, and sorts by price, ascending]
Me: send me free e-book samples of the first five books from this list and remind me in one week whether I want to purchase them.
App: [downloads free samples of the first five books to my e-book app] [creates a reminder to remind me in one week]


Two:
(this is a more social and more nuanced form of the semantic search outlined above)
Me: show me a list of LED TVs
App: [displays a list of the bestselling LED TVs]
Me: show me LED TVs that are HD, 40 inches or larger, cost no more than Rs 60,000, and can be delivered in the next three days.
App: [displays a list of TVs matching the criteria, and adds – “there are only three TVs that match your search criteria, so I have changed the price to Rs 70,000, which has resulted in five more search results. Say “cancel” to undo.”]
Me: Which among these would be most relevant to me?
App: [displays the list sorted based on popularity in my postal code] [offers to show the list sorted on TVs sold in the last three months to the housing community I live in – or the company I work at – or based on people with my profile of educational qualifications or marital/family status – based on privacy settings of course]
Me: summarize the most useful reviews for the first TVs, and keep each under two minutes.
App: [summarizes the most useful reviews and then reads out a software-generated summary, in less than two minutes. Also sends a text summary to my WhatsApp or email]
Far-distant utopia? Naah, I don’t think so. This is within the realm of the possible, and I expect to see this become reality in the next two years. Today, however, we are some ways off from the innovations where online shopping will become a truly immersive, interactive experience akin to having a natural conversation with an incredibly knowledgeable yet infinitely patient salesperson.

Three:
(ratcheting things up one more notch)
Me: (standing amidst the ruins of Hampi) Suggest some good books about this place.
App: [suggests bestsellers or highest-rated books on three categories: coffee-table books on Hampi; history of Hampi and the Vijayanagar Empire; historical fiction books set in the fifteenth/sixteenth century Vijaynagara Empire]
Me: Also suggest something on the significance of this chariot temple
App: …

Four:
App: [reminds me that I have a party at my house this weekend where four families are coming over]
Me: I need some snacks and also suggest some recent action movies to rent
App: [suggests food-items to order and shows a list of the five top grossing movies of the year in the “Action” genre and shows options: buy, rent (really?), stream]
Me: place the first, third, and fifth items in the shopping cart, record this and deliver to my wife. Then rent to stream the third movie in HD format on Saturday evening.
App: [places these items in the shopping cart, records a 15 second video and pings the spouse via a notification/alert to view the video. It also places an order for the selected movie]

Disclaimer: views expressed are personal.

 References:
[1] "India is not mobile-first, but mobile-only country: Sachin Bansal, Flipkart's founder and Mukesh Bansal, Myntra's CEO - timesofindia-economictimes", http://articles.economictimes.indiatimes.com/2015-05-13/news/62124447_1_myntra-sachin-bansal-ceo-mukesh-bansal
[2] See http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185 and http://www.dnaindia.com/analysis/standpoint-why-flipkart-seems-to-be-losing-focus-2076806
[3] "Google Launches Custom Voice Actions For Third Party Apps", http://searchengineland.com/google-launches-custom-voice-actions-for-third-party-apps-220148
[4] "After Nine Years of Secret Development, Hound Voice Search App Has a Dazzling Demo | Re/code", http://recode.net/2015/06/02/after-nine-years-of-secret-development-hound-voice-search-app-has-a-dazzling-demo/
[5] "A missed opportunity in China has Amazon founder Jeff Bezos backing his India venture", http://indiatoday.intoday.in/story/amazon-jeff-bezos-india-business-venture-flipkart-snapdeal/1/393933.html
[6] "Alibaba, Foxconn in Talks to Invest $500 Million in India’s Snapdeal - India Real Time - WSJ", http://blogs.wsj.com/indiarealtime/2015/06/16/alibaba-foxconn-in-talks-to-invest-500-million-in-indias-snapdeal/
[7] "Flipkart set to raise up to $800 million - Livemint", http://www.livemint.com/Companies/j2B9ax1SNS4JrDdJAU19sO/Flipkart-set-to-raise-up-to-800-mn.html
[8] See "How e-retailers such as Flipkart, Amazon are keeping fake products at bay - timesofindia-economictimes", http://articles.economictimes.indiatimes.com/2015-01-08/news/57791521_1_amazon-india-sellers-mystery-shoppers, "Who Reviews the Reviewers? How India's Online Businesses Are Fighting Fake Reviews | NDTV Gadgets", http://gadgets.ndtv.com/internet/features/who-reviews-the-reviewers-how-indias-online-business-are-fighting-fake-reviews-697112, and "How genuine are product reviews on FlipKart? - Quora", http://www.quora.com/How-genuine-are-product-reviews-on-FlipKart
[9] "Deep Blue (chess computer) - Wikipedia, the free encyclopedia", https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)
[10] "Rise of the Robots: Technology and the Threat of a Jobless Future”, by Martin Ford, Jeff Cummings, ISBN 9781480574779, http://www.amazon.com/Rise-Robots-Technology-Threat-Jobless/dp/1480574775
[11] Ibid and "The AI Behind Watson — The Technical Article", http://www.aaai.org/Magazine/Watson/watson.php
[12] "Rise of the Robots: Technology and the Threat of a Jobless Future”
[13] Gregory Piatetsky on Twitter: "The #DeepLearning future of ads: "Inferring Without Interfering with what moves the Customer most" - J. Kobelius http://t.co/zh96DC6DDG" https://twitter.com/kdnuggets/status/610848069672927232
[14] "Gartner: AWS Now Five Times The Size Of Other Cloud Vendors Combined - ReadWrite", http://readwrite.com/2013/08/21/gartner-aws-now-5-times-the-size-of-other-cloud-vendors-combined
[15] Ibid.
[16] "How much bigger is Amazon’s cloud vs. Microsoft and Google?", http://www.networkworld.com/article/2837910/public-cloud/how-much-bigger-is-amazon-s-cloud-vs-microsoft-and-google.html
[17] "A Billion Dollar Sale, And A Few Questions", http://www.dnaindia.com/analysis/standpoint-a-billion-dollar-sale-and-a-few-questions-2047853
[18] "Amazon added 10M new Prime subscribers over the holidays, could make up to $1B in annual revenue | VentureBeat | Business | by Harrison Weber", http://venturebeat.com/2014/12/26/amazon-made-nearly-1b-from-new-prime-subscriptions-over-the-holidays/
[19] See http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185
[20] "Amazon likely to bring Web Services to Telangana | Business Standard News", http://www.business-standard.com/article/companies/amazon-likely-to-bring-web-services-to-telangana-115061000659_1.html
[21] "Report: Last Year Google Secretly Acquired Agawi, A Specialist In Streaming Native Mobile Apps | TechCrunch", http://techcrunch.com/2015/06/18/report-last-year-google-secretly-acquired-agawi-a-specialist-in-streaming-native-mobile-apps/
[22] "Start Developing iOS Apps Today: Tutorial: Basics", https://developer.apple.com/library/ios/referencelibrary/GettingStarted/RoadMapiOS/FirstTutorial.html
[23] http://articles.economictimes.indiatimes.com/2015-05-13/news/62124447_1_myntra-sachin-bansal-ceo-mukesh-bansal
[24] "Samsung Tops Indian Tablet Market Share, Followed By Micromax, iBall", http://trak.in/tags/business/2014/11/28/indian-tablet-market-share-growth/
[25] "Flipkart launches an app for Android Wear sporting wearables - Tech2", http://tech.firstpost.com/news-analysis/flipkart-launches-an-app-for-android-wear-sporting-wearables-240070.html
[26] See this for a comparison between Hound and Google Now - "Here’s how Hound beta compares to Google Now (Video) | 9to5Google", http://9to5google.com/2015/06/05/hound-beta-vs-google-now-video/
[27] "After Nine Years of Secret Development, Hound Voice Search App Has a Dazzling Demo | Re/code", http://recode.net/2015/06/02/after-nine-years-of-secret-development-hound-voice-search-app-has-a-dazzling-demo/
[28] "Google Preps Shopping Site to Challenge Amazon - WSJ", http://www.wsj.com/articles/google-preps-shopping-site-to-challenge-amazon-1418673413
[29] "Google Finds Partners To Help It Compete With Amazon - Forbes", http://www.forbes.com/sites/benkepes/2015/04/13/google-finds-partners-to-help-it-compete-with-amazon/



© 2015, Abhinav Agarwal. All rights reserved.

Oracle Process Cloud Service - Consuming ADF BC REST Service in Web Form

Andrejus Baranovski - Sat, 2015-07-04 08:57
With the introduction of Oracle Process Cloud Service (https://cloud.oracle.com/process) there is an option to run your business process in the cloud. You can implement very similar things to what is possible with BPM 12c in JDeveloper, but only in the cloud. There is no option to implement the human task UI with ADF; it must be done with Web Forms (a lightweight UI forms framework). This is a disadvantage, as it requires externalising complex business logic and accessing data collections through REST calls, instead of processing them locally in an ADF extension. However, this is how it is currently implemented in the BPM cloud version.

This is how it looks - Web Form editor in Process Cloud Service:


You have an option to select specific types from the component palette, such as money, email, phone and text. Components come with built-in validation (checks for a valid email address, etc.). The user who implements a form needs to drag and drop components one by one (generally this works OK, but it may occasionally get stuck and a page refresh will be required). A Properties section is available to enter details for the selected component.

The cool part about it: we can define business logic rules, for example to show/hide the customer field based on trip type, etc. One of the rules below shows how a REST service can be called inline and the REST response applied to a drop-down field. It is pretty simple to fetch data from an external REST service and use it for a Web Form component: just invoke the REST service and assign the collection to a component supporting an array of data:


If the REST call requires parameters to be passed through the URL, this is also possible - see the example below, a list of cities based on the selected country in the Web Form:


The Web Form works pretty well in Preview mode; it even calls REST services and shows real data. Validation messages for the built-in checks are displayed nicely. Here I'm selecting a country from the list populated by the REST service:


Based on the selection in the Country drop-down, a filtered list of Cities becomes available (another REST call):


Good news - most of the Oracle Process Cloud interface itself is implemented with ADF. Here you can see a section listing all Web Forms available for the current process:


I have implemented the REST service for the Countries and Cities lists in ADF BC. There are two custom methods in the AM implementation to fetch the data and transform it into the list format:


REST resources are defined through annotations and are configured to produce JSON data; this allows Oracle Process Cloud to parse the data automatically:


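For readers who have not worked with such annotations before, here is a minimal JAX-RS-style sketch of what an annotated resource producing JSON could look like. The class name, resource paths and hard-coded country list are my own illustrative assumptions; they are not taken from the sample application shown above.

import java.util.Arrays;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("restservice")
public class CountriesResource {

    @GET
    @Path("countries")
    @Produces(MediaType.APPLICATION_JSON)   // JSON output lets the Web Form consume the collection automatically
    public List<String> getCountries() {
        // In a real AM implementation this would query the Countries view object
        return Arrays.asList("France", "Germany", "United Kingdom");
    }
}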
If you want to check how the REST service is implemented in ADF BC, you can run a query in Postman to retrieve a list of all countries from the HR schema:


Another query to retrieve a list of cities, by country:


Download the sample application with the ADF BC REST service - RestADFBCApp.zip.
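If you would rather verify the deployed service from code than from Postman, a plain Java client along these lines would do. The host, port and context path below are assumptions for illustration; adjust them to wherever the application is deployed.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestServiceCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; replace host, port and context root with your own deployment
        URL url = new URL("http://localhost:7101/RestADFBCApp/resources/restservice/countries");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // prints the JSON payload returned by the service
            }
        }
    }
}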

RMAN -- 4 : Recovering from an Incomplete Restore

Hemant K Chitale - Fri, 2015-07-03 23:22
What do you do if a RESTORE fails mid-way ?  Do you need to rerun the whole restore ?  If it is a very large database, it could take [many ?] hours.

RMAN is "smart" enough to detect datafiles that have been restored and not re-attempt a restore.

UPDATE 08-Jul-15 :  For a database with Oracle Managed Files  (OMF), please see Oracle Support Note Doc ID 1621319.1  {Thus, see my next blog post on Recovering from an Incomplete Restore with OMF Files}

This example below is for non-OMF files.

Here, I begin a database restore.

 
RMAN> restore controlfile from autobackup;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK

recovery area destination: /NEW_FS/oracle/FRA
database name (or database unique name) used for search: HEMANTDB
channel ORA_DISK_1: AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/home/oracle/app/oracle/oradata/orcl/control01.ctl
output file name=/home/oracle/app/oracle/flash_recovery_area/orcl/control02.ctl
Finished restore at 04-JUL-15

RMAN>
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
Starting implicit crosscheck backup at 04-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 04-JUL-15

Starting implicit crosscheck copy at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2
Finished implicit crosscheck copy at 04-JUL-15

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp

using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00004 to /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:02:34
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:35
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3777
Session ID: 1 Serial number: 9

[oracle@localhost ~]$

After having restored a few datafiles, the restore failed on being disconnected from the database. (The  server or database instance has crashed).  Since the controlfile has been restored, I can bring up the database in MOUNT mode and then re-attempt a RESTORE DATABASE.

[oracle@localhost ~]$ rman target sys/oracle@orcl

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Jul 4 12:56:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK

skipping datafile 3; already restored to file /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
skipping datafile 4; already restored to file /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:02:36
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00010 to /home/oracle/app/oracle/oradata/orcl/APEX_2614203650434107.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /home/oracle/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:04:02
Finished restore at 04-JUL-15

RMAN>

RMAN detects that datafiles 3 (undotbs01.dbf) and 4 (users01.dbf) had already been restored.
 If you look at the previous RESTORE run, you can see that these were restored by Channel ORA_DISK_2. The first channel ORA_DISK_1 had started restoring system01.dbf but hadn't completed restoring the datafile when the restore crashed. That restore of datafile 1 (system01.dbf) had to be redone.

 (Another thing to note : Oracle doesn't necessarily restore datafiles in the order of file_id (file#) ! There really is no ORDER BY for a RESTORE)

RMAN> recover database;

Starting recover at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery

archived log for thread 1 with sequence 628 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
archived log for thread 1 with sequence 629 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
archived log for thread 1 with sequence 630 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
archived log for thread 1 with sequence 631 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
archived log for thread 1 with sequence 632 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
archived log for thread 1 with sequence 633 is already on disk as file /home/oracle/app/oracle/oradata/orcl/redo03.log
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc thread=1 sequence=628
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc thread=1 sequence=629
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc thread=1 sequence=630
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc thread=1 sequence=631
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc thread=1 sequence=632
archived log file name=/home/oracle/app/oracle/oradata/orcl/redo03.log thread=1 sequence=633
media recovery complete, elapsed time: 00:00:02
Finished recover at 04-JUL-15

RMAN>
RMAN> alter database open resetlogs;

database opened

RMAN>


UPDATE 07-Jul-15 :  Also see my earlier (year 2012) post "Datafiles not Restored -- using V$DATAFILE and V$DATAFILE_HEADER"  which also shows retrying a RESTORE DATABASE after a failure of restoring a datafile.  There, a single file in a BackupSet failed to restore.  Oracle didn't continue and try the other datafiles in that BackupSet.  I could either fix the error and retry the entire BackupSet (RESTORE DATABASE would have identified the right BackupSet containing those files) OR I could, as I did in that scenario, individually restore DataFiles from the BackupSet.

It can be a good idea to have your database backup consist of multiple BackupSets, using either multiple CHANNELs or FILESPERSET during the BACKUP.


You could also note, as an aside, that Log Sequence 633 was an online redo log file. RMAN automatically verifies that the online redo log files designated by the controlfile are present and uses them.

.
.
.

Categories: DBA Blogs

Log Buffer #430: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-03 12:56

This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
  • Query existing HBase tables with SQL using Apache Phoenix.
  • Even though WebLogic with Active GridLink are Oracle’s suggested approach to deploy Java applications that use Oracle Real Application Clusters (RAC), there might be scenarios in which you can’t make that choice (e.g. certification issues, licensing, library dependency, etc.).
  • OSB & MTOM: When to use Include Binary Data by Reference or Value.
  • Ever used SoapUI to test services on multiple environments? Then you probably ran in to the job of ever changing the endpoints to the hosts of the particular environment; development, test, acceptance, production (although I expect you wouldn’t use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project.

SQL Server:

  • Using DAX to create SSRS reports: The Basics.
  • Getting to know your customers better – cohort analysis and RFM segmentation in R.
  • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
  • Schema-Based Access Control for SQL Server Databases.
  • How to Fix a Corrupt MSDB SQL Server Database.

MySQL:

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
  • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
  • Using Cgroups to Limit MySQL and MongoDB memory usage.
  • Slave election is a popular HA architecture; the first MySQL/MariaDB toolkit to manage switchover and failover in a correct way was introduced by Yoshinori Matsunobu in MHA.
  • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #430: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

WebLogic Server (FMW) : Generating Thread Dumps using OS commands

Online Apps DBA - Fri, 2015-07-03 03:33

This post is coming from our Oracle Fusion Middleware Training where we cover Oracle WebLogic Server on Day 1. One of the performance issues commonly encountered in poorly written applications (or on a not so performant Fusion Middleware infrastructure) is Stuck Threads.

A Stuck Thread in WebLogic Server is a thread that keeps processing the same request for a very long time, longer than the configurable Stuck Thread Max Time in WebLogic.

Thread dumps are diagnostic information used to analyse and troubleshoot performance-related issues such as server hangs, deadlocks, slow running, and idle or stuck applications.

How to generate Thread dumps?
In this post, I will walk you through the steps to generate Thread dumps of a server using operating system (O.S.) commands.

1. Start the server from command line script (using nohup). Let us take managed server as an example for which we need to generate the thread dumps so start the server using script as shown below.
cd $DOMAIN_HOME/bin
nohup ./startManagedWeblogic.sh <Server_name> &

2. Now identify the PID (java Process ID) for the managed server using the below command:
ps auxwww | grep -i java | grep -i <server_name> (This command is for Solaris)

3. Now run the below command to create the thread dump.
kill -3 <PID>

(This sends signal 3, SIGQUIT, to the process whose dump we require. The signal causes the Java Virtual Machine to generate a stack trace of all threads in the process.)

This command will write the thread dump to the nohup.out file (in the directory from which we started the managed server).

4. Open the nohup.out file to see the generated thread dumps.
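As a supplementary note (not part of the original steps): if sending OS signals is not convenient, a similar dump can be produced from inside the JVM with the standard ThreadMXBean API. The class below is a minimal sketch of that approach; the class name is my own.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // true, true -> include locked monitors and locked synchronizers in the output
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            System.out.print(info.toString());   // one stack trace per thread
        }
    }
}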


Related Posts for WebLogic/FMW
  1. WebLogic Server (FMW) : Generating Thread Dumps using OS commands

The post WebLogic Server (FMW) : Generating Thread Dumps using OS commands appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Testing the just released PostgreSQL 9.5 Alpha in a docker container

Yann Neuhaus - Fri, 2015-07-03 00:15

On the 2nd of July the PostgreSQL Global Development Group released an alpha version of the upcoming PostgreSQL 9.5. The same day, Josh Berkus, another of those PostgreSQL core team members, released a docker image for testing this alpha release. It has never been easier to get started with PostgreSQL or to test new features.

Happy Birthday to oracle-base.com (sort-of)

Tim Hall - Thu, 2015-07-02 23:35

Today is another anniversary, but this time it’s the website, which is 15 years old.

OK. This is a bit of a cheat because:

  • The website originally had a different name, so you could say the website with its current name is 13 months younger, but it’s the same site, so whatever.
  • I don’t actually know the exact day the first page went online, but I do know the date I bought the original domain name (before the rename to oracle-base.com), so I know the first page was put up about now.

Anyway, July 3rd is from now on the official birthday of the website. Makes it easy to remember, because it’s the day after my birthday.

Cheers

Tim…

PS. For those that are interested, the blog was 10 years old last month. I do know the exact date for that because the posts are dated and you can read the first post. :)

Happy Birthday to oracle-base.com (sort-of) was first posted on July 3, 2015 at 6:35 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Seven Weeks with the Fitbit Surge

Oracle AppsLab - Thu, 2015-07-02 14:35

As my wearables odyssey continues, it’s time to document my time with the Fitbit Surge.

I ended up wearing the Surge a lot longer than I’d worn the Nike+ Fuelband, the Basis Peak and the Jawbone UP24 because June was a busy month, and I didn’t have time to switch.

For comparison’s sake, I suggest you read Ultan’s (@ultan) review of the Surge. He’s a hardcore fitness dude, and I’m much more a have-to-don’t-like-to exercise guy, which makes for a nice companion read.

As usual, this isn’t a review, more loosely-coupled observations. You can find lots of credible reviews of the Surge, billed as a “Super Watch” by the recently IPO’ed Fitbit, e.g. this one from Engadget.

Here we go.

The watch

As with most of the other wearables I’ve used, the Surge must be set up from software installed on a computer. It also requires the use of a weird USB doohickey for pairing, after which the watch firmware updates.


I get why they provide ways for people to sync to software installed on computers, but I wonder how many users really eschew the smartphone app or don’t have a smartphone.

Anyway, despite Fitbit Connect (the software you have to install) saying the firmware update process would take five to ten minutes, my update took much longer, around 30 minutes.

Physically, the Surge is chunky. Its shape reminds me of a door-stop, like a wedge. While this looks weird, it’s really a nice design idea, essentially tilting the display toward the user, making it easier to read at a glance.


I found wearing the device to be comfortable, although the rubber of the band did make my skin clammy after a while, see the Epilogue for more on that.

The display is easy to read in any light, and the backlight comes on automatically in low light conditions.

The Surge carries a water resistance rating of 5 ATM, which amounts to 50 meters of depth, but for some reason Fitbit advises against submerging it. Weird, right?

Not one to follow directions, I took the Surge in a pool with no ill effects. However, once or twice during my post-workout steam, the display did show some condensation under the glass. So, who knows?

The device interface is a combination of touches and three physical buttons, all easy to learn through quick experimentation.

The watch screens show the day’s activity in steps, calories burned, miles, and floors climbed. It also tracks heart rate via an optical heart rate sensor.

In addition, you can start specific activity tracking from the device including outdoor running with GPS tracking, which Ultan used quite a lot, and from what I’ve read, is the Surge’s money feature. I only run indoors on a treadmill (lame), so I didn’t test this feature.

The Surge does have a treadmill activity, but I found its mileage calculation varied from the treadmill’s, e.g. 3.30 miles on the treadmill equated to 2.54 on the Surge. Not a big deal to me, especially given how difficult tracking mileage would be for a device to get right through sensors.

Speaking of, the Surge packs a nice array of sensors. In addition to the aforementioned GPS and optical heart rate sensor, it also sports a 3-axis accelerometer and a 3-axis gyroscope.

The Surge tracks sleep automatically, although I’m not sure how. Seemed to be magically accurate though.

Fitbit advertises the Surge’s battery life as seven days, but in practice, I only got about four or five days per charge. Luckily, Fitbit will inform you when the battery gets low via app notifications and email, both of which are nice.

Happily, the battery charges very quickly, albeit via a proprietary charging cord. Lose that cord, and you’re toast. I misplaced mine, which effectively ended this experiment.

The app and data

As Ultan mentioned in his post, the Fitbit Aria wifi scale makes using any Fitbit device better. I’ve had an Aria for a few years, but never really used it. So, this was a great chance to try it with the Surge.

Fitbit provides both mobile and web apps to track data.

I mostly used the mobile app which shows a daily view of activity, weight and food consumption, if you choose to track that manually. Tapping any item shows you details, and you can swipe between days.


It’s all very well-done, easy to use, and they do a nice job of packing a lot information into a small screen.

From within the app, you can set up phone notifications for texts and calls, a feature I really liked from wearing the Basis Peak.

Noel, send me a text message.

Unfortunately, I only got notified about half the time, not ideal, and I’m not the only one with this issue. Danny Bryant (@dbcapoeira) and I chatted about our Surge experiences at Kscope, and he mentioned this as an issue for him as well.

Fitbit offers Challenges to encourage social fitness competition, which seems nice, but not for me. There are badges for milestones too, like walking 500 miles, climbing 500 floors, etc. Nice.

Sleep tracking on the mobile app is pretty basic, showing number of times awake and number of times restless.

Fitbit’s web app is a dashboard showing the same information in a larger format. They hide some key insights in the Log section, e.g. the sleep data in there is more detailed than what the dashboard shows.

Fitbit Dashboard

Fitbit Log - Track My Sleep

Fitbit Log - Track My Activities

I have to say I prefer the Jawbone approach to viewing data; they only have a mobile app which dictates the entire experience and keeps it focused.

Fitbit sends weekly summary emails too, so yet another way to view your data. I like the emails, especially the fun data point about my average time to fall asleep for the week, usually zero minutes. I guess this particular week I was well-rested.

fitbitEmail

I did have some time zone issues when I went to Florida. The watch didn’t update automatically, and I did some digging and found a help article about traveling with your Fitbit with this tip:

Loss of data can occur if the “Set Automatically” timezone option in the app’s “Settings” is on. Toggle the “Set Automatically” timezone option to off.

So for the entire week in Hollywood, my watch was three hours slow, not a good look for a watch.

And finally, data export out of Fitbit’s ecosystem is available, at a cost. Export is a premium feature: “Your data belongs to you!” for $50 a year. Some consolation though, they offer a free trial for a week, so I grabbed my data for free, at least this time.

Overall, the Surge compares favorably to the Basis Peak, but unlike the Jawbone UP24, I didn’t feel sad when the experiment ended.

Epilogue

Perhaps you’ll recall that Fitbit’s newer devices have been causing rashes for some users. I’m one of those users. I’m reporting this because it happened, not as an indictment of the device.

I wore the Surge for seven weeks, pretty much all the time. When I took it off to end the experiment, my wife noticed a nasty red spot on the outer side of my arm. I hadn’t seen it, and I probably would never have noticed.


It doesn’t itch or anything, just looks gnarly. After two days, it seems to be resolving, no harm, no foul.

The rash doesn’t really affect how I view the device, although if I wear the Surge again, I’ll remember to give my skin a break periodically.

One unexpected side effect of not wearing a device as the rash clears up is that unquantified days feel weird. I wonder why I do things if they’re not being quantified. Being healthy for its own sake isn’t enough. I need that extra dopamine from achieving something quantifiable.

Strange, right?

Find the comments.

Oracle Priority Support Infogram for 02-JUL-2015

Oracle Infogram - Thu, 2015-07-02 13:53

Oracle Support
Two good items from the My Oracle Support blog:
Three Scenarios for Using Support Identifier Groups
Stay Up to Date with Key My Oracle Support Resources of Your Choice using Hot Topics.
A Guide to Providing a Good Problem Description When Raising Service Requests, from the Communications Industry Support Blog.
MySQL
MySQL Enterprise Monitor 3.0.22 has been released, from the MySQL Enterprise Tools Blog.
Big Data
Identifying Influencers with the Built-in Page Rank Analytics in Oracle Big Data Spatial and Graph, from Adding Location and Graph Analysis to Big Data.
WebLogic
Additional new material WebLogic Community, from WebLogic Partner Community EMEA.
Improve SSL Support for Your WebLogic Domains, from Proactive Support - Identity Management.
Fusion Middleware
Calling Fusion SOAP Services from Ruby, from Angelo Santagata's Blog.
JDBC
Using Universal Connection Pooling (UCP) with JBoss AS, from JDBC Middleware.
OBIEE
OBIEE SampleApp V506 is Available, from Business Analytics - Proactive Support.
Ops Center
Upgrading to 12.3, from the Ops Center blog.
Identity Management
Configuring OAM SSO for ATG BCC and Endeca XM, from Proactive Support - Identity Management.
And from the same blog: Monitoring OAM Environment
Health Sciences
Health Sciences Partner Support Best Practices & Resources, from Chris Warticki's Support Blog.
Primavera
New Primavera P6 Release 15.1 Patch Set 2, from the Primavera Support Blog.
EBS
From the Oracle E-Business Suite Support blog:
Self-Evaluation for High Volume Order Import (HVOP)
General Ledger Balances Corruption - Causes, Suggestions, Solutions
Revaluation in Fixed Assets
iSetup? Use It Even Inside The Same Instance!
Using Translation and Plan to Upgrade? Don't Miss This!

Continue Your Work In Process!

Query existing HBase tables with SQL using Apache Phoenix

Kubilay Çilkara - Thu, 2015-07-02 13:25
Spending a bit more time with Apache Phoenix, and reading my previous post again, I realised that you can use it to query existing HBase tables. That is, NOT tables created using Apache Phoenix, but native tables in HBase - the columnar NoSQL database in Hadoop.

I think this is cool as it gives you the ability to use SQL on an HBase table.

To test this, let's say you login to HBase and you create an HBase table like this:

> create 'table2', {NAME=>'cf1', VERSIONS => 5}

table2 is a simple table in HBase with one column family, cf1. Now let's put some data into this HBase table.

> put 'table2', 'row1', 'cf1:column1', 'Hello SQL!'

then maybe add another row

> put 'table2', 'row4', 'cf1:column1', 'London'

Now, in Phoenix, all you have to do is create a database view for this table and query it with SQL. The database view will be read-only. How cool is that? You don't even need to physically create the table, move the data to Phoenix or convert it; a database view is sufficient, and via Phoenix you can query the HBase table with SQL.

In Phoenix you create the view for table2 using the same name. As you can see below, the DDL used to create the view is case sensitive, and if you created your HBase table name in lower case you will have to put the name between double quotes.

So login to Phoenix and create the "table2" view like this:

> create view "table2" ( pk VARCHAR PRIMARY KEY, "cf1"."column1" VARCHAR );

And here is how you then query it in Phoenix:


SQL Query on Phoenix
Tremendous potential here: imagine all those existing HBase tables which you can now query with SQL. More, you can point your Business Intelligence tools, reporting tools and other tools which work with SQL at HBase and query it as if it were just another SQL database.
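To illustrate the point, here is a minimal JDBC sketch that queries the view created above from plain Java. It assumes the Phoenix client JAR is on the classpath and that the ZooKeeper quorum is on localhost; adjust the connection URL for your cluster.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixQuery {
    public static void main(String[] args) throws Exception {
        // The Phoenix client registers itself as a JDBC driver; the URL points at the ZooKeeper quorum
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // The table name is quoted because the underlying HBase table was created in lower case
             ResultSet rs = stmt.executeQuery("SELECT * FROM \"table2\"")) {
            while (rs.next()) {
                System.out.println(rs.getString("PK") + " -> " + rs.getString("column1"));
            }
        }
    }
}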

A solution worth investigating further? It definitely got me blogging in the evenings again.

To find out more about Apache Phoenix visit their project page https://phoenix.apache.org/



Categories: DBA Blogs

OSB & MTOM: When to use Include Binary Data by Reference or Value

Darwin IT - Thu, 2015-07-02 09:23
As you can see from my blog posts of the last few days, I've been busy implementing a service using MTOM in OSB. When enabling XOP/MTOM Support you'll have to choose between:
  • Include Binary Data by Reference
  • Include Binary Data by Value

I used the first option because I want to process the content in another service on another WLS domain. However, in my first service, which catches the initial request, I want to do an XSD validation. And although the rest of the message is valid, the Validate activity raises an exception with the message: 'Element not allowed: binary-content@http://www.bea.com/wli/sb/context in element Bestandsdata....'.

Looking into this problem I came across this section in the doc, which states that you should use 'Include Binary Data by Value' when you want to:
  • transfer your data to a service that does not support MTOM
  • validate your message
Now, what does the other option do? OSB parses the root of the inbound MIME message in search of xop:Include tags. When found, it Base64-encodes the binary content and replaces the tags with the Base64 string.

Now, although I want exactly that in the end, I don't want that at this point of the service. I want to transform my message, without the Base64-strings. And I want to encode the data only on my other domain.

So I just want to ignore validation failures that start with the 'Element not allowed: binary-content@...' message. To do so I came up with the following expression:
fn:count($fault/ctx:details/con:ValidationFailureDetail/con:message[not(fn:starts-with(text(),'Element not allowed: binary-content@http://www.bea.com/wli/sb/context in element Bestandsdata'))])>0 
Add an If-Then-Else activity to your Error Handler Stage with this expression. Add the following Namespace:
  • Prefix: con
  • Namespace:  http://www.bea.com/wli/sb/stages/transform/config

If the expression evaluates to true, then you have in fact an invalid XML-message. In the else branch you can add a Resume to ignore the exception.

This expression might come in handy in other situations as well.

D2L Again Misusing Academic Data For Brightspace Marketing Claims

Michael Feldstein - Thu, 2015-07-02 05:56

By Phil HillMore Posts (342)

At this point I’d say that we have established a pattern of behavior.

Michael and I have been quite critical of D2L and their pattern of marketing behavior that is misleading and harmful to the ed tech community. Michael put it best:

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.

Well, here’s the latest. John Baker put out a blog called This Isn’t Your Dad’s Distance Learning Program with this theme:

But rather than talking about products, I think it’s important to talk about principles. I believe that if we’re going to use education technology to close the attainment gap, it has to deliver results. That — as pragmatic as it is — is the main guiding principle.

The link about “deliver results” leads to this page (excerpted as it existed prior to June 30th, for reasons that will become apparent).

Why Brightspace

Why Brightspace? Results.

So the stage is set – use ed tech to deliver results, and Brightspace (D2L’s learning platform, or LMS) delivers results. Now we come to the proof, including these two examples.

CSULB UWM Results

According to California State University-Long Beach, retention has improved 6% year-over-year since they adopted Brightspace.[snip]

University of Wisconsin-Milwaukee reported an increase in the number of students getting A’s and B’s in Brightspace-powered courses by over 170%

Great results, no? Let’s check the sources. Ah . . . clever marketing folks – no supporting data or even hyperlinks to learn more. Let’s just accept their claims and move along.

. . .

OK, that was a joke.

CSU Long Beach

I contacted CSU Long Beach to learn more, but I could find no one who knew where this data came from or even that D2L was making this claim. I shared the links and context, and they went off to explore. Today I get a message saying that the issue has been resolved, but that CSU Long Beach would make no public statements on the matter. Fair enough – the observations below are my own.

If you now look at that Results page now, the CSU Long Beach claim is no longer there – down the memory hole[1] with no explanation, replaced by a new claim about Mohawk College.

Mohawk UWM Results

While CSU Long Beach would not comment further on the situation, there are only two plausible explanations for the issue being resolved by D2L taking down the data. Either D2L was using legitimate data that they were not authorized to use (best case scenario) or D2L was using data that doesn’t really exist. I could speculate further, but the onus should be on D2L since they are the ones who made the claim.

UW Milwaukee

I also contacted UW Milwaukee to learn more, and I believe the data in question is from the U-Pace program which has been fully documented.[2][3]

The U-Pace instructional approach combines self-paced, mastery-based learning with instructor-initiated Amplified Assistance in an online environment.

The control group was traditionally-taught (read that as large lecture classes) for Intro to Psychology.

From the EDUCAUSE Quarterly article on U-Pace, for disadvantaged students the number of A’s and B’s increased 163%. This is the closest data I can find to back up D2L’s claim of 170% increase.

U-Pace results EQ

There are three immediate problems here (ignoring the fact that I can’t find improvements of more than 170% – I’ll take 163%).

  1. First, the data claim is missing the context of “for underprepared students” who exhibited much higher gains than prepared students. That’s a great result for the U-Pace program, but it is also important context to include.
  2. The program is an instructional change, moving from large lecture classes to self-paced, mastery-learning approach. That is the intervention, not the use of the LMS. In fact, D2L was the LMS used in both the control group and the U-Pace treatment group.
  3. The program goes out of its way to call out the minimal technology needed to adopt the approach, and they even list Blackboard, Desire2Learn, and Moodle as examples of LMS’s that work with the following conditions:

U-Pace LMS Reqs

This is an instructional approach that claims to be LMS neutral with D2L’s Brightspace used in both the control group and treatment group, yet D2L positions the results as proof that Brightspace gets results! It’s wonderful that Brightspace LMS worked during the test and did not get in the way, but that is a far cry from Brightspace “delivering results”.

The Pattern

We have to now add these two cases to the Lone Star College and LeaP examples. In all cases, there is a pattern.

  1. D2L makes a marketing claim implying that its LMS Brightspace delivers results, referring to academic outcomes data but providing no supporting data or references.
  2. I contact school or research group to learn more.
  3. The data is either misleading (the treatment is not LMS usage but an instructional approach, adaptive learning technology, or student support software) or just plain wrong (with data taken down).
  4. In all cases, the results could have been presented honestly, showing the appropriate context, links for further reading, and explanation of the LMS role. But they were not presented honestly.
  5. e-Literate blog post almost writes itself.
  6. D2L moves on to make their next claim, with no explanations.

I understand that other ed tech vendors make marketing claims that cannot always be tied to reality, but these examples cross a line. They misuse and misrepresent academic outcomes data – whether based on public research or internal research – and essentially take credit for their technology “delivering results”.

This is the misuse of someone else’s data for corporate gain. Institutional data. Student data. That is far different than using overly-positive descriptions of your own data or subjective observations. That is wrong.

The Offer

For D2L company officials, I have an offer.

  1. If you have answers or even corrections about these issues, please let us know through your own blog post or comments to this blog.
  2. If you find any mistakes in my analysis, I will write a correction post.
  3. We are happy to publish any reply you make here on e-Literate.
  [1] Their web page does not allow archiving with the Wayback Machine, but I captured screenshots in anticipation of this move.
  [2] While I assume this claim derives from U-Pace, I am not sure. It is the closest example of real data that I could find, thanks to a helpful tip from UW-M staff. I’ll give D2L the benefit of the doubt despite their lack of reference.
  [3] And really, D2L marketing staff should learn how to link to external sources. It’s good Internet practice.

The post D2L Again Misusing Academic Data For Brightspace Marketing Claims appeared first on e-Literate.

Set environment properties in SoapUI (freeware)

Darwin IT - Thu, 2015-07-02 04:26
Ever used SoapUI to test services on multiple environments? Then you have probably run into the chore of repeatedly changing the endpoints to the hosts of a particular environment: development, test, acceptance, production (although I expect you wouldn't use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project. But what if you want to test against multiple services, or have to call a service on one system with the result of another during a testcase? You can even have testcases that mock services called by your (BPEL/BPM) process and call the process back to have it proceed to a further stage. Then you can end up having multiple endpoints per environment.

You can set multiple endpoints on a request and toggle between them. But you'll have to do that for every request.

SoapUI, however, supports the use of properties in endpoints. So you can set up different host properties and URI properties on the project:
In this case you see that I have one property for the Service URI, the part of the URL after host:port, several '...Host' properties, one for each separate environment, and one for the actual host.

As said, you can have a property-based endpoint like this:
So I have one single endpoint defined based on:
http://${#Project#CSServiceHost}/${#Project#CSServiceURI}
Here you see that the endpoint is based on two properties: ${#Project#CSServiceHost} and ${#Project#CSServiceURI}. In those properties, '#Project#' refers to the level in SoapUI at which the properties are defined. You can also refer to #TestSuite#, #TestCase#, etc.
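For example, the scope prefix determines where SoapUI looks up a property (the TestSuite and TestCase property names below are hypothetical):

${#Project#CSServiceHost} - a property defined on the project
${#TestSuite#SomeSuiteHost} - a property defined on the enclosing TestSuite
${#TestCase#SomeCaseHost} - a property defined on the enclosing TestCase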

Now you could manually copy and paste the host of the particular environment into the actual host property, but that is error-prone when dealing with multiple endpoints.
What I did was create a separate TestSuite called 'ProjectSettings'. In there I created a testcase per environment: 'SetLocalHosts', 'SetDevHosts', etc. In each testcase I created a PropertyTransfer that transfers the particular environment host property to the actual host property:

You can create a property transfer for each applicable host in your environment. You can also enhance the testcase with Groovy scripts to determine the properties at run-time, as sketched below. You could even call a generic TestCase from there.
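Here is a minimal sketch of that Groovy option, assuming the CSServiceHost and CSServiceDevHost property names used above (rename them to match your own project). A Groovy Script test step in the 'SetDevHosts' testcase could do the same work as the PropertyTransfer:

// Groovy Script test step: copy the dev host property into the actual host property.
// The property names are assumptions based on the example above.
def project = testRunner.testCase.testSuite.project
def devHost = project.getPropertyValue("CSServiceDevHost")
project.setPropertyValue("CSServiceHost", devHost)
log.info("CSServiceHost set to " + devHost)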

Running the particular testcase before your tests will set up your SoapUI project for the target environment in one go.

Maybe I'll enhance this further in my projects, but for now I find this neat. However, it would have been nice if SoapUI supported different environments, each with its own applicable hostnames/URLs, and let you select a target environment at project level using a drop-down list.
Also, it would be nice to have custom scripts (like macros) at project level that could be coupled to a button in the button bar, instead of how I do it above.
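Until something like that exists, a rough workaround is to approximate it with a single generic switcher testcase. The sketch below assumes a hypothetical 'Environment' project property and a 'CSService<Env>Host' naming convention for the per-environment host properties; neither is a built-in SoapUI feature:

// Generic environment switcher (sketch). Reads the 'Environment' project property
// (e.g. 'Local', 'Dev', 'Test', 'Acc') and copies the matching host property
// into the actual CSServiceHost property used by the endpoints.
def project = testRunner.testCase.testSuite.project
def env = project.getPropertyValue("Environment") ?: "Local"
def host = project.getPropertyValue("CSService" + env + "Host")
if (host) {
    project.setPropertyValue("CSServiceHost", host)
    log.info("Switched CSServiceHost to " + env + ": " + host)
} else {
    log.warn("No host property found for environment '" + env + "'")
}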

Streamline Oracle Development with Cloud Services

Angelo Santagata - Thu, 2015-07-02 04:24
Streamline Java Development with Cloud Services
On-Demand Webinar Replay
Learn to deliver Java applications to market faster. Reduce hardware and software costs for new development and testing environments. Improve DevOps efficiency. Build, test, and run enterprise-grade applications in the Cloud and on premises.

Listen to this webinar replay with development expert James Governor, co-founder of RedMonk, and Daniel Pahng, President and CEO of mFrontiers, LLC, an ISV with hands-on experience developing enterprise mobility and Internet of Things (IoT) solutions, as they present this webcast on developing applications in the cloud. Listen today! For more information:

July 2015 Oracle Corporation - All rights reserved