Feed aggregator

RFM Analysis in Oracle BI Apps

Dylan's BI Notes - Fri, 2015-04-24 19:16
I wrote the article RFM Analysis around 7 years ago. We recently posted a much more detailed explanation of how Oracle BI Apps implements this concept in the product: Customer RFM Analysis. RFM-related customer attributes are good examples of aggregated performance metrics, as described in this design tip from the Kimball Group: Design Tip #53: Dimension Embellishments […]
Categories: BI & Warehousing

Advanced Oracle Troubleshooting Guide – Part 12: control file reads causing enq: SQ – contention waits?

Tanel Poder - Fri, 2015-04-24 17:23

Vishal Desai systematically troubleshot an interesting case where the initial symptoms of the problem showed a spike of enq: SQ – contention waits, but he dug deeper and found the root cause to be quite different. He followed the blockers of waiting sessions manually to reach the root cause – and also used my @ash/ash_wait_chains.sql and @ash/event_hist.sql scripts to extract the same information more conveniently (note that he had modified the scripts to take AWR snap_ids as time range parameters instead of the usual date/timestamp):

Definitely worth a read if you’re into troubleshooting non-trivial performance problems :)

Related Posts

List listeners and services from the instance

Yann Neuhaus - Fri, 2015-04-24 11:35

Want to know all your listeners - including scan listeners - and the services they listen for? It is possible from the instance, with the (undocumented) view V$LISTENER_NETWORK, which has been available since 11.2.
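For reference, a minimal sketch of querying it (the view is undocumented, so the column names used here - NETWORK_NUMBER, TYPE and VALUE - may vary between versions; check DESC V$LISTENER_NETWORK on your instance first):

```sql
-- Listeners and the services registered with them, as seen from the instance
select network_number, type, value
from   v$listener_network
order  by network_number, type;
```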

Parallel Execution -- 4 Parsing PX Queries

Hemant K Chitale - Fri, 2015-04-24 09:20
Unlike "regular" Serial Execution queries that undergo only 1 hard parse and multiple soft parses on repeated execution, Parallel Execution queries actually are hard parsed by each PX Server plus the co-ordinator at each execution.  [Correction, as noted by Yasin in his comment : Not hard parsed, but separately parsed by each PX Server]

Here's a quick demo.

First, I start with a Serial Execution query.

[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Fri Apr 24 22:53:55 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>set serveroutput off
HEMANT>alter table large_table noparallel;

Table altered.

HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5ys3vrapmbx6w, child number 0
-------------------------------------
select count(*) from large_table

Plan hash value: 3874713751

--------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |       | 18894 (100)|          |
|   1 |  SORT AGGREGATE    |             |     1 |            |          |
|   2 |   TABLE ACCESS FULL| LARGE_TABLE |  4802K| 18894   (1)| 00:03:47 |
--------------------------------------------------------------------------


14 rows selected.

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
1 1 0 select count(*) from large_table

HEMANT>
HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
2 2 0 select count(*) from large_table

HEMANT>
HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
3 3 0 select count(*) from large_table

HEMANT>
HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
4 4 0 select count(*) from large_table

HEMANT>
HEMANT>select count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '5ys3vrapmbx6w';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
5 5 0 select count(*) from large_table

HEMANT>


5 executions with no additional parse overheads.

Next, I run Parallel Execution.

[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Fri Apr 24 23:04:45 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>set serveroutput off
HEMANT>alter table large_table parallel 4;

Table altered.

HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 4wd97vn0ytfmc, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from large_table

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name        | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |             |       |  1311 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE        |             |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |             |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000    |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |             |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |             |  4802K|  1311   (1)| 00:00:16 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL| LARGE_TABLE |  4802K|  1311   (1)| 00:00:16 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
1 5 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
2 10 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
3 15 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
4 20 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from large_table;

COUNT(*)
----------
4802944

HEMANT>select
2 executions, parse_calls, invalidations, sql_fulltext
3 from v$sqlstats
4 where sql_id = '4wd97vn0ytfmc';

EXECUTIONS PARSE_CALLS INVALIDATIONS SQL_FULLTEXT
---------- ----------- ------------- --------------------------------------------------------------------------------
5 25 0 select /*+ PARALLEL */ count(*) from large_table

HEMANT>


Each of the 5 executions had parse overheads for each PX server.
Note : The 5 "PARSE_CALLS" per execution are a result of 4 PX servers plus the co-ordinator.  You might see a different number in your tests.
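If you want to spot this pattern on your own system, here is a hedged sketch of a query against V$SQLSTATS (the parses_per_exec alias is mine; statements parsed once per PX server will show a ratio well above 1, while ordinary serial statements tend toward 1):

```sql
-- Statements whose parse calls grow faster than their executions,
-- a typical signature of Parallel Execution
select sql_id,
       executions,
       parse_calls,
       round(parse_calls / nullif(executions, 0), 1) as parses_per_exec
from   v$sqlstats
where  executions > 0
and    parse_calls > executions
order  by parses_per_exec desc;
```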



Categories: DBA Blogs

Pillars of PowerShell: Profiling

Pythian Group - Fri, 2015-04-24 06:53
Introduction

This is the fourth blog post continuing the series on the Pillars of PowerShell. The previous post in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
Profiles

This is something I mentioned in the second post, and it can be a great way to keep up with those one-liners you use most often in your work. A profile in PowerShell is like using startup scripts in an Active Directory environment: you can “pre-run” things on a domain computer at startup or when a user logs into the machine. In a PowerShell profile you can “pre-load” information, modules, custom functions, or any command you want to execute in the PowerShell console. There is a separate profile for the console and for the PowerShell ISE. Your profile is basically a PowerShell script saved into a specific location under your Documents folder. The path to this profile is kept in a built-in variable, aptly named $PROFILE.
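Incidentally, $PROFILE is a string that also carries the paths of the other profile scopes (all users / all hosts) as note properties; a quick way to inspect them, sketched here from standard PowerShell behavior:

```powershell
# Show the four profile paths PowerShell knows about
$PROFILE.AllUsersAllHosts
$PROFILE.AllUsersCurrentHost
$PROFILE.CurrentUserAllHosts
$PROFILE.CurrentUserCurrentHost   # this is what $PROFILE itself resolves to

# Check whether your profile file actually exists yet
Test-Path $PROFILE
```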

Output of the $PROFILE variable


I am using a Windows Azure VM that I just built, so I have not created any profiles on this machine. The path is kept within this variable but that does not mean it actually exists. We will need to create this file and the easiest method to do this is to actually use a cmdlet, New-Item. You can use this cmdlet to create files or folders. You can execute this one-liner to generate the PowerShell script in the path shown above:

New-Item $PROFILE -ItemType File -Force

Now, from here you can use another cmdlet, Invoke-Item, to open the file in whatever editor is set as the default for ".ps1" files on your machine. This might be Notepad, or you can set it to be the PowerShell ISE as well. Just execute the cmdlet followed by the $PROFILE variable (e.g. Invoke-Item $PROFILE).

One of the things I picked up on when I started using my profile more often is that you can actually format your console. More specifically, I like to shorten the "PS C:\Users\melton_admin" portion; if you start working in directories that are 3 or 4 folders deep, this can take up a good portion of your prompt. I came across a function whose original poster I truthfully cannot find, so apologies for not attributing it.

function prompt
{
    if ($host.UI.RawUI.CursorPosition.Y -eq 0) {
        "<$pwd> `n`r" + "PS[" + $host.UI.RawUI.CursorPosition.Y + "]> "
    }
    else {
        "PS[" + $host.UI.RawUI.CursorPosition.Y + "]> "
    }
}

Any function you save in your profile can be called at any time in the PowerShell console, once the profile is loaded. However, if I want an action to take effect when the profile loads, I simply call the function at the end of the profile script. I add these two lines and ensure they are always the last two lines of my profile; anything else I add goes between the function above and these two lines:

prompt;
clear;

I use the clear command (just like using cls at the DOS prompt) to get rid of any output my functions or commands may produce; it just starts me out on a fresh, clean slate.

If you want to test your profile script, you can force it to load into your current session by dot-sourcing it: .$profile. That is, enter "period $profile" and hit Enter. Take note that, since I use the clear command in my profile, if any cmdlet or one-liner I add outputs an error, you will not see it. When I have issues like this, I simply comment the line out of my profile. You can put comments into your script using the pound sign (#); putting it before a command makes that line be ignored and not run.

Set-ExecutionPolicy

PowerShell is secure by default, so in certain operating system environments, when you try to run your profile script above, you may get an error like this:

ExecutionPolicyError

This means pretty much what it says, execution of scripts is disabled. To enable this you need to use the Set-ExecutionPolicy cmdlet with a few parameters. You can find the documentation for this if you want by looking at the “about_Execution_Policies” in PowerShell or follow the link in the error. The documentation will explain the various options and policies you can set. The command below will allow you to execute scripts in your console and let it load your profile scripts:

Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
Summary

In this post I pointed out the following cmdlets and concepts:

  • New-Item
  • Invoke-Item
  • Commenting in your script
  • Set-ExecutionPolicy

These are fairly basic areas of PowerShell and putting each one into your favorite search engine should lead you to a plentiful list of reading material. This post by no means encompassed all the potential you can do with Profiles, but was simply meant to get you started, and hopefully excited about what can be done.

Categories: DBA Blogs

BI Forum 2015 Preview — OBIEE Regression Testing, and Data Discovery with the ELK stack

Rittman Mead Consulting - Fri, 2015-04-24 06:18

I’m pleased to be presenting at both of the Rittman Mead BI Forums this year; in Brighton it’ll be my fourth time, whilst Atlanta will be my first, and my first trip to the city too. I’ve heard great things about the food, and I’m sure the forum content is going to be awesome too (Ed: get your priorities right).

OBIEE Regression Testing

In Atlanta I’ll be talking about Smarter Regression testing for OBIEE. The topic of Regression Testing in OBIEE is one that is – at last – starting to gain some real momentum. One of the drivers of this is the recognition in the industry that a more Agile approach to delivering BI projects is important, and to do this you need to have a good way of rapidly testing changes made. The other driver that I see is OBIEE 12c and the Baseline Validation Tool that Oracle announced at Oracle OpenWorld last year. Understanding how OBIEE works, and therefore how changes made can be tested most effectively, is key to a successful and efficient testing process.

In this presentation I’ll be diving into the OBIEE stack and explaining where it can be tested and how. I’ll discuss the common approaches and the relative strengths of each.

If you’ve not registered for the Atlanta BI Forum then do so now as places are limited and selling out fast. It runs May 14–15 with an optional masterclass on Wednesday 13th May from Mark Rittman and Jordan Meyer.

Data Discovery with the ELK Stack

My second presentation is at the Brighton forum the week before Atlanta, and I’ll be talking about Data Discovery and Systems Diagnostics with the ELK stack. The ELK stack is a set of tools from a company called Elastic, comprising Elasticsearch, Logstash and Kibana (E – L – K!). Data Discovery is a crucial part of the life cycle of acquiring, understanding, and exploiting data (one could even say, leverage the data). Before you can operationalise your reporting, you need to understand what data you have, how it relates, and what insights it can give you. This idea of a “Discovery Lab” is one of the key components of the Information Management and Big Data Reference Architecture that Oracle and Rittman Mead produced last year:

ELK gives you great flexibility to ingest data with loose data structures and rapidly visualise and analyse it. I wrote about it last year with an example of analysing data from our blog and associated tweets with data originating in Hadoop, and more recently have been analysing twitter activity using it. The great power of Kibana (the “K” of ELK) is the ability to rapidly filter and aggregate data, as well as see a summary of values within a data field:

The second aspect of my presentation is still on data discovery, but “discovering data” within the logfiles of an application stack such as OBIEE. ELK is perfectly suited to in-depth diagnostics against dense volumes of log data that you simply could not handle within simple log viewers or Enterprise Manager, such as the individual HTTP requests and types of value passed within the interactions of a single user session:

By its nature of log streaming and full text search, ELK also lends itself well to near real time system monitoring dashboards reporting the status of systems including OBIEE and ODI, and I’ll be discussing this in more detail during my talk.

The Brighton BI Forum is on 7–8 May, with an optional masterclass on Wednesday 6th May from Mark Rittman and Jordan Meyer. If you’ve not registered for the Brighton BI Forum then do so now as places are very limited!

Don’t forget, we’re running a Data Visualisation Challenge at each of the forums, and if you need to convince your boss to let you go you can find a pre-written ‘justification’ letter here.

Categories: BI & Warehousing

Database landscape 2014 visualization

Marco Gralike - Fri, 2015-04-24 04:18
I saw this database landscape 2014 overview from “451 Research” with a very nice visualization…

Using the Oracle Developer Cloud Service for Git version management for JDeveloper/ADF apps

Shay Shmeltzer - Thu, 2015-04-23 10:35

The Oracle Developer Cloud Service (DevCS for short) provides a complete cloud-hosted development platform for your team. This makes it very easy to start adopting development best practices for your team, and adopt a more agile development approach.

If you haven't tried it yet, you should!

It's simple to get a free trial instance - just sign up for a trial of the Java cloud service (which, by the way, will take you through an Oracle ADF based registration wizard) and an instance of the Developer Cloud Service will be provisioned for you as part of the trial. No need for any additional machines or installations on your side.

I'm going to write a couple of blogs about the various features that DevCS provides such as build and continuous integration, but let's start with the very basic feature we all should be using - source code management.

Every project needs to do version management - even if you are a single developer - and with DevCS there is no server and network setup required. Create a new DevCS project and 10 seconds later you have your git server accessible from any computer that has internet access.

The demo below is using JDeveloper 12.1.3 and the sample summit ADF application that you can get from OTN. 

In the demo I start from scratch and demo how to

  • create a new DevCS project
  • check code into the git repository
  • branch my code to work on fixes
  • submit the changes back
  • how to do code review by team members
  • merge fixes to the master branch

 

Go ahead try it out with your project and your team.

If you are new to git (which has quickly become the new standard for source management), check out the Oracle A-Team blog entry that explains a good workflow for teamwork with git that you can adopt. 
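The demo steps above map to a handful of git commands. Here is a hedged sketch of that workflow; a local bare repository stands in for the DevCS-hosted one (with a real project you would clone the https URL that DevCS shows on your project page), and all names here are illustrative:

```shell
# Stand-in for the DevCS-hosted repository: a local bare repo
git init --bare /tmp/devcs-remote.git
git clone /tmp/devcs-remote.git /tmp/summit
cd /tmp/summit
git config user.email "dev@example.com"
git config user.name "Dev"

# First check-in
echo "summit app" > README
git add README
git commit -m "initial check-in"
git branch -M master           # normalize the branch name across git versions
git push origin master

# Branch off to work on a fix
git checkout -b fix/issue-1
echo "fix" >> README
git add README
git commit -m "fix issue 1"
git push origin fix/issue-1    # push the branch for team code review

# After review, merge the fix back to the master branch
git checkout master
git merge fix/issue-1
git push origin master
```

The same commands work unchanged against the hosted repository; only the clone URL differs.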

Have any further questions about using the Developer Cloud Service? Ask them on the DevCS community page

Categories: Development

Oracle Priority Support Infogram for 23-APR-2015

Oracle Infogram - Thu, 2015-04-23 10:31

Time to Patch!
From the Oracle E-Business Suite Technology blog:
Critical Patch Update for April 2015 Now Available
RDBMS
From That Jeff Smith: A Quick Hit on Database Auditing Support
CDBs with less options now supported in Oracle 12.1.0.2. from Update your Database – NOW!
PL/SQL
Optimizing the PL/SQL Challenge II: How to Figure Out the Root of the Problem, from All Things SQL.
BI
From the Oracle BI applications blog, Customer RFM Analysis.
Analytics
From Business Analytics - Proactive Support: New Whitepaper: OBIEE to Essbase Authentication Methods
Fusion
Managing Attachments Using Web Services, from Fusion Applications Developer Relations
WebLogic
The WebLogic Partner Community Newsletter April 2015, from WebLogic Partner Community EMEA.
SOA
Enable SOA Composer in SOA Suite, from SOA & BPM Partner Community Blog.
MAF
From Shay Shmeltzer's Weblog: Dynamically refresh a part of a page (PPR) in Oracle MAF
Ops Center
Disabling ASR for a specific asset, from the Ops Center blog.
OEM
April 2015 EM Recommended Patch List, from Enterprise Manager Best Practices.
EBS
From the Oracle E-Business Suite Support blog:
Webcast: Pick Release Move Order Related To OPM Production Batches
Whats New in the Procurement Approval Analyzer - Version 200.2
Webcast: Oracle Receivables Balance Forward Billing (BFB) Setup & Usage
SR Automation Explained
Are You Considering Item Web Services for Oracle Product Hub?
There's a New Tool in Oracle Payroll...it's the Payroll Dashboard!
Webcast: Enhancement Request to My Oracle Support Community
From the Oracle E-Business Suite Technology blog:
Quarterly EBS Upgrade Recommendations: April 2015 Edition
Best Practices for Testing EBS Endeca Applications
JRE 1.8.0_45 Certified with Oracle E-Business Suite
JRE 1.7.0_79 and 1.7.0_80 Certified with Oracle E-Business Suite

Java JRE 1.6.0_95 Certified with Oracle E-Business Suite

ALTER TABLE INMEMORY

Yann Neuhaus - Thu, 2015-04-23 06:52

In-Memory Column Store is amazing: it brings very good performance to full table scans. It's easy: just 'flip a switch' and you accelerate all reporting queries on your table, without thinking about what to index and how. But in this post, I would like to warn you about the consequences of just flipping that switch. The new full table scan plan will replace the old ones... even before the table is populated in memory...
I'm not sure that this is the expected behaviour. In my opinion the CBO should consider INMEMORY plans only once the population is done. But here is the example.

Test case

Here is the test case. I have a table DEMO with bitmap indexes on its columns:

12:04:54 SQL> create table DEMO compress as
12:04:54   2  with M as (select substr(dbms_random.string('U',1),1,1) U from dual connect by 10>=level)
12:04:54   3  select M1.U U1, M2.U U2, M3.U U3, M4.U U4 from M M1,M M2, M M3, M M4, (select * from dual connect by 1000>=level)
12:04:54   4  /
Table created.

12:05:00 SQL> create bitmap index DEMO_U1 on DEMO(U1);
Index created.
12:05:01 SQL> create bitmap index DEMO_U2 on DEMO(U2);
Index created.
12:05:03 SQL> create bitmap index DEMO_U3 on DEMO(U3);
Index created.
12:05:04 SQL> create bitmap index DEMO_U4 on DEMO(U4);
Index created.
And my test query on those columns:
12:05:05 SQL> alter session set statistics_level=all;
Session altered.
12:05:05 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected
with its execution plan:
12:05:06 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
SQL_ID  64skw45ghn5a0, child number 0
-------------------------------------
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 3881032911

---------------------------------------------------------------------------------------
| Id  | Operation                      | Name    | Starts | E-Rows | A-Rows | Buffers |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |         |      1 |        |      0 |       2 |
|   1 |  HASH UNIQUE                   |         |      1 |      2 |      0 |       2 |
|   2 |   TABLE ACCESS BY INDEX ROWID  | DEMO    |      1 |   4070 |      0 |       2 |
|   3 |    BITMAP CONVERSION TO ROWIDS |         |      1 |        |      0 |       2 |
|   4 |     BITMAP AND                 |         |      1 |        |      0 |       2 |
|   5 |      BITMAP MERGE              |         |      1 |        |      0 |       2 |
|*  6 |       BITMAP INDEX RANGE SCAN  | DEMO_U2 |      1 |        |      0 |       2 |
|*  7 |      BITMAP INDEX SINGLE VALUE | DEMO_U1 |      1 |        |      0 |       0 |
|*  8 |      BITMAP INDEX SINGLE VALUE | DEMO_U4 |      1 |        |      0 |       0 |
|   9 |      BITMAP OR                 |         |      1 |        |      0 |       0 |
|* 10 |       BITMAP INDEX SINGLE VALUE| DEMO_U3 |      1 |        |      0 |       0 |
|* 11 |       BITMAP INDEX SINGLE VALUE| DEMO_U3 |      1 |        |      0 |       0 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("U2">'X')
       filter("U2">'X')
   7 - access("U1"='A')
   8 - access("U4"='B')
  10 - access("U3"='A')
  11 - access("U3"='E')


34 rows selected.
Good. I'm happy with that plan. But I have the In-Memory option, so probably I can get rid of those bitmap indexes.

alter table INMEMORY

Let's put that table in memory:

12:05:06 SQL> alter table DEMO inmemory priority none memcompress for query high;
Table altered.
and run that query again
12:05:06 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:05:07 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  64skw45ghn5a0, child number 0
-------------------------------------
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers | Reads  |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |   13740 |  13736 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |   13740 |  13736 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   4070 |      0 |   13740 |  13736 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - inmemory(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
       filter(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
Here is my problem. Now that I have defined the table to be populated into the In-Memory Column Store, the CBO chooses an In-Memory plan for my query.

This is a FULL TABLE SCAN, because full table scan is the only access path from the In-Memory Column Store. But the column store is not yet populated:

12:05:07 SQL> select segment_name,inmemory_size,bytes_not_populated from v$im_segments;
no rows selected
So the FULL TABLE SCAN occurred on the row store. Look at the statistics above: 13740 logical reads from the buffer cache, and 13736 physical reads because that table is not in the buffer cache. I always used index access for it before, so the table blocks are not in the buffer cache, and the full table scan has a good chance of being done in direct-path.
I still have very good access paths available from the bitmap indexes - which are still there - but now I'm doing a very expensive full table scan.

Population

Look at the same query two seconds later:

12:05:09 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:05:09 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  64skw45ghn5a0, child number 0
-------------------------------------
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers | Reads  |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |   11120 |  11117 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |   11120 |  11117 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   4070 |      0 |   11120 |  11117 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - inmemory(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
       filter(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
It is just a bit better: 11117 physical reads instead of 13736. This is because some In-Memory Compression Units are already there in the In-Memory Column Store:
12:05:10 SQL> select segment_name,inmemory_size,bytes,bytes_not_populated from v$im_segments;

SEGMENT_NA INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------- ---------- -------------------
DEMO             6815744  117440512            88973312
Among the 117440512 bytes (which is 14336 8k blocks) only 88973312 are not yet populated (10861 8k blocks). This is why a bit earlier the query still had to read 11120 blocks from buffer cache.

Let's wait 1 minute for population. Remember that during that time, the population uses a lot of CPU in order to read the row store blocks, convert them to columns, compress them and store them into the column store.

12:06:04 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:06:04 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
SQL_ID  64skw45ghn5a0, child number 0
-------------------------------------
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

---------------------------------------------------------------------------------
| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |       3 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |       3 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   2546 |      0 |       3 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - inmemory(("U1"='A' AND "U4"='B' AND "U2">'X' AND INTERNAL_FUNCTION("U3")))
       filter(("U1"='A' AND "U4"='B' AND "U2">'X' AND INTERNAL_FUNCTION("U3")))
Ok. Now only 3 blocks were read from buffer cache. I now have good performance that I can compare with what I had with the bitmap indexes.

This is because population is completed:

12:06:15 SQL> select segment_name,inmemory_size,bytes,bytes_not_populated from v$im_segments;

SEGMENT_NA INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------- ---------- -------------------
DEMO            31195136  117440512                   0
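As a side note, the fully populated figures imply an in-memory compression ratio close to 3.8x; a quick sketch using the two byte counts from the query output:

```python
on_disk_bytes = 117440512    # BYTES (14336 8k blocks on disk)
in_memory_bytes = 31195136   # INMEMORY_SIZE once population is complete

ratio = on_disk_bytes / in_memory_bytes
print(round(ratio, 2))  # 3.76
```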

Conclusion

My conclusion is that altering a table to populate it into the In-Memory Column Store looks like an easy operation. But it is not. When you do that:

  • You change the plans to FULL TABLE SCAN, which will not be optimal until the table is fully populated.
  • You trigger the population, which will significantly increase your server's CPU usage.
  • You risk getting tables only partially populated if you're on RAC, or if you don't have enough space in inmemory_size.
So this is something to plan and to monitor. And you will also need to think about what happens if your instance crashes and you have to restart it: how long will it take to get back to correct performance?
And that's even without asking yourself yet whether you can drop those bitmap indexes that are superseded by the In-Memory Column Store now.

Of course, there are solutions for any problem. If you are on Exadata, then SmartScan will come to the rescue until the IMCS is populated. Full table scan is offloaded to the storage nodes, and database node CPU resources remain available for quick population. In that way, they are complementary.

Singapore Maths Question Solution and Very Interesting Observation (The Trickster)

Richard Foote - Thu, 2015-04-23 02:22
OK, time to reveal the solution to the somewhat tricky Singapore maths exam question I blogged previously. Remember, there were 10 dates: May 13   May 15   May 19 June 13   June 14 July 16   July 18 August 14   August 15   August 16 Bowie only knew the month of my birthday, Ziggy only knew the day. Bowie […]
Categories: DBA Blogs

Golden Oldies

Jonathan Lewis - Thu, 2015-04-23 01:45

I’ve just been motivated to resurrect a few articles I wrote for DBAZine about 12 years ago on the topic of bitmap indexes. All three links point to Word 97 documents which I posted on my old website in September 2003. Despite their age they’re still surprisingly good.


Is MERGE a bug?

Chet Justice - Wed, 2015-04-22 20:57
A few years back I pondered whether DISTINCT was a bug.

My premise was that if you are depending on DISTINCT to return a correct result set, something is seriously wrong with your table design. I was reminded of this again recently when I ran across Kent Graziano's post on Better Data Modeling: Are you making these 3 beginner mistakes in your data models?. Specifically:
Instead of that, you should be defining a natural, or business, key for every table in your system. A natural key is an attribute or set of attributes (that occur naturally in the data set) required to uniquely identify a row in that table. In addition you should define a Unique Key Constraint on those attributes in the database. Then you can be sure you will not get any duplicate data into the tables.

CLARIFICATION: This point has caused a lot of questions and comments. To be clear, the mistake here is to have ONLY defined a surrogate key. I believe that even if using surrogate keys is the best solution for your design, you should ALSO define an alternate unique natural key.

So why MERGE?

I learned about the MERGE statement in 2008. During an interview, Frank Davis asked me about when I would use it. I didn't even know what it was (and admitted that) but I went home that night and...wait...I think he asked me about multi-table inserts. Whatever, credit is still going to Mr. Davis. Where was I? OK, so I had been working with Oracle for about 6 years at that point and I didn't know about it. My initial reaction was to use it everywhere (not really)! You know, shiny object and all. Look! Squirrel!

Why am I considering MERGE a bug? Let me be more specific. I was working with a couple of tables and had not written the API for them yet and a developer was writing some PL/SQL to update the records from APEX. In his loop he had a MERGE. I realized at that moment there was 1, no surrogate key and 2, no natural key defined (which ties in with Kent's comments up above). Upon realizing the developer was doing this, I knew immediately what the problem was (besides not using a PL/SQL API to nicely encapsulate the business logic). The table was poorly designed.

Easy fix. Update the table with a surrogate key and define a natural key. I was thankful for the reminder, I hadn't added the unique constraint yet. Of course had I written the API already I probably would have noticed the design error, either way, a win for design.
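The fix described above (a surrogate key plus a unique natural key) can be sketched with SQLite; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,  -- surrogate key
        email       TEXT NOT NULL,
        country     TEXT NOT NULL,
        UNIQUE (email, country)           -- alternate natural/business key
    )
""")
conn.execute("INSERT INTO customer (email, country) VALUES ('a@x.com', 'US')")
try:
    # A second insert with the same natural key must fail.
    conn.execute("INSERT INTO customer (email, country) VALUES ('a@x.com', 'US')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the unique constraint blocked the duplicate
print(rejected)  # True
```

SQLite stands in for Oracle here purely to keep the sketch self-contained; the constraint behaves the same way conceptually.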

Now, there are perfectly good occasions to use the MERGE statement. Most of those, to me anyway, relate to legacy systems where you don't have the ability to change the underlying table structures (or it's just cost-prohibitive), or to ETL, where you want to load/update a dimension table in your data warehouse.

Noons, how's that? First time out in 10 months. Thanks for the push.
Categories: BI & Warehousing

ASU, edX and The Black Knight: MOOCs are not dead yet

Michael Feldstein - Wed, 2015-04-22 18:24

By Phil HillMore Posts (307)

In 2012 I wrote a post during the emergence of MOOC mania, pointing out some barriers that must be overcome for the new model to survive.

So what are the barriers that must be overcome for the MOOC concept (in future generations) to become self-sustaining? To me the most obvious barriers are:

  • Developing revenue models to make the concept self-sustaining;
  • Delivering valuable signifiers of completion such as credentials, badges or acceptance into accredited programs;
  • Providing an experience and perceived value that enables higher course completion rates (most today have less than 10% of registered students actually completing the course); and
  • Authenticating students in a manner to satisfy accrediting institutions or hiring companies that the student identity is actually known.


Since that time, of course, the MOOC hype has faded away, partially based on the above barriers not being overcome.

Today, Arizona State University (ASU) and edX announced a new program, Global Freshman Academy, that takes direct aim at all four barriers and could be the most significant MOOC program yet. From the New York Times story:

Arizona State University, one of the nation’s largest universities, is joining with edX, a nonprofit online venture founded by M.I.T. and Harvard, to offer an online freshman year that will be available worldwide with no admissions process and full university credit.

In the new Global Freshman Academy, each credit will cost $200, but students will not have to pay until they pass the courses, which will be offered on the edX platform as MOOCs, or Massive Open Online Courses.

Later in the article we find out more details on pricing and number of courses.

The new program will offer 12 courses — eight make up a freshman year — created by Arizona State professors. It will take an unlimited number of students. Neither Mr. Agarwal nor Mr. Crow would predict how many might enroll this year.

The only upfront cost will be $45 a course for an identity-verified certificate. Altogether, eight courses and a year of credit will cost less than $6,000.

ASU will pay for the course development and edX will pay for the platform. They eventually hope to get foundation funding, but ASU president Michael Crow promised that “we’re going ahead no matter what”.

This is a big commitment, and it will be interesting to see the results of a program that addresses revenue models, identity verification, completion rates and awarding actual credit. As Crow described:

“We were not big believers in MOOCs without credit, courses without a connection to degrees, so we focused our attention on building degree programs,” Mr. Crow said.

Pay attention to this one, whether you’re a MOOC fan or not.

Update:

The post ASU, edX and The Black Knight: MOOCs are not dead yet appeared first on e-Literate.

PLAN_HASH_VALUE calculation different HP-UX and Linux?

Bobby Durrett's DBA Blog - Wed, 2015-04-22 17:01

I’m trying to compare how a query runs on two different 11.2.0.3 systems. One runs on HP-UX Itanium and one runs on 64-bit x86 Linux. Same query, same plan, different hash value.

HP-UX:

SQL_ID 0kkhhb2w93cx0
--------------------
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,exts
ize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL,
:13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16,
spare1=DECODE(:17,0,NULL,:17),scanhint=:18, bitmapranges=:19 where
ts#=:1 and file#=:2 and block#=:3

Plan hash value: 1283625304

----------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |                |       |       |     2 (100)|          |
|   1 |  UPDATE               | SEG$           |       |       |            |          |
|   2 |   TABLE ACCESS CLUSTER| SEG$           |     1 |    65 |     2   (0)| 00:00:01 |
|   3 |    INDEX UNIQUE SCAN  | I_FILE#_BLOCK# |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Linux:

SQL_ID 0kkhhb2w93cx0
--------------------
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,exts
ize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL,
:13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16,
spare1=DECODE(:17,0,NULL,:17),scanhint=:18, bitmapranges=:19 where
ts#=:1 and file#=:2 and block#=:3

Plan hash value: 2170058777

----------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |                |       |       |     2 (100)|          |
|   1 |  UPDATE               | SEG$           |       |       |            |          |
|   2 |   TABLE ACCESS CLUSTER| SEG$           |     1 |    64 |     2   (0)| 00:00:01 |
|   3 |    INDEX UNIQUE SCAN  | I_FILE#_BLOCK# |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

I wonder if endianness plays into the plan hash value calculation? Or is it just a port-specific calculation?
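Oracle does not document exactly which plan attributes feed PLAN_HASH_VALUE, so the sketch below says nothing about the real algorithm. It only illustrates that a hash computed over the visible (Operation, Name) columns, which are identical in both plans above, is platform-independent once the byte order is fixed explicitly; whatever differs between the two systems must therefore be an input the displayed plan does not show:

```python
import hashlib

# The (Operation, Name) rows visible in both plans above are identical:
rows = [
    ("UPDATE STATEMENT", ""),
    ("UPDATE", "SEG$"),
    ("TABLE ACCESS CLUSTER", "SEG$"),
    ("INDEX UNIQUE SCAN", "I_FILE#_BLOCK#"),
]

def toy_plan_hash(plan_rows):
    # Feed the rows through a hash with an explicitly fixed byte order,
    # so the result is identical on big- and little-endian machines.
    h = hashlib.sha256()
    for op, name in plan_rows:
        h.update(op.encode("utf-8"))
        h.update(b"\x00")
        h.update(name.encode("utf-8"))
        h.update(b"\x00")
    return int.from_bytes(h.digest()[:4], byteorder="big")

# Same rows -> same 32-bit toy hash, on any platform.
print(toy_plan_hash(rows) == toy_plan_hash(rows))  # True
```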

Odd.

– Bobby

Categories: DBA Blogs

Successful launch dbi services Zürich

Yann Neuhaus - Wed, 2015-04-22 12:50

Yesterday evening dbi services (headquartered in Delémont) officially launched its third branch, in Zürich (besides Basel and Lausanne). Five years after its takeoff, the "Oracle Database Partner of the Year 2014" employs more than 40 consultants. I would like to use this opportunity to thank all the customers and partners who trust dbi services. A particular thanks goes to the customers and partners who helped us enjoy a very pleasant inauguration party yesterday.

Thanks also to Mr Thomas Salzmann (KKG), who presented our successful collaboration, and to Massimo Castelli (Oracle), who presented the challenges of recruitment in the IT sector. I was pleased to see that large players like Oracle do, sometimes, have the same challenges as mid-sized companies :-).

All this adventure would not have been possible without our incredible teams, working hard every day to transform ideas and problems into projects and solutions. dbi services will continue to leverage the skills of its employees and to look for opportunities, in order to remain a top-level provider for the operating system, database and middleware layers.

A final thanks goes to Kurt Meier, who will lead the dbi services branch in Zürich, and who organised this party so well. Having already won the first customers, Kurt has proved that dbi services will succeed in this new challenge.


Dynamically refresh a part of a page (PPR) in Oracle MAF

Shay Shmeltzer - Wed, 2015-04-22 10:23

A common question for developers who are just starting with Oracle MAF, especially if they have a background in Oracle ADF, is how do you do a partial page refresh in Oracle MAF.

Partial Page Refresh basically means that I want to change something in my UI based on another event in the UI - for example hide or show a section of the page. (In ADF there is a partialTrigger property for components which is not there in MAF).

In MAF the UI behaves differently - it is not based on JSF after all - the UI directly reflects changes in managed beans as long as it knows about changes there. How does it know about changes? For this you need to enable firing change event notifications. This is actually quite easy to do - just turn on the checkbox in JDeveloper's accessors generation and it will do the job for you.

Here is a quick demo showing you how to achieve this:

Here is the code used.

in AMX page:

     <amx:tableLayout id="tl1">
       <amx:rowLayout id="rl1">
         <amx:cellFormat id="cf2">
           <amx:listView var="row" showMoreStrategy="autoScroll" bufferStrategy="viewport" id="lv1">
             <amx:listItem id="li1">
               <amx:outputText value="ListItem Text" id="ot2"/>
               <amx:setPropertyListener id="spl1" from="#{'true'}" to="#{viewScope.backingPPR.showIt}"
                                        type="swipeRight"/>
               <amx:setPropertyListener id="spl2" from="#{'false'}" to="#{viewScope.backingPPR.showIt}"
                                        type="swipeLeft"/>
             </amx:listItem>
           </amx:listView>
         </amx:cellFormat>
       </amx:rowLayout>
       <amx:rowLayout id="rl2" rendered="#{viewScope.backingPPR.showIt}">
         <amx:cellFormat id="cf1">
           <amx:commandButton text="commandButton1" id="cb3"/>
         </amx:cellFormat>
       </amx:rowLayout>
     </amx:tableLayout>


in managed bean:

     // The propertyChangeSupport field is generated by JDeveloper along with the
     // accessors (MAF ships its own support classes in oracle.adfmf.java.beans);
     // it is shown here so the snippet is complete.
     private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

     boolean showIt = false;

     public void setShowIt(boolean showIt) {
         boolean oldShowIt = this.showIt;
         this.showIt = showIt;
         propertyChangeSupport.firePropertyChange("showIt", oldShowIt, showIt);
     }

     public boolean isShowIt() {
         return showIt;
     }

     public void addPropertyChangeListener(PropertyChangeListener l) {
         propertyChangeSupport.addPropertyChangeListener(l);
     }

     public void removePropertyChangeListener(PropertyChangeListener l) {
         propertyChangeSupport.removePropertyChangeListener(l);
     }
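The notify-on-change idea the bean relies on can also be sketched in plain Python, as a language-agnostic illustration of the pattern only (none of these names come from the MAF API):

```python
class Observable:
    """Minimal property-change support: listeners receive (name, old, new)."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, fn):
        self._listeners.append(fn)

    def _fire(self, name, old, new):
        # Only notify when the value actually changed.
        if old != new:
            for fn in self._listeners:
                fn(name, old, new)

class Backing(Observable):
    def __init__(self):
        super().__init__()
        self._show_it = False

    @property
    def show_it(self):
        return self._show_it

    @show_it.setter
    def show_it(self, value):
        old = self._show_it
        self._show_it = value
        self._fire("showIt", old, value)  # the UI re-renders the bound region

events = []
b = Backing()
b.add_listener(lambda *e: events.append(e))
b.show_it = True
print(events)  # [('showIt', False, True)]
```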


Categories: Development

Flipkart and Focus - 1 - Losing It?

Abhinav Agarwal - Wed, 2015-04-22 09:48
This is the first of a series of articles I wrote for DNA in April on why I believed Flipkart (India's largest online retailer and among the most highly valued startups in the world) was at losing focus, at the wrong time, when faced with its most serious competition to date.

"Why Flipkart seems to be losing focus", appeared in DNA on Sunday, April 12, 2015.

Part I
Among all start-ups that have emerged from India in recent and not-so recent times, Flipkart is likely to be at the top of most people’s minds. The list is admittedly weighted heavily in favour of newer companies, given that the Indian start-up ecosystem has only in the last decade or so started to pick up steam. But that is changing, and the list is getting longer and diverse, with such names as Urban Ladder, Zomato, Reel, Druva Software, WebEngage, etc…[1] in just the online segment. But today, in 2015, Flipkart is the big daddy of them; with total equity funding of US $2.5 billion and a valuation of a whopping US$11 billion as of April 2015, it was ranked the seventh most valuable start-up in the world[2] (though that was still a far cry from the $178 billion market cap enjoyed by US online retailer Amazon[3] and $220 billion market cap of Chinese online retailer Alibaba[4]).

Yet Flipkart seems to be in trouble.


Let’s ignore for the time being the fact that it loses much more money than it makes, and that scale does not seem to have lessened the bleeding of money – it’s caught in a situation where the more it sells the more it loses[5],[6]. How much of it is by design – i.e., a result of a decision to focus on scale and top-line, consciously sacrificing the bottom-line in the interim – is up for debate, but that Flipkart is a long way from profitability is undeniable. Let’s ignore this for the time being.

First off, it is no mean feat to start a company out of the proverbial garage and grow it, in less than a decade (since its start in 2007), into a billion-dollar start-up[7]. And make it a leader in an industry. And do it in India. Flipkart has managed to do all that, and more. It has established, spectacularly so, that an Indian start-up can make it to the very top in a fiercely-contested space. Flipkart has, for the most part, been a spectacularly successful start-up by most counts. Let nothing distract from that fact.

So why the hand-wringing? In one word, focus. Flipkart seems to be losing focus. Three reasons stand out in my mind.

First, the ongoing controversy and its decision to shutter its browser-based web site and force customers to use only its mobile app – on smartphones and tablets.
It has already shut down the mobile browser site of Myntra – the online fashion retailer it acquired in 2014[8]. Navigate to Flipkart's website on your browser from your smartphone or tablet and you have no choice but to download and install the app. Come May 1st, Myntra's website is planned to be shut down completely![9] Elsewhere, there has been more than a whiff of rumours that Flipkart is contemplating shutting down its website[10]. This seems not only quite unnecessary, but more importantly, indicative of the grandstanding that is coming to mark some of Flipkart's actions. Shutting down the web site to become an app-only retailer harms the company in tangible, monetary terms, while benefitting it in the currency of zero-value digital media ink.

WhatsApp, the world’s largest instant-messaging application and which started out and since its launch existed as only a mobile app – with more than 700 million users[11] - launched a browser version of its application in January 2015[12]. Facebook, the world’s largest social network, launched a browser version of its mobile app[13], Facebook Messenger. In case you are tempted to argue that Facebook took that step out of some sort of desperate need to boost numbers, keep in mind that Facebook Messenger had 500 million users in March 2015[14], before it launched its browser version of Messenger. Let’s round off with one more example: Flipboard, with more than 100 million users in 2014[15], launched a browser version in Feb 2015[16]. Yet Flipkart wants to shut down its website.

Is it because of technology? Limitations of mobile browsers? Well, yes, if you are still living in 2010. Half a decade is an eternity in Internet years! Small screen sizes were a big reason why apps were preferred a few years ago, where browser chrome (the title bar, address bar, footer, etc…) would eat up a substantial amount of precious screen real-estate. But in today’s world of gigantic 5” and larger screens, with HD or higher resolutions, this is a moot point[17]. Smartphones are becoming faster and more powerful – quad-core processors and multiple gigabytes of memory are more and more commonplace, 3G is gaining increased adoption even in emerging markets as India. With the availability of UI systems like jQuery Mobile and frameworks like PhoneGap that make a web site adapt to different form factors and which provide substantial support for gestural interactions without additional coding, old arguments hold little water. Unless perhaps you are a gaming developer.

Which Flipkart is not.

Another much-touted argument is that in a country like India, most of the online usage is now coming from mobile devices – smartphones and tablets. India has been ahead of the curve – perversely thanks to its anemic and sparse broadband coverage. According to Mary Meeker’s much-watched-read-downloaded “Internet Trends” presentation at the D10 Conference in May 2012, “Mobile Internet Usage Surpassed More Highly Monetized Desktop Internet Usage in May, 2012, in India”[18]. Indicative of this shift is the fact that in 2014 global smartphone sales overtook feature phone sales, for the first time. A little more than a billion phones were sold of each type[19]. Most of India’s billion mobile users will move towards smartphones by 2020. However, there is, and should be, scepticism over numbers – especially that project into the future. A report that estimated the number of Internet users in India at 300 million by Dec 2014 was questioned by NextBigWhat, a “A Global Media Platform For Technology Entrepreneurs”[20].

But with so little revenue coming from the website, the argument goes, Flipkart cannot afford to continue maintaining it. “It just isn’t viable to have three separate platforms” - so goes one argument[21]. But this thinking betrays a lack of understanding of the distinction between a platform and a consumption channel on the one hand and an even poorer understanding of how complex software applications have been architected for many years now (and especially those that live in the cloud). The code, APIs, database, web server, middleware, identity management, authentication, shopping cart, order fulfilment, security – all of these are common whether you access the service through a desktop website, a mobile app, a mobile browser or even a wearable device. If you prefer techno-alphabet-soup to describe this, you use a SOA-based approach to software design[22]. Developing a new user interface – desktop, mobile, tablet, etc… - becomes an incremental effort rather than a multi-year, multi-million dollar exercise.

Yes, many technology innovations in the world of retailing are happening in a way that is inextricably intertwined with mobile – like mobile payments and hyperlocal retailing for instance. Wal-Mart uses its mobile app to guide customers to and within its stores (using location tracking via GPS[23]). But they are not shutting down their website either.

If you are in the happy situation of having too many customers, and are ok with ceding a third or more of the online retail market to your competition[24], then shutting down an important channel for your sales is a good idea. And no, let’s not have the argument about cars and buggies either[25].

So why is Flipkart so obsessed, to the point of distraction, with the mobile app strategy?
Customer information and its mobile search ambitions for one.

End of Part I

[1] See "80+ Indian startups to work for in 2015", http://yourstory.com/2014/12/top-startups-india-work-job-employee/ , "80+ Indian startups to work for in 2015", http://yourstory.com/2014/12/top-startups-india-work-job-employee/, and "India Top | Startup Ranking", http://www.startupranking.com/top/india for a more exhaustive list.
[2] "The Billion Dollar Startup Club - WSJ.com", http://graphics.wsj.com/billion-dollar-club/ - accessed April 8, 2015.
[3] "Amazon.com, Inc.: NASDAQ:AMZN quotes & news - Google Finance", http://www.google.com/finance?chdnp=1&chdd=1&chds=1&chdv=1&chvs=maximized&chdeh=0&chfdeh=0&chdet=1428639923069&chddm=1173&chls=IntervalBasedLine&q=NASDAQ:AMZN&ntsp=0&ei=sFAnVeC4NpD6uAT5zIHoCg, accessed April 10, 2015
[4] "Alibaba Group Holding Ltd: NYSE:BABA quotes & news - Google Finance", http://www.google.com/finance?chdnp=1&chdd=1&chds=1&chdv=1&chvs=maximized&chdeh=0&chfdeh=0&chdet=1428640100641&chddm=1173&chls=IntervalBasedLine&q=NYSE:BABA&ntsp=0&ei=X1EnVamuCInwuAS5koAo , accessed April 10, 2015
[5] Per http://www.livemint.com/Companies/nEzvGCknQDBY2RgzcVAKdO/Flipkart-India-reports-loss-of-2817-crore.html, for the year ending March 31, 2013, “Revenue soared fivefold to more than Rs.1,180 crore from Rs.204.8 crore in the previous year”, but “expenses jumped more than five times to Rs.1,366 crore from Rs.265.6 crore last year” – clearly, they were not yet at the point where they could reap economies of scale. As an aside, Flipkart’s Mar 2009 FY revenues were approximately 2.5 crore rupees - http://www.sramanamitra.com/2010/10/06/building-indias-amazon-flipkart-ceo-sachin-bansal-part-3/ - and approximately 30 crore rupees for FY 2010 - http://www.sramanamitra.com/2010/10/07/building-indias-amazon-flipkart-ceo-sachin-bansal-part-4/.
[6] "For the year ended 31 March 2014, the losses of all Flipkart India entities amounted to Rs.719.5 crore on revenue of Rs.3,035.8 crore, according to data compiled by Mint from the Registrar of Companies (RoC) and Acra.", http://www.livemint.com/Companies/VXr8oJzNJ4daOYSO5wNETN/Inside-Flipkarts-complex-structure.html This tells us that both expenses and revenues are growing almost in lock-step – economies of scale are still elusive.
[7] "Flipkart claims to have hit a run rate of $1 bn in gross sales", http://www.business-standard.com/article/companies/flipkart-claims-to-have-hit-a-run-rate-of-1-bn-in-gross-sales-114030700029_1.html
[8] "Press Release - Flipkart.com", http://www.flipkart.com/s/press
[9] "Flipkart, Myntra Shut Mobile Websites, Force Visitors To Install Mobile App", http://trak.in/tags/business/2015/03/23/flipkart-myntra-shut-mobile-websites-force-mobile-app-install/
[10] "Flipkart moves towards becoming app-only platform - Livemint", http://www.livemint.com/Industry/J9VeQxowSOlHU8ZMUParUL/Flipkart-moves-towards-becoming-apponly-platform.html
[11] "• WhatsApp: number of monthly active users 2013-2015 | Statistic", http://www.statista.com/statistics/260819/number-of-monthly-active-whatsapp-users/
[12] "WhatsApp Web - WhatsApp Blog", https://blog.whatsapp.com/614/WhatsApp-Web
[13] "Facebook Launches Messenger for Web Browsers | Re/code", http://recode.net/2015/04/08/facebook-launches-messenger-for-web-browsers/
[14] "Facebook new Messenger service reaches 500 million users - BBC News", http://www.bbc.com/news/technology-29999776
[15] "The Inside Story of Flipboard, the App That Makes Digital Content Look Magazine Glossy", http://www.entrepreneur.com/article/234925
[16] "Flipboard Launches a Web Version For Reading Anywhere", http://thenextweb.com/apps/2015/02/10/flipboard-launches-full-web-version-display-feeds-browser/
[17] "The Surprising Winner of the HTML5 Versus Native Apps War | Inside BlackBerry", http://blogs.blackberry.com/2015/01/surprising-winner-of-html5-apps-war/
[18] "KPCB_Internet_Trends_2012_FINAL.pdf", http://kpcbweb2.s3.amazonaws.com/files/58/KPCB_Internet_Trends_2012_FINAL.pdf?1340750868
[19] "Global feature phone and smartphone shipments 2008-2020 | Forecast", http://www.statista.com/statistics/225321/global-feature-phone-and-smartphone-shipment-forecast/
[20] "300 million Internet Users in India By Dec? Grossly Wrong [10+ Questions to IAMAI] » NextBigWhat", http://www.nextbigwhat.com/300-million-india-internet-users-iamai-297/
[21] "Flipkart, Myntra’s app-only move draws mixed reactions - Livemint", http://www.livemint.com/Industry/v6SCQhhl94uriMLM3Qev6N/Flipkart-Myntras-apponly-move-draws-mixed-reactions.html
[22] "The Secret to Amazons Success Internal APIs ·", http://apievangelist.com/2012/01/12/the-secret-to-amazons-success-internal-apis/
[23] "Walmart Mobile - Walmart.com", http://www.walmart.com/cp/Walmart-Mobile-App/1087865
[24] “In 2014, 50 per cent of shopping queries were made through mobile devices, compared to 24 per cent in 2012”, http://www.business-standard.com/article/companies/google-says-indian-e-commerce-market-to-hit-15-bn-by-2016-114112000835_1.html
[25] "Failing Like a Buggy Whip Maker? Better Check Your Simile - NYTimes.com", http://www.nytimes.com/2010/01/10/business/10digi.html?_r=0

© 2015, Abhinav Agarwal (अभिनव अग्रवाल). All rights reserved.

Oracle OpenWorld 2015 - Call for Papers Deadline is April 29!

WebCenter Team - Wed, 2015-04-22 06:21
If you’re an Oracle technology expert, conference attendees want to hear it straight from you. So don’t wait—proposals must be submitted by April 29. You have one week left to submit!

Oracle OpenWorld 2015

Wanted: Outstanding Oracle Experts

The Oracle OpenWorld 2015 Call for Proposals is now open. Attendees at the conference are eager to hear from experts on Oracle business and technology. They’re looking for insights and improvements they can put to use in their own jobs: exciting innovations, strategies to modernize their business, different or easier ways to implement, unique use cases, lessons learned, the best of best practices.

If you’ve got something special to share with other Oracle users and technologists, they want to hear from you, and so do we. Submit your proposal now for this opportunity to present at Oracle OpenWorld, the most important Oracle technology and business conference of the year.

We recommend you take the time to review the General Information, Submission Information, Content Program Policies, and Tips and Guidelines pages before you begin. We look forward to your submissions.

Attention HCM customers: HCM Central @ OpenWorld is designed to provide a single place for all things HCM. Have a story related to Oracle implementations around HCM? Submit here.

Attention CX customers: CX Central @ OpenWorld is designed to provide a single place for all things related to the customer lifecycle for all of Oracle's CX customers whose business requires them to definitively differentiate themselves across all channels, touch points, and interactions. Have a story related to Oracle implementations around the Customer Experience lifecycle? Submit here.

Submit Your Proposal

By submitting a session for consideration, you authorize Oracle to promote, publish, display, and disseminate the content submitted to Oracle, including your name and likeness, for use associated with the Oracle OpenWorld and JavaOne San Francisco 2015 conferences. Press, analysts, bloggers and social media users may be in attendance at OpenWorld or JavaOne sessions.

Submit Now.