
Feed aggregator

Oracle Partners ♥ APIs for PaaS and IoT User Experience

Usable Apps - Mon, 2015-06-08 14:02

Platform as a Service (PaaS) and the Internet of Things (IoT) are two ginormous business propositions for Oracle partners. Together, they're a game-changer with seemingly endless possibilities.

But how do PaaS and IoT work together? Is there a user experience (UX) dimension? And, what should Oracle Applications Cloud partners be thinking about for SaaS?

The IoT train is arriving at your platform. Prepare to board.

PaaS and IoT

The PaaS business proposition might be summarized as "Bring Your Code": a highly productive way to innovate and build custom apps and integrations. IoT relies on ubiquitous connectivity across devices of all sorts, with the "things" exchanging bits of data along the way.

[Image: 5 Ideas - Oracle Profit Magazine]

Platform as a Service offers awesome ideas for rapidly innovating, developing, and deploying scalable applications.

I discussed PaaS and IoT with Mark Vilrokx (@mvilrokx), our all-things PaaS UX architect from the AppsLab (@theappslab) crew, and how we might put a business shape around the concept for partners.

"These 'things' don’t need UIs. For PaaS, all they need is a web API", says Mark. “Developers need to think about how IoT devices talk to SaaS applications using APIs and about what kind of PaaS infrastructure is needed to support building these kind of solutions."

"Oracle is up there, with an IoT platform to simplify building IoT solutions. Developers now need now to adopt an approach of not writing UIs, but writing UI services: APIs are part of the Cloud UX toolkit."

IoT in the Enterprise: Connecting the Data

To illustrate what all this might mean for customer solutions, let's assume we have a use case to track items across a supply chain using the cloud.

IoT is all about the data. Using IoT, we can gather the data unobtrusively and in a deeply contextual way using devices across the IoT spectrum: beacons, proximity sensors, wearable tech of all sorts, drones, and so on. We can detect where the item is in the supply chain, when it's expected at its destination, who will receive it, and when it arrives. The item's digital signature in the Internet of Things becomes data in the cloud.

There are lots of other rich possibilities for PaaS and IoT. Check out this Forbes OracleVoice article, for example.

PaaS for SaaS and IoT

PaaS with SaaS is also a perfect combination for innovating rapidly and keeping pace in the fast-moving, competitive space of cloud application solutions.

SaaS is not done in a vacuum in the enterprise world of integrations, and it is an innovation accelerator in its own right; but with PaaS and IoT added to the technology mix, we have an alignment of technology stars that is a solution provider's dream.

We can use APIs to integrate IoT data in our supply chain example, but we can also use PaaS to build a bespoke app with a dashboard UI for an inventory administrator to correct any outliers or integrate our supply chain with a freight company’s system. For SaaS, we can now also integrate the data with, say, Oracle ERP Cloud, using the Oracle Java Cloud Service SaaS Extension (JCS-SX).

And guess what? Our UX enablement has already helped partners build pure PaaS and PaaS4SaaS solutions, all using the same Oracle ADF-based Rapid Development Kit!

APIs as UX Design

Where does this leave UX? UX takes on increased power as a key differentiator for partners in the PaaS, SaaS, and IoT space. The UX mix of science and empathy makes the complications of all that technology and the machinations of enterprise business processes fade away for users in a delightful way, and delivers ROI for customer decision makers.


Developers: Pivot and learn to ♥ APIs. At the heart of the Cloud UX toolkit to win business.

So, the user experience for a task flow built using API connectivity must still be designed to be compelling and to provide value. And, when UIs are required, they must still be designed in an optimal way, reflecting the UX mobility strategy, even if that means making the UI invisible to users.

For example, going back to our use case, we would glance at a notification on a smartwatch letting us know that our item has entered the supply chain or that it’s been received. The data comes from contextual sensors and is communicated in a convenient, micro-transactional way on our wrists.

Oracle Partner UX Enablement

Web APIs are the new Cloud UX for connecting data and devices. That APIs are UX design is not really a new idea, but what is emerging now are new business opportunities for partners who are exploring PaaS, SaaS, and IoT innovation.

Be sure of one thing: The Oracle Applications User Experience team takes a strategic view of Cloud UX enablement for partners. Whether it is PaaS, SaaS, or IoT, our enablement is there to help you take your business to a higher level.

For partners who say "Bring It On": you know where to find us and what our enablement requirements are.

SQL Server 2016: availability groups and the new potential support for Standard Edition

Yann Neuhaus - Mon, 2015-06-08 13:00

In my first blog about availability groups with SQL Server 2016, I talked briefly about the interesting new option: DB_FAILOVER. In this blog post, I will continue by introducing the potential support of availability groups in Standard Edition (based on the latest Microsoft Ignite news). Yes, this sounds like great news, because it will widen the scope of possible customers, but bear in mind that it only potentially concerns Standard Edition (and probably not in its final shape), and we may expect some limitations. Let's have a look at those potential limitations in this blog post.

First of all, you'll notice a new option called "Basic Availability Group" in the configuration wizard, as shown below:

 

[Screenshot: the "Basic Availability Group" option in the configuration wizard]

 

At this point, we may wonder what "Basic Availability Group" means exactly. Let me speculate: this option allows us to simulate, on the current CTP, the availability group feature as it might appear in Standard Edition, and I would guess it will disappear with the first SQL Server 2016 RTM release. In addition, the word "Basic" suggests some limitations, so let's try to configure what I will call a BAG (Basic Availability Group) in this blog post.
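
For reference, a rough T-SQL sketch of such a two-replica group is shown below. This is a speculative sketch only: in CTP2 the wizard drives the configuration, the BASIC keyword is the syntax that eventually shipped in SQL Server 2016, and the endpoint URLs reuse the SQL161/SQL162 instances from this post.

-- Speculative sketch: a two-replica Basic Availability Group for the 'bag' database
CREATE AVAILABILITY GROUP [BAG]
WITH (BASIC, DB_FAILOVER = ON)
FOR DATABASE [bag]
REPLICA ON
N'SQL161' WITH
(
       ENDPOINT_URL = N'TCP://SQL161.dbi-services.test:5022',
       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
       FAILOVER_MODE = AUTOMATIC
),
N'SQL162' WITH
(
       ENDPOINT_URL = N'TCP://SQL162.dbi-services.test:5022',
       AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
       FAILOVER_MODE = AUTOMATIC
);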

The first thing I noticed is that the availability group can include only one database. In other words, adding a second database is not possible, and you will face the following error from the GUI:

 

[Screenshot: error when adding a second database to the BAG]

 

Ok, let's continue. The next limitation concerns read-only capabilities on the secondary replicas, which are not supported with BAGs. From the GUI, I have no choice other than "No".

 

[Screenshot: the Readable Secondary option forced to "No"]

 

Likewise, if I try to change the Readable Secondary value for the SQL162 instance, I will also face the following error message:

 

ALTER AVAILABILITY GROUP BAG
MODIFY REPLICA ON 'SQL162' WITH
(
       SECONDARY_ROLE
       (
              ALLOW_CONNECTIONS = READ_ONLY
       )
);

 

Msg 41199, Level 16, State 8, Line 1
The specified command is invalid because the AlwaysOn Availability Groups allow_connections feature is not supported by this edition of SQL Server. For information about features supported by the editions of SQL Server, see SQL Server Books Online.

 

Next, configuring backup preferences is not possible from the GUI. All parameters are greyed out, as shown below:

 

[Screenshot: backup preference options greyed out]

 

Moving on: after installing my availability group, I noticed that the backup preference policy was set to Primary.

Finally, configuring a listener is also not supported with BAGs. Again, none of the configuration options are available from the GUI. Adding a listener after implementing the availability group does give us the opportunity to enter the listener information, but it raises an error message at the final step:

 

[Screenshot: listener creation error]

 

What about adding a third replica to a BAG? In fact, we're limited to two replicas, and we are not able to add another, either from the GUI, where the option is greyed out, or from a script, which raises the following error message:

 

-- Adding a third replica
ALTER AVAILABILITY GROUP [BAG]
ADD REPLICA ON N'SQL163' WITH
(
       ENDPOINT_URL = N'TCP://SQL163.dbi-services.test:5022',
       FAILOVER_MODE = MANUAL,
       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
       BACKUP_PRIORITY = 50,
       SECONDARY_ROLE
       (
              ALLOW_CONNECTIONS = NO
       )
);
GO

Msg 35223, Level 16, State 0, Line 21
Cannot add 1 availability replica(s) to availability group 'BAG'. The availability group already contains 2 replica(s), and the maximum number of replicas supported in an availability group is 2.

 

To summarize, BAGs come with a lot of restrictions. When you create an availability group on Standard Edition, you will be able to benefit from:

  • Only two replicas, with either synchronous or asynchronous replication capabilities (both are available with the current CTP2)
  • One and only one database per availability group
  • Backup capabilities only on the primary
  • New DB_FAILOVER option

However, you will not be able to use:

  • Failover capabilities via the listeners (listeners are not available with BAGs)
  • Read-only capabilities (database snapshots are available with the Evaluation Edition, but will that be the case with the future Standard Edition?)

 

What about client failover capabilities in this case? As said earlier, we cannot rely on a listener to switch over transparently to a new replica, but at the same time we are able to configure automatic failover for the availability group itself. A basic connectivity test (from a custom PowerShell script) after switching my availability group to a different replica raised the following error message:

 

The target database, 'bag', is participating in an availability group and is currently not accessible for queries. Either data movement is suspended or the availability replica is not enabled for read access. To allow read-only access to this and other databases in the availability group, enable read access to one or more secondary availability replicas in the group. For more information, see the ALTER AVAILABILITY GROUP statement in SQL Server Books Online.

 

At this point, I expected to get at least the same failover mechanism provided by the mirroring feature (assuming that the BAG is the future replacement for database mirroring, as stated at the last Microsoft Ignite in Chicago). Does this mean we'll have to add the Failover Partner attribute to the connection string on the client side? Let's try by modifying the connection string of my PowerShell script:

 

# Connection string pointing to SQL161, with SQL162 declared as the failover partner
$dataSource = "SQL161"
$dataPartner = "SQL162"
$user = "sa"
$pwd = "xxxxx"
$database = "bag"
$connectionString = "Server=$dataSource;uid=$user;pwd=$pwd;Database=$database;Integrated Security=False;Failover Partner=$dataPartner"

 

- Test with SQL161 as the primary

 

[Screenshot: connection test with SQL161 as the primary]

 

- Test after switching my availability group from SQL161 to SQL162

 

[Screenshot: connection test after switching to SQL162]

 

Ok it seems to work correctly now.

In conclusion, the Basic Availability Group feature seems to be designed to replace the well-known mirroring feature, which is now deprecated, but with a limited scope of the availability group advantages. I believe we'll have other opportunities to discuss this feature in the near future because, at this point, it is probably not yet carved in stone.

 

Contribution by Angela Golla

Oracle Infogram - Mon, 2015-06-08 12:57
Contribution by Angela Golla, Infogram Deputy Editor

2-Minute Tech Tips
OTN ArchBeat 2 Minute Tech Tip videos pit a recognized expert in Oracle technologies against a countdown clock to deliver a useful technical tip in two minutes or less.  There are many videos on different topics that are listed at 2-Minute Tech Tips. 

For those of us who are old enough to remember what "MacGyvering" is, be sure to check out the "Innovation, MacGyvering and Hackathons" video.  It definitely brought a smile to my face. 

Moodle Association: New pay-for-play roadmap input for end users

Michael Feldstein - Mon, 2015-06-08 12:27

By Phil Hill

As long as we’re on the subject of changes to open source LMS models . . .

Moodle is in the midst of releasing a fairly significant change to the community with a new not-for-profit entity called the Moodle Association. The idea is to get end users more directly involved in setting the product roadmap, as explained by Martin Dougiamas in this discussion thread and in his recent keynotes (the one below from early March in Germany).

[After describing new and upcoming features] So that’s the things we have going now, but going back to this – this is the roadmap. Most people agree those things are pretty important right now. That list came from mostly me, getting feedback from many, many, many places. We’ve got the Moots, we’ve got the tracker, we’ve got the community, we’ve got Moodle partners who have many clients (and they collect a lot of feedback from their paying clients). We have all of that, and somehow my job is to synthesize all of that into a roadmap for 30 people to work on. It’s not ideal because there’s a lot, a lot of stuff going on in the community.

So I’m trying to improve that, and one of the things – this is a new thing that we’re starting – is a Moodle Association. And this will be starting in a couple of months, maybe 3 or 4 months. It will be at moodleassociation.org, and it’s a full association. It’s a separate legal organization, and it’s at arm’s length from Moodle [HQ, the private company that develops Moodle Core]. It’s for end users of Moodle to become members, and to work together to decide what the roadmap should be. At least part of the roadmap, because there will be other input, too. A large proportion, I hope, will be driven by the Moodle Association.

They’ll become members, sign up, put money every year into the pot, and then the working groups in there will be created according to what the brainstorming sessions work out, what’s important, create working groups around those important things, work together on what the specifications of that thing should be, and then use the money to pay for that development, to pay us (Moodle HQ), to make that stuff.

It’s our job to train developers, to keep the organization of the coding and review processes, but the Moodle Association is telling us “work on this, work on that”. I think we’ll become a more cohesive community with the community driving a lot of the Moodle future.

I’m very excited about this, and I want to see this be a model of development for open source. Some other projects have something like this thing already, but I think we can do it better.

In the forum, Martin shared two slides on the funding model. The before model:

[Slide: the Moodle funding model - before]

 

The model after:

[Slide: the Moodle funding model - after]

 

One obvious change is that Moodle partners (companies like Blackboard / Moodlerooms, RemoteLearner, etc.) will no longer be the primary input to development of core Moodle. This part is significant, especially as Blackboard became the largest contributing member of Moodle with its acquisition of Moodlerooms in 2012. This situation became more important after Blackboard also bought Remote-Learner UK this year. It's worth noting that Martin Dougiamas, founder of Moodle, was on the board of Remote-Learner's parent company in 2014 but not this year.

A less obvious change, however, is that the user community – largely composed of schools and individuals using Moodle for free – has to contend with another pay-for-play source of direction. End users can pay to join the association, and the clear message is that this is the best way to have input. In a slide shown at the recent iMoot conference and shared at MoodleNews, the membership for the association was called out more clearly.

[Slide: Moodle Association membership]

What will this change do to the Moodle community? We have already seen the huge changes to the Kuali open source community caused by the creation of KualiCo. While the Moodle Association is not as big of a change, I cannot imagine that it won’t affect the commercial partners.

There are already grumblings from the Moodle end user community (labeled as Moodle.org, as this is where you can download code for free), as indicated by the discussion forum started just a month ago.

I’m interested to note that Moodle.org inhabitants are not a ‘key stakeholder’, but maybe when you say ‘completely separate from these forums and the tracker’ it is understandable. Maybe with the diagram dealing only with the money connection, not the ideas connection, if you want this to ‘work’ then you need to talk to people with $$. ie key = has money.

I’ll be interested how the priorities choice works: do you get your say dependent on how much money you put in?

This to me is the critical issue with the future.

Based on MoodleNews coverage of the iMoot keynote, the answer to this question is that the say is dependent on money.

Additionally, there will be levels of membership based on the amount you contribute. The goal is to embrace as many individuals from the community but also to provide a sliding scale of membership tiers so that larger organizations, like a university, large business, or non-Moodle Partner with vested interested in Moodle, (which previously could only contribute through the Moodle Partner arrangement, if at all) can be members for much larger annual sums (such as AU$10k).

The levels will provide votes based on dollars contributed (potentially on a 1 annual dollar contributed = 1 vote).

This is why I use the phrase “pay-for-play”. And a final thought – why is it so hard to get public information (slides, videos, etc) from the Moodle meetings? The community would benefit from more openness.

Update 6/10: Corrected statement that Martin Dougiamas was on the Remote Learner board in 2014 but not in 2015.

The post Moodle Association: New pay-for-play roadmap input for end users appeared first on e-Literate.

rSmart to Asahi to Scriba: What is happening to major Sakai partner?

Michael Feldstein - Mon, 2015-06-08 11:16

By Phil Hill

It looks like we have another name and ownership change for one of the major Sakai partners, but this time the changes have a very closed feel to them. rSmart, led by Chris Coppola at the time, was one of the original Sakai commercial affiliates, and the LMS portion of the company was eventually sold to Asahi Net International (ANI) in 2013. ANI had already been involved in the Sakai community as a Japanese partner and also as a partial investor in rSmart, so that acquisition was not seen as a huge change, other than setting the stage for KualiCo to acquire the remainder of rSmart.

In late April, however, ANI was acquired by a private equity firm out of Los Angeles (Vert Capital), and this move is different. Vert Capital did not just acquire ANI; they also changed the company name to Scriba and took the company off the grid for now. No news items explaining intentions, no web site, no changes to Apereo project page, etc. Japanese press coverage of the acquisition mentions the parent company’s desire to focus on the Japanese market.

What is going on?

A rudimentary search for “Scriba education learning management” brings up no news or web sites, but it does bring up a recent project on freelancer.com to create the new company logo. By the way, paying $90 gets 548 entries from 237 freelancers – and adjuncts are underpaid?! The winning logo has a certain “we’re like Moodle, but our hat covers two letters” message that I find quite original.

Furthermore, neither scriba.com nor scriba.org is registered by the company (both are owned by keyword naming companies that pre-purchase domains for later sale). The ANI website mentions nothing about the sale and in fact has no news since October 2014. The Sakai project page has no update, but the sponsorship page for the Open Apereo conference last week did have the new logo. This sale has the appearance of a last-minute acquisition under financial distress[1].

Vert Capital is a “private investment firm that provides innovative financing solutions to lower/middle market companies globally”. The managing director who is leading this deal, Adam Levin, has a background in social media and general media companies. Does Vert Capital plan on making further ed tech acquisitions? I wouldn’t be surprised, as ed tech is fast-changing market yet more companies are in need of “innovative financing”.

I have asked Apereo for comment, and I will share that or any other updates as I get them. If anyone has more information, feel free to share in comments or send me a private note.

H/T: Thanks to reader who wishes to remain anonymous for some pointers to public information for this post.

  1. Note, that is conjecture.

The post rSmart to Asahi to Scriba: What is happening to major Sakai partner? appeared first on e-Literate.

Practical Tips for Oracle BI Applications 11g Implementations

Rittman Mead Consulting - Mon, 2015-06-08 06:35

As with any product or technology, the more you use it the more you learn about the “right” way to do things. Some of my experiences implementing Oracle Business Intelligence Applications 11g have led me to compile a few tips that will improve the overall process for installation and configuration and make the application more maintainable in the future. You can find me at KScope15 in Hollywood, FL beginning June 21st, presenting this exact topic. In this post I want to give you a quick preview of a couple of the topics in my presentation.

Data Extract Type – Choose Wisely

Choosing how the data is extracted from the source and loaded to the data warehouse target is an important part of the overall ETL performance in Oracle BI Applications 11g. In BI Apps, there are three extract modes to choose from:

  • JDBC mode
    This default mode will use the generic Loading Knowledge Modules (LKM) in Oracle Data Integrator to extract the data from the source and stream it through the ODI Agent, then down to the target. The records are streamed through the agent to translate datatypes between heterogeneous data sources. That makes the JDBC mode useful only when the source database is non-Oracle (since the target for BI Apps will always be an Oracle database).
  • Database link mode
    If your source is Oracle, then the database link mode is the best option. This mode uses the database link functionality built into the Oracle database, allowing the source data to be extracted across this link (see the sketch after this list). This eliminates the need for the additional translation of the data that occurs in JDBC mode.
  • SDS mode
    This should really be called “GoldenGate mode”, but I’m sure Oracle wants to keep their options open. In this mode, Oracle GoldenGate is used to replicate source transactions to the target data warehouse in what is called a Source Dependent Data Store (SDS) schema. This SDS schema mimics the source schema(s), allowing the SDE process to extract from the DW local SDS schema rather than across the network to the actual source.
    If the use of GoldenGate is an option, it's hands-down better than JDBC mode should you be extracting data from a non-Oracle source. (Have a look at my OTN ArchBeat 2-Minute Tech Tip as I attempt to beat the clock while discussing when to use GoldenGate with BI Apps!)
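
To make the database link mode concrete, here is a minimal sketch of the underlying mechanism. The link name, credentials, TNS alias, and table names are hypothetical illustrations, not BI Apps defaults:

-- Hypothetical link from the warehouse database to the source database
CREATE DATABASE LINK ebs_src
CONNECT TO apps IDENTIFIED BY apps_pwd
USING 'EBS_PROD';  -- TNS alias for the source database

-- The extract can then pull rows straight across the link, with no
-- datatype translation through the ODI agent
INSERT INTO w_sales_f_ds (order_id, amount)
SELECT order_id, amount
FROM sales_lines@ebs_src;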

[Diagram: OBIA architecture]

Let's go into a bit more detail about using GoldenGate with BI Applications. Because the SDS is set up to look exactly like the source schema, the pre-built Oracle Data Integrator interfaces can change which source they are using from within the Loading Knowledge Module (LKM) by evaluating a variable (IS_SDS_DEPLOYED) at various points throughout the LKM. Using this approach, the GoldenGate integration can easily be enabled at any point, even after initial configuration. The Oracle BI Applications team did a great job of utilizing the features of ODI that allow the logical layer to be abstracted from the physical layer and data source connection. For further information about how to implement Oracle GoldenGate with Oracle BI Applications 11g, check out the OTN technical article I wrote, which describes the steps for implementation in detail.

Disaster Recovery

If the data being reported on in BI Applications is critical to your business, you probably want a disaster recovery process. This will involve an entirely separate installation on a full server stack located somewhere away from the production servers. Now, there are many different approaches to DR with each of the products involved in BI Applications - OBIEE, ODI, databases, etc. - but I think this approach is simpler than many others.

[Diagram: BI Apps DR architecture]

The installation of BI Apps would occur on each site (primary and standby) as standalone installations. It's critical that you have a well-defined, hopefully scripted and automated, process for installation and configuration, since everything will need to be exactly the same between the two sites. Looking at the architecture diagram above, you can see the data warehouse, ODI repository, and BIACM repository schemas will be replicated from primary to standby via Oracle Data Guard. The OBIEE metadata repositories are not replicated, because much of the configuration information is stored in files rather than in the database schema.

With the installation and configuration identical, any local, internal URLs will be set up to use the local site URL (e.g. http://biapps.rittmanmead-primary.com). The external URLs, such as the top-level site (e.g. http://biapps.rittmanmead.com/biacm) or database JDBC connection URLs, will all use canonical names (CNAMEs). A CNAME is really just an alias used in the DNS, allowing an easy switch when redirecting from one site to another. For example, biapps.rittmanmead.com will be an alias for biapps.rittmanmead-primary.com, and this alias will be switched to point to biapps.rittmanmead-standby.com during the failover / switchover process.
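
In DNS zone-file terms, using the example hostnames above, the switchover is a one-record change:

; normal operation
biapps.rittmanmead.com.    IN    CNAME    biapps.rittmanmead-primary.com.

; after failover / switchover
biapps.rittmanmead.com.    IN    CNAME    biapps.rittmanmead-standby.com.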

We can now run through a few simple steps to perform the failover or switchover to the standby server.

  • Update Global CNAMEs
  • Switch the primary database via Data Guard
  • Update the Web Catalog and Application Role assignments
  • Start NodeManager, OHS, WebLogic AdminServer
  • Update Embedded LDAP User GUID in ODI (if necessary)
  • Start BI and ODI Managed Servers
  • Update and Deploy the RPD
  • Start the BI Services

Looks pretty straightforward, right? With the appropriate attention to detail up front during the installation and configuration, it becomes simple to maintain and perform the DR switchover and failover. I’ll go into more detail on these topics and others, such as installation and configuration, LDAP integration, and high availability, during my presentation at KScope15 later this month. I hope to see you there!

 

Categories: BI & Warehousing

SharePoint Governance? Why?

Yann Neuhaus - Mon, 2015-06-08 06:16

Companies are struggling with SharePoint: it's been installed, and then abandoned, and the business is not driven to make SharePoint succeed.
From this point, you need to draw up a governance plan for SharePoint.
Governance focuses on the technology, business, and human sides of SharePoint.


 

What is GOVERNANCE?

Governance is the set of:

  • policies
  • roles
  • responsibilities
  • processes

that help and drive a company's IT team and business divisions to reach their GOALS.
Good governance therefore means establishing sufficiently robust and thorough processes to ensure not only that those objectives can be delivered, but that they are delivered in an effective and transparent way.

Example: with permission governance, it's easy to manage who is authorized to get the EDIT permission, which allows a user to contribute AND delete (list/library).

In other words, we can equate Governance to something we see in our daily life.


  What happens with NO GOVERNANCE?

No governance means there is nothing to follow, and everything goes its own way!
Without proper governance, be sure that business objectives won't be achieved and, at the least, the SharePoint implementation will fail.

Example: if there is no governance about "Site Creation", everybody would be able to create sites, and probably in the wrong way. Imagine a SharePoint site without any permission levels, etc.

You might meet a chaotic situation as depicted by the traffic jam below:

[Image: traffic jam]

A Bad Governance will introduce:

  • Social Exclusion
  • Inefficiency
  • Red Tape
  • Corruption
How to start a Governance?

Step by step, define a Governance implementation:

1. The Governance Committee must be organised

A governance committee includes people from the Business & IT divisions of an organization.

2. Decide the SharePoint Elements to be covered

SharePoint Elements that can be governed:

  • Operational Management
  • Technical Operations
  • Site and Security Administration
  • Content Administration
  • Personal and Social Administration


3. Define and implement Rules & Policies

The implementation includes properly written Rules & Policies:

  • Setting up Rights & Permissions for Users & Groups
  • Restrict Site Collection creation
  • Setup content approval & routing
  • Setup Locks & Quotas
  • Set Document Versioning Policies
  • Set Retention / Deletion Policies
  • Restrict page customization & usage of SharePoint Designer
  • Setup workflows for automating approvals & processes (using SharePoint Tool or a third party tool)

Good communication with users, and good adoption of those elements, will drive higher productivity and fewer support calls.


4. Drive & Reinforce Governance

Regular meetings are conducted by the Governance Committee to review governance; any necessary changes to the Rules & Policies are made during this phase.

Use the Best practices for governance plans:

  • Determine initial principles and goals
  • Classify your business information
  • Develop an education strategy
  • Develop an ongoing plan

Technet source: https://technet.microsoft.com/en-us/library/cc263356.aspx

 

Governance and teamwork are essential to a smart implementation!


Wrong Java version on Unified Directory Server

Frank van Bortel - Mon, 2015-06-08 06:09
After losing the battle with the OS guys for control over Java, I keep stumbling upon environments that have the wrong Java version, because Java is installed in /usr/java or /usr/bin. In such cases, this is the result:

which java
/usr/bin/java

As I do not have control over /usr/bin, I install Java in /oracle/middleware/java, so I would like "which java" to resolve to /oracle/middleware/java instead - which means putting /oracle/middleware/java/bin ahead of /usr/bin in the PATH.

Teradata will support Presto

DBMS2 - Mon, 2015-06-08 03:32

At the highest level:

  • Presto is, roughly speaking, Facebook’s replacement for Hive, at least for queries that are supposed to run at interactive speeds.
  • Teradata is announcing support for Presto with a classic open source pricing model.
  • Presto will also become, roughly speaking, Teradata’s replacement for Hive.
  • Teradata’s Presto efforts are being conducted by the former Hadapt.

Now let’s make that all a little more precise.

Regarding Presto (and I got most of this from Teradata):

  • To a first approximation, Presto is just another way to write SQL queries against HDFS (Hadoop Distributed File System). However …
  • … Presto queries other data stores too, such as various kinds of RDBMS, and federates query results (see the sketch after this list).
  • Facebook at various points in time created both Hive and now Presto.
  • Facebook started the Presto project in 2012 and now has 10 engineers on it.
  • Teradata has named 16 engineers – all from Hadapt – who will be contributing to Presto.
  • Known serious users of Presto include Facebook, Netflix, Groupon and Airbnb. Airbnb likes Presto well enough to have 1/3 of its employees using it, via an Airbnb-developed tool called Airpal.
  • Facebook is known to have a cluster cited at 300 petabytes and 4000 users where Presto is presumed to be a principal part of the workload.
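
To illustrate the federation point above, here is a hypothetical Presto query (catalog, schema, and table names are invented for the example) that joins a Hive table with a MySQL table, each exposed through its own connector:

-- Hypothetical federated query spanning two Presto catalogs
SELECT o.order_date,
       c.region,
       SUM(o.amount) AS revenue
FROM hive.web.orders o
JOIN mysql.crm.customers c
  ON o.customer_id = c.customer_id
GROUP BY o.order_date, c.region;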

Daniel Abadi said that Presto satisfies what he sees as some core architectural requirements for a modern parallel analytic RDBMS project: 

  • Data is pipelined between operators, with no gratuitous writing to disk the way you might have in something MapReduce-based. This is different from the sense of “pipelining” in which one query might keep an intermediate result set hanging around because another query is known to need those results as well.
  • Presto processing is vectorized; functions don’t need to be re-invoked a tuple at a time. This is different from the sense of vectorization in which several tuples are processed at once, exploiting SIMD (Single Instruction Multiple Data). Dan thinks SIMD is useful mainly for column stores, and Presto tries to be storage-architecture-agnostic.
  • Presto query operators and hence query plans are dynamically compiled, down to byte code.
  • Although it is generally written in Java, Presto uses direct memory management rather than relying on what Java provides. Dan believes that, despite being written in Java, Presto performs as if it were written in C.

More precisely, this is a checklist for interactive-speed parallel SQL. There are some query jobs long enough that Dan thinks you need the fault-tolerance obtained from writing intermediate results to disk, ala’ HadoopDB (which was of course the MapReduce-based predecessor to Hadapt).

That said, Presto is a newish database technology effort, there’s lots of stuff missing from it, and there still will be lots of stuff missing from Presto years from now. Teradata has announced contribution plans to Presto for, give or take, the next year, in three phases:

  • Phase 1 (released immediately, and hence in particular already done):
    • An installer.
    • More documentation, especially around installation.
    • Command-line monitoring and management.
  • Phase 2 (later in 2015)
    • Integrations with YARN, Ambari and soon thereafter Cloudera Manager.
    • Expanded SQL coverage.
  • Phase 3 (some time in 2016)
    • An ODBC driver, which is of course essential for business intelligence tool connectivity.
    • Other connectors (e.g. more targets for query federation).
    • Security.
    • Further SQL coverage.

Absent from any specific plans that were disclosed to me was anything about optimization or other performance hacks, and anything about workload management beyond what can be gotten from YARN. I also suspect that much SQL coverage will still be lacking after Phase 3.

Teradata’s basic business model for Presto is:

  • Teradata is selling subscriptions, for which the principal benefit is support.
  • Teradata reserves the right to make some of its Presto-enhancing code subscription-only, but has no immediate plans to do so.
  • Teradata being Teradata, it would love to sell you Presto-related professional services. But you’re absolutely welcome to consume Presto on the basis of license-plus-routine-support-only.

And of course Presto is usurping Hive’s role wherever that makes sense in Teradata’s data connectivity story, e.g. Teradata QueryGrid.

Finally, since I was on the phone with Justin Borgman and Dan Abadi, discussing a project that involved 16 former Hadapt engineers, I asked about Hadapt’s status. That may be summarized as:

  • There are currently no new Hadapt sales.
  • Only a few large Hadapt customers are still being supported by Teradata.
  • The former Hadapt folks would love Hadapt or Hadapt-like technology to be integrated with Presto, but no such plans have been finalized at this time.
Categories: Other

QlikView Tips & Tricks: The Link Table

Yann Neuhaus - Mon, 2015-06-08 01:00

In this blog, I will show you how to bypass a "Synthetic Key" table in QlikView.

Why bypass a "Synthetic Key" table?

If you have multiple links between two tables, QlikView automatically generates a "Synthetic Key" table (here, the "$Syn 1" table).

QlikView best practice recommends removing this kind of key table, for reasons of performance and "correctness" of the result.

[Diagram: the data model with the generated "$Syn 1" synthetic key table]

How to bypass this “Synthetic key” table?

The “Link Table” is the solution to bypass the generation of a synthetic key table.

This table will contain two kinds of fields:

  • A “foreign key”, made with the fields that are common to the two tables
  • The fields that have been used to create the new “foreign key”

This “Link Table” will have the following structure:

[Diagram: generic structure of the Link Table]

In our case, the structure of the “Link Table” will be the following:

[Diagram: structure of the Link Table in our case]

How to proceed? Add the needed fields in the linked tables

Before creating the "Link Table", we must add the fields to the tables that should be linked together.

Remark: A best practice to create this “Foreign_Key” field is to separate the different fields with “|”.

So, in our case, the fields in the SALESDETAILS table will be added as follows:

[Screenshot: the Foreign_Key field added in the SALESDETAILS load]

The fields in the BUDGET table will be added as follows:

[Screenshot: the Foreign_Key field added in the BUDGET load]

Create the “Link table”

The fields needed to create the "Link Table" are now added, so we can create the table as follows:

Click on “Tab / Add Tab” and name it “LINK_TABLE” (1).

[Screenshot: adding the LINK_TABLE tab]

Type the following script:

(1) The name of the table

(2) The names of the fields should be the same in each table

(3) Use the CONCATENATE instruction

[Screenshot: the LINK_TABLE load script]
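
Since the script itself only appears in the screenshot, here is a minimal sketch of what such a link-table script could look like. The common fields Year and Month are invented for the illustration; the table names and the Foreign_Key convention follow the example above:

// (1) The name of the table
LINK_TABLE:
LOAD DISTINCT
    Foreign_Key,    // (2) the field names are the same in each table
    Year,
    Month
RESIDENT SALESDETAILS;

// (3) the CONCATENATE instruction appends the keys from the second table
CONCATENATE (LINK_TABLE)
LOAD DISTINCT
    Foreign_Key,
    Year,
    Month
RESIDENT BUDGET;

// Finally, drop the component fields from the source tables so that only
// Foreign_Key links them to the LINK_TABLE (no more synthetic key)
DROP FIELDS Year, Month FROM SALESDETAILS;
DROP FIELDS Year, Month FROM BUDGET;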

Reload the data (1) and check the result (2)

[Screenshot: reloading the data]

The result should be like this:

[Screenshot: the resulting data model]

Creepy Dolls - A Technology and Privacy Nightmare!

Abhinav Agarwal - Sun, 2015-06-07 22:22
This post was first published on LinkedIn on 20th May, 2015.

"Hi, I'm Chucky. Wanna play?"[1]  Fans of the horror film genre will surely recall these lines - innocent-sounding on their own, yet bone-chilling in the context of the scene in the movie - that Chucky, the possessed demonic doll, utters in the cult classic, "Child's Play". Called a "cheerfully energetic horror film" by Roger Ebert [2], the movie was released to more than a thousand screens on its debut in November 1988 [3]. It went on to spawn at least five sequels and developed a cult following of sorts over the next two decades [4].

Chucky the doll (image credit: http://www.shocktillyoudrop.com/)

In "Child's Play", Chucky the killer doll stays quiet around the adults - at least initially - but carries on secret conversations with Andy, and is persuasive enough to convince him to skip school and travel to downtown Chicago. Chucky understands how children think, and can evidently manipulate - or convince, depending on how you frame it - Andy into doing little favours for him. A doll that could speak, hear, see, understand, and have a conversation with a human in the eighties was the stuff of science fiction or, in the case of "Child's Play", of a horror movie.


Edison Talking Doll (image credit: www.davescooltoys.com)

A realistic doll that could talk and converse was for long the "holy grail" of dollmakers [5]. It will come as a huge surprise to many - at least it did to me - that within a few years of the invention of the phonograph by Thomas Edison in 1877, a doll with a pre-recorded voice had been developed and marketed in 1890! It didn't have a very happy debut, however. After "several years of experimentation and development", the Edison Talking Doll, when it launched in 1890, "was a dismal failure that was only marketed for a few short weeks."[6] Talking dolls seem to have made their entry into mainstream retail only with the advent of "Chatty Cathy" - released by Mattel in the 1960s - which worked on a simple pull-string mechanism. The quest to make these dolls more interactive and more "intelligent" continued; "Amazing Amanda" was another milestone in this development; it incorporated "voice-recognition and memory chips, sensory technology and facial animatronics" [7]. It was touted as "an evolutionary leap from earlier talking dolls like Chatty Cathy of the 1960's" by some analysts [8]. In some ways that assessment was not off the mark. After all, "Amazing Amanda" utilized RFID technology - among the hottest technology buzzwords a decade back. "Radio-frequency tags in Amanda's accessories - including toy food, potty and clothing - wirelessly inform the doll of what it is interacting with." This is what enabled "Amazing Amanda" to differentiate between "food" (pizza, or "cookies, pancakes and spaghetti") and "juice"[9]. However, even with all these developments and capabilities, the universe of what these toys could do was severely limited. At most they could recognize the voice of the child as its "mommy".
Amazing Amanda doll (image credit: amazing-amanda.fuzzup.net)

They were constrained by both the high price of storage (Flash storage is much sturdier than spinning hard drives, but an order of magnitude costlier; this limits the amount of storage possible) and limited computational capability (putting a high-end microprocessor inside every doll would make them prohibitively expensive). The flip side was that what the toys spoke to the children at home stayed at home. These toys had a limited set of pre-programmed sentences and emotions they could convey, and if you wanted something different, you went out and bought a new toy, or in some cases, a different cartridge.


That's where things stood. Till now.

Screenshot of ToyFair website

Between February 14-17, 2015, the Jacob K. Javits Convention Center in New York saw "the Western Hemisphere's largest and most important toy show"[10] - the 2015 Toy Fair. This was a trade show, which meant that "Toy Fair is not open to the public. NO ONE under the age of 18, including infants, will be admitted."[11] It featured a "record-breaking 422,000+ net square feet of exhibit space"[12] and hundreds of thousands of toys. Yet no children were allowed. Be that as it may, there was no dearth of, let's say, "innovative" toys. Apart from an "ultra creepy mechanical doll, complete with dead eyes", a fake fish pet taken to a "whole new level of weird", or a "Doo Doo Head" doll that had the shape of you-guessed-it [13], of particular interest was a "Hello Barbie" doll, launched by the Fortune 500 behemoth, Mattel. This doll had several USPs to its credit. It featured voice-recognition software, voice recording capabilities, and the ability to upload recorded conversations to a server (presumably Mattel's or ToyTalk's) in the cloud, over "Wi-Fi" - as a representative at the exhibition took pains to emphasize, repeatedly - and give "chatty responses."[14] This voice data would be processed and analyzed by the company's servers. The doll would learn the child's interests, and be able to carry on a conversation on those topics - made possible by the fact that the entire computational and learning capabilities of a server farm in the cloud could be accessed by every such toy. That the Barbie franchise is a vital one to Mattel cannot be overstated. The Barbie brand netted Mattel $1.2 billion in FY 2013 [15], but this represented a six per cent year-on-year decline. Mattel attributed this decline in Barbie sales in part to "product innovation not being strong enough to drive growth." The message was clear. Something very "innovative" was needed to jump-start sales. To make that technological leap forward, Mattel decided to team up with ToyTalk.

ToyTalk is a San Francisco-based start-up, and its platform powered the voice-recognition software used by "Hello Barbie". ToyTalk is headed by "CEO Oren Jacob, Pixar's former CTO, who worked at the groundbreaking animation company for 20 years" [16], and which claimed "$31M in funding from Greylock Partners, Charles River Ventures, Khosla Ventures, True Ventures and First Round Capital as well as a number of angel investors." [17]

Cover of Misery, by Stephen King. Published by Viking Press.

The voice-recognition software would allow Mattel and ToyTalk to learn the preferences of the child, and over time refine the responses that Barbie would communicate back. As the Mattel representative put it, "She's going to get to know all my likes and all my dislikes..."[18] - a statement that at one level reminds one of Annie Wilkes when she says, "I'm your number one fan."[19] We certainly don't want to be in Paul Sheldon's shoes.

Hello Barbie's learning would start happening from the time the doll was switched on and connected to a Wi-Fi network. ToyTalk CEO Oren Jacob said, "we'll see week one what kids want to talk about or not" [20]. These recordings, once uploaded to the company's servers, would be used by "ToyTalk's speech recognition platform, currently powering the company's own interactive iPad apps including The Winston Show, SpeakaLegend, and SpeakaZoo", which then "allows writers to create branching dialogue based on what children will potentially actually say, and collects kids' replies in the cloud for the writers to study and use in an evolving environment of topics and responses."[20] Some unknown set of people, sitting in some unknown location, would potentially get to hear and listen to entire conversations of a child before his parents would.

If Mattel or ToyTalk did not anticipate the reaction this doll would generate, one can only put it down to the blissful disconnect from the real world that Silicon Valley entrepreneurs often develop, surrounded as they are by similar-thinking digerati. In any case, the responses were swift, and in most cases brutal. The German magazine "Stern" headlined an article on the doll "Mattel entwickelt die Stasi-Barbie" [21]. Even without the benefit of translation, the word "Stasi" stood out like a red flag. In any case, if you wondered, the headline translated to "Mattel developed the Stasi Barbie" [22]. Stern "curtly re-baptised" it "Barbie IM". "The initials stand for "Inoffizieller Mitarbeiter", informants who worked for East Germany's infamous secret police, the Stasi, during the Cold War." [23] [24]. A Newsweek article carried a story, "Privacy Advocates Call Talking Barbie 'Surveillance Barbie'"[25]. France 24 wrote "Germans balk at new 'Soviet snitch' Barbie" [26]. The ever-acerbic The Register dug into ToyTalk's privacy policy on the company's web site, and found these gems [27]:
Screenshot of ToyTalk's Privacy page

- "When users interact with ToyTalk, we may capture photographs or audio or video recordings (the "Recordings") of such interactions, depending upon the particular application being used.
- We may use, transcribe and store such Recordings to provide and maintain the Service, to develop, test or improve speech recognition technology and artificial intelligence algorithms, and for other research and development or internal purposes."

Further reading revealed that what your child spoke to the doll in the confines of his home in, say, suburban Troy, Michigan, could end up travelling halfway across the world, to be stored on a server in a foreign country: "We may store and process personal information in the United States and other countries." [28]

What information would ToyTalk share with "Third Parties" was equally disturbing, both for the amount of information that could potentially be shared as well as for the vagueness in defining who these third-parties could possibly be - "Personal information"; "in an aggregated or anonymized form that does not directly identify you or others;"; "in connection with, or during negotiations of, any merger, sale of company assets, financing or acquisition, or in any other situation where personal information may be disclosed or transferred as one of the business assets of ToyTalk"; "We may also share feature extracted data and transcripts that are created from such Recordings, but from which any personal information has been removed, with Service Providers or other third parties for their use in developing, testing and improving speech recognition technology and artificial intelligence algorithms and for research and development or other purposes."[28] A child's speech, words, conversation, voice - as recorded by the doll - was the "business asset" of the company.

And lest the reader have any concerns about safety and security of the data on the company's servers, the following disclaimer put paid to any reassurances on that front also: "no security measures are perfect or impenetrable and no method of data transmission that can be guaranteed against any interception or other type of misuse."[28] If the sound of hands being washed-off could be put down on paper, that sentence above is what it could conceivably look like.

Apart from the firestorm of criticism described above, the advocacy group "Campaign for a Commercial Free Childhood" started a campaign to petition Mattel "CEO Christopher Sinclair to stop "Hello Barbie" immediately." [29]

The brouhaha over "Hello Barbie" is however only symptomatic of several larger issues that have emerged and intersect each other in varying degrees, raising important questions about technology, including the cloud, big data, the Internet of Things, data mining, analytics; privacy in an increasingly digital world; advertising and the ethics of marketing to children; law and how it is able to or unable to cope with an increasingly digitized society; and the impact on children and teens - sociological as well as psychological. Technology and Moore's Law [30] have combined with the convenience of broadband to make possible what would have been in the realm of science fiction even two decades ago. The Internet, while opening up untold avenues of betterment for society at large, has however also revealed itself as not without a dark side - a dilemma universally common to almost every transformative change in society. From the possibly alienating effects of excessive addiction to the Internet to physiological changes that the very nature of the hyperlinked web engenders in humans - these are issues that are only recently beginning to attract the attention of academics and researchers. The basic and most fundamental notions of what people commonly understood as "privacy" are not only being challenged in today's digital world, but in most cases without even a modicum of understanding on the part of the affected party - you. In the nebulous space that hopefully still exists between those who believe in technology as the only solution capable of delivering a digital nirvana to all and every imaginable problem in society on the one hand and the Luddites who see every bit of technology as a rabid byte (that's a bad pun) against humanity lies a saner middle ground that seeks to understand and adapt technology for the betterment of humanity, society, and the world at large.

So what happened to Chucky? Well, as we know, it spawned a successful and profitable franchise of sequels and assorted spin-offs. Which direction "Hello Barbie" takes is of less interest to me than the broader questions I raised in the previous paragraph.

References:
[1] http://www.imdb.com/title/tt0094862/quotes?item=qt0289926 
[2] "Child's Play" review, http://www.rogerebert.com/reviews/childs-play-1988
[3] http://www.the-numbers.com/movie/Childs-Play#tab=box-office
[4] https://en.wikipedia.org/wiki/Child%27s_Play_%28franchise%29
[5] "A Brief History of Talking Dolls--From Bebe Phonographe to Amazing Amanda", http://collectdolls.about.com/od/dollsbymaterial/a/talkingdolls.htm
[6] "Edison Talking Doll", http://www.edisontinfoil.com/doll.htm
[7] http://www.canada.com/story.html?id=f4370a3c-903d-4728-a9a4-3d3f941055a6
[8] http://www.nytimes.com/2005/08/25/technology/circuits/25doll.html?pagewanted=all&_r=0
[9] http://www.canada.com/story.html?id=f4370a3c-903d-4728-a9a4-3d3f941055a6
[10] http://www.toyfairny.com/toyfair/Toy_Fair/Show_Info/A_Look_Back.aspx
[11] http://www.toyfairny.com/ToyFair/ShowInfo/About_the_Show/Toy_Fair/Show_Info/About_the_Show.aspx
[12] http://www.toyfairny.com/ToyFair/ShowInfo/About_the_Show/Toy_Fair/Show_Info/About_the_Show.aspx
[13] http://mashable.com/2015/02/15/weird-toys-2015-toy-fair/
[14] https://www.youtube.com/watch?feature=player_embedded&v=RJMvmVCwoNM
[15] http://corporate.mattel.com/PDFs/2013_AR_Report_Mattel%20Inc.pdf
[16] http://www.fastcompany.com/3042430/most-creative-people/using-toytalk-technology-new-hello-barbie-will-have-real-conversations-
[17] https://www.toytalk.com/about/
[18] https://www.youtube.com/watch?feature=player_embedded&v=RJMvmVCwoNM
[19] http://www.imdb.com/title/tt0100157/quotes?item=qt0269492
[20] http://www.fastcompany.com/3042430/most-creative-people/using-toytalk-technology-new-hello-barbie-will-have-real-conversations-
[21] http://www.stern.de/digital/ueberwachung/barbie-wird-zum-spion-im-kinderzimmer-2173997.html
[22] https://translate.google.co.in/?ie=UTF-8&hl=en&client=tw-ob#auto/en/Mattel%20entwickelt%20die%20Stasi-Barbie
[23] http://www.france24.com/en/20150224-hello-barbie-germany-stasi-data-collection/
[24] http://www.stern.de/digital/ueberwachung/barbie-wird-zum-spion-im-kinderzimmer-2173997.html
[25] http://www.newsweek.com/privacy-advocates-want-take-wifi-connected-hello-barbie-offline-313432
[26] http://www.france24.com/en/20150224-hello-barbie-germany-stasi-data-collection/
[27] http://www.theregister.co.uk/2015/02/19/hello_barbie/
[28] https://www.toytalk.com/legal/privacy/
[29] http://org.salsalabs.com/o/621/p/dia/action3/common/public/?action_KEY=17347
[30] http://en.wikipedia.org/wiki/Moore's_law


Disclaimer: Views expressed are personal.


© 2015, Abhinav Agarwal. All rights reserved.

Partner Webcast – Oracle Database 12c: Application Express 5.0 for Cloud development

If you have the Oracle Database, you already have Application Express. When you get Oracle Database Cloud, you get the full Application Express development platform for cloud-based applications. Since...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Flipkart and Focus 4 - Beware the Whispering Death

Abhinav Agarwal - Sun, 2015-06-07 12:43
The fourth part of my series on Flipkart and its apparent loss of Focus and its battle with Amazon appeared in DNA on April 20th, 2015.

Part 4 – Beware the Whispering Death
Monopolies may have the luxury of getting distracted. If you were a Microsoft in the 1990s, you could force computer manufacturers to pay you an MS-DOS royalty for every computer they sold, irrespective of whether the computer had a Microsoft operating system installed on it or not[1]. You dared not go against Microsoft, because if you did, it could snuff you out - "cut off the oxygen supply[2]", to put it more evocatively. But if you are a monopoly, you do have to keep one eye on the regulator[3], which distracts you. If you are not a monopoly, you have to keep one eye on the competition (despite what Amazon may keep saying to the contrary, that they "just ignore the competition"[4]).



Few companies exist in a competitive vacuum. In Flipkart's case, the competition is Amazon - make no mistake about it. Yes, there is SnapDeal, eBay India, and even HomeShop18; but the numbers speak for themselves. Flipkart has pulled ahead of the pack. As long as Amazon had not entered the Indian market, Flipkart's rise was more or less certain, thanks to its sharp focus on expanding its offerings, honing its supply-chain, and successfully raising enough capital to not have to worry about its bottom-line while it furiously expanded. Amazon India made a quiet entry on the fifth of June 2013[5], with two categories - books, and Movies & TV shows - but followed up with a very splashy blitz two months later in August (it offered 66% discounts on many books[6] to mark India's 66 years of independence - I should know, I binge-bought about twenty books!). A little more than a year later, in September 2014, Amazon turned the screws even more when its iconic founder-CEO, Jeff Bezos, visited India. In a very showy display that earned it a ton of free advertising, Bezos wore a sherwani and got himself photographed swinging from an Indian truck[7], met Narendra Modi, the Indian Prime Minister[8], and reiterated Amazon's commitment and confidence in the Indian market[9] - all this without ever taking Flipkart's name. It didn't help Flipkart that on July 30th 2014, Amazon India had announced an additional $2 billion investment in India[10]. It didn't hurt Amazon either that it timed the press release exactly one day after Flipkart closed $1 billion in funding[11] - this was entirely in Amazon's way of jiu jitsu-ing its competitors (so much for "ignoring the competition"). Flipkart on its part ran into yet more needless problems with its much-touted "Big Billion" sale that was mercilessly ambushed by competitors[12], and which resulted in its founders having to tender an apology[13] for several glitches its customers faced during the sale. Then there were questions on just how much money it actually made from the event, which I analyzed[14].

Flipkart seemed to be getting distracted.

When facing a charged-up Michael Holding, you cannot afford to let your guard down, even if you are batting on 91. Ask the legendary Sunil Gavaskar[15]. Amazon is the Michael Holding of competitors. Ask Marc Lore, the founder of Jet, “which is planning to launch a sort of online Costco later this spring with 10 million discounted products”[16]. Marc who? He is the co-founder of Quidsi. Quidsi who? Quidsi is (was) the company behind the website Diapers.com, and which was acquired by Amazon. Therein lies a tale.

Diapers.com was the website of Quidsi, a New Jersey start-up founded in 2005 by Marc Lore and Vinit Bharara to solve a very real problem: children running through diapers at a crazy pace, and “dragging screaming children to the store is a well-known parental hassle.” What made selling diapers online unviable for retailers was the cost involved in “shipping big, bulky, low-margin products like jumbo packs of Huggies Snug and Dry to people’s front doors.” Diapers.com solved the problem by using “software to match every order with the smallest possible shipping box, minimizing excess weight and thus reducing the per-order shipping cost.” Within a few years, it grew from zero to over $300 million in annual sales. It was only when VC firms, including Accel Partners, pumped in $50 million that Amazon and Jeff Bezos started to pay attention. Sometime in 2009, Amazon started to drop prices on diapers and other baby products by up to 30 percent. Quidsi lowered prices – as an experiment – only to watch Amazon’s website change prices accordingly. Quidsi fared well under Amazon’s assault, “at least at first.” However, growth slowed. “Investors were reluctant to furnish Quidsi with additional capital, and the company was not yet mature enough for an IPO.” Quidsi spoke with WalMart vice chairman (and head of WalMart.com) Eduardo Castro-Wright, but Quidsi’s asking price of $900 million was more than what WalMart was willing to pay. Even as Lore and Bharara travelled to Seattle to meet with Amazon for a possible deal, Amazon launched Amazon Mom – literally while the two were in the air and therefore unreachable by a frantic Quidsi staff! “Quidsi executives took what they knew about shipping rates, factored in Procter and Gamble’s wholesale prices, and calculated that Amazon was on track to lose $100 million over three months in the diapers category alone.” Amazon offered $580 million. WalMart upped its offer to $600 million – this offer was revealed to Amazon because of conditions in the preliminary term sheet that required Quidsi “to turn over information about any subsequent offers.” When Amazon executives learned of this offer, “they ratcheted up the pressure even further, threatening the Quidsi founders that “sensei,” being such a furious competitor, would drive diaper prices to zero if they went with Walmart.” Quidsi folded, sold to Amazon, and the deal was announced on November 8, 2010[17]. Marc Lore continued with Amazon for two years after that – most likely the result of a typical retention and non-compete clause in such acquisitions.

The tale of Quidsi is a cautionary tale for any company going head-to-head with Amazon. For more details on the fascinating history of Amazon, I would recommend Brad Stone’s book, “The Everything Store: Jeff Bezos and the Age of Amazon”[18] – from which I have adapted the example of Diapers.com above. You can read another report here[19]. I suspect you may well find some copies of the book lying around in Flipkart’s Bengaluru offices!

In its evolution and growth as an online retailer, Flipkart has adopted and emulated several of Amazon’s successful features. Arguably the most successful innovation from Amazon has been to reduce, or in some cases entirely eliminate, the friction of ordering goods from its website. The pace and extent of innovation is quite breath-taking. A brief overview will help illustrate the point.
Amazon used to charge shipping for every order placed, in addition to a handling charge per item (typically 99 cents). In 2002, it launched “Free Super Saver Shipping on qualifying orders over $49” as a test. After seeing the results, it lowered this threshold to $25[20]. That threshold held for over ten years, till 2013, when it raised the minimum to $35[21]. Not content with this, to lure in that segment of customers who wanted to order even a single item and have it delivered in two days or less, Amazon launched a new express shipping option – Amazon Prime – where “for a flat membership fee of $79 per year, members get unlimited, express two-day shipping for free, with no minimum purchase requirement.”[22] This proved to be a blockbuster hit for Amazon, and the company piled goodies onto the program – Amazon Instant Video, “instant streaming of selected movies and TV shows”, at no additional cost[23]. That same year it launched “Library Lending for Kindle Books”, which allowed customers “to borrow Kindle books from over 11,000 local libraries to read on Kindle and free Kindle reading apps”[24], with no due date – and added that to the Prime program, at no extra cost. In 2011 it launched “Subscribe & Save”, which let customers order certain items on a regular basis at a discounted price – you selected the frequency, and the item would be delivered every month or quarter without your having to re-order it. Amazon then launched “Kindle MatchBook”, where, “For thousands of qualifying books, your past, present, and future print-edition purchases now allow you to buy the Kindle edition for $2.99 or less.”[25] Similarly, its “AutoRip” program gave customers free MP3 versions of CDs they had purchased from Amazon – going back to 1998[26] – and was later extended to vinyl records[27].

If all this was not enough, in 2015 Amazon launched a physical button called Dash Button – on April 1st, no less – that would let customers order an item of their selection with one press of the button. It could be their favourite detergent, dog food, paper towels, diapers – an expanding selection. You could stick that button anywhere – your refrigerator, car dashboard, anywhere. It was indeed so outlandish that many thought it was an April Fool’s gimmick[28].
Amazon has been relentless in eliminating friction between the customer and the buying process on Amazon on the one hand, and on squeezing out its competitors with a relentless, ruthless pressure on the other. It manages to do all this while topping customer satisfaction surveys[29], year after year[30].

Flipkart has certainly not been caught flat-footed. It’s been busy introducing several similar programs. It began with free shipping, then raised the minimum order for free shipping to ₹100, then ₹200, and eventually ₹500. Somewhere in between, it modified that to exclude books fulfilled by WS Retail (which was co-founded by Flipkart’s founders and which accounts for more than three-fourths of all products sold on Flipkart[31]) from that minimum. In May 2014, it launched Flipkart First, an Amazon Prime-like membership program that entitled customers to free “in-a-day” shipping for an annual fee of ₹500[32]. It also tied up with Mumbai’s famed “dabbawalas” to solve the last-mile connectivity problem for deliveries[33].

Flipkart’s foray into digital music, however, was less than successful. It shuttered its online music store, Flyte, in June 2013, a little over a year after launching it[34]. Some speculated it was unable to compete with free offerings like Saavn and Gaana, and unable to meet the annual minimum guarantees it had signed with music labels[35]. Whether it really needed to pull the plug so soon is debatable – for all practical purposes, it may have signalled weakness to the world. Competitors watch these developments very, very closely. Its e-book business has been around for a little over two years, but it is not clear how much traction it has in the market. With the launch of the Amazon Kindle in India, Flipkart will see that business squeezed even more. The history of the e-book market is not a happy tale – if you are not Amazon or the customer.

The market for instant gratification refuses to stand still. Amazon upped the ante by launching Amazon Prime Now in December 2014. Prime program customers were guaranteed one-hour delivery on tens of thousands of items for $7.99 (two-hour delivery was free)[36]. The program was launched in Manhattan and rapidly expanded to half a dozen US cities by April 2015[37]. Closer to home, Amazon launched KiranaNow in Bangalore in March 2015, promising delivery of groceries and related items within four hours[38].

More than anything else, the online retail world is a race to eliminate friction from the buying process, to accelerate and enable buying decisions – as frequently as possible, and to provide instant gratification through instant delivery (in the case of e-books or streaming music or video) or one-hour deliveries. Flipkart may well be the incumbent and the player to beat in the Indian market, but Amazon brings with it close to two decades of experience – experience of battling it out in conditions that are very similar to the Indian market in several respects. More ominously, for Flipkart, Amazon has won many more battles than it has lost. Distraction can prove to be a fatal attraction and affliction.

[1] This is described in James Wallace’s book, “Overdrive: Bill Gates and the Race to Control Cyberspace”, http://www.amazon.in/gp/product/B00J348MXG/ref=as_li_tl?ie=UTF8&camp=3626&creative=24822&creativeASIN=B00J348MXG&linkCode=as2&tag=abhisblog-21&linkId=XIHAIBIQ3H6L6NMH
[2] "BBC NEWS | Special Report | 1998 | 04/98 | Microsoft | USA versus Microsoft: The first two days", http://news.bbc.co.uk/2/hi/special_report/1998/04/98/microsoft/198390.stm
[3] " Justice to Launch Probe of Microsoft ", http://www.washingtonpost.com/wp-srv/business/longterm/microsoft/stories/1993/launch082193.htm
[4] "We just ignore our competitors, never felt pressure from Alibaba's rise: Jeff Bezos, CEO Amazon ", http://articles.economictimes.indiatimes.com/2014-09-29/news/54437158_1_amazon-india-expectations-competitors
[5] "Amazon Launches In India", http://www.amazon.in/gp/feature.html/ref=amb_link_183716847_70?ie=UTF8&docId=1000728823&pf_rd_m=A1VBAL9TL5WCBF&pf_rd_s=center-4&pf_rd_r=05DGNQKB48Z6RV3KZV1G&pf_rd_t=1401&pf_rd_p=605972407&pf_rd_i=1000834593
[6] https://www.dropbox.com/s/jb9wa2vqu1x0p4x/AmazonIn_2013.png?dl=0
[7] "jeff bezos truck bangalore - Google Search", https://www.google.co.in/search?q=jeff+bezos+truck+bangalore&tbm=isch&tbo=u&source=univ&sa=X&ei=_IszVc3QGc-LuAT5i4CADw&ved=0CB0QsAQ&biw=1600&bih=741
[8] "Amazon chief Jeffrey Bezos calls on Prime Minister Modi - The Times of India", http://timesofindia.indiatimes.com/business/india-business/Amazon-chief-Jeffrey-Bezos-calls-on-Prime-Minister-Modi/articleshow/44229776.cms
[9] "No obstacles to growth in India: Amazon CEO Jeff Bezos", http://www.hindustantimes.com/business-news/no-obstacles-to-growth-in-india-says-amazon-ceo-jeff-bezos/article1-1269464.aspx
[10] "Amazon Announces Additional US $2 Billion Investment in India", http://www.amazon.in/gp/feature.html?ie=UTF8&docId=1000818573
[11] "India's Flipkart Raises $1 Billion in Fresh Funding - WSJ", http://www.wsj.com/articles/indias-flipkart-raises-1-billion-in-fresh-funding-1406641579?mod=LS1
[12] "Ambushed: When Flipkart’s Big Billion Sale turned into a nightmare | Best Media Info, News and Analysis on Indian Advertising, Marketing and Media Industry.", http://www.bestmediainfo.com/2014/10/ambushed-when-flipkarts-big-billion-sale-turned-into-a-nightmare/
[13] "Flipkart’s ‘Big Billion Day Sale’ Prompts Big Apology - India Real Time - WSJ", http://blogs.wsj.com/indiarealtime/2014/10/08/flipkarts-big-billion-day-sale-prompts-big-apology/
[14] "A Billion Dollar Sale, And A Few Questions", http://www.dnaindia.com/analysis/standpoint-a-billion-dollar-sale-and-a-few-questions-2047853
[15] "3rd Test: India v West Indies at Ahmedabad, Nov 12-16, 1983", http://www.espncricinfo.com/ci/engine/match/63352.html
[16] "Why Amazon Refuses to Wear Purple Lanyards in Vegas", http://www.bloomberg.com/news/articles/2015-04-01/why-amazon-refuses-to-wear-purple-lanyards-in-vegas
[17] "Amazon.com to Acquire Diapers.com and Soap.com", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1493202
[18] “The Everything Store: Jeff Bezos and the Age of Amazon”, by Brad Stone, http://www.amazon.in/Everything-Store-Brad-Stone/dp/0593070461/tag=abhisblog-21&ref=sr_1_1?ie=UTF8&qid=1429439636&sr=8-1&keywords=the+everything+store#reader_0593070461
[19] "Amazon vs. Jet.com: Marc Lore Aims to Beat Bezos", http://www.bloomberg.com/news/features/2015-01-07/amazon-bought-this-mans-company-now-hes-coming-for-them-correct
[20] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=503037
[21] "Amazon Raises Free Shipping Threshold From $25 to $35", http://www.pcmag.com/article2/0,2817,2426202,00.asp
[22] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=669786
[23] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1531234
[24] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1552678
[25] "Amazon.com: Kindle MatchBook", https://www.amazon.com/gp/digital/ep-landing-page?ie=UTF8&*Version*=1&*entries*=0
[26] "Introducing “Amazon AutoRip” – Customers Now Receive Free MP3 Versions of CDs Purchased From Amazon – Past, Present and Future", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1773251
[27] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1802939
[28] "Amazon launches a product so gimmicky we thought it was an April Fools' joke", http://venturebeat.com/2015/03/31/amazon-launches-a-product-so-gimmicky-we-thought-it-was-an-april-fools-joke/
[29] "Customer Satisfaction Lowest at Wal-Mart, Highest at Nordstrom and Amazon", http://247wallst.com/retail/2015/02/18/customer-satisfaction-lowest-at-wal-mart-highest-at-nordstrom-and-amazon/
[30] "Customers Rank Amazon #1 in Customer Satisfaction", http://www.amazon.com/gp/feature.html?ie=UTF8&docId=1001924291
[31] "Flipkart top seller WS Retail to separate logistics arm Ekart into wholly-owned unit", http://articles.economictimes.indiatimes.com/2015-01-21/news/58306262_1_ekart-ws-retail-logistics-arm
[32] "India's Flipkart Launches Subscription Service for Customers", http://thenextweb.com/in/2014/05/08/indias-flipkart-launching-amazon-prime-like-subscription-service-called-flipkart-first/
[33] "Now Mumbai's famed dabbawalas will deliver your Flipkart buys", http://www.dnaindia.com/money/report-now-mumbai-s-famed-dabbawalas-will-deliver-your-flipkart-buys-2076276
[34] "Flipkart closes Flyte MP3 store a year after launch", http://www.livemint.com/Consumer/TJOoP9he0fq0EG7S8lRXYK/Flipkart-closes-Flyte-MP3-store-a-year-after-launch.html
[35] "Why Flipkart Shut Down Flyte Music - MediaNama", http://www.medianama.com/2013/05/223-why-flipkart-shut-flyte-music/
[36] "Amazon Media Room: Press Releases", http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=2000521
[37] "Amazon again expands 'Prime Now' one-hour delivery service, this time to Austin", http://www.geekwire.com/2015/amazon-again-expands-prime-now-one-hour-delivery-service-this-time-to-austin/

[38] "Now Amazon will deliver from your local kirana store", http://www.dnaindia.com/money/report-now-amazon-will-deliver-from-your-local-kirana-store-2076280


© 2015, Abhinav Agarwal (अभिनव अग्रवाल). All rights reserved.

The State of SaaS

Floyd Teter - Sun, 2015-06-07 12:09
I've been reading quite a bit lately about the maturation of SaaS...how the market is transitioning away from the "early adopter" phase into more of a mainstream marketplace.  With all due respect to those making such claims, I must offer a dissenting opinion.  While I am a big fan of SaaS, I still see at least three factors that must be addressed before SaaS can be considered a mature offering.  These three areas represent the soft underbelly of SaaS: integration, data state, and fear of losing control.

Integration
Perhaps your experience is different, but I have yet to see a service integration for enterprise software that works reliably out-of-the-box.  Pick your vendor:  Oracle, Workday, Amazon, Microsoft, Salesforce, Infor...it just doesn't happen.  There are too many variations among customer applications.  And, in all honesty, enterprise software vendors just don't seem to be all that good at writing packaged integrations.  That's part of the reason we see integration-as-a-service players like MuleSoft and Boomi making a play in the market.  It is also why so many technology companies offer integration implementation services.  We're still a far cry from easy, packaged integration.

Data State
After spending years in the enterprise software market, I'm firmly convinced that everyone has loads of "dirty data".  Records that poorly model transactions, inconsistent relationships between records, custom data objects that have worked around business rules intended for data governance.  Every closet has a skeleton.  The most successful SaaS implementations I've seen either summarize all those records into opening entries for the new system or junk customer data history altogether.  Both these approaches work in the financial applications, but not so well in HCM or Marketing.  Until we can figure out automated ways to take figurative steaming piles of waste and transform them into beautiful, fragrant rose beds of clean data, SaaS will continue to be a challenging transition for most enterprise software customers.

Fear of Losing Control
Certain customer departments are resistant to SaaS, mostly out of a fear of losing control.  Some is borne of a genuine concern over data security.  Some is over fear of losing job security.  

For those concerned over data security, consider that data security is critical for SaaS vendors.  Without the ability to secure data, they're out of business.  It's a core function for them.  So they hire armies of the best and brightest security people.  And they invest heavily in security systems.  And most customers can't match that, either in terms of talent or systems.  Result:  the SaaS vendors provide security solutions that are simply out of the reach of enterprise software customers.  There is a greater risk in keeping your data in-house than in utilizing a SaaS vendor to protect your data.

For those fearing the loss of job security, they're correct...unless they're willing to change.  The skills of maintaining large back-office enterprise software systems just don't apply in a SaaS world (unless you're working for a SaaS vendor).  I'd lump database administrators and database developers into this category.  However, there are new opportunities for those skills...developing and maintaining software that enables strategic in-house initiatives.  There are also opportunities to extend SaaS applications to support unique in-house needs.  Both scenarios require a change - working more closely with business as a partner rather than as a technology custodian.

Overcoming the fear of losing control will require significant investment in advocacy and evangelism...most customers need information, training, and assurance in overcoming these fears.  But we can't really say that SaaS is "there" until we see a significant turn in perceptions here.


So there you have it.  Is SaaS up-and-coming? Absolutely.  Is the SaaS market transitioning to a mainstream, mature marketplace?  No...lots of maturing needed in the areas of integration, data state, and fear of losing control before we can get there.

As always, your thoughts are welcome in the comments...

An alternative to DBA_EXTENTS optimized for LMT

Yann Neuhaus - Sun, 2015-06-07 11:45

This is a script I have had for several years, since tablespaces became locally managed. When we want to know which segment a block (identified by file id, block id) belongs to, the DBA_EXTENTS view can be very slow when you have lots of datafiles and lots of segments. This view, built on the underlying X$ tables and constrained by hints, is faster when queried for one FILE_ID/BLOCK_ID. I wrote it in 2006 when dealing with lots of corruptions on several 10TB databases with 5000 datafiles.

Since then, I've used it only a few times, so there is no guarantee that the plan is still optimal in current versions, but the approach of first filtering the segments that are in the same tablespace as the file_id makes it optimal for a search by file_id and block_id.

The script

Here is the creation of the DATAFILE_MAP view:

create or replace view datafile_map as
WITH
 l AS ( /* LMT extents indexed on ktfbuesegtsn,ktfbuesegfno,ktfbuesegbno */
  SELECT ktfbuesegtsn segtsn,ktfbuesegfno segrfn,ktfbuesegbno segbid, ktfbuefno extrfn, 
         ktfbuebno fstbid,ktfbuebno + ktfbueblks - 1 lstbid,ktfbueblks extblks,ktfbueextno extno 
  FROM sys.x$ktfbue
 ),
 d AS ( /* DMT extents ts#, segfile#, segblock# */
  SELECT ts# segtsn,segfile# segrfn,segblock# segbid, file# extrfn, 
         block# fstbid,block# + length - 1 lstbid,length extblks, ext# extno 
  FROM sys.uet$
 ),
 s AS ( /* segment information for the tablespace that contains afn file */
  SELECT /*+ materialize */
  f1.fenum afn,f1.ferfn rfn,s.ts# segtsn,s.FILE# segrfn,s.BLOCK# segbid ,s.TYPE# segtype,f2.fenum segafn,t.name tsname,blocksize
  FROM sys.seg$ s, sys.ts$ t, sys.x$kccfe f1,sys.x$kccfe f2  
  WHERE s.ts#=t.ts# AND t.ts#=f1.fetsn AND s.FILE#=f2.ferfn AND s.ts#=f2.fetsn 
 ),
 m AS ( /* extent mapping for the tablespace that contains afn file */
SELECT /*+ use_nl(e) ordered */ 
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,l e
 WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) ordered */  
 s.afn,s.segtsn,s.segrfn,s.segbid,extrfn,fstbid,lstbid,extblks,extno, segtype,s.rfn, tsname,blocksize
 FROM s,d e
  WHERE e.segtsn=s.segtsn AND e.segrfn=s.segrfn AND e.segbid=s.segbid
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */ 
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.ktfbfebno fstbid,e.ktfbfebno+e.ktfbfeblks-1 lstbid,e.ktfbfeblks extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.x$ktfbfe e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ktfbfetsn=f.fetsn and e.ktfbfefno=f.ferfn
 UNION ALL
 SELECT /*+ use_nl(e) use_nl(t) ordered */ 
 f.fenum afn,null segtsn,null segrfn,null segbid,f.ferfn extrfn,e.block# fstbid,e.block#+e.length-1 lstbid,e.length extblks,null extno, null segtype,f.ferfn rfn,name tsname,blocksize
 FROM sys.x$kccfe f,sys.fet$ e,sys.ts$ t
 WHERE t.ts#=f.fetsn and e.ts#=f.fetsn and e.file#=f.ferfn
 ),
 o AS (
  SELECT s.tablespace_id segtsn,s.relative_fno segrfn,s.header_block   segbid,s.segment_type,s.owner,s.segment_name,s.partition_name 
  FROM SYS_DBA_SEGS s 
 )
SELECT 
 afn file_id,fstbid block_id,extblks blocks,nvl(segment_type,decode(segtype,null,'free space','type='||segtype)) segment_type,
 owner,segment_name,partition_name,extno extent_id,extblks*blocksize bytes,
 tsname tablespace_name,rfn relative_fno,m.segtsn,m.segrfn,m.segbid
 FROM m,o WHERE extrfn=rfn and m.segtsn=o.segtsn(+) AND m.segrfn=o.segrfn(+) AND m.segbid=o.segbid(+)
UNION ALL
SELECT 
 file_id+(select to_number(value) from v$parameter WHERE name='db_files') file_id,
 1 block_id,blocks,'tempfile' segment_type,
 '' owner,file_name segment_name,'' partition_name,0 extent_id,bytes,
  tablespace_name,relative_fno,0 segtsn,0 segrfn,0 segbid
 FROM dba_temp_files
;
Sample output
COLUMN   partition_name ON FORMAT   A16
COLUMN   segment_name ON FORMAT   A20
COLUMN   owner ON FORMAT   A16
COLUMN   segment_type ON FORMAT   A16

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1326 and 3782 between block_id and block_id + blocks - 1
SQL> /

 FILE_ID BLOCK_ID  BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME     PARTITION_NAME
-------- -------- ------- ---------------- ---------------- ---------------- ----------------
    1326     3781      32 free space

you identified a free space block

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1326 and 3982 between block_id and block_id + blocks - 1
SQL> /


 FILE_ID BLOCK_ID  BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
-------- -------- ------- ---------------- ---------------- -------------------- ----------------
    1326     3981       8 TABLE PARTITION  TESTUSER         AGGR_FACT_DATA       AFL_P_211

you identified a data block

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=202 and 100 between block_id and block_id + blocks - 1
SQL> /

   FILE_ID   BLOCK_ID     BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
---------- ---------- ---------- ---------------- ---------------- -------------------- ---------------
       202          1       1280 tempfile                          C:O102TEMP02.DBF

you identified a tempfile file_id

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name from datafile_map 
where file_id=1 and block_id between 0 and 100 order by file_id,block_id;

   FILE_ID   BLOCK_ID     BLOCKS SEGMENT_TYPE     OWNER            SEGMENT_NAME         PARTITION_NAME
---------- ---------- ---------- ---------------- ---------------- -------------------- ---------------
         1          9          8 ROLLBACK         SYS              SYSTEM
         1         17          8 ROLLBACK         SYS              SYSTEM
         1         25          8 CLUSTER          SYS              C_OBJ#
         1         33          8 CLUSTER          SYS              C_OBJ#
         1         41          8 CLUSTER          SYS              C_OBJ#
         1         49          8 INDEX            SYS              I_OBJ#
         1         57          8 CLUSTER          SYS              C_TS#
         1         65          8 INDEX            SYS              I_TS#
         1         73          8 CLUSTER          SYS              C_FILE#_BLOCK#
         1         81          8 INDEX            SYS              I_FILE#_BLOCK#
         1         89          8 CLUSTER          SYS              C_USER#
         1         97          8 INDEX            SYS              I_USER#

you mapped the first segments in the system tablespace

Try it on a database with lots of segments and lots of datafiles, and compare it with DBA_EXTENTS. Then you will know which one to choose in case of emergency.
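
For reference, here is a sketch of the equivalent lookup through the dictionary views, reusing the file_id/block_id values from the first example above (DBA_FREE_SPACE covers the free space case that DBA_EXTENTS does not report):

set timing on

select file_id,block_id,blocks,segment_type,owner,segment_name,partition_name
from dba_extents
where file_id=1326 and 3782 between block_id and block_id + blocks - 1;

select tablespace_name,file_id,block_id,blocks
from dba_free_space
where file_id=1326 and 3782 between block_id and block_id + blocks - 1;

On a database with thousands of datafiles and segments, the elapsed time of these two queries, compared with the same predicate on DATAFILE_MAP, should make the choice obvious.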

RMAN -- 1 : Backup Job Details

Hemant K Chitale - Sun, 2015-06-07 03:57
Here's a post on how you could be misled by a simple report on the V$RMAN_BACKUP_JOB_DETAILS view.

Suppose I run RMAN Backups through a shell script.  Like this :

[oracle@localhost Hemant]$ ls -l *sh
-rwxrw-r-- 1 oracle oracle 336 Jun 7 17:30 Backup_DB_Plus_ArchLogs.sh
[oracle@localhost Hemant]$ cat Backup_DB_Plus_ArchLogs.sh
ORACLE_SID=orcl;export ORACLE_SID

rman << EOF
connect target /

spool log to Backup_DB_plus_ArchLogs.LOG

backup as compressed backupset database ;

sql 'alter system switch logfile';
sql 'alter system archive log current' ;

backup as compressed backupset archivelog all;

backup as compressed backupset current controlfile ;

EOF

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ ./Backup_DB_Plus_ArchLogs.sh

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:31:06 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

RMAN>
connected to target database: ORCL (DBID=1229390655)

RMAN>
RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> RMAN> [oracle@localhost Hemant]$
[oracle@localhost Hemant]$

I then proceed to check the results of the run in V$RMAN_BACKUP_JOB_DETAILS.

SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_type, status
3 from v$rman_backup_job_details
4* where start_time > trunc(sysdate)+17.5/24
SQL> /

STARTTIME ENDTIME INPUT_TYPE STATUS
--------------------- --------------------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 DB FULL FAILED

SQL>

It says that I ran one FULL DATABASE Backup that failed. Is that really true ?  Let me check the RMAN spooled log.

[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.LOG

Spooling started in log file: Backup_DB_plus_ArchLogs.LOG

Recovery Manager11.2.0.2.0

RMAN>
RMAN>
Starting backup at 07-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=60 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=59 device type=DISK
RMAN-06169: could not read file header for datafile 6 error reason 4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 06/07/2015 17:31:08
RMAN-06056: could not access datafile 6

RMAN>
RMAN>
sql statement: alter system switch logfile

RMAN>
sql statement: alter system archive log current

RMAN>
RMAN>
Starting backup at 07-JUN-15
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=615 RECID=1 STAMP=881773851
channel ORA_DISK_1: starting piece 1 at 07-JUN-15
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=616 RECID=2 STAMP=881773851
input archived log thread=1 sequence=617 RECID=3 STAMP=881773853
input archived log thread=1 sequence=618 RECID=4 STAMP=881774357
input archived log thread=1 sequence=619 RECID=5 STAMP=881774357
channel ORA_DISK_2: starting piece 1 at 07-JUN-15
channel ORA_DISK_2: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v12b_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=620 RECID=6 STAMP=881775068
input archived log thread=1 sequence=621 RECID=7 STAMP=881775068
input archived log thread=1 sequence=622 RECID=8 STAMP=881775071
channel ORA_DISK_2: starting piece 1 at 07-JUN-15
channel ORA_DISK_1: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v10y_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v292_.bkp tag=TAG20150607T173112 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
Finished backup at 07-JUN-15

Starting Control File and SPFILE Autobackup at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775075_bq83v3nr_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-JUN-15

RMAN>
RMAN>
Starting backup at 07-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 07-JUN-15
channel ORA_DISK_1: finished piece 1 at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_ncnnf_TAG20150607T173117_bq83v6vg_.bkp tag=TAG20150607T173117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 07-JUN-15

Starting Control File and SPFILE Autobackup at 07-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775080_bq83v88z_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 07-JUN-15

RMAN>
RMAN>

Recovery Manager complete.
[oracle@localhost Hemant]$

Hmm. There were *three* distinct BACKUP commands in the script file.  The first was BACKUP ... DATABASE ..., the second was BACKUP ... ARCHIVELOG ... and the third was BACKUP ... CURRENT CONTROLFILE.  All three were executed.
Only the first BACKUP execution failed.  The subsequent  two BACKUP commands succeeded.  They were for ArchiveLogs and the Controlfile.
And *yet* the view V$RMAN_BACKUP_JOB_DETAILS shows that I ran  a FULL DATABASE BACKUP that failed.  It tells me nothing about the ArchiveLogs and the ControlFile backups that did succeed !


What if I switch my strategy from using a shell script to an rman script ?

[oracle@localhost Hemant]$ ls -ltr *rmn
-rw-rw-r-- 1 oracle oracle 287 Jun 7 17:41 Backup_DB_plus_ArchLogs.rmn
[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.rmn
connect target /

spool log to Backup_DB_plus_ArchLogs.TXT

backup as compressed backupset database ;

sql 'alter system switch logfile';
sql 'alter system archive log current' ;

backup as compressed backupset archivelog all;

backup as compressed backupset current controlfile;

exit

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ rman @Backup_DB_plus_ArchLogs.rmn

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:42:17 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target *
2>
3> spool log to Backup_DB_plus_ArchLogs.TXT
4>
5> backup as compressed backupset database ;
6>
7> sql 'alter system switch logfile';
8> sql 'alter system archive log current' ;
9>
10> backup as compressed backupset archivelog all;
11>
12> backup as compressed backupset current controlfile;
13>
14> exit[oracle@localhost Hemant]$




SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_type, status
3 from v$rman_backup_job_details
4 where start_time > trunc(sysdate)+17.5/24
5* order by start_time
SQL> /

STARTTIME ENDTIME INPUT_TYPE STATUS
--------------------- --------------------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 DB FULL FAILED
07-JUN 17:42 07-JUN 17:42 DB FULL FAILED

SQL>

[oracle@localhost Hemant]$
[oracle@localhost Hemant]$ cat Backup_DB_plus_ArchLogs.TXT

connected to target database: ORCL (DBID=1229390655)

Spooling started in log file: Backup_DB_plus_ArchLogs.TXT

Recovery Manager11.2.0.2.0

Starting backup at 07-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=59 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=50 device type=DISK
RMAN-06169: could not read file header for datafile 6 error reason 4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 06/07/2015 17:42:19
RMAN-06056: could not access datafile 6

Recovery Manager complete.
[oracle@localhost Hemant]$

Now, this time, once the first BACKUP command failed, RMAN seems to have bailed out. It didn't even try executing the subsequent BACKUP commands !

How can V$RMAN_BACKUP_JOB_DETAILS differentiate between the two failed backups ?

SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') StartTime, to_char(end_time,'DD-MON HH24:MI') EndTime,
2 input_bytes/1048576 Input_MB, output_bytes/1048576 Output_MB,
3 input_type, status
4 from v$rman_backup_job_details
5 where start_time > trunc(sysdate)+17.5/24
6* order by start_time
SQL> /

STARTTIME ENDTIME INPUT_MB OUTPUT_MB INPUT_TYPE STATUS
--------------------- --------------------- ---------- ---------- ------------- -----------------------
07-JUN 17:31 07-JUN 17:31 71.5219727 34.878418 DB FULL FAILED
07-JUN 17:42 07-JUN 17:42 0 0 DB FULL FAILED

SQL>

The Input Bytes column does indicate that some files were backed up in the first run. Yet, it doesn't tell us how much of that was ArchiveLogs and how much was the ControlFile.


Question 1 : How would you script your backups ?  (Hint : Differentiate between the BACKUP DATABASE and the BACKUP ARCHIVELOG runs).
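
Here is one possible approach (a sketch only, adapted from the shell script above) : run each backup type in its own RMAN session, so that each session is recorded as a separate job in V$RMAN_BACKUP_JOB_DETAILS, with its own INPUT_TYPE and STATUS.

ORACLE_SID=orcl;export ORACLE_SID

# Database backup in its own RMAN session : reported as one job
rman << EOF
connect target /
spool log to Backup_DB.LOG
backup as compressed backupset database ;
EOF

# ArchiveLog and ControlFile backups in a separate session : reported separately
rman << EOF
connect target /
spool log to Backup_ArchLogs.LOG
sql 'alter system switch logfile';
sql 'alter system archive log current' ;
backup as compressed backupset archivelog all;
backup as compressed backupset current controlfile ;
EOF

With this split, a failed BACKUP DATABASE would no longer mask the ArchiveLog and ControlFile backups that did succeed.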

Question 2 : Can you improve your Backup Reports ?

Yes, the RMAN LIST BACKUP command is useful.  But you can't select the columns, format the output or add text  as you would with a query on V$ views.

[oracle@localhost oracle]$ NLS_DATE_FORMAT=DD_MON_HH24_MI_SS;export NLS_DATE_FORMAT
[oracle@localhost oracle]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Jun 7 17:51:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup completed after "trunc(sysdate)+17.5/24";

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
17 375.50K DISK 00:00:01 07_JUN_17_31_13
BP Key: 17 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v12b_.bkp

List of Archived Logs in backup set 17
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 616 14068910 07_JUN_17_10_49 14068920 07_JUN_17_10_51
1 617 14068920 07_JUN_17_10_51 14068931 07_JUN_17_10_53
1 618 14068931 07_JUN_17_10_53 14069550 07_JUN_17_19_17
1 619 14069550 07_JUN_17_19_17 14069564 07_JUN_17_19_17

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
18 1.03M DISK 00:00:00 07_JUN_17_31_14
BP Key: 18 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v292_.bkp

List of Archived Logs in backup set 18
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 620 14069564 07_JUN_17_19_17 14070254 07_JUN_17_31_08
1 621 14070254 07_JUN_17_31_08 14070265 07_JUN_17_31_08
1 622 14070265 07_JUN_17_31_08 14070276 07_JUN_17_31_11

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
19 13.72M DISK 00:00:02 07_JUN_17_31_14
BP Key: 19 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173112
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_annnn_TAG20150607T173112_bq83v10y_.bkp

List of Archived Logs in backup set 19
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------------- ---------- ---------
1 615 14043833 12_JUN_23_28_21 14068910 07_JUN_17_10_49

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
20 Full 9.36M DISK 00:00:00 07_JUN_17_31_15
BP Key: 20 Status: AVAILABLE Compressed: NO Tag: TAG20150607T173115
Piece Name: /NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775075_bq83v3nr_.bkp
SPFILE Included: Modification time: 07_JUN_17_28_15
SPFILE db_unique_name: ORCL
Control File Included: Ckp SCN: 14070285 Ckp time: 07_JUN_17_31_15

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
21 Full 1.05M DISK 00:00:02 07_JUN_17_31_19
BP Key: 21 Status: AVAILABLE Compressed: YES Tag: TAG20150607T173117
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_ncnnf_TAG20150607T173117_bq83v6vg_.bkp
Control File Included: Ckp SCN: 14070306 Ckp time: 07_JUN_17_31_17

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
22 Full 9.36M DISK 00:00:00 07_JUN_17_31_20
BP Key: 22 Status: AVAILABLE Compressed: NO Tag: TAG20150607T173120
Piece Name: /NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_07/o1_mf_s_881775080_bq83v88z_.bkp
SPFILE Included: Modification time: 07_JUN_17_31_18
SPFILE db_unique_name: ORCL
Control File Included: Ckp SCN: 14070312 Ckp time: 07_JUN_17_31_20

RMAN>

So, the RMAN LIST BACKUP can provide details that V$RMAN_BACKUP_JOB_DETAILS cannot provide. Yet, it doesn't tell us that a Backup failed.
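
One more view worth a look (a sketch, reusing the same time filter as the queries above) is V$RMAN_STATUS, which records a row per RMAN operation rather than per job, so the failed BACKUP and the successful ones appear as separate rows, each with its own STATUS :

select to_char(start_time,'DD-MON HH24:MI') StartTime,
       operation, object_type, status
from v$rman_status
where start_time > trunc(sysdate)+17.5/24
order by start_time;

It still doesn't replace LIST BACKUP for piece-level detail, but it does tell us which individual commands failed.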
.
.
.

Categories: DBA Blogs

Install Oracle RightNow Cloud Adapter in JDeveloper

Today, there are thousands of enterprise customers across the globe using Oracle RightNow CX cloud service for providing superior customer experience across multiple channels including web, contact...

We share our skills to maximize your revenue!
Categories: DBA Blogs