
Feed aggregator

Flipkart, ecommerce, machine learning, and free advice

Abhinav Agarwal - Sat, 2015-07-04 12:08
I wrote about the obsession of Flipkart (and Myntra) with "mobile-only" without even having an iPad-optimized app! I also talked about the stunning advances being made in voice search using machine learning, cognitive learning, and natural language processing, even as the voice-based search capabilities of e-commerce companies - including Amazon - remain abysmal. Finally, I also included several use-cases that these companies need to work on incorporating into their capabilities.

That piece, Flipkart, Focus and Free Advice, appeared in DNA on June 27th, 2015.


My earlier pieces on the same topic:
  1. Flipkart vs Amazon: Beware the Whispering Death - 20th April '15 (blog, dna)
  2. Mobile Apps: There’s Something (Profitable) About Your Privacy - 18th April '15  (blog, dna)
  3. Mobile advertising and how the numbers game can be misleading - 14th April '15  (blog, dna)
  4. Is Flipkart losing focus - 12th April '15  (blog, dna)
Flipkart, Focus, and Free Advice – Shipping Charges Also Waived!
What is one to make of a statement like this - “India is not mobile-first, but mobile-only country[1]”? Especially so if it is from the co-founder of the largest ecommerce company in India, and it turns out the company does not even have an app for the Apple iPad?

I have written at length on the distractions that seem to have been plaguing Flipkart and why it cannot afford to drop its guard in this fiercely contested space[2] - especially in light of all the noise surrounding its mobile ambitions. Somewhat paradoxically, this post is about offering advice to Flipkart that calls for some diversification!

As a logical next step, I wanted to take a look at Flipkart’s mobile apps – both on the iOS and Android platforms – to see how well they were executing on their very bold ambitions. As an aside, I also wanted to see if these (and competitive) mobile apps were leveraging all the computing power now available on tap inside these tiny devices. After all, apart from the recent – and amazing – advances Google has made in its voice-based search capabilities[3], there was this stunning demo from Hound[4] that gave a glimpse into the huge advances that voice-recognition, search, and machine-learning technologies have made in the last decade.
The results were, to put it mildly, massively disappointing – which I will describe in some detail.
It should be clear that Amazon and Flipkart and SnapDeal are going to be at each other’s throats in the Indian online retail market. This is one battle from which neither player can walk away. Amazon has lost the China market to Alibaba (“In the first quarter of 2014, Alibaba's e-tailing site had a 48.4 per cent market share against Amazon China's less than 3 per cent.”[5] If that was not enough, Alibaba and Foxconn are in talks with SnapDeal for a rumoured $500 million investment![6]).

Amazon cannot afford to now lose the India market to a local upstart. Flipkart, on the other hand, has even less choice. It plays only in the Indian market. It cannot walk away either; there is no other market for it to walk towards. Its valuations – expected to rise to $15 billion after its next round of funding[7] – make it way too costly for it to be acquired, at least profitably so for those funders who have put in hundreds of millions of dollars at these later and higher valuations. Amazon and Flipkart have deep pockets; Flipkart can afford to bleed hundreds of millions of dollars a year even as it grows, while Amazon has conditioned Wall Street to grant it the currency of ultra-high valuations even as it operates on razor-thin margins. It is unlikely that either will be able to deliver a knockout punch to the other anytime soon. This is a fifteen-round slugfest that will be decided by who can keep soaking in the blows and keep standing at the end of the fifteenth round; while they fight, the customer continues to win. Amazon has more diversity in its portfolio of business divisions than does Flipkart – ecommerce, cloud computing services, streaming audio and video, MRO and industrial supplies, smartphones, tablets, and more. While these divisions may at times face off against each other in expectedly healthy and sometimes unhealthy rivalry, they still form a formidable front against the competition. To quote these immortal lines from the Mahabharata, “we may be five against a hundred, but against a common enemy we are a hundred and five.”

So what does Flipkart do? Three things, to begin with.

First, it needs to get serious about software.
When you have a web site that offers millions of products from tens of thousands of resellers to millions of customers that reside in tens of thousands of cities and towns and villages, you need to make sure that your customers are seeing the products that are of most relevance to them, and which they are most likely to buy. If that problem looks like a nail to you, specifically a large-scale optimization problem with a huge number of decision variables, then large-scale computing and regression modelling are the hammer. You need to be applying this hammer to the almost infinite number of nails in front of you, all day and all night long. This is what enables you to present an ever-relevant basket of products to your customers, which keeps them engaged when on your site, and which hopefully makes them buy more often than not. Flipkart needs to take a close, long, hard look at its search capabilities – about which I will talk later in this post – and its suggestions engine, because both are very subpar at this point. If it’s any consolation, while Amazon is certainly better in the search department, its capabilities in this area are nothing great either, yet. Where Amazon scores over its competitors – every single one of them - is its huge and ever-growing corpus of customer reviews. Flipkart probably recognizes the importance of this corpus of customer reviews, but has run into rough weather over the expected problem of fake reviews[8].
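To make the "regression modelling as the hammer" point concrete, here is a minimal, purely illustrative Python sketch (my own toy example; the feature names, weights, and products are invented) of scoring products for a customer and showing the most relevant ones first:

import math

def relevance_score(features, weights):
    # Simple logistic model: predicted probability that this customer buys this product
    z = sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Invented weights a regression model might have learned from past purchase data
weights = {"viewed_category_before": 1.8, "price_within_budget": 1.2, "days_since_last_purchase": -0.05}

# Invented per-customer features for two candidate products
products = {
    "phone cover": {"viewed_category_before": 1, "price_within_budget": 1, "days_since_last_purchase": 3},
    "LED TV": {"viewed_category_before": 0, "price_within_budget": 0, "days_since_last_purchase": 40},
}

# Rank the candidates, most relevant first
ranked = sorted(products, key=lambda p: relevance_score(products[p], weights), reverse=True)
print(ranked)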

For inspiration on where the trifecta of search, machine learning, and e-commerce could venture – with Big Data in tow - one can turn to the story of how the popular American TV game show “Jeopardy” became the battleground for IBM researchers to build upon their experience with Deep Blue (the computer that had beaten world chess champion Garry Kasparov in 1997[9]) and to build a computer that would defeat the reigning champion of Jeopardy. That happened in February 2011, after four years of work led by IBM researcher David Ferrucci and “about twenty researchers”[10].
This required advances in machine learning and other esoteric concepts like LAT (Lexical Answer Type), IDF (Inverse Document Frequency), temporal and even geospatial reasoning.[11] A new suite of software and platforms, built on a concept called genetic programming (“a technique inspired by biological evolution”) has started to make its way into mainstream commercial applications.  The algorithm here “begins by randomly combining various mathematical building blocks into equations and then testing to see how well the equations fit the data. Equations that fail the test are discarded, while those that show promise are retained and recombined in new ways so that the system ultimately converges on an accurate mathematical model.”[12] What this essentially means is going beyond keyword search-based correlations and moving to more semantic-oriented searches that combine machine learning with natural language processing. This in turn requires serious software brains (smart programmers using and refining the right algorithms and models) and muscle (massive learning and training sets in the hundreds of gigabytes running on clusters of tens of thousands of nodes).
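The quoted generate-test-recombine loop can be sketched in a few lines of Python. This is purely illustrative (my own toy, reduced to evolving the two coefficients of a straight line rather than full expression trees), not anything Flipkart or IBM actually uses:

import random

# Target data the evolved "equations" should fit: y = 3x + 2
data = [(x, 3 * x + 2) for x in range(-5, 6)]

def random_equation():
    # An "equation" here is just a pair (a, b), representing y = a*x + b
    return (random.uniform(-10, 10), random.uniform(-10, 10))

def error(eq):
    # How badly the equation fits the data; lower is better
    a, b = eq
    return sum((a * x + b - y) ** 2 for x, y in data)

def recombine(p1, p2):
    # Crossover of two promising equations, plus a small random mutation
    a = random.choice([p1[0], p2[0]]) + random.gauss(0, 0.1)
    b = random.choice([p1[1], p2[1]]) + random.gauss(0, 0.1)
    return (a, b)

population = [random_equation() for _ in range(50)]
for generation in range(200):
    population.sort(key=error)        # test every equation against the data
    survivors = population[:10]       # discard the ones that fail, keep the promising ones
    offspring = [recombine(random.choice(survivors), random.choice(survivors)) for _ in range(40)]
    population = survivors + offspring

best = min(population, key=error)
print("best equation found: y = %.2f*x + %.2f" % best)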
If Flipkart is serious about the mobile ad business (about which I have expressed my reservations), even then it needs to get to the holy grail of deep learning in ad-tech – “Inferring Without Interfering” with the customer’s intent[13]. In any event, this policy will only stand Flipkart in good stead. If they are already doing so, then good, but the proof is not in the pudding as much as in the eating of the pudding.

A critical differentiator in the coming times is not, I repeat, not, going to be driven by slick UIs or gimmicks on mobile apps like “shake to see offers”, but by offering the truly intelligent and immersive experiences that are feasible even today. Advances in machine learning, and capabilities such as voice, video, location, and more, when used in tandem will power the next set of innovations. Rather than stick to the tried and tested and old way of making users search using simple keywords and correlations and prior history, e-tailers need to make the shopping experience more intelligent.

Appendices 2 and 3 outline possible use-cases. It should be clear that both Flipkart and Amazon have a long, long way to go before realizing anything close to the vision outlined, but without such advances, competitors like Google will find the wedge they need to prise open this market for themselves.

Second, Flipkart (or even Amazon for that matter, or SnapDeal, or whichever competitor you happen to care about, though in this case the admonition is more targeted at Flipkart in light of its mobile-only pronouncements) needs to get serious about the mobile platform.

Browse to either Flipkart or Myntra’s websites from a browser on an iPad and you are asked to use their app instead. Would you believe it if I told you Flipkart does not have an iPad app (as of 15th June 2015)? No? Go check for yourself – I did! Ditto for Myntra (the online fashion retailer Flipkart acquired in 2014)! See Appendix 1 for what I found when I downloaded their apps on my iPad tablet. This would be comically farcical if serious money weren’t riding on such decisions.

Third, Flipkart needs to get into the cloud business.

Yes, I am serious.

Let’s look at the competition – Amazon. It is the 800-pound gorilla in the cloud computing industry, where its offering goes by the umbrella of AWS (Amazon Web Services) and offers almost everything you could think of under the cloud – platform, infrastructure, software, database, email, storage, even machine learning, and much more. How gorilla-ish? “AWS offers five times the utilized compute capacity of the other 14 cloud providers in the Gartner Magic Quadrant. Combined.[14]” Since 2005, Amazon has spent “roughly $12 billion” on its infrastructure[15]. It competes with the likes of Microsoft and Google in this space. Yet, Amazon’s cloud revenues are estimated to be “30 times bigger than Microsoft’s.”[16]

And yet I argue that Flipkart should get into the cloud business. As I wrote last year[17], Flipkart had to invest substantially (per my estimates, more than one hundred crore rupees, or somewhere in the vicinity of $15-20 million – which is not chump change) to build its capacity to stand up to the traffic it expected for its “Big Billion Day”. This is in addition to the regular additions it must be making to its computing infrastructure. All this is not surprising, given that the retail business is prone to lumpiness in traffic – a disproportionate amount of traffic is concentrated around sale events, or holidays.

For example, while Amazon reportedly had ten million Prime subscribers in March 2013, it reported that over 10 million “tried Prime for the first time” over the holidays in 2014 (traditionally the period between Thanksgiving and Christmas).[18] To prevent web sites from keeling over under the crush of holiday traffic, companies invest substantially, in advance, to make sure the web site keeps chugging along. The flip side is that for those periods when traffic is more average and a fraction of peak traffic, all those thousands of computers, the hundreds of gigabytes of memory, terabytes of disk space, and gobs of network bandwidth capacity are lying idle – depreciating away, obsolescing away.

Amazon realized this a decade ago and started building a rental model around its excess capacity – this was the genesis behind Amazon Web Services. There is no reason for Flipkart to not do the same. What has worked for Amazon could work quite well for Flipkart[19]. If it spins off its entire e-commerce infrastructure into a separate entity, it can palm off much of the capital costs of its computing infrastructure to the cloud computing subsidiary, substantially improving its balance sheet in the process. You could argue this is nothing but an accounting gimmick, and I am not going to argue with that aspect of the decision - there would be undeniable and real benefits to this decision, and it’s childish to expect a business to be run on utopian principles. As things stand, the state government of Telangana is already assiduously wooing Amazon to invest in an AWS centre in the state[20]. Once operating on Indian soil, Amazon will be able to meet legal requirements that require certain categories of data to remain within the national borders.

Any industry so heavily influenced and shaped by technology as the e-commerce industry would do well to listen to the winds of change. If unheard and unheeded, these winds of change turn into gale storms of disruption that blow away incumbents faster than you can imagine. “Mobile-only” is a useful-enough mantra, but translating that into an “app-only” sermon hints at myopic thinking – a troubling sign for sure. It turns out that Google “secretly” acquired a company that specializes in “streaming native mobile apps”. Is this the shape of things to come? How will this transform the world of mobile apps, or even the mobile landscape in general? Time will tell, but while “lock-in” may well be a wise strategy for your customers, it is a terrible one to apply to yourself.[21]

Appendix 1 - App-solutely Serious about Apps?
Fire up your favourite mobile browser on an Apple iPad and browse to Myntra’s website (that would be www.myntra.com). You are greeted with a message to vamoose to their mobile app, because after all, Myntra is all about mobility – social mobility in fashion, and mobile devices when speaking more literally.
Figure 1 - Myntra web site on tablet browser
Incredulity hits you in the face when you realize that (on the Apple App Store) the Myntra app is “optimized for iPhone 5, iPhone 6 and iPhone 6 Plus”, but not the iPad. Yes, you read that right – the web site that tells you that you have to use its mobile app, and its mobile app only, on an iPad does not have an app optimized for the iPad.
Figure 2 - Myntra app details on the Apple App Store
I am, however, somewhat of a cynical person. I tried searching for the keyword “myntra” on the Apple App Store. The only filter applied was to look for “iPad Only” apps. Here are the beatific search results. India gave the world the concept of zero, and the search results page gave one practical application of that elegant mathematical concept.
Figure 3 - Search results for "iPad Only" apps on the Apple AppStore for "myntra"
So where was that Myntra app hiding? I changed the filter to “iPhone Only”, and true enough, there was that Myntra app.
Figure 4 - Myntra app on the Apple App Store
In case you are wondering how that was even possible, know that most apps created for the iPhone (or iPod Touch) can run on an iPad without any modifications – all that is required is to keep this in mind when compiling the app. Apple calls this a “Universal app”[22].

Now that can’t be so bad, right? After all, the app is available on the iPhone and the iPad, so where and what is the grouse? I will come to that in just a bit, but take a look at what the Myntra app looks like when run on the iPad.
Figure 5 - Myntra app running on an iPad
This is how the app runs inside an iPad. You have the option of tapping the “2x” button, after which the app uses the full screen, but by scaling everything to twice its size. There is no other intelligence here being applied – like changing the icons, or the text, or adding more features. This is iOS doing what little work you see.
Why this arouses incredulity is due to the stunning dissonance one experiences – between the statements of the Myntra (and Flipkart) executives going to town about a “mobile-only” world[23] on the one hand and the reality of a missing-in-action iPad-optimized app on the other. Yes, one could make the argument that Apple commanded a stunningly low single-digit share of 7% of the tablet market in India[24], but to make this argument is to negate your very philosophy of a “mobile-only” world. Mobile includes smartphones, tablets, phablets, wearables (for which Flipkart does have an app![25]), smart-TVs, and even embedded devices.
Flipkart’s mobile web site works - at least for now - on the iPad (though it does not on a smartphone – you have no option but to use their app), but the story is not much different there. No iPad-optimized app, but a smartphone app that does duty on the iPad by virtue of it being a “Universal” app.
Figure 6 - Flipkart shopping app in the Apple App Store
Figure 7 - Flipkart shopping app on the Apple iPad
It’s not as if Amazon’s iPad app is much better. Yes, they do have an iPad app, but it looks more like a hybrid app – a native shell with an embedded browser snuck in, and very little by way of any tablet optimizations.
Figure 8 - Amazon app for the iPad
Appendix 2 – Natural Speech Searches
Mobile shopping apps like Flipkart and Amazon provide you the option of inputting your search query via voice (more because of the support the underlying mobile OS provides), but that forces you to say aloud what you have typed – keywords, and nothing more.
Unlike the stunning Hound demo or the capabilities of Google Now[26], e-tailers have yet to leave the stone age in search capabilities. While Hound can understand and answer (correctly) queries like “Show me hotels in Seattle for Friday, staying one night” and then support refinements to the query like “Show only the ones costing less than $300” or “Show only the ones that have three or four or five stars that are pet friendly, that have a gym and a pool, within 4.5 miles of the Space Needle”[27], and Google Now can understand foreign accents (like my Indian English accent) and parse phrases like “ghat”, “jyotirling” and more, a relatively simple phrase like - “What are the best sellers in fiction” – leads to disappointment on both Amazon and Flipkart’s mobile apps.
Figure 9 - Search results in the Amazon app
And to be clear, what was presented was not the bestsellers list, because the bestseller list looked like this:
Figure 10 - Non-fiction bestsellers in books as shown on the Amazon app
I tried another search – “Suggest books for children”. I don’t know what to call the search results, but one with “*divorce* your Child” as the first result is surreal.
Figure 11 - Search results on Amazon app
To complete my brief experiment on Amazon, I tried “Show me best sellers in electronics”. That also did not yield any relevant results.
Figure 12 - Search results in the Amazon app
Flipkart is not much better, and at this point we are really looking at rock-bottom as the baseline. Even a marginal improvement would be welcome here. Sadly, not the case. Though, Flipkart does separate each word out and allow you to delete any one word to refine your search. Given the abysmal quality of search results, it is somewhat of a zero-divide-by-zero case, resulting in only infinite misery trying to find the right combination of keywords that will yield the desired results.
Figure 13 - Search results on the Flipkart app
Does the Myntra app fare any better? Predictably, it doesn’t. If semantic search in the e-commerce space was a problem that had been cracked by either Flipkart or Myntra, it would have been shared across both platforms by now.
Figure 14 - Search results in the Myntra app
Even Google, with its oft-stated e-commerce ambitions[28],[29], and the eye-popping advances that it has made with its voice-based search (Siri from Apple and lately Cortana from Microsoft also deserve to be included, but neither company seems to be quite interested in e-commerce at the scale of Amazon, yet) left me disappointed with a simple search like – “what are the fiction best sellers in India”.
Figure 15 - Search results in the Google app
Appendix 3
What do I have in mind with respect to the kinds of queries that Flipkart (or Amazon) should be trying to enable? Without any further context, I present the following examples:
One:
(this is a comparatively simpler form of the semantic search capabilities I propose)
Me: show me the best sellers in non-fiction
App: [displays a list of book best sellers in non-fiction] [Optionally, excludes or places the ones I have bought at the bottom of the list; or marks them differently and provides me with an option of reading them online – assuming I had purchased an e-book version]
Me: show me only those books that have been published in the last three months;
App: [filters the previous set of search results to show only those non-fiction best sellers that have been published in the last three months]
Me: also include books that were on the bestseller list this year
App: [adds books that were in the top 10/20 bestsellers list in 2015 but have now dropped out of the rankings]
Me: cancel the last search, and show me those books that are also available as e-books, and then sort them by price
App: [displays a list of book best sellers in non-fiction, filtered by those available on the Kindle, and sorts by price, ascending]
Me: send me free e-book samples of the first five books from this list and remind me in one week whether I want to purchase them.
App: [downloads free samples of the first five books to my e-book app] [creates a reminder to remind me in one week]


Two:
(this is a more social and more nuanced form of the semantic search outlined above)
Me: show me a list of LED TVs
App: [displays a list of the bestselling LED TVs]
Me: show me LED TVs that are HD, 40 inches or larger, cost no more than Rs 60,000, and can be delivered in the next three days.
App: [displays a list of TVs matching the criteria, and adds – “there are only three TVs that match your search criteria, so I have changed the price to Rs 70,000, which has resulted in five more search results. Say “cancel” to undo.”]
Me: Which among these would be most relevant to me?
App: [displays the list sorted based on popularity in my postal code] [offers to show the list sorted on TVs sold in the last three months to the housing community I live in – or the company I work at – or based on people with my profile of educational qualifications or marital/family status – based on privacy settings of course]
Me: summarize the most useful reviews for the first TVs, and keep each under two minutes.
App: [summarizes the most useful reviews and then reads out a software-generated summary, in less than two minutes. Also sends a text summary to my WhatsApp or email]
Far-distant utopia? Naah, I don’t think so. This is within the realm of the possible, and I expect to see this become reality in the next two years. Today, however, we are some ways off from the innovations where online shopping will become a truly immersive, interactive experience akin to having a natural conversation with an incredibly knowledgeable yet infinitely patient salesperson.

Three:
(ratcheting things up one more notch)
Me: (standing amidst the ruins of Hampi) Suggest some good books about this place.
App: [suggests bestsellers or highest-rated books on three categories: coffee-table books on Hampi; history of Hampi and the Vijayanagar Empire; historical fiction books set in the fifteenth/sixteenth century Vijaynagara Empire]
Me: Also suggest something on the significance of this chariot temple
App: …

Four:
App: [reminds me that I have a party at my house this weekend where four families are coming over]
Me: I need some snacks and also suggest some recent action movies to rent
App: [suggests food-items to order and shows a list of the five top grossing movies of the year in the “Action” genre and shows options: buy, rent (really?), stream]
Me: place the first, third, and fifth items in the shopping cart, record this and deliver to my wife. Then rent to stream the third movie in HD format on Saturday evening.
App: [places these items in the shopping cart, records a 15 second video and pings the spouse via a notification/alert to view the video. It also places an order for the selected movie]

Disclaimer: views expressed are personal.

 References:
[1] "India is not mobile-first, but mobile-only country: Sachin Bansal, Flipkart's founder and Mukesh Bansal, Myntra's CEO - timesofindia-economictimes", http://articles.economictimes.indiatimes.com/2015-05-13/news/62124447_1_myntra-sachin-bansal-ceo-mukesh-bansal
[2] See http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185 and http://www.dnaindia.com/analysis/standpoint-why-flipkart-seems-to-be-losing-focus-2076806
[3] "Google Launches Custom Voice Actions For Third Party Apps", http://searchengineland.com/google-launches-custom-voice-actions-for-third-party-apps-220148
[4] "After Nine Years of Secret Development, Hound Voice Search App Has a Dazzling Demo | Re/code", http://recode.net/2015/06/02/after-nine-years-of-secret-development-hound-voice-search-app-has-a-dazzling-demo/
[5] "A missed opportunity in China has Amazon founder Jeff Bezos backing his India venture", http://indiatoday.intoday.in/story/amazon-jeff-bezos-india-business-venture-flipkart-snapdeal/1/393933.html
[6] "Alibaba, Foxconn in Talks to Invest $500 Million in India’s Snapdeal - India Real Time - WSJ", http://blogs.wsj.com/indiarealtime/2015/06/16/alibaba-foxconn-in-talks-to-invest-500-million-in-indias-snapdeal/
[7] "Flipkart set to raise up to $800 million - Livemint", http://www.livemint.com/Companies/j2B9ax1SNS4JrDdJAU19sO/Flipkart-set-to-raise-up-to-800-mn.html
[8] See "How e-retailers such as Flipkart, Amazon are keeping fake products at bay - timesofindia-economictimes", http://articles.economictimes.indiatimes.com/2015-01-08/news/57791521_1_amazon-india-sellers-mystery-shoppers, "Who Reviews the Reviewers? How India's Online Businesses Are Fighting Fake Reviews | NDTV Gadgets", http://gadgets.ndtv.com/internet/features/who-reviews-the-reviewers-how-indias-online-business-are-fighting-fake-reviews-697112, and "How genuine are product reviews on FlipKart? - Quora", http://www.quora.com/How-genuine-are-product-reviews-on-FlipKart
[9] "Deep Blue (chess computer) - Wikipedia, the free encyclopedia", https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)
[10] "Rise of the Robots: Technology and the Threat of a Jobless Future”, by Martin Ford, Jeff Cummings, ISBN 9781480574779, http://www.amazon.com/Rise-Robots-Technology-Threat-Jobless/dp/1480574775
[11] Ibid and "The AI Behind Watson — The Technical Article", http://www.aaai.org/Magazine/Watson/watson.php
[12] "Rise of the Robots: Technology and the Threat of a Jobless Future”
[13] Gregory Piatetsky on Twitter: "The #DeepLearning future of ads: "Inferring Without Interfering with what moves the Customer most" - J. Kobelius http://t.co/zh96DC6DDG" https://twitter.com/kdnuggets/status/610848069672927232
[14] "Gartner: AWS Now Five Times The Size Of Other Cloud Vendors Combined - ReadWrite", http://readwrite.com/2013/08/21/gartner-aws-now-5-times-the-size-of-other-cloud-vendors-combined
[15] Ibid.
[16] "How much bigger is Amazon’s cloud vs. Microsoft and Google?", http://www.networkworld.com/article/2837910/public-cloud/how-much-bigger-is-amazon-s-cloud-vs-microsoft-and-google.html
[17] "A Billion Dollar Sale, And A Few Questions", http://www.dnaindia.com/analysis/standpoint-a-billion-dollar-sale-and-a-few-questions-2047853
[18] "Amazon added 10M new Prime subscribers over the holidays, could make up to $1B in annual revenue | VentureBeat | Business | by Harrison Weber", http://venturebeat.com/2014/12/26/amazon-made-nearly-1b-from-new-prime-subscriptions-over-the-holidays/
[19] See http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185
[20] "Amazon likely to bring Web Services to Telangana | Business Standard News", http://www.business-standard.com/article/companies/amazon-likely-to-bring-web-services-to-telangana-115061000659_1.html
[21] "Report: Last Year Google Secretly Acquired Agawi, A Specialist In Streaming Native Mobile Apps | TechCrunch", http://techcrunch.com/2015/06/18/report-last-year-google-secretly-acquired-agawi-a-specialist-in-streaming-native-mobile-apps/
[22] "Start Developing iOS Apps Today: Tutorial: Basics", https://developer.apple.com/library/ios/referencelibrary/GettingStarted/RoadMapiOS/FirstTutorial.html
[23] http://articles.economictimes.indiatimes.com/2015-05-13/news/62124447_1_myntra-sachin-bansal-ceo-mukesh-bansal
[24] "Samsung Tops Indian Tablet Market Share, Followed By Micromax, iBall", http://trak.in/tags/business/2014/11/28/indian-tablet-market-share-growth/
[25] "Flipkart launches an app for Android Wear sporting wearables - Tech2", http://tech.firstpost.com/news-analysis/flipkart-launches-an-app-for-android-wear-sporting-wearables-240070.html
[26] See this for a comparison between Hound and Google Now - "Here’s how Hound beta compares to Google Now (Video) | 9to5Google", http://9to5google.com/2015/06/05/hound-beta-vs-google-now-video/
[27] "After Nine Years of Secret Development, Hound Voice Search App Has a Dazzling Demo | Re/code", http://recode.net/2015/06/02/after-nine-years-of-secret-development-hound-voice-search-app-has-a-dazzling-demo/
[28] "Google Preps Shopping Site to Challenge Amazon - WSJ", http://www.wsj.com/articles/google-preps-shopping-site-to-challenge-amazon-1418673413
[29] "Google Finds Partners To Help It Compete With Amazon - Forbes", http://www.forbes.com/sites/benkepes/2015/04/13/google-finds-partners-to-help-it-compete-with-amazon/



© 2015, Abhinav Agarwal. All rights reserved.

Oracle Process Cloud Service - Consuming ADF BC REST Service in Web Form

Andrejus Baranovski - Sat, 2015-07-04 08:57
With the introduction of Oracle Process Cloud Service (https://cloud.oracle.com/process) there is an option to run your business process in the cloud. You can implement very similar things as with BPM 12c in JDeveloper, but only in the cloud. There is no option to implement the human task UI with ADF; it must be done with Web Forms (a lightweight UI forms framework). This is a disadvantage, as it requires externalising complex business logic and accessing data collections through REST calls, instead of processing them locally in an ADF extension. However, this is how it is implemented in the current BPM cloud version.

This is how it looks - Web Form editor in Process Cloud Service:


You have the option to select specific types from the component palette: money, email, phone, text, and so on. Components come with built-in validation, such as checks for a valid email address. The user who implements a form needs to drag and drop components one by one (generally this works OK, but it sometimes gets stuck and a page refresh is required). A Properties section is available to enter details for the selected component.

A cool part about it: we can define business logic rules, for example to show/hide the customer field based on trip type. One of the rules below shows how a REST service can be called inline and its response applied to a drop-down field. It is pretty simple to fetch data from an external REST service and use it for a Web Form component: just invoke the REST service and assign the collection to a component that supports an array of data:


If the REST call requires parameters to be passed through the URL, this is also possible; see the example below - a list of cities based on the country selected in the Web Form:


The Web Form works pretty well in Preview mode; it even calls the REST services and shows real data. Validation messages for the built-in checks are displayed nicely. Here I'm selecting a country from the list populated by the REST service:


Based on the selection in the Country drop-down, a filtered list of Cities becomes available (another REST call):


Good news - most of the Oracle Process Cloud interface itself is implemented with ADF. Here you can see a section listing all Web Forms available for the current process:


I have implemented a REST service for the Countries and Cities lists in ADF BC. There are two custom methods in the AM implementation class to fetch the data and transform it into list format:


REST resources are defined through annotations and are configured to produce JSON data; this allows Oracle Process Cloud to parse such data automatically:


If you want to check how the REST service is implemented in ADF BC, you can run a query in Postman to retrieve a list of all countries from the HR schema:


Another query to retrieve a list of cities, by country:
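The Postman screenshots are not reproduced here, but a hypothetical sketch of the two calls in Python (the host, context root, resource paths and parameter name are placeholders; use whatever the RestADFBCApp sample actually exposes) might look like this:

import requests

base = "http://localhost:7101/RestADFBCApp/resources"   # placeholder URL for the deployed sample

countries = requests.get(base + "/countries").json()                        # all countries from the HR schema
cities = requests.get(base + "/cities", params={"countryId": "US"}).json()  # cities filtered by country

print(countries)
print(cities)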


Download sample application, with ADF BC REST service - RestADFBCApp.zip.

RMAN -- 4 : Recovering from an Incomplete Restore

Hemant K Chitale - Fri, 2015-07-03 23:22
What do you do if a RESTORE fails mid-way? Do you need to rerun the whole restore? If it is a very large database, it could take [many?] hours.

RMAN is "smart" enough to detect datafiles that have been restored and not re-attempt a restore.

Here, I begin a database restore.

 
RMAN> restore controlfile from autobackup;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK

recovery area destination: /NEW_FS/oracle/FRA
database name (or database unique name) used for search: HEMANTDB
channel ORA_DISK_1: AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/home/oracle/app/oracle/oradata/orcl/control01.ctl
output file name=/home/oracle/app/oracle/flash_recovery_area/orcl/control02.ctl
Finished restore at 04-JUL-15

RMAN>
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
Starting implicit crosscheck backup at 04-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 04-JUL-15

Starting implicit crosscheck copy at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2
Finished implicit crosscheck copy at 04-JUL-15

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp

using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00004 to /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:02:34
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:35
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3777
Session ID: 1 Serial number: 9

[oracle@localhost ~]$

After having restored a few datafiles, the restore failed because the session was disconnected from the database (the server or database instance had crashed). Since the controlfile has already been restored, I can bring up the database in MOUNT mode and then re-attempt a RESTORE DATABASE.

[oracle@localhost ~]$ rman target sys/oracle@orcl

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Jul 4 12:56:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK

skipping datafile 3; already restored to file /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
skipping datafile 4; already restored to file /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:02:36
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00010 to /home/oracle/app/oracle/oradata/orcl/APEX_2614203650434107.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /home/oracle/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:04:02
Finished restore at 04-JUL-15

RMAN>

RMAN detects that datafiles 3 (undotbs01.dbf) and 4 (users01.dbf) had already been restored.
 If you look at the previous RESTORE run, you can see that these were restored by Channel ORA_DISK_2. The first channel ORA_DISK_1 had started restoring system01.dbf but hadn't completed restoring the datafile when the restore crashed. That restore of datafile 1 (system01.dbf) had to be redone.

(Another thing to note: Oracle doesn't necessarily restore datafiles in the order of file_id (file#)! There really is no ORDER BY for a RESTORE.)

RMAN> recover database;

Starting recover at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery

archived log for thread 1 with sequence 628 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
archived log for thread 1 with sequence 629 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
archived log for thread 1 with sequence 630 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
archived log for thread 1 with sequence 631 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
archived log for thread 1 with sequence 632 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
archived log for thread 1 with sequence 633 is already on disk as file /home/oracle/app/oracle/oradata/orcl/redo03.log
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc thread=1 sequence=628
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc thread=1 sequence=629
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc thread=1 sequence=630
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc thread=1 sequence=631
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc thread=1 sequence=632
archived log file name=/home/oracle/app/oracle/oradata/orcl/redo03.log thread=1 sequence=633
media recovery complete, elapsed time: 00:00:02
Finished recover at 04-JUL-15

RMAN>
RMAN> alter database open resetlogs;

database opened

RMAN>


UPDATE 07-Jul-15 :  Also see my earlier (year 2012) post "Datafiles not Restored -- using V$DATAFILE and V$DATAFILE_HEADER"  which also shows retrying a RESTORE DATABASE after a failure of restoring a datafile.  There, a single file in a BackupSet failed to restore.  Oracle didn't continue and try the other datafiles in that BackupSet.  I could either fix the error and retry the entire BackupSet (RESTORE DATABASE would have identified the right BackupSet containing those files) OR I could, as I did in that scenario, individually restore DataFiles from the BackupSet.

It can be a good idea to have your database backup consist of multiple BackupSets, using either multiple CHANNELs or FILESPERSET during the BACKUP.
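A minimal sketch of such a backup command, assuming two disk channels (the channel names and the FILESPERSET value here are arbitrary illustrations, not taken from this restore):

run {
  allocate channel d1 device type disk;
  allocate channel d2 device type disk;
  backup database filesperset 2;
}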


You could also note, as an aside, that Log Sequence 633 was an online redo log file. RMAN automatically verifies that the online redo log files designated by the controlfile are present and uses them.


Categories: DBA Blogs

Log Buffer #430: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-03 12:56

This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
  • Query existing HBase tables with SQL using Apache Phoenix.
  • Even though WebLogic with Active GridLink is Oracle’s suggested approach to deploy Java applications that use Oracle Real Application Clusters (RAC), there might be scenarios in which you can’t make that choice (e.g. certification issues, licensing, library dependency, etc.).
  • OSB & MTOM: When to use Include Binary Data by Reference or Value.
  • Ever used SoapUI to test services on multiple environments? Then you probably ran into the chore of constantly changing the endpoints to the hosts of the particular environment: development, test, acceptance, production (although I expect you wouldn’t use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project.

SQL Server:

  • Using DAX to create SSRS reports: The Basics.
  • Getting to know your customers better – cohort analysis and RFM segmentation in R.
  • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
  • Schema-Based Access Control for SQL Server Databases.
  • How to Fix a Corrupt MSDB SQL Server Database.

MySQL:

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
  • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
  • Using Cgroups to Limit MySQL and MongoDB memory usage.
  • Slave election is a popular HA architecture; the first MySQL/MariaDB toolkit to manage switchover and failover in a correct way was introduced by Yoshinori Matsunobu in MHA.
  • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.

Learn more about Pythian’s expertise in Oracle, SQL Server, and MySQL, as well as the author, Fahd Mirza.

Categories: DBA Blogs

WebLogic Server (FMW) : Generating Thread Dumps using OS commands

Online Apps DBA - Fri, 2015-07-03 03:33

This post comes from our Oracle Fusion Middleware Training, where we cover Oracle WebLogic Server on Day 1. One of the performance issues commonly encountered with poorly written applications (or on a not-so-performant Fusion Middleware infrastructure) is Stuck Threads.

A Stuck Thread in WebLogic Server is a thread that keeps processing the same request for a very long time, longer than the configurable Stuck Thread Max Time in WebLogic.

Thread dumps are diagnostic information used to analyse and troubleshoot performance-related issues such as server hangs, deadlocks, and slow-running, idle, or stuck applications.

How to generate Thread dumps?
In this post, I will walk you through the steps to generate Thread dumps of a server using operating system (O.S.) commands.

1. Start the server from the command-line script (using nohup). Let us take a managed server as the example for which we need to generate thread dumps, and start it using the script as shown below.
cd $DOMAIN_HOME/bin
nohup ./startManagedWebLogic.sh <Server_name> &

2. Now identify the PID (java Process ID) for the managed server using the below command:
ps auxwww | grep -i java | grep -i <server_name>   (this command is for Solaris)

3. Now run the below command to create the thread dump.
kill -3 <PID>

(This sends a SIGQUIT signal to the process whose dump we require. The signal causes the Java Virtual Machine to write stack traces of all threads in the process, i.e. a thread dump.)

This command writes the thread dump into the nohup.out file (the file from which we started the managed server).

4. Open the nohup.out file to see generated thread dumps:
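As an aside, here is a rough sketch of two other ways to get at the dump (substitute your own PID and output path; jstack ships with the JDK):

tail -n 300 nohup.out                                  # the new dump is appended at the end of nohup.out
jstack <PID> > /tmp/threaddump_$(date +%H%M%S).txt     # write a dump to its own file instead of nohup.out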


Related Posts for WebLogic/FMW
  1. WebLogic Server (FMW) : Generating Thread Dumps using OS commands

The post WebLogic Server (FMW) : Generating Thread Dumps using OS commands appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Testing the just released PostgreSQL 9.5 Alpha in a docker container

Yann Neuhaus - Fri, 2015-07-03 00:15

On the 2nd of July the PostgreSQL Global Development Group released an alpha version of the upcoming PostgreSQL 9.5. The same day, Josh Berkus, another of the PostgreSQL core team members, released a docker image for testing this alpha release. It's never been that easy to get started with PostgreSQL or to test new features.
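I won't repeat the exact image name here; as a rough sketch, and with the image name below as a placeholder for whichever image you pull, testing the alpha in a container might look like this:

docker pull <image-name>
docker run -d --name pg95alpha -p 5432:5432 <image-name>
psql -h localhost -p 5432 -U postgres     # connect and try out the new 9.5 features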

Happy Birthday to oracle-base.com (sort-of)

Tim Hall - Thu, 2015-07-02 23:35

Today is another anniversary, but this time it’s the website, which is 15 years old.

OK. This is a bit of a cheat because:

  • The website originally had a different name, so you could say the website with its current name is 13 months younger, but it’s the same site, so whatever.
  • I don’t actually know the exact day the first page went online, but I do know the date I bought the original domain name (before the rename to oracle-base.com), so I know the first page was put up about now.

Anyway, July 3rd is from now on the official birthday of the website. Makes it easy to remember, because it’s the day after my birthday.

Cheers

Tim…

PS. For those that are interested, the blog was 10 years old last month. I do know the exact date for that because the posts are dated and you can read the first post. :)


Seven Weeks with the Fitbit Surge

Oracle AppsLab - Thu, 2015-07-02 14:35

As my wearables odyssey continues, it’s time to document my time with the Fitbit Surge.

I ended up wearing the Surge a lot longer than I’d worn the Nike+ Fuelband, the Basis Peak and the Jawbone UP24 because June was a busy month, and I didn’t have time to switch.

For comparison’s sake, I suggest you read Ultan’s (@ultan) review of the Surge. He’s a hardcore fitness dude, and I’m much more a have-to-don’t-like-to exercise guy, which makes for a nice companion read.

As usual, this isn’t a review, more loosely-coupled observations. You can find lots of credible reviews of the Surge, billed as a “Super Watch” by the recently IPO’ed Fitbit, e.g. this one from Engadget.

Here we go.

The watch

As with most of the other wearables I’ve used, the Surge must be set up using software installed on a computer. It also requires the use of a weird USB doohickey for pairing, after which the watch firmware updates.


I get why they provide ways for people to sync to software installed on computers, but I wonder how many users really eschew the smartphone app or don’t have a smartphone.

Anyway, despite Fitbit Connect, the software you have to install, saying the firmware update process will take five to ten minutes, my update took much longer, like 30 minutes.

Physically, the Surge is chunky. Its shape reminds me of a door-stop, like a wedge. While this looks weird, it’s really a nice design idea, essentially tilting the display toward the user, making it easier to read at a glance.


I found wearing the device to be comfortable, although the rubber of the band did make my skin clammy after a while, see the Epilogue for more on that.

The display is easy to read in any light, and the backlight comes on automatically in low light conditions.

The Surge carries a water-resistance rating of 5 ATM, which amounts to 50 meters of depth, but for some reason Fitbit advises against submerging it. Weird, right?

Not one to follow directions, I took the Surge in a pool with no ill effects. However, once or twice during my post-workout steam, the display did show some condensation under the glass. So, who knows?

The device interface is a combination of touches and three physical buttons, all easy to learn through quick experimentation.

The watch screens show the day’s activity in steps, calories burned, miles, and floors climbed. It also tracks heart rate via an optical heart rate sensor.

In addition, you can start specific activity tracking from the device including outdoor running with GPS tracking, which Ultan used quite a lot, and from what I’ve read, is the Surge’s money feature. I only run indoors on a treadmill (lame), so I didn’t test this feature.

The Surge does have a treadmill activity, but I found its mileage calculation varied from the treadmill’s, e.g. 3.30 miles on the treadmill equated to 2.54 on the Surge. Not a big deal to me, especially given how difficult tracking mileage would be for a device to get right through sensors.

Speaking of, the Surge packs a nice array of sensors. In addition to the aforementioned GPS and optical heart rate sensor, it also sports a 3-axis accelerometer and a 3-axis gyroscope.

The Surge tracks sleep automatically, although I’m not sure how. Seemed to be magically accurate though.

Fitbit advertises the Surge’s battery life as seven days, but in practice, I only got about four or five days per charge. Luckily, Fitbit will inform you when the battery gets low via app notifications and email, both of which are nice.

Happily, the battery charges very quickly, albeit via a proprietary charging cord. Lose that cord, and you’re toast. I misplaced mine, which effectively ended this experiment.

The app and data

As Ultan mentioned in his post, the Fitbit Aria wifi scale makes using any Fitbit device better. I’ve had an Aria for a few years, but never really used it. So, this was a great chance to try it with the Surge.

Fitbit provides both mobile and web apps to track data.

I mostly used the mobile app which shows a daily view of activity, weight and food consumption, if you choose to track that manually. Tapping any item shows you details, and you can swipe between days.


It’s all very well-done, easy to use, and they do a nice job of packing a lot of information into a small screen.

From within the app, you can set up phone notifications for texts and calls, a feature I really liked from wearing the Basis Peak.

Noel, send me a text message.

Unfortunately, I only got notified about half the time, not ideal, and I’m not the only one with this issue. Danny Bryant (@dbcapoeira) and I chatted about our Surge experiences at Kscope, and he mentioned this as an issue for him as well.

Fitbit offers Challenges to encourage social fitness competition, which seems nice, but not for me. There are badges for milestones too, like walking 500 miles, climbing 500 floors, etc. Nice.

Sleep tracking on the mobile app is pretty basic, showing number of times awake and number of times restless.

Fitbit’s web app is a dashboard showing the same information in a larger format. They hide some key insights in the Log section, e.g. the sleep data in there is more detailed than what the dashboard shows.

Fitbit Dashboard

Fitbit Log - Track My Sleep

Fitbit Log - Track My Activities

I have to say I prefer the Jawbone approach to viewing data; they only have a mobile app which dictates the entire experience and keeps it focused.

Fitbit sends weekly summary emails too, so yet another way to view your data. I like the emails, especially the fun data point about my average time to fall asleep for the week, usually zero minutes. I guess this particular week I was well-rested.


I did have some time zone issues when I went to Florida. The watch didn’t update automatically, so I did some digging and found a help article about traveling with your Fitbit that offers this tip:

Loss of data can occur if the “Set Automatically” timezone option in the app’s “Settings” is on. Toggle the “Set Automatically” timezone option to off.

So for the entire week in Hollywood, my watch was three hours slow, not a good look for a watch.

And finally, data export out of Fitbit’s ecosystem is available, at a cost. Export is a premium feature. “Your data belongs to you!” for $50 a year. Some consolation though: they offer a free trial for a week, so I grabbed my data for free, at least this time.

Overall, the Surge compares favorably to the Basis Peak, but unlike the Jawbone UP24, I didn’t feel sad when the experiment ended.

Epilogue

Perhaps you’ll recall that Fitbit’s newer devices have been causing rashes for some users. I’m one of those users. I’m reporting this because it happened, not as an indictment of the device.

I wore the Surge for seven weeks, pretty much all the time. When I took it off to end the experiment, my wife noticed a nasty red spot on the outer side of my arm. I hadn’t seen it, and I probably would never have noticed.


It doesn’t itch or anything, just looks gnarly. After two days, it seems to be resolving, no harm, no foul.

The rash doesn’t really affect how I view the device, although if I wear the Surge again, I’ll remember to give my skin a break periodically.

One unexpected side effect of not wearing a device as the rash clears up is that unquantified days feel weird. I wonder why I do things if they’re not being quantified. Being healthy for its own sake isn’t enough. I need that extra dopamine from achieving something quantifiable.

Strange, right?

Find the comments.

Oracle Priority Support Infogram for 02-JUL-2015

Oracle Infogram - Thu, 2015-07-02 13:53

Oracle Support
Two good items from the My Oracle Support blog:
Three Scenarios for Using Support Identifier Groups
Stay Up to Date with Key My Oracle Support Resources of Your Choice using Hot Topics.
A Guide to Providing a Good Problem Description When Raising Service Requests, from the Communications Industry Support Blog.
MySQL
MySQL Enterprise Monitor 3.0.22 has been released, from the MySQL Enterprise Tools Blog.
Big Data
Identifying Influencers with the Built-in Page Rank Analytics in Oracle Big Data Spatial and Graph, from Adding Location and Graph Analysis to Big Data.
WebLogic
Additional new material WebLogic Community, from WebLogic Partner Community EMEA.
Improve SSL Support for Your WebLogic Domains, from Proactive Support - Identity Management.
Fusion Middleware
Calling Fusion SOAP Services from Ruby, from Angelo Santagata's Blog.
JDBC
Using Universal Connection Pooling (UCP) with JBoss AS, from JDBC Middleware.
OBIEE
OBIEE SampleApp V506 is Available, from Business Analytics - Proactive Support.
Ops Center
Upgrading to 12.3, from the Ops Center blog.
Identity Management
Configuring OAM SSO for ATG BCC and Endeca XM, from Proactive Support - Identity Management.
And from the same blog: Monitoring OAM Environment
Health Sciences
Health Sciences Partner Support Best Practices & Resources, from Chris Warticki's Support Blog.
Primavera
New Primavera P6 Release 15.1 Patch Set 2, from the Primavera Support Blog.
EBS
From the Oracle E-Business Suite Support blog:
Self-Evaluation for High Volume Order Import (HVOP)
General Ledger Balances Corruption - Causes, Suggestions, Solutions
Revaluation in Fixed Assets
iSetup? Use It Even Inside The Same Instance!
Using Translation and Plan to Upgrade? Don't Miss This!

Continue Your Work In Process!

Query existing HBase tables with SQL using Apache Phoenix

Kubilay Çilkara - Thu, 2015-07-02 13:25
Spending a bit more time with Apache Phoenix and reading my previous post again, I realised that you can use it to query existing HBase tables. That is, not just tables created using Apache Phoenix, but native tables in HBase, the columnar NoSQL database in Hadoop.

I think this is cool as it gives you the ability to use SQL on an HBase table.

To test this, let's say you log in to the HBase shell and create an HBase table like this:

> create 'table2', {NAME=>'cf1', VERSIONS => 5}

table2 is a simple table in HBase with one column family, cf1. Now let's put some data into this HBase table.

> put 'table2', 'row1', 'cf1:column1', 'Hello SQL!'

then maybe add another row

> put 'table2', 'row4', 'cf1:column1', 'London'

Now, in Phoenix all you have to do is create a database view for this table and query it with SQL. The view will be read-only. How cool is that? You don't even need to physically create the table in Phoenix, move the data, or convert it; a database view is sufficient, and via Phoenix you can query the HBase table with SQL.

In Phoenix you create the view for table2 using the same name. As you can see below, the DDL used to create the view is case sensitive, so if you created your HBase table name in lower case you will have to put the name between double quotes.

So log in to Phoenix and create the "table2" view like this:

> create view "table2" ( pk VARCHAR PRIMARY KEY, "cf1"."column1" VARCHAR );

And here is how you then query it in Phoenix:


SQL Query on Phoenix
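The query itself is shown above as a screenshot; as a minimal sketch, assuming only the view created above and the two rows inserted earlier, it is simply:

> select * from "table2";

Because the view's primary key column maps to the HBase row key, the pk column comes back with the row keys (row1 and row4), with the cf1:column1 values alongside them.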
Tremendous potential here: imagine all those existing HBase tables which you can now query with SQL. What's more, you can point your business intelligence, reporting, and other SQL-based tools at HBase and query it as if it were just another SQL database.

A solution worth investigating further? It definitely got me blogging in the evenings again.

To find out more about Apache Phoenix visit their project page https://phoenix.apache.org/



Categories: DBA Blogs

OSB & MTOM: When to use Include Binary Data by Reference or Value

Darwin IT - Thu, 2015-07-02 09:23
As you can see from my blog posts of the last few days, I've been busy implementing a service using MTOM in OSB. When enabling XOP/MTOM Support you'll have to choose between:
  • Include Binary Data by Reference
  • Include Binary Data by Value

I used the first option because I want to process the content in another service on another WLS domain. However, in my first service, which catches the initial request, I want to do an XSD validation. And although the rest of the message is valid, the Validate activity raises an exception with the message: 'Element not allowed: binary-content@http://www.bea.com/wli/sb/context in element Bestandsdata....'.

Looking into this problem I came across this section in the doc, which states that you use 'Include Binary Data by Value' when you want to:
  • transfer your data to a service that does not support MTOM
  • validate your message
Now, what does the other option do? OSB parses the root of the inbound MIME message in search of xop:Include tags. When found, it Base64-encodes the binary content and replaces the tags with the Base64 string.

Now, although I want exactly that in the end, I don't want it at this point in the service. I want to transform my message without the Base64 strings, and I want to encode the data only on my other domain.

So I just want to ignore validation failures whose messages start with 'Element not allowed: binary-content@...'. To do so I came up with the following expression:
fn:count($fault/ctx:details/con:ValidationFailureDetail/con:message[not(fn:starts-with(text(),'Element not allowed: binary-content@http://www.bea.com/wli/sb/context in element Bestandsdata'))])>0 
Add an If-Then-Else activity to your Error Handler Stage with this expression. Add the following Namespace:
  • Prefix: con
  • Namespace:  http://www.bea.com/wli/sb/stages/transform/config

If the expression evaluates to true, then you have in fact an invalid XML-message. In the else branch you can add a Resume to ignore the exception.

This expression might come in handy in other situations as well.

D2L Again Misusing Academic Data For Brightspace Marketing Claims

Michael Feldstein - Thu, 2015-07-02 05:56

By Phil HillMore Posts (333)

At this point I’d say that we have established a pattern of behavior.

Michael and I have been quite critical of D2L and their pattern of marketing behavior that is misleading and harmful to the ed tech community. Michael put it best:

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.

Well, here’s the latest. John Baker put out a blog called This Isn’t Your Dad’s Distance Learning Program with this theme:

But rather than talking about products, I think it’s important to talk about principles. I believe that if we’re going to use education technology to close the attainment gap, it has to deliver results. That — as pragmatic as it is — is the main guiding principle.

The link about “deliver results” leads to this page (excerpted as it existed prior to June 30th, for reasons that will become apparent).

Why Brightspace

Why Brightspace? Results.

So the stage is set – use ed tech to deliver results, and Brightspace (D2L’s learning platform, or LMS) delivers results. Now we come to the proof, including these two examples.

CSULB UWM Results

According to California State University-Long Beach, retention has improved 6% year-over-year since they adopted Brightspace.[snip]

University of Wisconsin-Milwaukee reported an increase in the number of students getting A’s and B’s in Brightspace-powered courses by over 170%

Great results, no? Let’s check the sources. Ah . . . clever marketing folks – no supporting data or even hyperlinks to learn more. Let’s just accept their claims and move along.

. . .

OK, that was a joke.

CSU Long Beach

I contacted CSU Long Beach to learn more, but I could find no one who knew where this data came from or even that D2L was making this claim. I shared the links and context, and they went off to explore. Today I got a message saying that the issue has been resolved, but that CSU Long Beach would make no public statements on the matter. Fair enough – the observations below are my own.

If you look at that Results page now, the CSU Long Beach claim is no longer there – down the memory hole[1] with no explanation, replaced by a new claim about Mohawk College.

Mohawk UWM Results

While CSU Long Beach would not comment further on the situation, there are only two plausible explanations for the issue being resolved by D2L taking down the data. Either D2L was using legitimate data that they were not authorized to use (best case scenario) or D2L was using data that doesn’t really exist. I could speculate further, but the onus should be on D2L since they are the ones who made the claim.

UW Milwaukee

I also contacted UW Milwaukee to learn more, and I believe the data in question is from the U-Pace program which has been fully documented.[2][3]

The U-Pace instructional approach combines self-paced, mastery-based learning with instructor-initiated Amplified Assistance in an online environment.

The control group was traditionally-taught (read that as large lecture classes) for Intro to Psychology.

From the EDUCAUSE Quarterly article on U-Pace, for disadvantaged students the number of A’s and B’s increased 163%. This is the closest data I can find to back up D2L’s claim of 170% increase.

U-Pace results EQ

There are three immediate problems here (ignoring the fact that I can’t find improvements of more than 170% – I’ll take 163%).

  1. First, the data claim is missing the context of “for underprepared students” who exhibited much higher gains than prepared students. That’s a great result for the U-Pace program, but it is also important context to include.
  2. The program is an instructional change, moving from large lecture classes to self-paced, mastery-learning approach. That is the intervention, not the use of the LMS. In fact, D2L was the LMS used in both the control group and the U-Pace treatment group.
  3. The program goes out of its way to call out the minimal technology needed to adopt the approach, and they even list Blackboard, Desire2Learn, and Moodle as examples of LMS’s that work with the following conditions:

U-Pace LMS Reqs

This is an instructional approach that claims to be LMS neutral with D2L’s Brightspace used in both the control group and treatment group, yet D2L positions the results as proof that Brightspace gets results! It’s wonderful that Brightspace LMS worked during the test and did not get in the way, but that is a far cry from Brightspace “delivering results”.

The Pattern

We have to now add these two cases to the Lone Star College and LeaP examples. In all cases, there is a pattern.

  1. D2L makes marketing claim implying their LMS Brightspace delivers results, referring to academic outcomes data with missing supporting data or references.
  2. I contact school or research group to learn more.
  3. Data is either misleading (treatment group is not LMS usage but instead instructional approach, adaptive learning technology, or student support software) or just plain wrong (with data taken down).
  4. In all cases, the results could have been presented honestly, showing the appropriate context, links for further reading, and explanation of the LMS role. But they were not presented honestly.
  5. e-Literate blog post almost writes itself.
  6. D2L moves on to make their next claim, with no explanations.

I understand that other ed tech vendors make marketing claims that cannot always be tied to reality, but these examples cross a line. They misuse and misrepresent academic outcomes data – whether from public research or internal research – and essentially take credit for their technology “delivering results”.

This is the misuse of someone else’s data for corporate gain. Institutional data. Student data. That is far different than using overly-positive descriptions of your own data or subjective observations. That is wrong.

The Offer

For D2L company officials, I have an offer.

  1. If you have answers or even corrections about these issues, please let us know through your own blog post or comments to this blog.
  2. If you find any mistakes in my analysis, I will write a correction post.
  3. We are happy to publish any reply you make here on e-Literate.
  1. Their web page does not allow archiving with the Wayback Machine, but I captured screenshots in anticipation of this move.
  2. Note – While I assume this claim derives from U-Pace, I am not sure. It is the closest example of real data that I could find, thanks to a helpful tip from UW-M staff. I’ll give D2L the benefit of the doubt despite their lack of reference.
  3. And really, D2L marketing staff should learn how to link to external sources. It’s good Internet practice.

The post D2L Again Misusing Academic Data For Brightspace Marketing Claims appeared first on e-Literate.

Set environment properties in SoapUI (freeware)

Darwin IT - Thu, 2015-07-02 04:26
Ever used SoapUI to test services on multiple environments? Then you have probably run into the chore of constantly changing the endpoints to the hosts of the particular environment: development, test, acceptance, production (although I expect you wouldn't use SoapUI against a production environment). This is not that hard if you have only one service endpoint in the project. But what if you want to test against multiple services, or have to call a service on one system with the result of another during a test case? You can even have test cases that mock services called by your (BPEL/BPM) process and call back the process to move it to a further stage. Then you can end up having multiple endpoints per environment.

You can set multiple endpoints on a request and toggle between them. But you'll have to do that for every request.

SoapUI, however, supports the use of properties in the endpoints. So you can set up different host properties and URI properties on the project:
In this case you see that I have one property for the service URI (the part of the URL after host:port), several ...Host properties, one for each separate environment, and one for the actual host.

As said, you can have a property based endpoint like this:
So I have one single endpoint defined based on:
http://${#Project#CSServiceHost}/${#Project#CSServiceURI}
Here you see that the endpoint is based on two properties: ${#Project#CSServiceHost} and ${#Project#CSServiceURI}. In those properties '#Project#' refers to the level in SoapUI at which the properties are defined. You can also refer to #TestSuite#, #TestCase#, etc.

Now you could manually copy and paste the host of the particular environment into the actual host property, but that can be error prone when dealing with multiple endpoints.
What I did was to create a separate TestSuite called 'ProjectSettings'. In it I created a test case per environment: 'SetLocalHosts', 'SetDevHosts', etc. Each contains a PropertyTransfer that transfers the particular environment host property to the actual host property:

You can create a property transfer for each applicable host in your environment. You can also enhance the test case with Groovy scripts to determine the properties at run time (a minimal sketch follows below). You could even call a generic TestCase from there.
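For example, a minimal Groovy Script test step along these lines could copy an environment-specific host into the actual host property. The property names below follow the CSService example above and are assumptions; adjust them to your own project:

def project = testRunner.testCase.testSuite.project
// read the environment-specific value (assumed property name)
def devHost = project.getPropertyValue("CSServiceDevHost")
// and make it the host that the property-based endpoint resolves to
project.setPropertyValue("CSServiceHost", devHost)
log.info("CSServiceHost set to " + devHost)

Run as a step inside the 'SetDevHosts' test case, this has the same effect as the PropertyTransfer, but it lets you add logic, for example deriving the host from a single environment-name property.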

Running the particular testcase before your tests will setup your SoapUI project for the target environment in one go.

Maybe I'll enhance this further in my projects, but for now I find this neat. However, it would have been nice if SoapUI supported different environments, with hostnames/URLs applicable for each environment, and let you select a target environment at project level using a poplist.
Also it would be nice to have custom scripts (like macros) at project level that could be coupled to a button in the button bar, instead of how I do it above.

Streamline Oracle Development with Cloud Services

Angelo Santagata - Thu, 2015-07-02 04:24
Streamline Java Development with Cloud Services
On-Demand Webinar Replay
Learn to deliver java applications to market faster. Reduce hardware and software costs for new development and testing environments. Improve DevOps efficiency. Build, test and run enterprise-grade applications in the Cloud and on premise.

Listen to this webinar replay with development expert James Governor, co-founder of RedMonk, and Daniel Pahng, President and CEO of mFrontiers, LLC, an ISV with hands-on experience developing enterprise mobility and Internet of Things (IOT) solutions, as they present this webcast on developing applications in the cloud. Listen today!

Table Recovery in #Oracle 12c

The Oracle Instructor - Thu, 2015-07-02 03:18

You can now restore single tables from backup! It is a simple command, although it means a lot of work for RMAN behind the scenes. See it as an enhancement over a ‘normal’ Point In Time Recovery:

Point In Time Recovery

After a full restore from a sufficiently old backup, archived logs are applied in the direction of the present, up to just before the logical error. Then a new incarnation comes up (with RESETLOGS) and the whole database is as it was at that time. But what if it is only a dropped table that needs to be recovered? Enter the 12c New Feature:

Table Recovery

Above is what RMAN does upon Table Recovery. The restore is done to the auxiliary destination, while the database keeps running as it is. The new incarnation is there only temporarily, just to export the dropped table from. Afterwards, it is removed. RMAN will then import the table back into the still running database – unless you say otherwise with the NOTABLEIMPORT clause. So it is a huge effort for the system to go through, in spite of the simple RMAN command:

 

SQL> select count(*) from sales;

  COUNT(*)
----------
  10000000

SQL> select sysdate from dual;

SYSDATE
-------------------
2015-07-02 09:33:37

SQL> drop table sales purge;

Table dropped.

Oops – that was a mistake! And I can’t simply say flashback table sales to before drop because of the purge. RMAN to the rescue!

[oracle@uhesse ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 2 09:34:35 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> list backup of database;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
1       Full    142.13M    DISK        00:01:45     2015-07-01 17:50:32
        BP Key: 1   Status: AVAILABLE  Compressed: YES  Tag: TAG20150701T174847
        Piece Name: /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
  List of Datafiles in backup set 1
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/system01.dbf
  2       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/sysaux01.dbf
  3       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/undotbs01.dbf
  4       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/users01.dbf

RMAN> host 'mkdir /tmp/auxi';

host command complete

RMAN> recover table adam.sales until time '2015-07-02 09:33:00' auxiliary destination '/tmp/auxi';

Starting recover at 2015-07-02 09:35:54
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='tDtf'

initialization parameters used for automatic instance:
db_name=PRIMA
db_unique_name=tDtf_pitr_PRIMA
compatible=12.1.0.2
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=1512M
processes=200
db_create_file_dest=/tmp/auxi
log_archive_dest_1='location=/tmp/auxi'
#No auxiliary parameter file used


starting up automatic instance PRIMA

Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                402656944 bytes
Database Buffers            1174405120 bytes
Redo Buffers                  13848576 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET until clause

Starting restore at 2015-07-02 09:36:21
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=3 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl
Finished restore at 2015-07-02 09:36:23

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /tmp/auxi/PRIMA/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 2015-07-02 09:36:32
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /tmp/auxi/PRIMA/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 2015-07-02 09:37:08

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

Starting recover at 2015-07-02 09:37:09
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:00
Finished recover at 2015-07-02 09:38:11

sql statement: alter database open read only

contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files =
  ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment=
 ''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter system set  control_files =   ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment= ''RMAN set'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  4 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  4;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 2015-07-02 09:39:11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=12 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 2015-07-02 09:39:47

datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=883993187 file name=/tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  4 online";
# recover and open resetlogs
recover clone database tablespace  "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  4 online

Starting recover at 2015-07-02 09:39:47
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:15
Finished recover at 2015-07-02 09:41:03

database opened

contents of Memory Script:
{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_tDtf_lwFD":
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 600 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "ADAM"."SALES"                              510.9 MB 10000000 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_tDtf_lwFD" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_tDtf_lwFD is:
   EXPDP>   /tmp/auxi/tspitr_tDtf_59906.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_tDtf_lwFD" successfully completed at Thu Jul 2 09:42:53 2015 elapsed 0 00:01:06
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_tDtf_uink" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_tDtf_uink":
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "ADAM"."SALES"                              510.9 MB 10000000 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_tDtf_uink" successfully completed at Thu Jul 2 09:54:13 2015 elapsed 0 00:11:12
Import completed


Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_temp_bs9tm7pz_.tmp deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_2_bs9trods_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_1_bs9trjw6_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl deleted
auxiliary instance file tspitr_tDtf_59906.dmp deleted
Finished recover at 2015-07-02 09:54:16

See how much work was done by RMAN here? But now, life is good again:

SQL> select count(*) from adam.sales;

  COUNT(*)
----------
  10000000

You say that you could have done that yourself even before 12c? Yes, you’re right: It’s not magic, it’s just more comfortable now ;-)
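As a footnote to the NOTABLEIMPORT clause mentioned above: the command also accepts variants, sketched here from the same example (the remapped table name is made up), for when you want to skip the automatic import or bring the table back under a different name:

RMAN> recover table adam.sales until time '2015-07-02 09:33:00'
      auxiliary destination '/tmp/auxi' notableimport;

RMAN> recover table adam.sales until time '2015-07-02 09:33:00'
      auxiliary destination '/tmp/auxi' remap table adam.sales:sales_recovered;

With NOTABLEIMPORT, RMAN leaves the Data Pump dump file in the auxiliary destination for you to import yourself; with REMAP TABLE, the recovered table is imported under the new name.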


Tagged: Backup & Recovery, PracticalGuide, RMAN
Categories: DBA Blogs

Introducing Formspider 1.9

Gerger Consulting - Thu, 2015-07-02 01:35
For the past year, we've been working hard on the new version of Formspider, the application development tool for Oracle PL/SQL Developers. Join our special virtual event on July 7th and become one of the first people who'll find out what we have in store for you.

Whether you are an IT Manager trying to modernize your legacy software, an Oracle Forms Developer looking for a new development tool that is suitable to your skill set, a PL/SQL Developer searching for a great way to build web applications or an APEX Developer who thinks that there must be a better solution, we'll have something for you.

See you on July 7th.

Kind Regards,
Yalim K. Gerger
Founder
Categories: Development

Happy Birthday to Me!

Tim Hall - Wed, 2015-07-01 23:40

Have you guessed what today is?

It’s amazing, finally reaching the age of 26 (+20).

Cheers

Tim…

PS. There’s another anniversary coming tomorrow. :)

Update: Just noticed this on Google.



Three Scenarios for Using Support Identifier Groups

Joshua Solomin - Wed, 2015-07-01 15:52

Support Identifier Groups are a way to manage and organize hardware and software assets in the My Oracle Support (MOS) application. While many customers are already utilizing this feature, Oracle Portal Services has noticed there are still large swaths of customers who have not set up any SI groups, or who have set up SI groups but haven't added any assets to the groups to activate them.

We've put together some quick examples to help Customer User Administrators, or CUAs, set up their Oracle support assets more functionally and logically.

Watch the Video!
Benefits of Support Identifier Groups (SIGs)
  • Simpler, easier management of your Support Identifiers, hardware, and software assets.
  • Logically organize by geography, asset, or role.
  • Establish defaults so that future hardware and software assets get automatically added to your chosen support identifier.
  • Improve service request (SR) visibility and simplify SR reporting.
  • Streamline access to relevant support information.
What's a Support Identifier?

If you're new to My Oracle Support, an SI is an automatically-generated record "tag" that links purchased Oracle hardware or software to support resources.

Large organizations might have dozens (or possibly hundreds) of SIs scattered across multiple lines of business and geographic areas. In order for a user to receive support on Oracle products—say a database admin or HR manager—they must be assigned to an active SI. An SI is "active" as long as it 1) has an asset assigned to it and 2) hasn't expired.

Setting up Groups

So how are SI groups different from a standard SI? From a functional standpoint they're identical; the difference is an SI "group" is one generated by a CUA, rather than one generated automatically by Oracle. Normally assets and users get assigned to whatever support identifier they happen to land in when a purchase is made. This can make it hard to keep track of where assets and assigned users reside—functionally, geographically, based on role, and so on.

By creating their own SI groups, CUAs can organize assets and users as they see fit.

To make the most of Support Identifier Groups, you will need to pre-plan how users and assets are best organized. Once defined you can set up your Groups, adding users and assets logically the way you need them.

Expanded SI Group

In this scenario a group of CUAs might want to reorganize their current SIs to reflect specific projects or lines of business.

When to Use

Keep in mind that assets can reside in more than one SI at a time. The idea behind this scenario is to group assets according to specific projects or operations. An asset might be used for more than one project at a time; the goal is to organize them to make it easier to track.

Consolidate SIs

In this scenario, the CUAs have a batch of SIs with assets assigned and scattered all over the place. They want to move the assets from their current SIs, and organize them into new SI groups consolidated by location.

When to Use

Location-based operations are obviously good candidates; grouping by location makes it easy to chart how and where assets are being used.

Consolidating SIs can also be useful if you have assets that are used exclusively by one group with little or no crossover between lines of business.

Note that when you choose to remove all active assets from a current SI, that SI gets deactivated automatically. Any users assigned to a deactivated SI would need to be moved to one of the new SI groupings.

Consolidating with a Default SIG

This scenario is similar to the previous consolidation scenario; the main difference is that one of the new SI groups is set up as a default for all future purchases going forward.

Note that all new hardware or software assets are automatically assigned to the default going forward.

When to Use

This scenario is useful when you have a specific set of assets and users that are logically segregated from other operations, and you want to keep them separate. Often this might include assets used for specific operations, while the "default" group is for the primary workflow.

Bottom Line

When planned and managed properly, SI groups can help reduce time spent managing Oracle assets. Visit Document 1569482.2 for more information.

BEEZY: Social Network for SharePoint 2013

Yann Neuhaus - Wed, 2015-07-01 13:32




Social networking... Everybody is "connected" these days: professional networks, private social networks... There are so many solutions around today. Which one should I use? What are the differences?
Regarding the use of a social network at work, we have already seen YAMMER; what about BEEZY?

What is Beezy?

Beezy is a social network built inside SharePoint.
Beezy comes in two flavors: on premises behind the firewall on SharePoint Server 2010 and in the cloud on Office365.


Beezy Features

 

  • Collaboration tools: sharing files, events, tasks, images, videos and links is possible with a click! Yes it is!
  • Groups: Beezy allows you to create groups to structure corporate information; setup is user friendly, and even if a group is shut down, its information is kept.
  • Microblogging: this is a good way to foster collaboration and team spirit; you share ideas and get feedback in real time. As with Twitter, you can use tags like hashtags (#) and replies (@), and embed videos from YouTube!
  • Follows: knowledge management is also about effectively filtering information. By following and replying, users are notified when a change is made to anything they are following, whether conversations or documents.
  • Profiles: a unique employee profile brings together professional data and latest activity. You can also link your past activities with LinkedIn, and synchronize employee data with Active Directory.

Here is a video link about Beezy: Beezy or Yammer?

 

The biggest difference between the two tools is the integration.
Beezy is integrated into SharePoint, whereas Yammer only gets a link in the top menu and a web part that doesn't accept uploading files in the microblog.

Beezy works within the SharePoint framework, so all of your permissions, storage, compliance policies and procedures remain the same, unlike in a hybrid solution using Yammer/Office 365, where the level of access is limited by comparison and requires additional management overhead.


Only a good user experience drives real adoption
As we have already seen in other articles, only a good user experience can drive real adoption. The simpler, faster and more intuitive the tools you put in place, the more your employees will jump in.

Conclusion


Beezy offers a complete, integrated collaboration tool for SharePoint 2013 / Office 365, easy to deploy on SharePoint Server 2013 and easy to use.
In order to make the right choice, take time to analyze your business needs, try solutions with small groups, get feedback from users and then make a decision.


Source: www.beezy.net



OTN Virtual Technology Summit - Spotlight on Middleware Track

OTN TechBlog - Wed, 2015-07-01 09:00
OTN Virtual Technology Summit - Spotlight on Middleware Track
It's All About WebLogic

The Middleware Track for the July 2015 edition of the Oracle Technology Network Virtual Technology Summit brings together three experts on Oracle Fusion Middleware to present how-to technical sessions on WebLogic Server's role in today's middleware architectures. The sessions in this track will focus on security and authentication, service monitoring and exploration, and on WebLogic 12c's new APIs and tools for application development. Other products and technologies covered in these sessions include Oracle SOA Suite, Service Bus, JMX, JAX-RS, JSON, WebSocket, and more.

Register Now
Middleware Track Sessions:

Debugging Weblogic Authentication
By Maarten Smeets, Senior Oracle SOA / ADF Developer, AMIS
Enterprises often centrally manage login information and group memberships (identity). Many systems use this information to achieve Single Sign On (SSO) functionality, for example. Surprisingly, access to the Weblogic Server Console is often not centrally managed. This session will explain why centralizing management of these identities not only increases security, but can also reduce operational cost and even increase developer productivity. The session will demonstrate several methods for debugging authentication using an external LDAP server in order to lower the bar to apply this pattern. This technically-oriented presentation will be especially useful for people working in operations who are responsible for managing Weblogic Servers.

Real-Time Service Monitoring and Exploration
By Oracle ACE Associate Robert van Molken, Senior Oracle Integration Specialist, AMIS
There is a great deal of value in knowing which services are deployed and correctly running on an Oracle SOA Suite or Service Bus instance. This session will explain and demonstrate how to retrieve this data using JMX and the available Managed Beans on Weblogic. You will learn how the data can be retrieved using existing Java APIs, and how to explore dependencies between Service Bus and SOA Suite. You'll also learn how the retrieved data can be used to create a simple dashboard or even detailed reports.

New APIs and Tools for Application Development in WebLogic 12c
By Shukie Ganguly, Senior Technology Architect, Oracle
WebLogic Server 12.1.3 provides support for innovative APIs and productive Tools for application development, including APIs for JAX-RS 2.0, JSON Processing (JSR 353), WebSocket (JSR 356), and JPA 2.1. This session will provide an overview of each of these APIs, and then demonstrate how you can use these capabilities to simplify the development of server applications accessed by "rich" clients using lightweight web-based protocols such as REST and WebSocket.

OTN Wants You!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are a member of the Oracle Technology Network Community you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it in our FAQ: Oracle Community - Rewards & Recognition FAQ.