Feed aggregator

Convert rows to columns dynamically

Tom Kyte - Wed, 2017-02-22 20:06
Hi Tom, I have a table with data as below:

   BRANCHNAME  CUSTOMERNUM
   100         1001010
   100         1001011
   103         1001012
   104         1001013
   104         1001014
   104         1001015
   105         1001016
   105         1001017
   106         1001018

Now my requirement is to get the output as below. Get the c...
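
One common approach (a sketch only: the table name branch_customers is hypothetical, the column names are taken from the question, and LISTAGG needs 11g Release 2 or later):

   select branchname,
          listagg(customernum, ',') within group (order by customernum) as customers
   from   branch_customers
   group  by branchname;

A true column-per-value pivot needs dynamic SQL, because the PIVOT clause requires the column values to be listed literally at parse time.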
Categories: DBA Blogs

Website Broken Links (Mostly Not Oracle-Related)

Tim Hall - Wed, 2017-02-22 14:57

In a recent Twitter exchange someone asked if I scan for broken links. Oh, if you only knew... the answer is yes. I don't do it all the time, as the results can be rather depressing, and I am OCD enough to force myself to fix them. I also get people notifying me of them, which is very welcome, so I am always trying to keep on top of this stuff. Based on that exchange I thought it was about time, so I logged on to sitecrawl.net and started a new scan.

As usual, the number of internal broken links was low. I had a couple of typos in links that are now corrected.

Typically I am greeted by hundreds of broken links to Oracle documentation, but thankfully this time that was pretty good. Only about 30, many of which were to ORDS docs.

Probably the biggest offenders this time were:

  • Google: They dropped the Picasa URLs, so lots of blog posts had to be amended.
  • Twitter: It's not actually Twitter's fault, but there were a lot of Twitter accounts in the blog comments that no longer exist. I'm not even talking about those that are obviously people trying to promote their brand, but regular users too. I didn't realise ditching your Twitter account was such a big thing.
  • URL Shorteners: Either the URL shortener reference no longer exists, the thing it points to no longer exists, or a retweet has chopped off the URL, so it is just junk.

I’ve been pretty merciless with some of this stuff. Rather than wasting a whole weekend, it’s only taken about 2 hours to get things ship-shape.

Cheers

Tim…


Oracle New Public Cloud Licensing Policy – Good or Bad?

Pythian Group - Wed, 2017-02-22 11:05

A little while ago, after a question from a customer about supporting Oracle products on the Google Cloud Platform (GCP), I decided to look for any updates to the Oracle public cloud support policies. The document can be easily found on the Oracle website. I quickly noticed some significant changes in the new version of the document. More specifically, I'm referring to the changes that came with the latest version of that document, dated January 23, 2017.

In this blog I will attempt to summarize the findings and my thoughts about the matter. Before proceeding any further, let’s begin with a safe harbor notice (as Oracle does) and mention that I do not work for Oracle and I am not a licensing expert. Some of my observations may be incorrect and everyone reading this post is strongly encouraged to make a fully informed decision only after consultation with an Oracle sales representative. After reading the Oracle licensing policy, you should bear in mind that the document provides only guidelines and it is published for “education purposes only”.

So, what do we have in the new edition? The document shares details about the Oracle licensing policy on public cloud environments including AWS EC2, AWS RDS and Microsoft Azure platforms. Alas, there is still no mention of Google Cloud Platform (GCP). That leaves GCP in uncharted territory: even though the document doesn't explicitly prohibit you from moving your Oracle products to GCP, it makes it difficult to estimate the impact and cost.

The first paragraph has a link to a listing of all Oracle products to which the policy applies. Notably, the document explicitly lists almost all Oracle Database Enterprise Edition options and packs except Oracle RAC and Multitenant. If the absence of Oracle RAC may have some technical justification, the exclusion of the Multitenant option doesn't make much sense to me.

The next paragraph reveals a lot of changes. The new version of the document officially recognizes an AWS vCPU as a thread, not as a core. Prior to January 23, 2017, we used to have an AWS-published document showing vCores by instance type for licensing calculations, and it was widely used even though it was never officially endorsed by Oracle. People took the number of cores from that document and applied the Oracle multi-core factor table on top of the core count. There was never a similar document for Azure; consequently, a vCPU on Azure was considered a vCore and the same calculation using the multi-core factor table was applied. The new version of the Oracle document now states explicitly that two vCPUs on AWS have the same licensing cost as one vCPU on Azure.

It's explicitly stated in the document that either two vCPUs from AWS or one vCPU from Azure are equivalent to one Oracle CPU license. Another statement confirms that, from now on, the Oracle multi-core factor no longer applies to the mentioned public cloud environments. This can have a serious impact on people migrating, or planning to migrate, to AWS or Azure under the Bring Your Own License (BYOL) policy. They may now find themselves in a difficult position: either plan the migration to a smaller environment or increase the licensing cost. In this case, it's important to keep in mind that the size of an AWS EC2 instance may have a direct impact not only on CPU power but also on the maximum available I/O on the storage layer.
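
To make the Enterprise Edition impact concrete (a rough sketch assuming the common 0.5 core factor for x86 and a 16-vCPU EC2 instance):

   Old practice: 16 vCPUs = 8 cores; 8 x 0.5 core factor = 4 processor licenses
   New policy:   16 vCPUs / 2 = 8 processor licenses (no core factor applied)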

Additionally, there is now a section containing formally defined rules for Oracle Database Standard Edition and Standard Edition 2. According to the paper, we count every four vCPUs on AWS, or every two vCPUs on Azure, as one socket. This means that you cannot have more than 8 AWS vCPUs under a Standard Edition 2 (SE2) license. Standard Edition (SE) allows you to have a 16-vCPU machine, and hence still provides more room when sizing EC2 or RDS. Unfortunately, SE is only available up to version 12.1.0.1, and support for that release is coming to an end. So what can still be used for Oracle SE2 on AWS? We can get one of the *.2xlarge instances on EC2 or pick a similarly sized RDS instance. Is that going to be big enough? That depends on your workload profile, but again, keep in mind the I/O limitations per instance type. It is going to be a maximum of 8000 IOPS per instance. Not a small number, but you will need to measure and identify your I/O requirements before going to the cloud.
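
Spelled out for the Standard Editions (a sketch assuming the usual socket caps of two sockets for SE2 and four for SE):

   SE2: 2 sockets x 4 AWS vCPUs per socket = 8 vCPUs maximum (an *.2xlarge instance)
   SE:  4 sockets x 4 AWS vCPUs per socket = 16 vCPUs maximum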

On one hand, the new policies are way clearer and more direct than they used to be, and I believe that the clarification is good. It is always easier to plan your implementations and budget when you are confident of what to expect. On the other hand, it looks like we have to pay twice as much in licensing fees when moving to AWS or Azure compared with any bare-metal or OVM environment on premises. Will it make Oracle products more attractive for customers? I have some doubts that it will. Will it make the Oracle Cloud a more interesting target platform for cloud migrations? Possibly. That is likely the main goal of Oracle, but we've yet to see if it works out for them as expected. I liked it when Oracle made Oracle Express Edition (XE) available for everyone for free, and when Oracle Standard Edition came with the RAC option at no additional cost. While I don't have any official numbers, I think that the Express Edition and the RAC option included with SE turned many customers onto Oracle products. However, I'm afraid that the new licensing policy for cloud may do the opposite: turn some people away from Oracle and consequently play out really badly for Oracle in the long term.

Categories: DBA Blogs

ORA-00933 SQL command not properly ended Solution

Complete IT Professional - Wed, 2017-02-22 05:00
Are you getting the ORA-00933: SQL command not properly ended error? Learn what causes it and how to resolve it in this article. ORA-00933 Cause You have run an SQL statement and have gotten this error: ORA-00933: SQL command not properly ended What causes this error? Most likely, the SQL statement you’re running has a […]
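
As a quick illustration of the most common trigger (the table name employees is hypothetical), the statement carries a clause that is not valid for its statement type, such as an ORDER BY on a DELETE:

   delete from employees order by last_name;
   -- ORA-00933: SQL command not properly ended (ORDER BY is not valid in DELETE)

A stray trailing semicolon inside a statement string sent through JDBC or ODBC is another frequent cause.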
Categories: Development

Webcast: "Migrating and Managing Customizations for EBS 12.2"

Steven Chan - Wed, 2017-02-22 02:05

Oracle University has a wealth of free webcasts for Oracle E-Business Suite.  If you're looking for an overview of how to manage your customizations when upgrading to EBS 12.2, see:

Have you created custom schemas, personalized or extended your Oracle E-Business Suite environment? Santiago Bastidas, Senior Principal Product Manager, discusses how to select the best upgrade approach for existing customizations. This session will help you understand the new customization standards required by the Edition-Based Redefinition feature of Oracle Database to be compliant with the Online Patching feature of Oracle E-Business Suite. You’ll learn about customization use cases, tools, and technologies you can use to ensure that all your customizations are preserved during and after the upgrade. You’ll also hear about reports you can run before the upgrade to detect and fix your customizations to make them 12.2-compliant. This material was presented at Oracle OpenWorld 2016.

Categories: APPS Blogs

Script to recompile the synonyms in a schema

Tom Kyte - Wed, 2017-02-22 01:46
Hi I have written the below script to recompile the synonyms in all the schemas but I am getting invalid character error. Could you check is there any wrong with the script: spool 'c:synonyms.txt' begin for i in (select object_name,ow...
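
For reference, a minimal working sketch of the usual approach, recompiling invalid synonyms with dynamic DDL (the spool path and the DBA_OBJECTS filter are illustrative; public synonyms, owned by PUBLIC, need ALTER PUBLIC SYNONYM instead):

   spool c:\synonyms.txt
   begin
     for i in (select owner, object_name
               from   dba_objects
               where  object_type = 'SYNONYM'
               and    status = 'INVALID'
               and    owner <> 'PUBLIC') loop
       -- quote the identifiers so mixed-case or unusual names survive
       execute immediate 'alter synonym "' || i.owner || '"."'
                         || i.object_name || '" compile';
     end loop;
   end;
   /
   spool off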
Categories: DBA Blogs

Join (or equivalent) a collection to a table.

Tom Kyte - Wed, 2017-02-22 01:46
I need to build a procedure that will accept a collection of numbers that I need to then find matches in a table. If the elements of the collection were in another table, then it would be a simple case to join the tables. How can I accomplish this ...
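
A minimal sketch of the standard technique (all object names here are illustrative): declare the collection type at schema level with CREATE TYPE, then join it to the table through the TABLE() operator, where COLUMN_VALUE exposes each element.

   create type num_tab as table of number;
   /
   create or replace procedure find_matches(p_ids in num_tab) as
   begin
     -- TABLE(p_ids) turns the collection into a row source we can join
     for r in (select t.*
               from   orders t
               join   table(p_ids) ids on ids.column_value = t.order_id) loop
       dbms_output.put_line('matched order ' || r.order_id);
     end loop;
   end;
   /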
Categories: DBA Blogs

Dedicated and Shared Mode

Tom Kyte - Wed, 2017-02-22 01:46
Hi Tom, How can we know that our database is running on shared mode or dedicated mode. Can we configure the database so that we can change the mode according to our need. Thanks, Snehasish Das.
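
For reference, a quick check (a sketch: the SERVER column of V$SESSION shows DEDICATED, SHARED or NONE per session, and SHARED_SERVERS is modifiable online):

   select server, count(*) from v$session group by server;
   show parameter shared_servers
   -- enable shared server connections without a restart
   alter system set shared_servers=5;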
Categories: DBA Blogs

Exceptions handling - how to rollback correctly

Tom Kyte - Wed, 2017-02-22 01:46
Hi everyone, my question is about how to correctly handling exception in a pl/sql procedure: I need to rollback everything made in a begin-end block if there's any kind of exception. Here's the example code: <code>create table prova (cod ...
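
A minimal pattern for this (a sketch; the table name prova matches the question's example): keep the work in one block, and in the handler roll back and re-raise so the caller still sees the failure.

   begin
     insert into prova (cod) values (1);
     insert into prova (cod) values (2);  -- suppose this statement fails
     commit;
   exception
     when others then
       rollback;  -- undo all uncommitted work from this block
       raise;     -- re-raise so the error is not silently swallowed
   end;
   /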
Categories: DBA Blogs

Log_checkpoint_interval and timeout

Tom Kyte - Wed, 2017-02-22 01:46
Hi, It is rather confusing from documentation that the meaning of Log_checkpoint_interval and log_checkpoint_timeout between Oracle 8 and 8i versions. I believe that even though the definitions changed in 8i meaning is same,if so why did the...
Categories: DBA Blogs

OGG: Unable to lock file "/xxx/de000000" (error 11, Resource temporarily unavailable).

Yann Neuhaus - Wed, 2017-02-22 00:32

When you see the above message in the GoldenGate logfile there are usually orphan processes that prevent GoldenGate from locking the file (typically when your trail files are on NFS). In a case I had at a customer last week, this was not the cause. We confirmed that no other processes were sitting on the file by running fuser on the file on all nodes of the cluster (this was an 8-node Exadata). What we finally needed to do was:

cd [TRAIL_DIRECTORY]
mv de000000 de000000_bak
cp de000000_bak de000000
rm de000000

Once we did this we could start the extract again and GoldenGate was happy. Hope this helps …

 


Oracle Mobile Cloud Service 3.1 is available now!

Are you a digital business yet? Mobile is the center of Digital Transformation.  Oracle Mobile Cloud Service provides you with the power and the tools you need to develop a strategy for...

We share our skills to maximize your revenue!
Categories: DBA Blogs

PeopleTools Idea Pages

PeopleSoft Technology Blog - Tue, 2017-02-21 16:26

We're doing a lot to enhance PeopleTools, and we have many avenues for gathering requirements: focus groups, conferences, research, advisory boards, and so on.  One area that we would particularly like to promote is the PeopleSoft Idea pages.  There is an Idea page for PeopleTools specifically.  We monitor these pages regularly, so if you have suggestions for enhancements, regardless of size or complexity, please feel free to submit them here.  This is your chance to guide us in the direction of PeopleSoft technology.  In addition to submitting your own ideas, you can vote on the suggestions of others to give them more weight.  Take a look at these pages and submit your ideas and suggestions.  They are really for ideas about any area of PeopleSoft technology, so if you have ideas about areas beyond PeopleTools, like Enterprise Components or the Interaction Hub, you can submit them here as well.


Log Buffer #505: A Carnival of the Vanities for DBAs

Pythian Group - Tue, 2017-02-21 15:16

This Log Buffer Edition searches through various Oracle, SQL Server and MySQL blogs and picks a few contemporary posts.

Oracle:

Comma separated search and search with check-boxes in Oracle APEX

Once you have defined your users for your Express Cloud Service, all users with the role of Database Developer or higher can access the database Service Console.

Big Data Lite 4.7.0 is now available on OTN!

Install and configure Oracle HTTP Server Standalone

Can I Customize EBS on Oracle Cloud?

SQL Server:

vCenter Server fails to start, Purge and Shrink Vcenter SQL Database

Introducing a DevOps Workgroup to Farm Credit Services of America

Scaling out SSRS on SQL Server Standard Edition

Using AT TIME ZONE to fix an old report

How to import data to Azure SQL Data Warehouse using SSIS

MySQL:

MySQL Bug 72804 Workaround: “BINLOG statement can no longer be used to apply query events”

Sysadmin 101: Troubleshooting

Making MaxScale Dynamic: New in MariaDB MaxScale 2.1

Improving the Performance of MariaDB MaxScale

Group Replication: Shipped Too Early

Categories: DBA Blogs

12cR2: lockdown profiles and ORA-01219

Yann Neuhaus - Tue, 2017-02-21 14:40

When you cannot open a database, you will get some unhappy users. When you cannot open a multitenant database, the number of unhappy users is multiplied by the number of PDBs. I like to encounter problems in my lab before seeing them in production. Here is a case where I've lost a file. I don't care about the tablespace, but I would like to put it offline and at least be able to open the database.

ORA-01113

So, it’s my lab, I dropped a file while the database was down. The file belongs to a PDB but I cannot open the CDB:

SQL> startup
ORACLE instance started.
 
Total System Global Area 1577058304 bytes
Fixed Size 8793208 bytes
Variable Size 1124074376 bytes
Database Buffers 436207616 bytes
Redo Buffers 7983104 bytes
Database mounted.
ORA-01113: file 23 needs media recovery
ORA-01110: data file 23: '/tmp/STATSPACK.dbf'

Yes, this is a lab. I like to put datafiles in /tmp (lab only) and I was testing my Statspack scripts for an article to be published soon. I've removed the file and have no backup. I recommend doing nasty things on labs, because those things sometimes happen on production systems and it's better to be prepared. This recommendation assumes you cannot mistake your lab prompt for a production one, of course.

ORA-01157

The database is in mount. I cannot open it:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 23 - see DBWR trace file
ORA-01110: data file 23: '/tmp/STATSPACK.dbf'

This is annoying. I would like to deal with this datafile later and open the CDB. I accept that the PDB it belongs to (PDB1 here) cannot be opened, but I wish I could open the other ones quickly.

ORA-01219

Let’s go to the PDB and take the datafile offline:

SQL> alter session set container=pdb1;
Session altered.
 
SQL> alter database datafile 23 offline for drop;
alter database datafile 23 offline for drop
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01219: database or pluggable database not open: queries allowed on fixed tables or views only

This is quite annoying. I know that the database is not open. I know that the pluggable database is not open. I want to put a datafile offline, and this is an operation that concerns only the controlfile; there should be no need to have the database open. Actually, I need to put this datafile offline in order to open the CDB.

SQL_TRACE

This is annoying, but you know why Oracle is the best database system: troubleshooting. I have an error produced by recursive SQL (ORA-00604) and I want to know the SQL statement that raised this error:


SQL> alter session set sql_trace=true;
alter session set sql_trace=true;
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01219: database or pluggable database not open: queries allowed on fixed tables or views only

Oh yes, I forgot that I cannot issue any SQL statement. But you know why Oracle is the best database system: troubleshooting.


SQL> oradebug setmypid
Statement processed.
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12;
Statement processed.
 
SQL> alter database datafile 23 offline for drop;
alter database datafile 23 offline for drop
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01219: database or pluggable database not open: queries allowed on fixed tables or views only
 
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT OFF;
Statement processed.
SQL> oradebug TRACEFILE_NAME
/u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_ora_20258.trc

Here is the trace:

*** 2017-02-21T13:36:51.239026+01:00 (PDB1(3))
=====================
PARSING IN CURSOR #140359700679600 len=34 dep=0 uid=0 oct=35 lid=0 tim=198187306591 hv=3069536809 ad='7b8db148' sqlid='dn9z45avgauj9'
alter database datafile 12 offline
END OF STMT
PARSE #140359700679600:c=3000,e=71171,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=198187306590
WAIT #140359700679600: nam='PGA memory operation' ela= 30 p1=327680 p2=1 p3=0 obj#=-1 tim=198187307242
WAIT #140359700679600: nam='control file sequential read' ela= 14 file#=0 block#=1 blocks=1 obj#=-1 tim=198187307612
WAIT #140359700679600: nam='control file sequential read' ela= 13 file#=0 block#=16 blocks=1 obj#=-1 tim=198187307743
WAIT #140359700679600: nam='control file sequential read' ela= 6 file#=0 block#=18 blocks=1 obj#=-1 tim=198187307796
WAIT #140359700679600: nam='control file sequential read' ela= 9 file#=0 block#=1119 blocks=1 obj#=-1 tim=198187307832

This is expected. I’m in PDB1 (container id 3) and run my statement to put the datafile offline.
And then it switches to CDB$ROOT (container 1):

*** 2017-02-21T13:36:51.241022+01:00 (CDB$ROOT(1))
=====================
PARSING IN CURSOR #140359700655928 len=248 dep=1 uid=0 oct=3 lid=0 tim=198187308584 hv=1954812753 ad='7b67d9c8' sqlid='6qpmyqju884uj'
select ruletyp#, ruleval, status, ltime from lockdown_prof$ where prof#=:1 and level#=:2 order by ltime
END OF STMT
PARSE #140359700655928:c=2000,e=625,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=198187308583
=====================
PARSE ERROR #140359700655928:len=249 dep=1 uid=0 oct=3 lid=0 tim=198187308839 err=1219
select ruletyp#, ruleval, status, ltime from lockdown_prof$ where prof#=:1 and level#=:2 order by ltime
 
*** 2017-02-21T13:36:51.241872+01:00 (PDB1(3))
EXEC #140359700679600:c=4000,e=2684,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=198187309428
ERROR #140359700679600:err=604 tim=198187309511

I have a parse error when reading LOCKDOWN_PROF$ in the root container. It is a dictionary table stored in the SYSTEM tablespace. The CDB is not open, so the table is not accessible, hence the error message.

Then, I remember that I've set a lockdown profile at CDB level. It doesn't make sense for CDB$ROOT itself, but I've set it there so that it becomes the default for all newly created PDBs. Any statement that may be disabled by a lockdown profile has to read the lockdown profile rules stored in the root. And here I learn that this check occurs when parsing the DDL statement, not at execution time.

In my opinion this is a bug. Either setting pdb_lockdown at CDB level should not be allowed, or it shouldn't be checked when the CDB is closed, because in that state any DDL will fail. I'm not blocked by the lockdown profile itself here, only by the fact that the lockdown profile cannot be read.

pdb_lockdown

Now I know how to workaround the problem: unset the lockdown profile, offline my datafile, open the CDB, open the PDB, drop the tablespace.

SQL> alter system set pdb_lockdown='';
System altered.
SQL> alter session set container=pdb1;
Session altered.
SQL> alter database datafile 23 offline for drop;
Database altered.
SQL> alter session set container=cdb$root;
Session altered.
SQL> alter database open;

Lockdown profiles are a very nice feature, allowing fine-grained control over what users, even admin ones, can do in a PDB. But they are a new mechanism, leading to situations we have never seen before. Don't forget the power (and fun) of troubleshooting.

 


Flipkart and the Executive Revolving Door

Abhinav Agarwal - Tue, 2017-02-21 10:20
The contrast could not have been more striking, or poignant.
2017 began on a sombre note for Flipkart, when it announced on the 9th of Jan that Kalyan Krishnamurthy had been named CEO, and its current CEO Binny Bansal would become group CEO. It was the Indian e-commerce startup's third CEO in less than one year.
Three days later, on the 12th, Amazon let it be known via a press release that it intended "to grow its full-time U.S.-based workforce from 180,000 in 2016 to over 280,000 by mid-2018." To let that sink in: Amazon, already a company with 180,000 employees in the US, would add another hundred thousand full-time employees in eighteen months. The media was all over the story.

The battle for dominance of the Indian e-commerce market continues well into its third year. For all practical purposes this battle began in earnest only after Amazon entered India in 2013, and since then it has transformed into a brutal, no-holds-barred, fifteen-round slugfest between Flipkart and Amazon. Yes, there is SnapDeal, which is entering its end-game (there are talks of a merger between Paytm's marketplace and SnapDeal, and of senior-level exits amid rumours of a cash crunch); there is ShopClues, which has had to defer its IPO plans; and there is an e-commerce tragedy by the name of IndiaPlaza, which was among the earliest e-commerce entities, survived the dot-com bust of 2001, and yet folded up in a most ignominious manner. Ever since Amazon entered India in 2013, it has notched up one success after another against the Indian behemoth, Flipkart. Flipkart went from strength to strength when it came to valuations even as it reeled from one blow after another in the market. Flipkart's party finally entered its long-expected yet still-painful endgame in 2016. For Amazon the costs have been equally staggering - billions of dollars sunk into its Indian operations, promises of billions more to be spent, break-even years and years away, and almost every last penny of profits from its parent company being shoveled into its Indian outpost.

Running Circles Around The Revolving Door
[Image credit: http://www.thefiscaltimes.com]

People are a company's most valuable asset, or so companies say. In practice, companies more often than not think differently. More often than not, companies simply do not know who a good hire is, how an organizational culture of excellence is built upon the right employees, and that throwing money gets you expensive employees, not necessarily the right ones.

In Flipkart's case, the intent was certainly right. Shortly after it closed $1M in Series A funding in Oct 2009 and $10M in June 2010 (worth about ₹50 crores at then-prevailing exchange rates), it went after hiring talent in earnest. It was not a success, to put it mildly.

"Vasudha Mangalam came in from a technology company to eventually lead HR. Vipul Bathwal, a 2008 IIM Ahmedabad graduate, came on board to identify newer categories. Satyarth Priyedarshi, a former head of merchandising for the Borders bookstore chain in Dubai, was roped in to head buying and merchandising. Tapan Kumar Das, the erstwhile finance head at venture-funded salon chain YLG, joined as VP, finance, along with Anupama Sharma, a Stanford Business School graduate who would lead marketing. Within a year, all five quit." [Can Flipkart Deliver? - Forbes, Jul 6, 2012]

It is not as if the initial exodus of high-profile talent was an aberration. The spectacle of people hired into senior management positions leaving within a year or two, or being sidelined, was a regular feature in the theater that was Flipkart - Sanjay Baweja as CFO, Punit Soni as Chief Product Officer, former Myntra head Mukesh Bansal, Anand KV as head of Customer Experience, private-label head Mausam Bhatt, Sharat Singh as engineering head for its Digital Marketing Cloud, Rajinder Sharma as Legal head, Ankit Nagori as Chief Business Officer, Manish Maheshwari as head of seller marketplace, Joy Bandekar as corporate president, Saran Chatterjee as VP of Product Management, Anurag Dod, Sameer Nigam, and more - the average tenure of senior executives at Flipkart was estimated to be as low as 11 months.

What was worse was that there was no unanimity in the hiring of these senior executives. Take the case of Punit Soni, the high-profile hire Flipkart brought on board in 2015 at a salary of $1 million (more than 6 crore rupees at the then-prevailing exchange rates). He was hired when Sachin Bansal was the CEO. In Jan 2016, Binny Bansal took over as the CEO. Within three months, Punit Soni had left the company, and was replaced by Surojit Chatterjee as Senior Vice President of Product Management. Less than a year later, in Jan 2017, Kalyan Krishnamurthy became Flipkart's third CEO. This was followed by the exits of Surojit Chatterjee, Saikiran Krishnamurthy, head of Ekart, and Samardeep Subandh, chief marketing officer.

This seemed to suggest that senior executives' tenure was linked not to their performance but to the top man at Flipkart - whoever that may have been. It is not surprising for CEOs to want people they know and trust in key roles, but in the case of Flipkart, one would have expected any senior hire to have had the backing of both founders. Clearly, the exits proved that was not the case.

This steady exodus of senior executives should have set off alarm bells not only with the founders, but also at the board. Senior management exits at any company are not uncommon, but Flipkart was witnessing a flood of exits, a veritable revolving door that saw senior executives stay at the company for less than a year on average. Boards ignore such warning signs at their own peril. That neither the board nor the company's founders learned any lessons became clear by the parade of high-profile hires and high-profile exits that continued. A revolving door was an apt image and metaphor.

What could the founders and the board have done differently? Three things:
First, they should have recognized that they - the founders - may not necessarily have possessed the competence to make the right choices when hiring senior executives. The board should have brought in experienced heads to consult with the founders. The investors were on the board. They were the ones putting substantial amounts of money into a company that had not made a single paisa of profit in its existence. The investors had leverage, and a fiduciary responsibility to guide the founders.

Second, Flipkart should have asked whether talent imported from Silicon Valley and transplanted to India would work. Was Flipkart, flush with investor money, going for trophy hiring?

Third, and most importantly, Flipkart - its founders and the board - should have identified the four critical areas to focus in their hiring process:


Hire functional experts - every successful enterprise is built on a successful division and specialization of labour. Logistics, marketing, analytics, customer service, customer experience, and channel management are only some of the functional areas in which Flipkart needed to grow in a sustainable and efficient manner. As I will show in the next post, its hires came up short in almost every single one of these areas.

Add management structures - functional experts may or may not be the right people to also manage the structures that would be created as a result. This is where management structures have to mean more than simply adding more and more layers of management. Choosing between vertical organizational structures and loosely-coupled matrix structures is a decision that should not be taken lightly.

Build planning and forecasting capabilities - every large retail company, and Flipkart is a retail company if nothing else, has to be able to forecast demand very, very accurately. Based on this forecast, it has to procure goods and run a logistics operation that will deliver these items to the customer, once an order is placed, in the shortest possible time, at the lowest cost, and with the highest quality. Any number of wags will tell you that it is possible to optimize only two of these three parameters: time, cost, and quality. This is where the right person with the right experience can make the difference between mediocrity and success.
Amazon understood that when, in 1998, it hired Richard Dalzell, a vice president at Wal-Mart who became chief information officer at Amazon. Wal-Mart sued Amazon, and the two settled in 1999. Well, what goes around comes around: in 2016, Target hired Amazon Vice President of Operations Arthur Valdez and made him Executive Vice President, Chief Supply Chain and Logistics Officer! Did Flipkart understand the importance of the right hires? It is certainly debatable.

Spell out and reinforce the cultural values that will sustain the business - when Netflix CEO Reed Hastings (along with others) published a PowerPoint presentation outlining what the Netflix culture was and how its practices shaped and reinforced it, and posted it online, it was viewed millions of times, and Facebook COO Sheryl Sandberg said it "may well be the most important document ever to come out of the Valley."

As companies grow, the job of the founders becomes less one of quotidian management and more of overseeing and guiding the company's overall direction and adherence to a culture that reflects the values the founders want to imprint. Did Flipkart's hiring choices truly reflect the "voice" of the company?
Customers Foot The Bills!

Optics matter. Appearances matter. Brand image consultants will tell you a picture is worth a thousand words. As the founders of a high-profile start-up, both Binny Bansal and Sachin Bansal should have been aware of that. Flipkart's PR team should have been aware of that. Yet this image appeared in a Fortune magazine article in May 2016. What do you see below? The founders sitting in the boot of a Flipkart delivery van, surrounded by delivery boxes - their feet planted on top of customers' delivery boxes. The image was jarring: if customers are important to you, you do not plant your feet on top of their delivery boxes. What if a box holds a holy book that a customer ordered from Flipkart? As far as branding goes, this image was an utter and complete failure on the part of Flipkart - to have allowed whoever it was to talk them into this picture.
Flipkart founders Binny Bansal and Sachin Bansal
[image credit: Fortune India, http://fortuneindia.com/2016/may/flipkart-vs-amazon-1.4516]

Contrast this with how Amazon's CEO Jeff Bezos appeared on magazine covers - the second cover, from Business Week, shows him holding an open Amazon delivery box, almost reverentially. The picture conveyed a sense of respect for the customer. Binny Bansal had once remarked, "Our vision was always to be the Amazon of India." He should have known how much attention Amazon pays to its messaging. He clearly didn't.

 
Amazon CEO Jeff Bezos on the cover of Fortune and Business Week magazines.

I have written at length on this fascinating battle in the e-commerce space. When I read about and witnessed Flipkart's mobile-only obsession, I called it a dangerous distraction, not to mention a revenue chimera and a privacy nightmare. I warned that Flipkart was making a mistake, a big mistake, in taking its eye off the ball in competing against Amazon, using a cricket analogy that should have been familiar to the Indian founders. I gave some more free advice. I wrote about how hubris-driven million-dollar hires had resulted in billion-dollar erosions in valuations.

I first posted this on Medium on Feb 21, 2017.

© 2017, Abhinav Agarwal (अभिनव अग्रवाल). All rights reserved.

Backup Oracle Databases to AWS S3

Pythian Group - Tue, 2017-02-21 10:17

There are different options for backing up Oracle databases to the cloud, but using the Oracle Secure Backup module to take backups into AWS S3 is one of the most efficient methods in terms of cost and backup/restore performance.

In this post I will show you how to install, configure and use Oracle Secure Backup to take your Oracle database backups to AWS S3. This method can be used for Oracle database version 9.2 or higher.

In this example, database version is 12c and platform is Linux x86_64.

The Oracle Secure Backup module must be installed into the database Oracle Home. Using the installed libraries, you can then take backups via RMAN into AWS S3 the same way you back up to sbt_tape.

Requirements:

1- An AWS account and an IAM user with access to S3:

To set up backups to AWS you will require an AWS account and an IAM user with full access to AWS S3. During setup, the Access Keys and Secret Access Key of this IAM user will be used. There is no need to have access to the AWS Console.

You can use the AWS Free Tier for test purposes.

2- Oracle Secure Backup module for AWS:
You can download Oracle Secure Backup module for AWS from here

3- OTN account:
During installation you need to provide an OTN account.

4- Java 1.7 or higher:
Java 1.7 or higher must be installed on your server before you can proceed.

Installation:

1- Create Oracle Wallet Directory:

If Oracle Wallet directory does not exist, create one. This folder will be used to store AWS Access Keys and Secret Access Key.
Create this directory in $ORACLE_HOME/dbs/:


   $ cd $ORACLE_HOME/dbs/
   $ mkdir osbws_wallet

2- Download osbws_installer.zip from the link provided above and put it in your installation folder, in this example /mnt/stage/osb. Unzip the compressed file and you will have two files as shown below:


   $ pwd
   /mnt/stage/osb
   $ unzip osbws_installer.zip
   Archive:  osbws_installer.zip
     inflating: osbws_install.jar
     inflating: osbws_readme.txt
   $ ls
   osbws_installer.zip  osbws_install.jar  osbws_readme.txt

3- Install OSB Cloud Module for Amazon S3 into your Oracle Home:


   $ cd /mnt/stage/osb
   $ java -jar osbws_install.jar -AWSID XxXxX -AWSKey XxXxX -walletDir $ORACLE_HOME/dbs/osbws_wallet -libDir $ORACLE_HOME/lib -location ap-southeast-2 -awsEndPoint s3-ap-southeast-2.amazonaws.com -otnUser bakhshandeh@pythian.com -otnPass

Parameters that you will need to set for installation are as below:


  -AWSID:       AWS Access Key

  -AWSKey:      AWS Secret Access Key

  -walletDir:   Location where Backup Module will store AWS keys

  -libDir:      Location where Backup Module libraries will be installed

  -location:    This is the AWS S3 location where you want to put your backups.
                The value for this parameter must be a valid region from the Amazon regions list.
                In this example "ap-southeast-2", the region for "Asia Pacific (Sydney)", has been used.

  -awsEndPoint: This should be a valid end-point of the AWS region specified by the "location" parameter.
                In this example "s3-ap-southeast-2.amazonaws.com" has been used, which is one of the end-points in "Asia Pacific (Sydney)".

  -otnUser:     OTN Account

  -otnPass:     OTN Password

In my example I did not pass any value for the -otnPass parameter; this was the only workaround I found for the error noted below during my tests:


   Downloading Oracle Secure Backup Web Service Software Library from file osbws_linux64.zip.
   Error: Library download failed. Please check your network connection to Oracle Public Cloud.

When I encountered this error I could only fix the issue by passing no value for otnPass, but passing a real password might work for you.

Running Backup using RMAN:

Installation will create a file in $ORACLE_HOME/dbs which is usually named osbws<SID>.ora, and you need to use the full path of this file in your allocate channel command in RMAN.

In my example the SID is KAMRAN:


   $ cd $ORACLE_HOME/dbs
   $ pwd
   /apps/oracle/product/12.1.0.2/db_1/dbs
   $ ls -al osb*.ora
   -rw-r--r-- 1 oracle oinstall 194 Jan  5 11:31 osbwsKAMRAN.ora
   $

Content of this file is as below:


   $ cat osbwsKAMRAN.ora
   OSB_WS_HOST=http://s3-ap-southeast-2.amazonaws.com
   OSB_WS_LOCATION=ap-southeast-2
   OSB_WS_WALLET='location=file:/apps/oracle/product/12.1.0.2/db_1/dbs/osbws_wallet CREDENTIAL_ALIAS=gordon-s_aws'
   $

This file can be used for any other database in the same Oracle Home. For this reason, I renamed it to osbwsCONFIG.ora so that the name is generic and there is no dependency on any particular database.

mv osbwsKAMRAN.ora osbwsCONFIG.ora

I will use osbwsCONFIG.ora in RMAN channel settings.

Now you just need to allocate a channel for your backup/restore commands, using the above file, as below:


   allocate channel c1 device type sbt parms='SBT_LIBRARY=libosbws.so,SBT_PARMS=(OSB_WS_PFILE=/apps/oracle/product/12.1.0.2/db_1/dbs/osbwsCONFIG.ora)';

This is a complete example which shows the backup piece details and how they are located in the AWS S3 region you specified during installation:


   $ . oraenv
   KAMRAN
   $ rman target /

   Recovery Manager: Release 12.1.0.2.0 - Production on Wed Dec 28 11:21:48 2016

   Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

   connected to target database: KAMRAN (DBID=283560064)

   RMAN>run{
   2> allocate channel c1 device type sbt parms='SBT_LIBRARY=libosbws.so,SBT_PARMS=(OSB_WS_PFILE=/apps/oracle/product/12.1.0.2/db_1/dbs/osbwsCONFIG.ora)';
   3> backup datafile 1;
   4> }

   released channel: ORA_DISK_1
   allocated channel: c1
   channel c1: SID=374 instance=KAMRAN device type=SBT_TAPE
   channel c1: Oracle Secure Backup Web Services Library VER=3.16.11.11

   Starting backup at 28-DEC-16
   channel c1: starting full datafile backup set
   channel c1: specifying datafile(s) in backup set
   input datafile file number=00001 name=+DATA/KAMRAN/DATAFILE/system.258.887023011
   channel c1: starting piece 1 at 28-DEC-16
   channel c1: finished piece 1 at 28-DEC-16
   piece handle=09rojka7_1_1 tag=TAG20161228T112807 comment=API Version 2.0,MMS Version 3.16.11.11
   channel c1: backup set complete, elapsed time: 00:00:45
   channel c1: starting full datafile backup set
   channel c1: specifying datafile(s) in backup set
   including current control file in backup set
   including current SPFILE in backup set
   channel c1: starting piece 1 at 28-DEC-16
   channel c1: finished piece 1 at 28-DEC-16
   piece handle=0arojkbl_1_1 tag=TAG20161228T112807 comment=API Version 2.0,MMS Version 3.16.11.11
   channel c1: backup set complete, elapsed time: 00:00:07
   Finished backup at 28-DEC-16
   released channel: c1

   RMAN> list backup tag TAG20161228T112807;


   List of Backup Sets
   ===================


   BS Key  Type LV Size       Device Type Elapsed Time Completion Time
   ------- ---- -- ---------- ----------- ------------ ---------------
   9       Full    741.75M    SBT_TAPE    00:00:38     28-DEC-16
           BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: TAG20161228T112807
           Handle: 09rojka7_1_1   Media: s3-ap-southeast-2.amazonaws.com/oracle-data-gordonsm-ap1
     List of Datafiles in backup set 9
     File LV Type Ckp SCN    Ckp Time  Name
     ---- -- ---- ---------- --------- ----
     1       Full 58915843   28-DEC-16 +DATA/KAMRAN/DATAFILE/system.258.887023011

   BS Key  Type LV Size       Device Type Elapsed Time Completion Time
   ------- ---- -- ---------- ----------- ------------ ---------------
   10      Full    22.50M     SBT_TAPE    00:00:01     28-DEC-16
           BP Key: 10   Status: AVAILABLE  Compressed: NO  Tag: TAG20161228T112807
           Handle: 0arojkbl_1_1   Media: s3-ap-southeast-2.amazonaws.com/oracle-data-gordonsm-ap1
     SPFILE Included: Modification time: 26-DEC-16
     SPFILE db_unique_name: KAMRAN
     Control File Included: Ckp SCN: 58915865     Ckp time: 28-DEC-16

   RMAN>
Some performance statistics:

I used a 340G database for testing performance and tried full backups into AWS S3 using different numbers of channels.
First, I allocated two channels, and the backup to AWS completed in 48 minutes. I then tried four channels, and the backup completed in 27 minutes.

I expected that increasing the number of channels to eight would make the backup complete even faster. Surprisingly, with 8 channels the backup completed in 27 minutes (exactly the same result as with four channels).
So in my case, the optimum number of channels for taking backups to AWS S3 was four.

I should mention that the same database, when backed up to NFS disks using four channels, completed in 22 minutes, so a backup time of 27 minutes to AWS was acceptable.

Restore was even faster. I tried restore without recovering the database: a full restore of the same 340G database from AWS backups completed in 22 minutes, which again is acceptable.

Categories: DBA Blogs

Dataguard Oracle 12.2: Keeping Physical Standby Sessions Connected During Role Transition

Yann Neuhaus - Tue, 2017-02-21 09:13

As of Oracle Database 12c Release 2 (12.2.0.1), when a physical standby database is converted into a primary you have the option to keep any sessions connected to the physical standby, without disruption, during the switchover/failover. When the database is reopened as the primary, the suspended sessions resume their operations as if nothing had happened. If the database (or an individual PDB) is not opened in the primary role, the sessions will be terminated.
To enable this feature, the STANDBY_DB_PRESERVE_STATES initialization parameter on the standby side is used. This parameter can have the following values:
NONE — No sessions on the standby are retained during a switchover/failover.
SESSION or ALL — User sessions are retained during switchover/failover.
This parameter is only meaningful on a physical standby database that is open in real-time query mode, which requires the Active Data Guard option.
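
For reference, a sketch of enabling it on the standby (STANDBY_DB_PRESERVE_STATES is a static parameter, so scope=spfile and a restart of the standby are assumed):

SQL> alter system set standby_db_preserve_states='ALL' scope=spfile;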
In this blog we are going to demonstrate this new feature. First, we present our configuration:

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 32 seconds ago)
DGMGRL>

Now let’s connect to the standby ORCL_SITE1 and let’s note our session’s info (sid, serial#)
SQL>
select username,sid, serial# from v$session where sid=SYS_CONTEXT('USERENV','SID');
USERNAME SID SERIAL#
--------------- ---------- ----------
SYSTEM 65 2869


SQL> show parameter db_unique_name
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string ORCL_SITE1


SQL> select open_mode from v$database;
OPEN_MODE
--------------------
READ ONLY WITH APPLY

With the default value NONE for the parameter standby_db_preserve_states on ORCL_SITE1 let’s do a switchover to ORCL_SITE1.
SQL>
show parameter standby_db_preserve_states;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
standby_db_preserve_states string NONE
SQL>


DGMGRL> switchover to 'ORCL_SITE1';
Performing switchover NOW, please wait...
Operation requires a connection to database "ORCL_SITE1"
Connecting ...
Connected to "ORCL_SITE1"
Connected as SYSDBA.
New primary database "ORCL_SITE1" is opening...
Operation requires start up of instance "ORCL" on database "ORCL_SITE"
Starting instance "ORCL"...
ORACLE instance started.
Database mounted.
Database opened.
Connected to "ORCL_SITE"
Switchover succeeded, new primary is "ORCL_SITE1"

While the switchover is going on, let's start a query on ORCL_SITE1. As expected, we get an error and the session is disconnected:

SQL> select * from dba_objects;
select * from dba_objects
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 1915
Session ID: 65 Serial number: 2869
SQL>

Our new configuration now looks like this:

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE1 - Primary database
ORCL_SITE - Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 58 seconds ago)
DGMGRL>

Now let’s connect to the standby ORCL_SITE with the standby_db_preserve_states set to ALL

SQL> select username,sid, serial# from v$session where sid=SYS_CONTEXT('USERENV','SID');
USERNAME SID SERIAL#
--------------- ---------- ----------
SYSTEM 58 58847


SQL> show parameter db_unique_name
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string ORCL_SITE


SQL> show parameter standby_db_preserve_states
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
standby_db_preserve_states string ALL

Now let’s do a switchover back to SITE_ORCL and let’s monitor the connection.

DGMGRL> switchover to 'ORCL_SITE';
Performing switchover NOW, please wait...
Operation requires a connection to database "ORCL_SITE"
Connecting ...
Connected to "ORCL_SITE"
Connected as SYSDBA.
New primary database "ORCL_SITE" is opening...
Operation requires start up of instance "ORCL" on database "ORCL_SITE1"
Starting instance "ORCL"...
ORACLE instance started.
Database mounted.
Database opened.
Connected to "ORCL_SITE1"
Switchover succeeded, new primary is "ORCL_SITE"
DGMGRL>

As expected, after the switchover I see that my session is still connected, with the same SID and SERIAL#. Indeed, user sessions are retained: when the database is reopened as the primary, the suspended sessions resume their operations as if nothing had happened.

SQL> select username,sid, serial# from v$session where sid=SYS_CONTEXT('USERENV','SID');
USERNAME SID SERIAL#
--------------- ---------- ----------
SYSTEM 58 58847

Note that the documentation mentions that "Sessions that have long running queries or are using database links will not be retained regardless of the setting of this parameter".

 


Introducing Advanced Analytics Training from Rittman Mead!

Rittman Mead Consulting - Tue, 2017-02-21 09:00


Rittman Mead is proud to release our new training course: Advanced Analytics with Oracle's R Technologies.

Oracle has made significant investments in the R language with Oracle R, ROracle and Oracle R Enterprise. Using these tools, data scientists and business intelligence practitioners can work together more efficiently and can transition between their roles more easily.

Rittman Mead has developed a three-day course that tackles R's notoriously steep learning curve. It builds on Oracle professionals' existing skills to accelerate growth into R programming and data science.

What does the course include?

Day one is all about the R programming language, starting with a history and explanation of Oracle's R technologies. Hands-on coding begins right away, with practical labs comparing R's data types and data structures with those found in the Oracle Database. The day wraps up with R programming concepts like conditions and functions, providing a strong grasp of the fundamentals on the very first day.

Day two focuses on the analysis pipeline, from data acquisition to data visualization. You will use SQL and R to tidy and transform raw data into a structured format and then use visualization and basic statistics to gain insights.

Day three looks at statistical modeling—discussing linear models and the predictive modeling pipeline. We present pros and cons of different types of models and get hands-on with preprocessing, model tuning, cross-validation and interpreting model results.

Our course is a mixture of theory and practical exercises—ensuring that you'll understand the tools and know when to apply them.

Who should attend?

The course is suitable for Oracle practitioners having some experience with SQL and business intelligence. No previous knowledge of R is assumed or necessary.

Sounds great, where do I sign up?

Please view our UK & Europe or US training schedule for public courses. For any questions or queries, including on-site training requests, please contact Daniel Delgado (US) or Sam Jeremiah (UK & Europe).

Categories: BI & Warehousing
