Feed aggregator

GoldenGate Importance of Commenting

Michael Dinh - Fri, 2017-01-27 20:35

I was curious to determine the effect of modifying ./dirdat/aa to dirdat/aa in a GoldenGate parameter file.

While the two paths may look the same, they are not treated the same.

$ ll dirdat/*
-rw-r-----. 1 oracle oinstall    3719 Jan 27 14:15 dirdat/aa000000000
-rw-r-----. 1 oracle oinstall 8086335 Jan 27 18:16 dirdat/aa000000001

$ ll ./dirdat/*
-rw-r-----. 1 oracle oinstall    3719 Jan 27 14:15 ./dirdat/aa000000000
-rw-r-----. 1 oracle oinstall 8086335 Jan 27 18:16 ./dirdat/aa000000001

Here is what happens when the parameter is modified from ./dirdat/aa to dirdat/aa:

$ grep EXTTRAIL ggserr.log
2017-01-27 10:45:15  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): ADD EXTTRAIL ./dirdat/aa EXTRACT e_hawk, MEGABYTES 500.
2017-01-27 10:45:16  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): ADD EXTRACT p_hawk  EXTTRAILSOURCE ./dirdat/aa.
2017-01-27 18:18:26  ERROR   OGG-01044  Oracle GoldenGate Capture for Oracle, e_hawk.prm:  The trail 'dirdat/aa' is not assigned to extract 'E_HAWK'. 
Assign the trail to the extract with the command "ADD EXTTRAIL/RMTTRAIL dirdat/aa, EXTRACT E_HAWK".

How do you find out how the trail file was added?

GGSCI (arrow1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     ABENDED     E_HAWK      00:00:04      00:12:25
EXTRACT     STOPPED     P_HAWK      00:00:00      07:44:04


GGSCI (arrow1.localdomain) 2> info e*

EXTRACT    E_HAWK    Last Started 2017-01-27 18:18   Status ABENDED
Checkpoint Lag       00:00:04 (updated 00:12:27 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2017-01-27 18:16:52
                     SCN 0.3369629 (3369629)


GGSCI (arrow1.localdomain) 3> info e* showch

EXTRACT    E_HAWK    Last Started 2017-01-27 18:18   Status ABENDED
Checkpoint Lag       00:00:04 (updated 00:12:41 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2017-01-27 18:16:52
                     SCN 0.3369629 (3369629)


Current Checkpoint Detail:

Read Checkpoint #1

  Oracle Integrated Redo Log

  Startup Checkpoint (starting position in the data source):
    Timestamp: 2017-01-27 10:45:14.000000
    SCN: Not available

  Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
    Timestamp: 2017-01-27 18:16:52.000000
    SCN: 0.3369628 (3369628)

  Current Checkpoint (position of last record read in the data source):
    Timestamp: 2017-01-27 18:16:52.000000
    SCN: 0.3369629 (3369629)

Write Checkpoint #1

  GGS Log Trail

  Current Checkpoint (current write position):
    Sequence #: 1
    RBA: 8086335
    Timestamp: 2017-01-27 18:16:56.302658
    Extract Trail: ./dirdat/aa
    Seqno Length: 9
    Flip Seqno Length: No
    Trail Type: EXTTRAIL

Header:
  Version = 2
  Record Source = A
  Type = 13
  # Input Checkpoints = 1
  # Output Checkpoints = 1

File Information:
  Block Size = 2048
  Max Blocks = 100
  Record Length = 2048
  Current Offset = 0

Configuration:
  Data Source = 3
  Transaction Integrity = 1
  Task Type = 0

Status:
  Start Time = 2017-01-27 18:18:26
  Last Update Time = 2017-01-27 18:16:56
  Stop Status = A
  Last Result = 520



GGSCI (arrow1.localdomain) 4>

Alternatively, make this easier by commenting the parameter files:

$ head dirprm/e_hawk.prm
EXTRACT e_hawk
-- CHECKPARAMS
-- ADD EXTRACT e_hawk, INTEGRATED TRANLOG, BEGIN NOW
-- ADD EXTTRAIL ./dirdat/aa EXTRACT e_hawk, MEGABYTES 500
USERIDALIAS ggs_user
EXTTRAIL ./dirdat/aa
INCLUDE dirprm/global_ggenv.inc
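
With the history captured in comments, recovering from the path mismatch is straightforward: either keep the EXTTRAIL parameter spelled exactly as the trail was registered, or re-register the trail under the new spelling, which is what the OGG-01044 message itself suggests. A sketch of both options (adjust group and trail names to your environment):

-- Option 1: in dirprm/e_hawk.prm, revert to the registered path
EXTTRAIL ./dirdat/aa

-- Option 2: from GGSCI, assign the new spelling to the extract,
-- per the error message above
ADD EXTTRAIL dirdat/aa, EXTRACT E_HAWK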

Temporal tables with PostgreSQL

Yann Neuhaus - Fri, 2017-01-27 15:51

In this blog we are going to talk about a nice extension in PostgreSQL: temporal_tables. This extension provides support for temporal tables.
What is a temporal table? Just a table that tracks the period of validity of a row.
When implemented, this feature allows you to specify that old rows are archived into another table (called the history table). This can be useful for many purposes:
- Audit
- Comparison
- Checking the table state in the past
First we have to install the temporal_tables extension. We are going to use the pgxn client for this.
Install the yum repository for PostgreSQL

[root@pgserver1 ~]# rpm -ivh https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-oraclelinux96-9.6-3.noarch.rpm
Retrieving https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-oraclelinux96-9.6-3.noarch.rpm
warning: /var/tmp/rpm-tmp.3q9X12: Header V4 DSA/SHA1 Signature, key ID 442df0f8: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:pgdg-oraclelinux96-9.6-3         ################################# [100%]
[root@pgserver1 ~]#

Then we install the pgxn client:

[root@pgserver1 ~]# yum search pgxn
Loaded plugins: langpacks, ulninfo
pgdg96 | 4.1 kB 00:00:00
(1/2): pgdg96/7Server/x86_64/group_gz | 249 B 00:00:00
(2/2): pgdg96/7Server/x86_64/primary_db | 127 kB 00:00:00
==================================================== N/S matched: pgxn =====================================================
pgxnclient.x86_64 : Command line tool designed to interact with the PostgreSQL Extension Network
Name and summary matches only, use "search all" for everything.


[root@pgserver1 ~]# yum install pgxnclient.x86_64
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgxnclient.x86_64 0:1.2.1-2.rhel7 will be installed
....
....
Installed:
pgxnclient.x86_64 0:1.2.1-2.rhel7
Complete!
[root@pgserver1 ~]#

And finally we can install the extension

[root@pgserver1 ~]# pgxn install temporal_tables --pg_config=/u01/app/PostgreSQL/9.6/bin/pg_config
INFO: best version: temporal_tables 1.1.1
INFO: saving /tmp/tmpJit39m/temporal_tables-1.1.1.zip
INFO: unpacking: /tmp/tmpJit39m/temporal_tables-1.1.1.zip
INFO: building extension
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -I. -I./ -I/u01/app/PostgreSQL/9.6/include/postgresql/server -I/u01/app/PostgreSQL/9.6/include/postgresql/internal -D_GNU_SOURCE -I/opt/local/Current/include/libxml2 -I/opt/local/Current/include -c -o temporal_tables.o temporal_tables.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -I. -I./ -I/u01/app/PostgreSQL/9.6/include/postgresql/server -I/u01/app/PostgreSQL/9.6/include/postgresql/internal -D_GNU_SOURCE -I/opt/local/Current/include/libxml2 -I/opt/local/Current/include -c -o versioning.o versioning.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -shared -o temporal_tables.so temporal_tables.o versioning.o -L/u01/app/PostgreSQL/9.6/lib -L/opt/local/Current/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/PostgreSQL/9.6/lib',--enable-new-dtags
INFO: installing extension
/bin/mkdir -p '/u01/app/PostgreSQL/9.6/lib/postgresql'
/bin/mkdir -p '/u01/app/PostgreSQL/9.6/share/postgresql/extension'
/bin/mkdir -p '/u01/app/PostgreSQL/9.6/share/postgresql/extension'
/bin/mkdir -p '/u01/app/PostgreSQL/9.6/doc/postgresql/extension'
/usr/bin/install -c -m 755 temporal_tables.so '/u01/app/PostgreSQL/9.6/lib/postgresql/temporal_tables.so'
/usr/bin/install -c -m 644 .//temporal_tables.control '/u01/app/PostgreSQL/9.6/share/postgresql/extension/'
/usr/bin/install -c -m 644 .//temporal_tables--1.1.1.sql .//temporal_tables--1.0.0--1.0.1.sql .//temporal_tables--1.0.1--1.0.2.sql .//temporal_tables--1.0.2--1.1.0.sql .//temporal_tables--1.1.0--1.1.1.sql '/u01/app/PostgreSQL/9.6/share/postgresql/extension/'
/usr/bin/install -c -m 644 .//README.md '/u01/app/PostgreSQL/9.6/doc/postgresql/extension/'
[root@pgserver1 ~]#

Once the installation is done, we can load the extension into our database.

[postgres@pgserver1 extension]$ psql
Password:
psql.bin (9.6.1)
Type "help" for help.
postgres=# CREATE EXTENSION temporal_tables;
CREATE EXTENSION
postgres=#

We can then verify that the temporal_tables extension is now present in our database.

postgres=# \dx
List of installed extensions
Name | Version | Schema | Description
-----------------+---------+------------+-----------------------------------------
adminpack | 1.0 | pg_catalog | administrative functions for PostgreSQL
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
temporal_tables | 1.1.1 | public | temporal tables
(3 rows)

postgres=# \dx+ temporal_tables
Objects in extension "temporal_tables"
Object Description
----------------------------------------------------
function set_system_time(timestamp with time zone)
function versioning()
(2 rows)
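
Note the set_system_time function listed above: as I understand the extension, it lets you override the timestamp that versioning() records in sys_period, which is handy for reproducible tests, and passing NULL reverts to the current transaction time (verify against the extension's README). A quick sketch:

-- Pin the recorded system time for testing
SELECT set_system_time('2017-01-26 10:00:00+01');
-- Revert to the real transaction time
SELECT set_system_time(NULL);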

For the demonstration, we create the following Customers table:

CREATE TABLE Customers (
CustNo SERIAL NOT NULL,
CustName VARCHAR(30) NOT NULL,
start_date timestamp NOT NULL DEFAULT now(),
PRIMARY KEY (CustNo)
);

In order to make this a system-period temporal table, we first add a system period column:

postgres=# ALTER TABLE Customers ADD COLUMN sys_period tstzrange NOT NULL;
ALTER TABLE

Then we need a history table to contain the archived rows of our table. The easiest way to create it is with the LIKE clause:

postgres=# CREATE TABLE Customers_history (LIKE Customers);
CREATE TABLE

Finally, we create a trigger on our table to link it with the history table:

postgres=# CREATE TRIGGER customers_hist_trigger BEFORE INSERT OR UPDATE OR DELETE ON Customers FOR EACH ROW
EXECUTE PROCEDURE versioning('sys_period', 'Customers_history', true);
CREATE TRIGGER
postgres=#

Now let’s insert data into customers:

insert into customers (custname,start_date) values ('HP','2013-08-05 00:00:00');
insert into customers (custname,start_date) values ('IBM','2014-10-10 00:00:00');
insert into customers (custname,start_date) values ('DBI','2017-01-07 00:00:00');
insert into customers (custname) values ('DHL');

We can see the rows below in customers.
For example, the row for IBM was inserted on 2017-01-26 10:48:49. This information is stored in the sys_period column and represents the start of the row's validity. Note the bounds [,): the lower bound [ is inclusive, while the upper bound ) is exclusive.
For IBM, ["2017-01-26 10:48:49.768031+01",) means:
Start of validity: 2017-01-26 10:48:49.
End of validity: infinity (because the upper bound is empty).

postgres=# table customers;
custno | custname | start_date | sys_period
--------+----------+----------------------------+------------------------------------
1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01",)
4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01",)
(4 rows)

The table Customers_history is empty. This is normal: no updates or deletes have been done yet; we have only inserted rows.

postgres=# table customers_history;
custno | custname | start_date | sys_period
--------+----------+------------+------------
(0 rows)
postgres=#

Let’s do an update on customers, but first let’s display the current time.
postgres=# select now();
now
-------------------------------
2017-01-26 11:02:32.381634+01
(1 row)


postgres=# update customers set custname='HPSuisse' where custno=4;
UPDATE 1
postgres=#

Checking the customers table again, we can see that the validity of the row for HPSuisse starts at 2017-01-26 11:02:46.

postgres=# table customers;
custno | custname | start_date | sys_period
--------+----------+----------------------------+------------------------------------
1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01",)
4 | HPSuisse | 2013-08-05 00:00:00 | ["2017-01-26 11:02:46.347574+01",)
(4 rows)

If we now query the table customers_history, we can see the old version of the updated row, along with its period of validity.

postgres=# table customers_history;
custno | custname | start_date | sys_period
--------+----------+---------------------+-------------------------------------------------------------------
4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01","2017-01-26 11:02:46.347574+01")

Let’s do a delete on the table customers

postgres=# select now();
now
-------------------------------
2017-01-26 11:32:12.229105+01
(1 row)


postgres=# delete from customers where custno=3;
DELETE 1

Below are the remaining rows in table customers:

postgres=# table customers;
custno | custname | start_date | sys_period
--------+----------+---------------------+------------------------------------
1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
4 | HPSuisse | 2013-08-05 00:00:00 | ["2017-01-26 11:02:46.347574+01",)
(3 rows)

And in the history table, we can see a new row with its period of validity.

postgres=# table customers_history;
custno | custname | start_date | sys_period
--------+----------+----------------------------+-------------------------------------------------------------------
4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01","2017-01-26 11:02:46.347574+01")
3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01","2017-01-26 11:32:15.370438+01")
(2 rows)
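
With both tables populated, we can reconstruct the state of customers at a past point in time, one of the purposes listed at the beginning. Here is a sketch using the range containment operator @>; as of 11:00, before the update and the delete, all four original rows should come back:

SELECT custno, custname, start_date
FROM customers
WHERE sys_period @> '2017-01-26 11:00:00+01'::timestamptz
UNION ALL
SELECT custno, custname, start_date
FROM customers_history
WHERE sys_period @> '2017-01-26 11:00:00+01'::timestamptz;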

Conclusion
In this blog we saw how temporal tables can be implemented in PostgreSQL using the temporal_tables extension. This feature can help with auditing, archiving, and more.
The history table can even be moved to cheaper storage.


The post Temporal tables with PostgreSQL appeared first on Blog dbi services.

Oracle Database 11.2.0.4 and 12.1.0.2 New CPU End Dates

With the upcoming on-premise release of Oracle Database 12.2.0.1, Oracle has updated the Critical Patch Update (CPU) security patch end dates for 11.2.0.4 and 12.1.0.2.  Currently (as of January 2017), only 11.2.0.4 and 12.1.0.2 are supported for CPUs.

The CPU end dates, which correspond with the end of Extended Support, have been extended to October 2020 for 11.2.0.4 and July 2021 for 12.1.0.2.  The first year of Extended Support is free: until December 2018 for 11.2.0.4 and until July 2019 for 12.1.0.2.

All Oracle databases should be updated to either 11.2.0.4 or 12.1.0.2, which provides at least three years of CPU support.  To ensure database security and minimize Oracle support costs, organizations should plan to upgrade 11.2.0.4 and 12.1.0.2 databases in 2018 and move to 12.2 at that time.  All new databases should be 12.1.0.2, with production use of 12.2 beginning in late 2017 or with the release of 12.2.0.2 in early 2018.

For databases that are not currently upgraded to 11.2.0.4 or 12.1.0.2, you must mitigate the risk of not applying security patches as there are at least 27 moderate to high risk unpatched security vulnerabilities in unsupported versions.  A number of these vulnerabilities allow any user, even with only CREATE SESSION, to compromise the entire database.  At a minimum, you must harden the database, limit network access as much as possible, review access and privileges, and enable auditing and monitoring in order to potentially identify attacks and compromises.
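
As a starting point for the review and auditing steps above, here are a couple of illustrative statements (a sketch only; tailor the privilege list and audit policy to your own environment):

-- Review which accounts hold powerful system privileges
SELECT grantee, privilege
FROM dba_sys_privs
WHERE privilege IN ('GRANT ANY PRIVILEGE', 'ALTER SYSTEM', 'CREATE ANY PROCEDURE');

-- Enable basic logon auditing (traditional auditing)
AUDIT SESSION;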

See MOS Support Note 742060.1 for more information on Oracle Database version support.

Oracle Database, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

ORA-00942 table or view does not exist Solution

Complete IT Professional - Fri, 2017-01-27 05:00
Have you gotten an ORA-00942 error? I’ll explain the cause and the solution of the error in this article.
ORA-00942 Cause
The error message appears when you try to run an SQL statement:
ORA-00942: table or view does not exist
This happens for one of many reasons: the statement references a table or view that […]
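
A minimal reproduction (hypothetical table name):

-- Querying a table that does not exist, or that the current user cannot see
SELECT * FROM no_such_table;
-- ORA-00942: table or view does not exist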
Categories: Development

measuring the performance between 9i and 12c

Tom Kyte - Fri, 2017-01-27 03:06
Dears, We have been upgrading from 9i to 12c and trying to ensure 12c performs better than 9i as part of the upgrade process, so we selected 20 different queries covering multiple tables, joins, functions, and functionalities, etc. We ...
Categories: DBA Blogs

Looping for Nested dimensional objects in an array

Tom Kyte - Fri, 2017-01-27 03:06
I have CLIENT_ORDER_OBJ as OBJECT (order_id number, Order_type varchar2(25)); CLIENT_CITY_OBJ as OBJECT (city VARCHAR2(25)) TEST_OBJ is OBJECT with below attributes (client_id VARCHAR2(25), client_name ...
Categories: DBA Blogs

execution of stored procedure

Tom Kyte - Fri, 2017-01-27 03:06
Dear Sir, Q 1 > I want to know how much time it takes to completely execute a stored procedure or function, along with its resource consumption. Q 2 > Can I see the execution path of the SQL statements written in the stored procedure? Q 3 > how we can ex...
Categories: DBA Blogs

insufficient privileges on SYS.DBMS_SESSION

Tom Kyte - Fri, 2017-01-27 03:06
I have this query. select count(*) from TABLE (fn_report_tin_con( 'profession_cd' /**P*/, 'organization_type_cd' /**P*/, 'CMM0016' /**P*/, '20161201' /**P*/, '20170126' /**P*/, '' /**P*/, '' /**P*/)); Error is: ORA-01031: insufficient pr...
Categories: DBA Blogs

SQL%ROWCOUNT in Java

Tom Kyte - Fri, 2017-01-27 03:06
Hi, This may sound Java question but I have to ask this. I want to know if there is a way to access sql%rowcount in Java to get no. of rows affected. My DML statements are native SQL calls from Java and want to know no. of rows affected by DML. ...
Categories: DBA Blogs

Error in execution of job that fills data from a database

Tom Kyte - Fri, 2017-01-27 03:06
I have a problem with a job execution in our routine application. This job fills data from a database. The job name is #P1CPC02; it contains the following code: AGENT BCPORA17 USER root SCRIPTNAME - /xcom_rep/CREP/carga/bin/cargaNormal.sh ...
Categories: DBA Blogs

ORA-30927: Unable to complete execution due to failure in temporary table transformation - When using WITH + UNION

Tom Kyte - Fri, 2017-01-27 03:06
Basically, when I run the below query or any like it (actually pulling data) I get the ORA-30927 error. I can run this query without the UNION and it will not give this error; however, trying to run multiple SELECTs with UNION (in order to displa...
Categories: DBA Blogs

Bulk collect into multiple collections

Tom Kyte - Fri, 2017-01-27 03:06
I need to populate two collection as output from a stored procedure. The only difference between the two is the filter used to select the data. Our current method requires two "select x bulk collect into y from z where filter" statements (as illust...
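
One single-scan alternative is a plain cursor loop that routes each row to the matching collection; whether it beats two bulk collects depends on data volumes. A sketch against a hypothetical orders(id, status) table:

DECLARE
  TYPE t_ids IS TABLE OF NUMBER;
  l_open   t_ids := t_ids();
  l_closed t_ids := t_ids();
BEGIN
  -- One pass over the table instead of two filtered bulk collects
  FOR r IN (SELECT id, status FROM orders WHERE status IN ('OPEN', 'CLOSED')) LOOP
    IF r.status = 'OPEN' THEN
      l_open.EXTEND;
      l_open(l_open.COUNT) := r.id;
    ELSE
      l_closed.EXTEND;
      l_closed(l_closed.COUNT) := r.id;
    END IF;
  END LOOP;
END;
/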
Categories: DBA Blogs

Is EBS Compatible with Microsoft Windows' New Supersedence Approach?

Steven Chan - Fri, 2017-01-27 02:05

Microsoft recently announced a change in how they handle the supersedence behaviour for several Windows releases prior to Windows 10.  Affected releases are:

  • Windows 7
  • Windows 8.1
  • Windows Server 2008 R2
  • Windows Server 2012
  • Windows Server 2012 R2

Windows Supersedence

Microsoft states that the new supersedence behaviour allows organisations managing updates via WSUS or Configuration Manager to:

  • Selectively install Security Only Quality Updates (bundled by month) at any time
  • Periodically deploy the Security Monthly Quality Rollup and only deploy the Security Only Quality Updates released since then, and
  • More easily monitor software update compliance using Configuration Manager or WSUS.

Does this affect EBS certifications?

No.  This new behaviour has no impact on EBS certifications with Windows desktop clients.  All current and future EBS desktop client certifications are expected to work with this new supersedence model for Windows security updates.

EBS customers do not have to wait for new certification announcements before rolling out security updates under this new supersedence model to their end-user desktop clients.

Categories: APPS Blogs

Links for 2017-01-26 [del.icio.us]

Categories: DBA Blogs

DBaaS Performance

Jonathan Lewis - Fri, 2017-01-27 01:58

I don’t know how I missed it, but Randolf Geist has been writing a series of posts on the performance of Oracle’s DBaaS offering, using a series of long-running tests to capture not only raw performance figures but also an indication of consistency. You can find all of these tests with a search URL on his blog, but I’ve also created a little index here to make it easier for me to access them in order.

Oracle Database Cloud (DBaaS) Performance Consistency
Oracle Database Cloud (DBaaS) Performance

… to be continued (I hope).

h/t to Connor McDonald for the tweet that took me back to Randolf’s blog.



Flipkart: Million-Dollar Hiring Mistakes

Abhinav Agarwal - Thu, 2017-01-26 23:50
Flipkart: Million-Dollar Hiring Mistakes Translate Into Billion-Dollar Valuation Erosions

As the week drew to a close, a story that broke headlines in the world of Indian e-commerce was the departure of Flipkart’s Chief Product Officer, Punit Soni. Rumours had started swirling about Punit Soni’s impending exit since the beginning of the year (link), almost immediately after Binny Bansal had taken over from Sachin Bansal as Flipkart’s CEO (link).

Punit Soni was among a clutch of high-profile hires made by Flipkart in 2015, rumoured to have been paid a million-dollar salary (amounting to 6.2 crores at then-prevailing currency exchange rates; see this and this). This was in addition to any stock options he and other similar high-profile hires earned.
One decision that Punit Soni was most closely associated with was the neutering of Flipkart’s mobile-web execution, where he killed Flipkart’s mobile site, forcing users to download the app on smartphones. The mobile app itself was poorly designed, had a mostly unusable interface, and was riddled with bugs to the point of crashing every few minutes. I had written in detail on its mobile app’s state in 2015 (see this article in dna, or from my blog). At the time I had expressed my astonishment that Myntra, the fashion e-tailer that Flipkart acquired and which had gone app-only, had a mobile app that was NOT optimized for the iPad. The same was the story with the Flipkart app — no iPad-optimized app, but a “universal” app that ran on both the iPhone and iPad devices. Even today, the Flipkart iPad app does not support landscape-mode orientation, even as Amazon’s iPad app has grown from strength to strength.

A statement made by Punit Soni in 2015 revealed a disturbing focus on technology instead of the customer experience: “The Mindshare in the Company Is Going to Be App Only” (link), a case of techno-solutionism if you will. At one point, there were strong rumours of Flipkart going app-only (link), killing off its desktop website completely. I had written on this mobile-only obsession (Mobile advertising and how the numbers game can be misleading, Mobile Apps: There’s Something (Profitable) About Your Privacy).
Whether hiring Punit Soni was a million-dollar mistake, or there was simply a mismatch of expectations between employee and employer, or Punit Soni’s exit was the inevitable consequence of the favoured falling out of favour with the ascension of a new emperor, it does not appear as if Flipkart has learned any lessons. His replacement is said to be yet another ex-Googler, Surojit Chatterjee.

Whether Surojit will fare any better than his predecessor is best left to time or tea-leaf readers, but this hire does exemplify the curse of VC money in more ways than one. First, free money leads to the hubris of mistaking outlay for outcomes: splurging a million dollars on a paycheck in the hope of buying success in the e-commerce battles. Second, VCs pay the piper (Flipkart is nowhere close to being profitable), and therefore they decide the tune. If VCs want an executive from a marquee company like Google, Flipkart’s founders may well have no say in the matter. Third, in the closed network of venture funding and Silicon Valley, the you-scratch-my-back club ensures lucrative job mobility for professionals and VCs alike.

Costly though million-dollar hiring mistakes can be, they can translate into even bigger billion-dollar erosions in valuations, as Flipkart found out when Morgan Stanley Institutional Fund Trust Mid Cap Growth Portfolio, Fidelity Rutland Square Trust Strategic Advisers Growth Fund, and Variable Annuity Life Insurance Co.’s Valic Company I Mid Cap Strategic Growth Fund marked down the value of their Flipkart holdings by 23%, 23%, and 11% respectively (Flipkart Valuation Cuts Spark Concern for India’s Billion Dollar Startups — WSJ).

Is Flipkart listening? In its battle with Amazon, it cannot afford to ignore the Whispering Death.

I first published this post on Medium on Apr 15, 2016.

©2017, Abhinav Agarwal. All rights reserved.

Weekly Link Roundup – Jan 27, 2017

Complete IT Professional - Thu, 2017-01-26 17:57
This week I’ve read a few interesting articles on Oracle and I thought I’d share them here. RI (Referential Integrity) Constraints: 3 Reasons to Include Them in Your Data Warehouse Kent Graziano from The Data Warrior (and Snowflake) wrote an interesting article on using referential integrity constraints inside a data warehouse. I haven’t really considered […]
Categories: Development

Uncommonly Common

Dylan's BI Notes - Thu, 2017-01-26 17:41
An interesting concept. Significant Terms Aggregation – Elastic Search
Categories: BI & Warehousing

get row count from all the tables from different schemas and store in materialized view

Tom Kyte - Thu, 2017-01-26 08:46
Daily activity is to fetch the row count from all the tables in different schemas using the below query. But the issue is that it takes too much time [around 50 mins to fetch 50000 rows]. SELECT b.source_name, a.table_name, alh.A_ETL_LOAD_SET_KEY, to_numbe...
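
If an approximate answer is acceptable, a commonly used shortcut is to read optimizer statistics instead of counting (a sketch; num_rows is only as fresh as the last statistics gather, and the schema names are placeholders):

SELECT owner, table_name, num_rows, last_analyzed
FROM dba_tables
WHERE owner IN ('SCHEMA1', 'SCHEMA2')
ORDER BY owner, table_name;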
Categories: DBA Blogs

Procedure calling

Tom Kyte - Thu, 2017-01-26 08:46
Hello Tom, I have to call a procedure inside a procedure more than 35k times. Can I do this using a normal loop, or would a bulk collect/FORALL approach be better? Thanks in advance.
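
A note on the two approaches: FORALL batches a single DML statement, so it cannot drive procedure calls directly; a plain loop is the straightforward way to make the 35k calls. A sketch with a stand-in procedure:

DECLARE
  PROCEDURE my_proc(p_id NUMBER) IS  -- stand-in for the real procedure
  BEGIN
    NULL;
  END;
BEGIN
  FOR i IN 1 .. 35000 LOOP
    my_proc(i);
  END LOOP;
END;
/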
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator