Feed aggregator

Flipkart: Million-Dollar Hiring Mistakes

Abhinav Agarwal - Thu, 2017-01-26 23:50
Flipkart: Million-Dollar Hiring Mistakes Translate Into Billion-Dollar Valuation Erosions

As the week drew to a close, the story that dominated headlines in the world of Indian e-commerce was the departure of Flipkart's Chief Product Officer, Punit Soni. Rumours of Punit Soni's impending exit had been swirling since the beginning of the year (link), almost immediately after Mukesh Bansal had taken over from Binny Bansal as Flipkart's CEO (link).

Punit Soni was among a clutch of high-profile hires made by Flipkart in 2015, rumoured to have been paid a million-dollar salary (amounting to ₹6.2 crore at then-prevailing exchange rates — see this and this). This was in addition to any stock options he and other similar high-profile hires earned.
The decision Punit Soni was most closely associated with was the neutering of Flipkart's mobile-web presence: he killed Flipkart's mobile site, forcing smartphone users to download the app. The mobile app itself was poorly designed, had a mostly unusable interface, and was riddled with bugs to the point of crashing every few minutes. I had written in detail about the state of its mobile app in 2015 (see this article in dna, or the version on my blog). At the time I had expressed my astonishment that Myntra, the fashion e-tailer that Flipkart had acquired and which had gone app-only, had a mobile app that was NOT optimized for the iPad. The story was the same with the Flipkart app — no iPad-optimized app, but a “universal” app that ran on both iPhone and iPad devices. Even today, the Flipkart iPad app does not support landscape orientation, even as Amazon’s iPad app has gone from strength to strength.

A statement made by Punit Soni in 2015 revealed a disturbing focus on technology rather than the customer experience — “The Mindshare in the Company Is Going to Be App Only” (link) — a case of techno-solutionism, if you will. At one point, there were strong rumours of Flipkart going app-only (link) — killing off its desktop website completely. I had written on this mobile-only obsession (Mobile advertising and how the numbers game can be misleading; Mobile Apps: There’s Something (Profitable) About Your Privacy).
Whether hiring Punit Soni was a million-dollar mistake, whether there was simply a mismatch of expectations between employee and employer, or whether Punit Soni’s exit was the inevitable consequence of the favoured falling out of favour with the ascension of a new emperor, it does not appear as if Flipkart has learned any lessons. His replacement is said to be yet another ex-Googler, Surojit Chatterjee.

Whether Surojit will fare any better than his predecessor is best left to time or tea-leaf readers; this hire, however, does exemplify the curse of VC money in more ways than one. First, free money leads to the hubris of mistaking outlay for outcomes — splurging a million dollars on a paycheck and expecting it to buy success in the e-commerce battles. Second, VCs pay the piper (Flipkart is nowhere close to being profitable), and therefore they call the tune. If VCs want an executive from a marquee company like Google, Flipkart’s founders may well have no say in the matter. Third, in the closed network of venture funding and Silicon Valley, the you-scratch-my-back club ensures lucrative job mobility for professionals and VCs alike.

Costly though million-dollar hiring mistakes can be, they can translate into even bigger billion-dollar erosion in valuations, as Flipkart would have found out, when Morgan Stanley Institutional Fund Trust Mid Cap Growth Portfolio, Fidelity Rutland Square Trust Strategic Advisers Growth Fund, and Variable Annuity Life Insurance Co.’s Valic Company I Mid Cap Strategic Growth Fund marked down the value of their Flipkart holdings by 23%, 23%, and 11% respectively ( Flipkart Valuation Cuts Spark Concern for India’s Billion Dollar Startups — WSJ).

Is Flipkart listening? In its battle with Amazon, it cannot afford to ignore the Whispering Death.

Related Links:
I first published this post on Medium on Apr 15, 2016.

©2017, Abhinav Agarwal. All rights reserved.

Weekly Link Roundup – Jan 27, 2017

Complete IT Professional - Thu, 2017-01-26 17:57
This week I’ve read a few interesting articles on Oracle and I thought I’d share them here. RI (Referential Integrity) Constraints: 3 Reasons to Include Them in Your Data Warehouse Kent Graziano from The Data Warrior (and Snowflake) wrote an interesting article on using referential integrity constraints inside a data warehouse. I haven’t really considered […]
Categories: Development

Uncommonly Common

Dylan's BI Notes - Thu, 2017-01-26 17:41
An interesting concept. Significant Terms Aggregation – Elastic Search
Categories: BI & Warehousing

get row count from all the tables from different schemas and store in materialized view

Tom Kyte - Thu, 2017-01-26 08:46
Daily activity is to fetch row count from all the tables in different schemas using below query. But issue is its taking too much time [around 50 mins to fetch 50000 rows]. SELECT b.source_name, a.table_name, alh.A_ETL_LOAD_SET_KEY, to_numbe...
Categories: DBA Blogs

Procedure calling

Tom Kyte - Thu, 2017-01-26 08:46
Hello Tom, I have to call a procedure inside a procedure more than 35k times. Can I do this using a normal loop or bulk collect-forall will be better approach? Thanks in advance.
Categories: DBA Blogs

Kill session

Tom Kyte - Thu, 2017-01-26 08:46
I executed a stored procedure in Oracle 11g using utl_stmp that had a loop that was not properly ended. Needless to say, I received thousands of emails. I requested my DBA to kill the session so that I can stop receiving emails. After the session was...
Categories: DBA Blogs

Schema Creator Name

Tom Kyte - Thu, 2017-01-26 08:46
Hello Tom, Is there a way to know the schema/User creator in Oracle?
Categories: DBA Blogs

the best way to count distinct tuples

Tom Kyte - Thu, 2017-01-26 08:46
I was always afraid to ask, but probably you can tell me: what is the best way to count distinct tuples ? Why does just <code> select count(distinct a, b) from ( select 1 a, 1 b, 1 c from dual union all select 1 a, 1 b, 2 c from dual union...
Categories: DBA Blogs

Index fragmentation - REBUILD Vs SHRINK SPACE

Tom Kyte - Thu, 2017-01-26 08:46
Hi Chris/Connor, We have gather list of tables/Index along with Allocated space, Used space and %fragmentation. Could you please help to how do analysis on Indexes e.g. based on allocated/used space which index we may need to REBUILD or SHRINK. ...
Categories: DBA Blogs

Oracle Streams - Hold/Intercept changes

Tom Kyte - Thu, 2017-01-26 08:46
Hi, We have two databases configured with One-Way Oracle Streams replication. <b>We need to intercept the data changes before they are applied, and apply these changes somewhere in the future using our business logic (keeping the original order...
Categories: DBA Blogs

Query to display Master - Detail Output in Separate Lines

Tom Kyte - Thu, 2017-01-26 08:46
Hi, Good Day ! In my current requirement, I have to display the master details relation in separate rows. The Header should show only the master record while detail should show the detail record only. I written a query using join & union all a...
Categories: DBA Blogs

Part 2 – vagrant up – get your Oracle infrastructure up and running

Yann Neuhaus - Thu, 2017-01-26 08:31

Last week, in the first part of this blog, we saw a short introduction to setting up an Oracle infrastructure with Vagrant and Ansible. Remember, all the files for this example are available here: https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
Get the example code:

git clone https://github.com/nkadbi/oracle-db-12c-vagrant-ansible

If you have prepared your environment – with Ansible, Vagrant and Oracle VirtualBox installed – and provided the Oracle software zip files,
then you can start building your test infrastructure with the simple call:
vagrant up
Cleanup is also easy – stop the Vagrant machines and delete all traces with:
vagrant destroy
How does this work?
vagrant up starts Vagrant, which sets up two virtual servers using a sample box with CentOS 7.2.
When this has finished, Vagrant calls Ansible for provisioning, which configures the Linux servers, installs the Oracle software and creates your databases on the target servers in parallel.

Vagrant configuration
All the configuration for Vagrant is in one file called Vagrantfile.
I used a box with CentOS 7.2, which you can find among other Vagrant boxes here: https://atlas.hashicorp.com/search
config.vm.box = "boxcutter/centos72"
The first time you run vagrant up, it will download the Vagrant box:
$ vagrant up

Bringing machine 'dbserver1' up with 'virtualbox' provider...
Bringing machine 'dbserver2' up with 'virtualbox' provider...
==> dbserver1: Box 'boxcutter/centos72' could not be found. Attempting to find and install...
dbserver1: Box Provider: virtualbox
dbserver1: Box Version: >= 0
==> dbserver1: Loading metadata for box 'boxcutter/centos72'
dbserver1: URL: https://atlas.hashicorp.com/boxcutter/centos72
==> dbserver1: Adding box 'boxcutter/centos72' (v2.0.21) for provider: virtualbox
dbserver1: Downloading: https://atlas.hashicorp.com/boxcutter/boxes/centos72/versions/2.0.21/providers/virtualbox.box
==> dbserver1: Successfully added box 'boxcutter/centos72' (v2.0.21) for 'virtualbox'!
==> dbserver1: Importing base box 'boxcutter/centos72'...

I have chosen a private network for the virtual servers and use the vagrant-hostmanager plugin to take care of the /etc/hosts files on all guest machines (and optionally your localhost).
You can add this plugin to Vagrant with:
vagrant plugin install vagrant-hostmanager
The corresponding part in the Vagrantfile will look like this:
config.hostmanager.enabled = true
config.hostmanager.ignore_private_ip = false # include private IPs of your VM's
config.vm.hostname = "dbserver1"
config.vm.network "private_network", ip: "192.168.56.31"

ssh Configuration
The Vagrant box already comes with an ssh key configuration and – if security does not matter in your demo environment – the easiest way to configure the ssh connection to your guest nodes is to use the same ssh key for all created virtual hosts.
config.ssh.insert_key = false # Use the same insecure key provided by the box for each machine
After bringing up the virtual servers you can display the ssh settings:
vagrant ssh-config
The important lines from the output are:
Host dbserver1
HostName 127.0.0.1
User vagrant
Port 2222
IdentityFile /home/user/.vagrant.d/insecure_private_key
You should be able to reach your guest server without a password as user vagrant:
vagrant ssh dbserver1
Then you can switch to user oracle (password: welcome1) or root (the default password for Vagrant boxes is vagrant):
su - oracle
Or connect directly with ssh:
ssh vagrant@127.0.0.1 -p 2222 -i /home/user/.vagrant.d/insecure_private_key
Virtual Disks
I added additional virtual disks because I wanted to separate the data file destination from the fast recovery area destination.
# attach disks only locally
if ! File.exist?("dbserver#{i}_disk_a.vdi") # create disks only once
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_a.vdi", '--size', 8192 ]
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_b.vdi", '--size', 8192 ]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_a.vdi"]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_b.vdi"]
end # create disks only once

Provisioning with Ansible
At the end of the Vagrantfile, provisioning with Ansible is called:
N = 2
(1..N).each do |i| # do for each server i
  ...
  if i == N
    config.vm.provision "ansible" do |ansible| # vm.provisioning
      #ansible.verbose = "v"
      ansible.playbook = "oracle-db.yml"
      ansible.groups = { "dbserver" => ["dbserver1","dbserver2"] }
      ansible.limit = 'all'
    end # end vm.provisioning
  end
end
To prevent the Ansible provisioning from starting before all servers have been set up by Vagrant, I included the condition if i == N, where N is the number of desired servers.

Ansible Inventory
The Ansible Inventory is a collection of guest hosts against which Ansible will work.
You can either write an inventory file yourself or let Vagrant create one for you. Vagrant does this if you do not specify any inventory file.
To enable Ansible to connect to the target hosts without a password, Ansible has to know the ssh key provided by the Vagrant box.
Example Ansible Inventory:
# Generated by Vagrant
dbserver2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
dbserver1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
[dbserver]
dbserver1
dbserver2
You can see that the inventory created by Vagrant gives Ansible the information it needs to connect to the targets and also defines the group dbserver, which includes the servers dbserver1 and dbserver2.

Ansible configuration
Tell Ansible where to find the inventory in ansible.cfg (these settings go in the [defaults] section):
[defaults]
nocows=1
hostfile = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
host_key_checking = False

Ansible Variables
In this example I have put the general variables for all servers containing an Oracle database into this file:
group_vars/dbserver
The more specific variables, including those used to create the database such as the database name and character set,
can be adapted individually for each server:
host_vars/dbserver1, host_vars/dbserver2
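To give an idea of what these variable files could contain, here is a minimal, hypothetical sketch (only oracle_user, oracle_home and installation_folder are actually referenced later in this post; all other names and values are illustrative assumptions, not the content of the repository):

# group_vars/dbserver -- hypothetical sketch
oracle_user: oracle                                     # OS user, referenced as {{ oracle_user }} in the playbook
oracle_home: /u01/app/oracle/product/12.1.0.2/dbhome_1  # referenced as {{ oracle_home }} in db_install.rsp.j2
installation_folder: /u01/stage                         # staging directory, referenced as {{ installation_folder }}

# host_vars/dbserver1 -- hypothetical sketch
db_name: DB1
db_characterset: AL32UTF8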

Ansible Playbook
The Ansible playbook is a simple text file written in YAML syntax, which is easily readable.
Our playbook oracle-db.yml has only one play, called “Configure Oracle Linux 7 with Oracle Database 12c”, which will be applied to all servers belonging to the group dbserver. In my example Vagrant creates the inventory and initiates the play of the playbook, but you can also start it stand-alone or repeat it whenever you want:
ansible-playbook oracle-db.yml
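If you run it stand-alone, point it at the inventory that Vagrant generated (the path configured in ansible.cfg above); a hypothetical invocation, optionally limited to a single host, could look like this:

ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory oracle-db.yml --limit dbserver1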
This is the whole playbook, to configure the servers and install Oracle Databases:
$cat oracle-db.yml
---
- name: Configure Oracle Linux 7 with Oracle Database 12c
  hosts: dbserver
  become: True
  vars_files:
    # User Passwords hashed are stored here:
    - secrets.yml
  roles:
    - role: disk_layout
    - role: linux_oracle
    - role: oracle_sw_install
      become_user: '{{ oracle_user }}'
    - role: oracle_db_create
      become_user: '{{ oracle_user }}'

Ansible roles
To keep the playbook oracle-db.yml lean and to stay flexible, I have split all the tasks into different roles. This makes it easy to reuse parts of the playbook or to skip parts. For example, if you only want to install the Oracle software on the servers but do not want to create databases, you can just delete the role oracle_db_create from the playbook.
You (and Ansible) will find the file containing the tasks of a role in roles/my_role_name/tasks/main.yml.
There can be further directories. The default directory structure looks like the listing below. If you want to create a new role, you can even create the directory structure by using ansible-galaxy. Ansible Galaxy is Ansible’s official community hub for sharing Ansible roles: https://galaxy.ansible.com/intro

# example to create the directory structure for the role "my_role_name"
ansible-galaxy init my_role_name


# default Ansible role directory structure
roles/
  my_role_name/
    defaults/
    files/
    handlers/
    meta/
    tasks/
    templates/
    vars/
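
For illustration, the tasks file of a role might look like the following minimal sketch. This is a hypothetical example of what a role such as linux_oracle could contain; it is not the actual content of the repository:

# roles/linux_oracle/tasks/main.yml -- hypothetical sketch
---
- name: Install packages required for the Oracle installation
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - binutils
    - gcc
    - libaio

- name: Create the dba group
  group:
    name: dba
    state: present

- name: Create the Oracle OS user
  user:
    name: "{{ oracle_user }}"
    group: dba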

Ansible Modules
Ansible will run the tasks described in the playbook on the target servers by invoking Ansible Modules.
This Ansible Web Page http://docs.ansible.com/ansible/list_of_all_modules.html shows information about Modules ordered by categories.
You can also get information about all the Ansible modules from command line:

# list all modules
ansible-doc --list
# example to show documentation about the Ansible module "copy"
ansible-doc copy

One Example:
To install the Oracle software with a response file, I use the Ansible module called “template”. Ansible uses Jinja2, a templating engine for Python.
This makes it very easy to design reusable templates. For example, Ansible will replace {{ oracle_home }} with the value of the variable I have defined in group_vars/dbserver, and then copy the response file to the target servers:

Snippet from the Jinja2 template db_install.rsp.j2

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#-------------------------------------------------------------------------------
ORACLE_HOME={{ oracle_home }}

Snippet from roles/oracle_sw_install/tasks/main.yml

- name: Generate the response file for software only installation
  template: src=db_install.rsp.j2 dest={{ installation_folder }}/db_install.rsp

Ansible Adhoc Commands – Some Use Cases
Immediately after installing Ansible you can already use it to gather facts from your localhost, which will give you a lot of information:
ansible localhost -m setup
Use an Ansible ad-hoc command with the module ping to check whether you can reach all target servers listed in your inventory file:

$ ansible all -m ping
dbserver2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
dbserver1 | SUCCESS => {
"changed": false,
"ping": "pong"
}

File transfer – spread a file to all servers in the group dbserver
ansible dbserver -m copy -b -a "src=/etc/hosts dest=/etc/hosts"

Conclusion
With the open source tools Vagrant and Ansible you can easily automate the setup of your infrastructure.
Even if you do not want to automate everything, Ansible can still help you with your daily work when you want to check or apply something on several servers.
Just group your servers in an inventory and run an Ansible ad-hoc command or write a small playbook, as in the sketch below.
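As a minimal illustration (a hypothetical file, say check.yml, not part of the repository), such a small playbook could simply collect the uptime of all database servers:

---
- name: Quick check on all database servers
  hosts: dbserver
  tasks:
    - name: Get uptime
      command: uptime
      register: up
    - name: Show result
      debug:
        var: up.stdout

Run it with ansible-playbook check.yml, exactly like the playbook above.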

Please keep in mind that this is a simplified example of an automated Oracle database installation.
Do not use this example for production environments.

 

The article Part 2 – vagrant up – get your Oracle infrastructure up and running appeared first on Blog dbi services.

Collaborating WITH (and Not IN) Microsoft Outlook

WebCenter Team - Thu, 2017-01-26 08:30

By: Marc-Andre Houle, Principal Manager, Product Management, Oracle Cloud Services

Microsoft (MS) Outlook continues to be one of the most commonly used email and calendaring tools used by enterprise users. People use MS Outlook every day to send emails, schedule meetings, and share tasks. But even though Outlook is marketed as a collaboration tool, it’s not always the most efficient way to collaborate around content. There are reasons why attaching files sometimes makes sense, but collaborating with attachments often leads to confusion and duplication.There are multiple email threads, version control issues and of course, email quota issues. When considering the best means of collaboration, ask yourself this: are multiple people able to collaborate and work on the same content effectively? On their mobile devices even?

To help enterprise users work more efficiently, the Oracle Documents Cloud Service introduced an add-in for MS Outlook, which exposes the rich collaboration and content features of the Oracle Documents Cloud Service from right within the Outlook client. Our add-in for Outlook makes it easy for people to add links to files, folders, and conversations within Outlook. So whether you’re composing an email, creating an event, or  a task, you will see an Oracle Documents Cloud Service ribbon item in the compose window menu that exposes our functionality. If you click the “Add Link” button, you’re offered three choices: "Document", "Folder", or "Conversation".

Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Cambria","serif"; mso-ascii-font-family:Cambria; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Cambria; mso-hansi-theme-font:minor-latin;}

If you want to send a link to a document, you can configure what type of link to send. For example, you can send a Members Link to people who are already a member of that folder. That link requires people to login and already be a member of the folder. Or you can send a "Public Link" to people who don’t already have access to the folder. Clicking on "Link Options" lets you set options and security on the link that gets sent out. A link will automatically get created for you, but you can choose to use a pre-existing link, if one already exists.

Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Cambria","serif"; mso-ascii-font-family:Cambria; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Cambria; mso-hansi-theme-font:minor-latin;}

As part of the settings, you can set what permissions are set for each link. Can the people receiving the link download the file or only be allowed to view it? Or do you want to let them contribute and make changes to the file? That can all be set here.

For security reasons, you can also limit who can access the link. The first option is for sharing links with people outside your organization, while the second option limits it to named users within your organization. You can also set an expiry date for the link and set a password.

The link then gets added to the body of the email. The first part of the link is the title of the document and links to the “View” of the document in Oracle Documents Cloud Service. We also add an easy link to the “Download the document” feature. This is a normal text link within MS Outlook that you can change, if you need to.

Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Cambria","serif"; mso-ascii-font-family:Cambria; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Cambria; mso-hansi-theme-font:minor-latin;}

There are times, however, when it’s not just a single document you want to share, but rather an entire folder. The "Folder" picker is similar to the "File" picker, and offers all the same link options and security settings.

Finally, you can also add a link to a conversation. Unlike files and folders though, we don’t allow conversations to be sent out externally or to people who are not members of the conversation. The options are therefore simpler, but no less powerful. You can send a link to a conversation and draw people back to the collaboration that’s happening in Oracle Documents in the context of your project or content.

Oracle Documents Cloud Service' add-in for MS Outlook is installed automatically with Oracle Documents' Desktop Client, and supports the most common versions of MS Office, including Office 2007, 2010, 2013, and 2016.

If you want to find out more about the Oracle Documents Cloud Service and its Desktop Sync features, come check us out at https://cloud.oracle.com/documents.

Don’t have Oracle Documents Cloud Service yet? Then, I highly recommend getting started with a free trial version available at cloud.oracle.com/documents to see how you can now drive content and social collaboration anytime, anywhere and on any device.

Oracle 12cR2 – RMAN cold backup with TAG’s

Yann Neuhaus - Thu, 2017-01-26 07:35

I am planning to backup my 12R2 container database, because a huge application change is coming up,
and I want to be sure that I have a good RMAN backup beforehand. For that particular DB, I want to do it with a cold backup in combination with RMAN tags. Unfortunately I don’t have any backups at the moment, so I start with a full backup with the TAG ‘DBI_BACKUP’ to be 100% that I restore the correct one.

RMAN> list backup summary;

specification does not match any backup in the repository

RMAN> shutdown immediate

database closed
database dismounted
Oracle instance shut down

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area    1795162112 bytes

Fixed Size                     8793832 bytes
Variable Size                553648408 bytes
Database Buffers            1224736768 bytes
Redo Buffers                   7983104 bytes

RMAN> run
    {
         allocate channel c1 device type disk format '/u99/backup/CDB/database_%U';
     allocate channel c2 device type disk format '/u99/backup/CDB/database_%U';
         allocate channel c3 device type disk format '/u99/backup/CDB/database_%U';
     allocate channel c4 device type disk format '/u99/backup/CDB/database_%U';
     BACKUP INCREMENTAL LEVEL 0 FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';
         backup current controlfile tag 'DBI_BACKUP' format '/u99/backup/CDB/control_%U';
         backup spfile tag 'DBI_BACKUP' format '/u99/backup/CDB/spfile_%U';
         release channel c1;
         release channel c2;
         release channel c3;
         release channel c4;
    }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>

allocated channel: c1
channel c1: SID=16 device type=DISK

allocated channel: c2
channel c2: SID=271 device type=DISK

allocated channel: c3
channel c3: SID=31 device type=DISK

allocated channel: c4
channel c4: SID=272 device type=DISK


Starting backup at 26-JAN-2017 13:18:53
current log archived
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4 RECID=3 STAMP=934074668
input archived log thread=1 sequence=5 RECID=4 STAMP=934154679
channel c1: starting piece 1 at 26-JAN-2017 13:18:53
channel c2: starting compressed archived log backup set
channel c2: specifying archived log(s) in backup set
input archived log thread=1 sequence=2 RECID=1 STAMP=934038010
input archived log thread=1 sequence=3 RECID=2 STAMP=934066843
channel c2: starting piece 1 at 26-JAN-2017 13:18:53
channel c3: starting compressed archived log backup set
channel c3: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=5 STAMP=934203623
input archived log thread=1 sequence=7 RECID=6 STAMP=934275778
input archived log thread=1 sequence=8 RECID=7 STAMP=934284094
channel c3: starting piece 1 at 26-JAN-2017 13:18:53
channel c4: starting compressed archived log backup set
channel c4: specifying archived log(s) in backup set
input archived log thread=1 sequence=9 RECID=8 STAMP=934284153
input archived log thread=1 sequence=10 RECID=9 STAMP=934284199
input archived log thread=1 sequence=11 RECID=10 STAMP=934291133
channel c4: starting piece 1 at 26-JAN-2017 13:18:53
channel c4: finished piece 1 at 26-JAN-2017 13:18:54
piece handle=/u99/backup/CDB/database_2arr09lt_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:01
channel c1: finished piece 1 at 26-JAN-2017 13:19:08
piece handle=/u99/backup/CDB/database_27rr09lt_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:15
channel c2: finished piece 1 at 26-JAN-2017 13:19:08
piece handle=/u99/backup/CDB/database_28rr09lt_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:15
channel c3: finished piece 1 at 26-JAN-2017 13:19:08
piece handle=/u99/backup/CDB/database_29rr09lt_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:15
Finished backup at 26-JAN-2017 13:19:08

Starting backup at 26-JAN-2017 13:19:08
channel c1: starting compressed incremental level 0 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
channel c1: starting piece 1 at 26-JAN-2017 13:19:09
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:19:09
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00010 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:19:09
channel c4: starting compressed incremental level 0 datafile backup set
channel c4: specifying datafile(s) in backup set
input datafile file number=00009 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
input datafile file number=00011 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
channel c4: starting piece 1 at 26-JAN-2017 13:19:09
channel c4: finished piece 1 at 26-JAN-2017 13:19:24
piece handle=/u99/backup/CDB/database_2err09md_1_1 tag=TAG20170126T131908 comment=NONE
channel c4: backup set complete, elapsed time: 00:00:15
channel c4: starting compressed incremental level 0 datafile backup set
channel c4: specifying datafile(s) in backup set
input datafile file number=00006 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
channel c4: starting piece 1 at 26-JAN-2017 13:19:24
channel c3: finished piece 1 at 26-JAN-2017 13:19:39
piece handle=/u99/backup/CDB/database_2drr09md_1_1 tag=TAG20170126T131908 comment=NONE
channel c3: backup set complete, elapsed time: 00:00:30
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00013 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
input datafile file number=00012 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:19:39
channel c3: finished piece 1 at 26-JAN-2017 13:19:40
piece handle=/u99/backup/CDB/database_2grr09nb_1_1 tag=TAG20170126T131908 comment=NONE
channel c3: backup set complete, elapsed time: 00:00:01
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00005 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:19:41
channel c2: finished piece 1 at 26-JAN-2017 13:19:41
piece handle=/u99/backup/CDB/database_2crr09md_1_1 tag=TAG20170126T131908 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:32
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00008 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:19:41
channel c2: finished piece 1 at 26-JAN-2017 13:19:44
piece handle=/u99/backup/CDB/database_2irr09nd_1_1 tag=TAG20170126T131908 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:03
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00004 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:19:44
channel c2: finished piece 1 at 26-JAN-2017 13:19:45
piece handle=/u99/backup/CDB/database_2jrr09ng_1_1 tag=TAG20170126T131908 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:01
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00007 name=/u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:19:45
channel c2: finished piece 1 at 26-JAN-2017 13:19:46
piece handle=/u99/backup/CDB/database_2krr09nh_1_1 tag=TAG20170126T131908 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:01
channel c1: finished piece 1 at 26-JAN-2017 13:19:52
piece handle=/u99/backup/CDB/database_2brr09md_1_1 tag=TAG20170126T131908 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:43
channel c3: finished piece 1 at 26-JAN-2017 13:19:52
piece handle=/u99/backup/CDB/database_2hrr09nd_1_1 tag=TAG20170126T131908 comment=NONE
channel c3: backup set complete, elapsed time: 00:00:11
channel c4: finished piece 1 at 26-JAN-2017 13:19:52
piece handle=/u99/backup/CDB/database_2frr09ms_1_1 tag=TAG20170126T131908 comment=NONE
channel c4: backup set complete, elapsed time: 00:00:28
Finished backup at 26-JAN-2017 13:19:52

Starting backup at 26-JAN-2017 13:19:52
current log archived
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=12 RECID=11 STAMP=934291192
channel c1: starting piece 1 at 26-JAN-2017 13:19:53
channel c1: finished piece 1 at 26-JAN-2017 13:19:54
piece handle=/u99/backup/CDB/database_2lrr09np_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:19:54

Starting backup at 26-JAN-2017 13:19:54
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current control file in backup set
channel c1: starting piece 1 at 26-JAN-2017 13:19:55
channel c1: finished piece 1 at 26-JAN-2017 13:19:56
piece handle=/u99/backup/CDB/control_2mrr09nq_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:19:56

Starting backup at 26-JAN-2017 13:19:56
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel c1: starting piece 1 at 26-JAN-2017 13:19:56
channel c1: finished piece 1 at 26-JAN-2017 13:19:57
piece handle=/u99/backup/CDB/spfile_2nrr09ns_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:19:57

Starting Control File and SPFILE Autobackup at 26-JAN-2017 13:19:57
piece handle=/u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934291197_d8mtcfjz_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-JAN-2017 13:19:58

released channel: c1

released channel: c2

released channel: c3

released channel: c4

RMAN>

After the backup was done, I do a quick “list backup summary” to see if everything is there, and also check the destination directory.

RMAN> list backup summary tag 'DBI_BACKUP';

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
67      B  A  A DISK        26-JAN-2017 13:18:54 1       1       YES        DBI_BACKUP
68      B  A  A DISK        26-JAN-2017 13:19:02 1       1       YES        DBI_BACKUP
69      B  A  A DISK        26-JAN-2017 13:19:07 1       1       YES        DBI_BACKUP
70      B  A  A DISK        26-JAN-2017 13:19:07 1       1       YES        DBI_BACKUP
81      B  A  A DISK        26-JAN-2017 13:19:53 1       1       YES        DBI_BACKUP
82      B  F  A DISK        26-JAN-2017 13:19:55 1       1       NO         DBI_BACKUP
83      B  F  A DISK        26-JAN-2017 13:19:56 1       1       NO         DBI_BACKUP

RMAN>

oracle@dbidg03:/u99/backup/CDB/ [CDB] ls -l
total 975304
-rw-r----- 1 oracle oinstall  18792448 Jan 26 13:19 control_2mrr09nq_1_1
-rw-r----- 1 oracle oinstall 112111616 Jan 26 13:19 database_27rr09lt_1_1
-rw-r----- 1 oracle oinstall 112711168 Jan 26 13:19 database_28rr09lt_1_1
-rw-r----- 1 oracle oinstall  58626048 Jan 26 13:19 database_29rr09lt_1_1
-rw-r----- 1 oracle oinstall   3691520 Jan 26 13:18 database_2arr09lt_1_1
-rw-r----- 1 oracle oinstall 215056384 Jan 26 13:19 database_2brr09md_1_1
-rw-r----- 1 oracle oinstall 132710400 Jan 26 13:19 database_2crr09md_1_1
-rw-r----- 1 oracle oinstall 112173056 Jan 26 13:19 database_2drr09md_1_1
-rw-r----- 1 oracle oinstall  56778752 Jan 26 13:19 database_2err09md_1_1
-rw-r----- 1 oracle oinstall 110149632 Jan 26 13:19 database_2frr09ms_1_1
-rw-r----- 1 oracle oinstall   1507328 Jan 26 13:19 database_2grr09nb_1_1
-rw-r----- 1 oracle oinstall  54157312 Jan 26 13:19 database_2hrr09nd_1_1
-rw-r----- 1 oracle oinstall   7716864 Jan 26 13:19 database_2irr09nd_1_1
-rw-r----- 1 oracle oinstall   1327104 Jan 26 13:19 database_2jrr09ng_1_1
-rw-r----- 1 oracle oinstall   1073152 Jan 26 13:19 database_2krr09nh_1_1
-rw-r----- 1 oracle oinstall      7680 Jan 26 13:19 database_2lrr09np_1_1
-rw-r----- 1 oracle oinstall    114688 Jan 26 13:19 spfile_2nrr09ns_1_1

But to be really 100% sure that I can restore the backup from TAG, I do a restore preview. The restore preview exists for quite a while now, but it is not so widly used for whatever reasons, I don’t know. I find it quite useful.

RMAN> restore database preview from tag 'DBI_BACKUP';

Starting restore at 26-JAN-2017 13:22:49
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=16 device type=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/26/2017 13:22:49
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 13 found to restore
RMAN-06023: no backup or copy of datafile 12 found to restore
RMAN-06023: no backup or copy of datafile 11 found to restore
RMAN-06023: no backup or copy of datafile 10 found to restore
RMAN-06023: no backup or copy of datafile 9 found to restore
RMAN-06023: no backup or copy of datafile 8 found to restore
RMAN-06023: no backup or copy of datafile 7 found to restore
RMAN-06023: no backup or copy of datafile 6 found to restore
RMAN-06023: no backup or copy of datafile 5 found to restore
RMAN-06023: no backup or copy of datafile 4 found to restore
RMAN-06023: no backup or copy of datafile 3 found to restore
RMAN-06023: no backup or copy of datafile 1 found to restore

RMAN>

Oh no … that doesn’t look good. RMAN complaints that no backup or copy exists for all datafiles. What is going here? Is my backup useless? Yes and no. If I rely only on the TAG, then yes. However, the RMAN backup have been created successfully but with two different TAG’s. For the datafiles it used tag=TAG20170126T131908 and for the archivelogs, the controlfile and the spfile it used tag=DBI_BACKUP.

So what is wrong here? The TAG was simply specified at the wrong location. If you put tag after archivelog, then only the archivelogs get that tag.

BACKUP INCREMENTAL LEVEL 0 FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';

If you want to have the datafiles and the archivelogs tagged correctly, you have to put it after level 0 in my case. That’s usually enough.

BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog;

Or if you want to be double sure and you are sort of paranoid, you can specify it twice, one after level 0, and one after archivelog.

BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';

ok. So lets try it again from scratch. But this time I put the Tag after LEVEL 0.

RMAN> list backup summary;

specification does not match any backup in the repository

RMAN> shutdown immediate

database closed
database dismounted
Oracle instance shut down

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area    1795162112 bytes

Fixed Size                     8793832 bytes
Variable Size                553648408 bytes
Database Buffers            1224736768 bytes
Redo Buffers                   7983104 bytes


RMAN> run
    {
         allocate channel c1 device type disk format '/u99/backup/CDB/database_%U';
     allocate channel c2 device type disk format '/u99/backup/CDB/database_%U';
         allocate channel c3 device type disk format '/u99/backup/CDB/database_%U';
     allocate channel c4 device type disk format '/u99/backup/CDB/database_%U';
     BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog;
         backup current controlfile tag 'DBI_BACKUP' format '/u99/backup/CDB/control_%U';
         backup spfile tag 'DBI_BACKUP' format '/u99/backup/CDB/spfile_%U';
         release channel c1;
         release channel c2;
         release channel c3;
         release channel c4;
    }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>

allocated channel: c1
channel c1: SID=237 device type=DISK

allocated channel: c2
channel c2: SID=20 device type=DISK

allocated channel: c3
channel c3: SID=254 device type=DISK

allocated channel: c4
channel c4: SID=22 device type=DISK


Starting backup at 26-JAN-2017 13:43:45
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4 RECID=3 STAMP=934074668
input archived log thread=1 sequence=5 RECID=4 STAMP=934154679
channel c1: starting piece 1 at 26-JAN-2017 13:43:46
channel c2: starting compressed archived log backup set
channel c2: specifying archived log(s) in backup set
input archived log thread=1 sequence=2 RECID=1 STAMP=934038010
input archived log thread=1 sequence=3 RECID=2 STAMP=934066843
channel c2: starting piece 1 at 26-JAN-2017 13:43:46
channel c3: starting compressed archived log backup set
channel c3: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=5 STAMP=934203623
input archived log thread=1 sequence=7 RECID=6 STAMP=934275778
input archived log thread=1 sequence=8 RECID=7 STAMP=934284094
input archived log thread=1 sequence=9 RECID=8 STAMP=934284153
channel c3: starting piece 1 at 26-JAN-2017 13:43:46
channel c4: starting compressed archived log backup set
channel c4: specifying archived log(s) in backup set
input archived log thread=1 sequence=10 RECID=9 STAMP=934284199
input archived log thread=1 sequence=11 RECID=10 STAMP=934291133
input archived log thread=1 sequence=12 RECID=11 STAMP=934291192
input archived log thread=1 sequence=13 RECID=12 STAMP=934291966
channel c4: starting piece 1 at 26-JAN-2017 13:43:46
channel c4: finished piece 1 at 26-JAN-2017 13:43:47
piece handle=/u99/backup/CDB/database_3frr0b4i_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:01
channel c4: starting compressed archived log backup set
channel c4: specifying archived log(s) in backup set
input archived log thread=1 sequence=14 RECID=13 STAMP=934292026
input archived log thread=1 sequence=15 RECID=14 STAMP=934292464
channel c4: starting piece 1 at 26-JAN-2017 13:43:47
channel c4: finished piece 1 at 26-JAN-2017 13:43:48
piece handle=/u99/backup/CDB/database_3grr0b4j_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:01
channel c1: finished piece 1 at 26-JAN-2017 13:44:02
piece handle=/u99/backup/CDB/database_3crr0b4i_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:16
channel c2: finished piece 1 at 26-JAN-2017 13:44:02
piece handle=/u99/backup/CDB/database_3drr0b4i_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:16
channel c3: finished piece 1 at 26-JAN-2017 13:44:02
piece handle=/u99/backup/CDB/database_3err0b4i_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:16
Finished backup at 26-JAN-2017 13:44:02

Starting backup at 26-JAN-2017 13:44:02
channel c1: starting compressed incremental level 0 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
channel c1: starting piece 1 at 26-JAN-2017 13:44:02
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:44:02
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00010 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:44:02
channel c4: starting compressed incremental level 0 datafile backup set
channel c4: specifying datafile(s) in backup set
input datafile file number=00009 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
input datafile file number=00011 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
channel c4: starting piece 1 at 26-JAN-2017 13:44:03
channel c4: finished piece 1 at 26-JAN-2017 13:44:18
piece handle=/u99/backup/CDB/database_3krr0b52_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:15
channel c4: starting compressed incremental level 0 datafile backup set
channel c4: specifying datafile(s) in backup set
input datafile file number=00006 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
channel c4: starting piece 1 at 26-JAN-2017 13:44:18
channel c3: finished piece 1 at 26-JAN-2017 13:44:33
piece handle=/u99/backup/CDB/database_3jrr0b52_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:31
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00013 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
input datafile file number=00012 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:44:33
channel c3: finished piece 1 at 26-JAN-2017 13:44:34
piece handle=/u99/backup/CDB/database_3mrr0b61_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:01
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00005 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:44:35
channel c2: finished piece 1 at 26-JAN-2017 13:44:38
piece handle=/u99/backup/CDB/database_3irr0b52_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:36
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00008 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:44:38
channel c2: finished piece 1 at 26-JAN-2017 13:44:41
piece handle=/u99/backup/CDB/database_3orr0b66_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:03
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00004 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:44:41
channel c2: finished piece 1 at 26-JAN-2017 13:44:42
piece handle=/u99/backup/CDB/database_3prr0b69_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:01
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00007 name=/u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:44:43
channel c1: finished piece 1 at 26-JAN-2017 13:44:44
piece handle=/u99/backup/CDB/database_3hrr0b52_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:42
channel c2: finished piece 1 at 26-JAN-2017 13:44:44
piece handle=/u99/backup/CDB/database_3qrr0b6b_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:01
channel c3: finished piece 1 at 26-JAN-2017 13:44:46
piece handle=/u99/backup/CDB/database_3nrr0b62_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:11
channel c4: finished piece 1 at 26-JAN-2017 13:44:46
piece handle=/u99/backup/CDB/database_3lrr0b5i_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:28
Finished backup at 26-JAN-2017 13:44:46

Starting backup at 26-JAN-2017 13:44:46
specification does not match any archived log in the repository
backup cancelled because there are no files to backup
Finished backup at 26-JAN-2017 13:44:46

Starting backup at 26-JAN-2017 13:44:46
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current control file in backup set
channel c1: starting piece 1 at 26-JAN-2017 13:44:47
channel c1: finished piece 1 at 26-JAN-2017 13:44:48
piece handle=/u99/backup/CDB/control_3rrr0b6e_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:44:48

Starting backup at 26-JAN-2017 13:44:48
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel c1: starting piece 1 at 26-JAN-2017 13:44:48
channel c1: finished piece 1 at 26-JAN-2017 13:44:49
piece handle=/u99/backup/CDB/spfile_3srr0b6g_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:44:49

Starting Control File and SPFILE Autobackup at 26-JAN-2017 13:44:49
piece handle=/u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934292553_d8mvt1l0_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-JAN-2017 13:44:50

released channel: c1

released channel: c2

released channel: c3

released channel: c4

RMAN>

As you can see in the log, all backup pieces have been done with tag=DBI_BACKUP. But let’s double check it again.

RMAN> list backup summary tag 'DBI_BACKUP';

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
104     B  A  A DISK        26-JAN-2017 13:43:46 1       1       YES        DBI_BACKUP
105     B  A  A DISK        26-JAN-2017 13:43:47 1       1       YES        DBI_BACKUP
106     B  A  A DISK        26-JAN-2017 13:43:54 1       1       YES        DBI_BACKUP
107     B  A  A DISK        26-JAN-2017 13:43:59 1       1       YES        DBI_BACKUP
108     B  A  A DISK        26-JAN-2017 13:43:59 1       1       YES        DBI_BACKUP
109     B  0  A DISK        26-JAN-2017 13:44:14 1       1       YES        DBI_BACKUP
110     B  0  A DISK        26-JAN-2017 13:44:30 1       1       YES        DBI_BACKUP
111     B  0  A DISK        26-JAN-2017 13:44:34 1       1       YES        DBI_BACKUP
112     B  0  A DISK        26-JAN-2017 13:44:36 1       1       YES        DBI_BACKUP
113     B  0  A DISK        26-JAN-2017 13:44:39 1       1       YES        DBI_BACKUP
114     B  0  A DISK        26-JAN-2017 13:44:41 1       1       YES        DBI_BACKUP
115     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
116     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
117     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
118     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
119     B  F  A DISK        26-JAN-2017 13:44:47 1       1       NO         DBI_BACKUP
120     B  F  A DISK        26-JAN-2017 13:44:48 1       1       NO         DBI_BACKUP

RMAN> restore database preview summary from tag 'DBI_BACKUP';

Starting restore at 26-JAN-2017 13:45:26
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=237 device type=DISK

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
116     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
112     B  0  A DISK        26-JAN-2017 13:44:36 1       1       YES        DBI_BACKUP
114     B  0  A DISK        26-JAN-2017 13:44:41 1       1       YES        DBI_BACKUP
117     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
118     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
115     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
113     B  0  A DISK        26-JAN-2017 13:44:39 1       1       YES        DBI_BACKUP
109     B  0  A DISK        26-JAN-2017 13:44:14 1       1       YES        DBI_BACKUP
110     B  0  A DISK        26-JAN-2017 13:44:30 1       1       YES        DBI_BACKUP
111     B  0  A DISK        26-JAN-2017 13:44:34 1       1       YES        DBI_BACKUP
using channel ORA_DISK_1

archived logs generated after SCN 1904449 not found in repository
recovery will be done up to SCN 1904449
Media recovery start SCN is 1904449
Recovery must be done beyond SCN 1904725 to clear datafile fuzziness
Finished restore at 26-JAN-2017 13:45:26

RMAN>

Ok. Very good. That looks promising now. :-) Let’s do the application changes now …

RMAN> alter database open;

Statement processed

-- Do some application changes ...

SQL> create table x ...
SQL> create table y ...
SQL> create table z ...

And the final test is of course, to do the real restore/recovery to the point where the cold backup was done.

RMAN> shutdown abort

Oracle instance shut down

RMAN> startup nomount

connected to target database (not started)
Oracle instance started

Total System Global Area    1795162112 bytes

Fixed Size                     8793832 bytes
Variable Size                553648408 bytes
Database Buffers            1224736768 bytes
Redo Buffers                   7983104 bytes

RMAN> restore controlfile from '/u99/backup/CDB/control_3rrr0b6e_1_1';

Starting restore at 26-JAN-2017 13:48:50
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=237 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u02/oradata/CDB/controlfile/o1_mf_d81c6189_.ctl
output file name=/u03/fast_recovery_area/CDB/controlfile/o1_mf_d81c61b4_.ctl
Finished restore at 26-JAN-2017 13:48:51

RMAN> alter database mount;

Statement processed
released channel: ORA_DISK_1

	
RMAN> run
    {
         allocate channel c1 device type disk;
     allocate channel c2 device type disk;
         allocate channel c3 device type disk;
     allocate channel c4 device type disk;
     restore database from tag 'DBI_BACKUP';
         release channel c1;
         release channel c2;
         release channel c3;
         release channel c4;
    }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12>

allocated channel: c1
channel c1: SID=256 device type=DISK

allocated channel: c2
channel c2: SID=24 device type=DISK

allocated channel: c3
channel c3: SID=257 device type=DISK

allocated channel: c4
channel c4: SID=25 device type=DISK

Starting restore at 26-JAN-2017 13:49:39
Starting implicit crosscheck backup at 26-JAN-2017 13:49:39
Crosschecked 15 objects
Finished implicit crosscheck backup at 26-JAN-2017 13:49:40

Starting implicit crosscheck copy at 26-JAN-2017 13:49:40
Finished implicit crosscheck copy at 26-JAN-2017 13:49:40

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934292553_d8mvt1l0_.bkp


skipping datafile 5; already restored to file /u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
skipping datafile 6; already restored to file /u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
skipping datafile 8; already restored to file /u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
channel c1: starting datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
channel c1: restoring datafile 00009 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
channel c1: restoring datafile 00011 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
channel c1: reading from backup piece /u99/backup/CDB/database_3krr0b52_1_1
channel c2: starting datafile backup set restore
channel c2: specifying datafile(s) to restore from backup set
channel c2: restoring datafile 00010 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
channel c2: reading from backup piece /u99/backup/CDB/database_3jrr0b52_1_1
channel c3: starting datafile backup set restore
channel c3: specifying datafile(s) to restore from backup set
channel c3: restoring datafile 00012 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
channel c3: restoring datafile 00013 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
channel c3: reading from backup piece /u99/backup/CDB/database_3mrr0b61_1_1
channel c4: starting datafile backup set restore
channel c4: specifying datafile(s) to restore from backup set
channel c4: restoring datafile 00003 to /u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
channel c4: reading from backup piece /u99/backup/CDB/database_3irr0b52_1_1
channel c3: piece handle=/u99/backup/CDB/database_3mrr0b61_1_1 tag=DBI_BACKUP
channel c3: restored backup piece 1
channel c3: restore complete, elapsed time: 00:00:03
channel c3: starting datafile backup set restore
channel c3: specifying datafile(s) to restore from backup set
channel c3: restoring datafile 00004 to /u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
channel c3: reading from backup piece /u99/backup/CDB/database_3prr0b69_1_1
channel c3: piece handle=/u99/backup/CDB/database_3prr0b69_1_1 tag=DBI_BACKUP
channel c3: restored backup piece 1
channel c3: restore complete, elapsed time: 00:00:01
channel c3: starting datafile backup set restore
channel c3: specifying datafile(s) to restore from backup set
channel c3: restoring datafile 00001 to /u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
channel c3: reading from backup piece /u99/backup/CDB/database_3hrr0b52_1_1
channel c1: piece handle=/u99/backup/CDB/database_3krr0b52_1_1 tag=DBI_BACKUP
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:20
channel c1: starting datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
channel c1: restoring datafile 00007 to /u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
channel c1: reading from backup piece /u99/backup/CDB/database_3qrr0b6b_1_1
channel c1: piece handle=/u99/backup/CDB/database_3qrr0b6b_1_1 tag=DBI_BACKUP
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:01
channel c2: piece handle=/u99/backup/CDB/database_3jrr0b52_1_1 tag=DBI_BACKUP
channel c2: restored backup piece 1
channel c2: restore complete, elapsed time: 00:00:27
channel c4: piece handle=/u99/backup/CDB/database_3irr0b52_1_1 tag=DBI_BACKUP
channel c4: restored backup piece 1
channel c4: restore complete, elapsed time: 00:00:35
channel c3: piece handle=/u99/backup/CDB/database_3hrr0b52_1_1 tag=DBI_BACKUP
channel c3: restored backup piece 1
channel c3: restore complete, elapsed time: 00:00:40
Finished restore at 26-JAN-2017 13:50:25

released channel: c1

released channel: c2

released channel: c3

released channel: c4

RMAN>

No recovery is needed here, because it was a cold RMAN backup. You can just open the database with OPEN RESETLOGS.

RMAN> alter database open RESETLOGS;

Statement processed
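
As a quick sanity check (hypothetical, not part of the original post): the tables created after the cold backup should be gone now that the database has been opened with RESETLOGS at the backup point. Query in the same container in which they were created, for example:

-- hypothetical check, not part of the original post
SQL> select table_name from dba_tables where table_name in ('X', 'Y', 'Z');
-- expected: no rows selected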

Conclusion

Take care to put your RMAN tags in the correct place; otherwise a later RESTORE ... FROM TAG may not find the backup sets you expect.
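
As a minimal sketch of the idea only (not necessarily the exact command used earlier in this post; the tag name and format follow the restore example above, and the channel count is arbitrary): specify the TAG on the BACKUP command itself, so that every backup set it creates carries the tag and a later RESTORE ... FROM TAG can find it.

RMAN> run
{
  # sketch only: cold backup, so the database is in MOUNT state at this point
  allocate channel c1 device type disk;
  allocate channel c2 device type disk;
  backup as backupset database
    tag 'DBI_BACKUP'
    format '/u99/backup/CDB/database_%U';
  release channel c1;
  release channel c2;
}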

 

The article Oracle 12cR2 – RMAN cold backup with TAG’s first appeared on Blog dbi services.

Basicfile LOBs

Jonathan Lewis - Thu, 2017-01-26 06:03

I wrote a short series a little while ago about some of the nasty things that can happen (and can’t really be avoided) with Basicfile LOBs and recently realised that it needed a directory entry so that I didn’t have to supply 6 URLs if I wanted to point someone to it; so here’s the catalogue:

At some stage I may also write a similar series about Securefile LOBs – because you do hit problems if you have a system that does a lot of work modifying a LOB segment, whether it’s Basicfile or Securefile, and you need a strategy for damage limitation.

Footnote

At the time of creating this catalogue I’ve had an SR open with Oracle for about 4 months on the problem that triggered this series, basically asking if there was a way to limit the number of chunks that could be taken off the reusable part of the index. So far I haven’t had an answer to that question; however the client was able to switch the table into a partitioned table and now drops old partitions rather than deleting old data.

 


What's new in Training in 2017?

Rittman Mead Consulting - Thu, 2017-01-26 05:30
2016 - Thank you for a great year!

Rittman Mead would like to thank everyone that attended or showed an interest in our Training courses in 2016. Since we started back in 2007, Training has been a mainstay of our service offerings.

My personal opinion is that Q3 & Q4 saw OBIEE 12c start to be properly adopted in the marketplace. It made sense for companies to wait for some of the bugs from earlier releases to be ironed out, and for clarity around the release of things such as Data Visualization Desktop.

It meant that we started to really see numbers pick up in our OBIEE 12c bootcamp. For the first time we’ve really tried to stress the fact that different parts of the course can be suitable for different people based on their everyday use of the product. This has led to more business focused end-users of OBIEE attending our training.

We love travelling and 2016 yet again took us to some amazing places to deliver courses to a variety of different clients. Locations we visited included South Africa, India, Sweden, Jamaica, Bulgaria and Ireland to name a few.

Finally we were really proud to release our new On Demand Training platform in December 2016 with our first online course, OBIEE 12c Front End Development & Data Visualization.

What’s new in 2017

We’re looking forward to another busy year in 2017 and it’s certainly already underway!!

Our public training schedule has been published with a number of courses available in OBIEE & ODI.
Here you can find the course dates for UK & Europe

And here you can find the courses available in the US

2017 will also see the release of some new courses including:

Advanced Analytics and Oracle R

We are seeing more and more investment in Predictive Analytics projects from companies looking to get as much value out of their data as possible.
Our 3-day course will teach you about the tools available & the techniques required to start or continue your Predictive Analytics journey.

From acquiring, tidying and transforming data, through to the types of predictive models and how to deploy them, our course will strengthen your knowledge and teach you valuable techniques.

The Advanced Analytics & Oracle R course will be available from March 2017, please get in touch for more details.

ODI 12c Bootcamp

2017 will also see the refresh of our ODI 12c Bootcamp. There are some very handy new features in the latest version, such as Big Data Integration and Lifecycle Management. We'll also be including some lessons on advanced techniques such as Groovy scripting in ODI.
We’re looking forward to teaching these extra modules soon.

The new course will be released in Q2 2017.

On Demand Training

We will be adding more courses to our On Demand Training platform throughout 2017. We recognise the value of classroom, instructor-led training; however, we also understand that people have busy lives and that the flexibility to learn at your own pace is sometimes important.

Our On Demand Training platform provides this opportunity, whether you’re trying to reinforce your learning after classroom training or looking to learn a new skill for the first time.

Courses that will be added online in 2017 include OBIEE 12c RPD Modeling, OBIEE 12c Systems Management & Performance, OBIEE 11g Front End Development, ODI 12c Bootcamp, ODI 11g for BI Apps and many more….

For more information and updates, please head to our webpage.

Categories: BI & Warehousing

Partner Webcast – Real Time Business Integration Insights with Oracle SOA Suite Service

Is your data still stale? Have you ever needed a view of the business in real time, so you can react to an issue before it’s too late? Today’s business owners still lack visibility and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Documentum D2 4.5 and IE compatibility and F5

Yann Neuhaus - Thu, 2017-01-26 02:26

We had a problem with a customer where D2 was not loading properly in IE when going through F5 (a load balancer). When trying to access D2 through the F5, let’s say https://d2prod/D2, only a few menus and some parts of the workspace were loading, and it ended up with an “Unexpected error occurred” message.

Investigation

It would have been too easy if this error had appeared in the logs, but it didn’t. That meant it was not a D2 internal error, but rather something in the interface or in the way it loads in IE. Because, fun fact, it was loading properly in Chrome. Additional fun fact: when using a superuser account it was also loading properly in IE!

As it was an interface error, I used the IE debugging tool (F12). At first I didn’t see the error in the console, but after digging a bit through all the verbose logs I found this:

SEVERE: An unexpected error occurred. Please refresh your browser
com.google.gwt.core.client.JavaScriptException: (TypeError) 
 description: Object doesn't support property or method 'querySelectorAll'
number: -2146827850: Object doesn't support property or method 'querySelectorAll'

After some research I figured out that others had had issues with “querySelectorAll” and IE. In fact it depended on the version of IE used, because this function was not available prior to IE 9.

Hence I came to the idea that my IE was not in the right compatibility mode; I was running IE 11, so it couldn’t be a simple version mismatch.

Fortunately, the F12 console lets you change the compatibility mode:

[Screenshot: F12 console showing the compatibility mode set to 8]

As I thought, the compatibility mode was set (and locked) to 8, which does not support “querySelectorAll”. But I couldn’t change it to a higher value. Then I figured out why:

[Screenshot: F12 console showing Enterprise Mode enabled]

I was in Enterprise Mode. This mode forces the compatibility version, among other things. Fortunately you can disable it in the browser from the “Tools” menu of IE. Then, like magic, I was able to switch to compatibility version 10:

[Screenshot: F12 console showing the compatibility mode switched to 10]

And, miracle: the D2 interface reloaded properly, with all menus and workspaces. Remember that it was working with superuser accounts? It turned out that when using a superuser account, Enterprise Mode was not activated and the compatibility version was set to 10.

The question is, why was it forced to 8?

Solution

In fact, it was customer-related. They had a policy rule applied for the old D2 (3.1) which required Enterprise Mode and the compatibility mode set to 8. So when the old DNS link was used to point to the new D2, these modes were still being applied.

So we asked for the Enterprise Mode to be disabled, and the compatibility mode returned to 10 by default. Be careful with IE policies in your company ;)

 

The article Documentum D2 4.5 and IE compatibility and F5 first appeared on Blog dbi services.

Original Windows 10 Release Desupported on March 26, 2017

Steven Chan - Thu, 2017-01-26 02:05

Microsoft recently announced the availability of Windows 10 version 1607, also known as the Windows 10 Anniversary Update.  With this latest release, Microsoft is desupporting the first release of Windows 10 -- version 1507 -- on March 26, 2017. After that date, no new patches will be created for version 1507.

Oracle's policy for the E-Business Suite is that we support certified third-party products as long as the third-party vendor supports them.  When a third-party vendor retires a product, we consider that to be an historical certification for EBS.

What can EBS customers expect after March 2017?

After Microsoft desupports Windows 10 version 1507 in March 2017:

  • Oracle Support will continue to assist, where possible, in investigating issues that involve Windows 10 version 1507 clients.
  • Oracle's ability to assist may be limited due to limited access to PCs running Windows 10 version 1507.
  • Oracle will continue to provide access to existing EBS patches for Windows 10 version 1507 issues.
  • Oracle will provide new EBS patches only for issues that can be reproduced on later operating system configurations that Microsoft is actively supporting (e.g. Windows 10 version 1607).

What should EBS users do?

Oracle strongly recommends that E-Business Suite customers upgrade their end-user desktops from Windows 10 version 1507 to the latest certified equivalent, which as of today is Windows 10 version 1607.

What about EBS desktop-based client/server tools?

EBS sysadmins might use up to 14 different desktop-based tools to administer selected parts of the E-Business Suite.  These are:

Still in use (i.e. eligible for Error Correction Support)
  • BI Publisher (BIP/XMLP)
  • Configurator Developer
  • Discoverer Administrator/Desktop
  • Forms/Reports Developer
  • JDeveloper OA Extensions
  • Sales for Handhelds
  • Warehouse Builder
  • Workflow Builder
  • XML Gateway Message Designer
Obsolete
  • Applications Desktop Integrator (ADI)
  • Balanced Scorecard Architect (BSC)
  • Financial Analyzer
  • Financial Services Suite
  • Sales Analyzer Client

All of the tools still in use are certified with Windows 10 version 1607.

For complete details, see:

Related Articles

Categories: APPS Blogs

Oracle PaaS Partner Community Forum 2017

Angelo Santagata - Wed, 2017-01-25 18:38
Hey all, I'm going to be presenting at the Oracle PaaS Partner Community Forum 2017 in Split, Croatia, this March on the topic of Enriching SaaS with PaaS. Now PaaS4SaaS, as I call it, isn't hard, but it isn't easy either. There are lots of different APIs to learn, different schemas to understand, and let's not forget the data models and the vast array of tools you can use. I’m the sort of guy who says, “If you haven’t done it, then you have no business presenting or talking about it”, so that leads me on to the next questions:
  1. Are any of my readers going to be there?
  2. What PaaS4SaaS topics would be of interest to you?
  3. What SaaS projects do you have in the pipeline? (This would help me work out #2.)

Answers by email please, via angelo.santagata@oracle.com, and if you are going, don't forget to register!
