Feed aggregator

Is the WITH clause not supported by Oracle Support, or is it just not recommended?

Tom Kyte - Fri, 2016-12-16 03:26
I've seen at one client the practice of writing similar queries as multiple layers of unions: select table1.col_list, table2.col_list from table1, table2 where table1.col1 = table2.col1 union all select table1.col_list, table2.col_lis...
Categories: DBA Blogs

ETL Offload with Spark and Amazon EMR - Part 2 - Code development with Notebooks and Docker

Rittman Mead Consulting - Fri, 2016-12-16 03:00

In the previous article I gave the background to a project we did for a client, exploring the benefits of Spark-based ETL processing running on Amazon's Elastic Map Reduce (EMR) Hadoop platform. The proof of concept we ran was on a very simple requirement, taking inbound files from a third party, joining them to some reference data, and then making the result available for analysis. The primary focus was proving the end-to-end concept, with future iterations focussing on performance and design optimisations.

Here we'll see how I went about building up the ETL process.

Processing Overview

The processing needed to iterate over a set of files in S3, and for each one:

  • Loads the file from S3
  • Determines region from filename, and adds as column to data
  • Deduplicates it
  • Writes duplicates to separate file
  • Loads sites reference data
  • Extracts domain from URL string
  • Joins facts with sites on domain
  • Writes resulting file to S3

Once the data is processed and landed back in S3, we can run analytics on it. See subsequent articles for discussion of Redshift vs in-place querying with tools such as Presto.

Ticking All The Cool-Kid Boxes - Spark AND Notebooks AND Docker!

Whilst others in Rittman Mead have done lots of work with Spark, I myself was new to it, and needed a sandpit in which I could flail around without causing any real trouble. Thus I set up a nice self-contained development environment on my local machine, using Docker to provision and host it, and Jupyter Notebooks as the interface.

Notebooks

In a world in which it seems that there are a dozen cool new tools released every day, Interactive Notebooks are for me one of the most significant of recent times for any developer. They originate in the world of data science, where taking the 'science' bit at its word, data processing and exploration is written in a self-documenting manner. It makes it possible to follow how and why code was written, what the output at each stage was -- and to run it yourself too. By breaking code into chunks it makes it much easier to develop as well, since you can rerun and perfect each piece before moving on.

Notebooks are portable, meaning that you can work with them in one system, and then upload them for others to learn from and even import to run on their own systems. I've shared a simplified version of the notebook that I developed for this project on gist here, and you can see an embedded version of it at the end of this article.

The two most common notebook platforms are Apache Zeppelin and Jupyter Notebooks (previously known as IPython Notebooks). Jupyter is the one I've used previously, and the one I stuck with again here. To read more about notebooks and see them in action, see my previous blog posts here and here.

Docker

Plenty's been written about Docker. In a nutshell, it is a way to provision and host a set of self-contained software. It takes the idea of a virtual machine (such as VMware or VirtualBox), but without having to install an OS, then the software, and then configure it all yourself. You simply take a "Dockerfile" that someone has prepared, and run it. You can create copies, or throw away and start again, from a single command. I ran Docker on my Mac through Kitematic, and natively on my home server.

There are prebuilt Docker configuration files for lots of software (including Oracle and OBIEE!), and I found one that includes Spark, PySpark, and Jupyter - perfect!

To launch it, you simply enter:

docker run -d -p 18888:8888 jupyter/all-spark-notebook

This downloads all the necessary Docker files etc - you don't need anything local first, except Docker.

I ran it with an additional flag, -v, configuring it to use a folder on my local machine to store the work that I created. By default all files reside within the Docker image itself - and get deleted when you delete the Docker instance.

docker run -d -p 18888:8888 -v /Users/rmoff/all-spark-notebook:/home/jovyan/work jupyter/all-spark-notebook

You can also run the container with an additional environment variable, GRANT_SUDO, so that the guest user can run sudo commands within it. To do this, include -e GRANT_SUDO=yes --user root:

docker run -d -p 18888:8888 -e GRANT_SUDO=yes --user root -v /Users/rmoff/all-spark-notebook:/home/jovyan/work jupyter/all-spark-notebook

With the Docker container running, you can access Jupyter Notebooks on the port exposed in the command used to launch it (18888).

Getting Started with Jupyter

From Jupyter's main page you can see the files within the main folder (see above for how you can map this to a local folder on the machine hosting Docker). Using the New menu in the top-right you can create:

  • Text files and folders
  • A terminal
  • A notebook, running under one of several different 'Kernels' (host interpreters and environments)

The ability to run a terminal from Jupyter is very handy - particularly on Docker. Docker by its nature isn't really designed for interaction within the container itself; that's rather the point of Docker, in that it provisions and configures all the software for you. You can use Docker to run a bash shell directly, but it's more fiddly than just using the Jupyter Terminal.

I used a Python 2 notebook for my work; with this Docker image you also have the option of Python 3, Scala, and R.

Developing the Spark job

With my development environment up and running, I set about writing the Spark job. Because I'm already familiar with Python I took advantage of PySpark. Below I describe the steps in the processing and how I achieved them.

Environment Preparation

Define AWS parameters:

access_key='XXXXXXXXXXXXXXXXXX'  
secret='YYYYYYYYYYYYYYYYY'  
bucket_name='foobar-bucket'  

Set up the PySpark environment, including necessary JAR files for accessing S3 from Spark:

import os  
os.environ['AWS_ACCESS_KEY_ID'] = access_key  
os.environ['AWS_SECRET_ACCESS_KEY'] = secret  
os.environ['PYSPARK_PYTHON'] = '/usr/bin/python2'  
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.hadoop:hadoop-aws:2.7.1,com.amazonaws:aws-java-sdk-pom:1.10.34,com.databricks:spark-csv_2.11:1.3.0 pyspark-shell'  

Create a Spark context:

import pyspark  
sc = pyspark.SparkContext('local[*]')  
sqlContext = pyspark.SQLContext(sc)  

Import Python libraries:

from pyspark.sql.functions import udf  
from pyspark.sql.functions import lit  
import boto  
from urlparse import urlsplit  

Note that to install Python libraries not present on the Docker image (such as boto, which is used for accessing AWS functionality from within Python), you can run the following from a Jupyter Terminal:

/opt/conda/envs/python2/bin/pip install boto

On other platforms the path to pip will vary, but the install command is the same.
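
Alternatively, since this is a Jupyter environment, the same install can often be run from within a notebook cell itself using IPython's shell escape (a sketch only, assuming the same pip path as above):

!/opt/conda/envs/python2/bin/pip install boto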

Loading Data from S3

The source data comes from an S3 "bucket", on which there can be multiple "keys". Buckets and keys roughly translate to "disk drive" and "file".

We use the boto library to interact with S3 to find a list of files ('keys') matching the pattern of source files that we want to process.

Connect to the bucket

conn_s3 = boto.connect_s3()  
bucket = conn_s3.get_bucket(bucket_name)  

Iterate over the bucket contents

This bit would drive iterative processing over multiple input files; for now it just picks the last file on the list (acme_file gets set on each iteration and so remains set after the loop).

contents=bucket.list(prefix='source_files/')  
for f in contents:  
    print f.name
    print f.size
    acme_file = f.name
print "\n\n--\nFile to process: %s" % acme_file  
Read the CSV from S3 into Spark dataframe

The Docker image I was using was running Spark 1.6, so I was using the Databricks CSV reader; in Spark 2 this is now available natively. The CSV file is loaded into a Spark data frame. Note that Spark is reading the CSV file directly from an S3 path.

full_uri = "s3n://{}/{}".format(bucket_name, acme_file)  
print full_uri  
s3n://foobar-bucket/source_files/acme_GB_20160803_100000.csv
acme_df = sqlContext.read.load(full_uri,  
                                  format='com.databricks.spark.csv',
                                  header='true',
                                  inferSchema='true')

acme_df.printSchema()  
root
 |-- product: string (nullable = true)
 |-- product_desc: string (nullable = true)
 |-- product_type: string (nullable = true)
 |-- supplier: string (nullable = true)
 |-- date_launched: timestamp (nullable = true)
 |-- position: string (nullable = true)
 |-- url: string (nullable = true)
 |-- status: string (nullable = true)
 |-- reject_reason: string (nullable = true)

The above shows the schema of the dataframe; Spark has inferred this automagically from the column headers (for the column names), and then the data types from the data within (note that it has correctly detected a timestamp in the date_launched column).
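
As mentioned above, the Databricks CSV package is only needed on Spark 1.6. For reference, a minimal sketch of the equivalent read on Spark 2 (assuming a SparkSession named spark, and that the hadoop-aws JARs and AWS credentials are configured as earlier) would look roughly like this:

from pyspark.sql import SparkSession

# Spark 2 entry point; reuses an existing session if one is already running
spark = SparkSession.builder.getOrCreate()

# Spark 2's built-in CSV reader replaces com.databricks.spark.csv
acme_df = spark.read.csv(full_uri, header=True, inferSchema=True)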

Add country column to data frame

The filename of the source data includes a country field as part of it. Here we use this regular expression to extract it:

filename=os.path.split(acme_file)[1]  
import re  
m=re.search('acme_([^_]+)_.+$', filename)  
if m is None:  
    country='NA'
else:  
    country=m.group(1)

print "Country determined from filename '%s' as : %s" % (filename,country)  
Country determined from filename 'acme_GB_20160803_100000.csv' as : GB

With the country stored in a variable, we add it as a column to the data frame:

Note that the withColumn function requires a Column value, which we create here using the PySpark lit function that was imported earlier on.

acme_df=acme_df.withColumn('country',lit(country))

acme_df.printSchema()  
root
 |-- product: string (nullable = true)
 |-- product_desc: string (nullable = true)
 |-- product_type: string (nullable = true)
 |-- supplier: string (nullable = true)
 |-- date_launched: timestamp (nullable = true)
 |-- position: string (nullable = true)
 |-- url: string (nullable = true)
 |-- status: string (nullable = true)
 |-- reject_reason: string (nullable = true)
 |-- country: string (nullable = false)

Note the new column added to the end of the schema.

Deduplication

Now that we've imported the file, we need to deduplicate it to remove entries with the same value for the url field. Here I create a second dataframe based on a deduplication of the first, using the PySpark native function dropDuplicates:

acme_deduped_df = acme_df.dropDuplicates(['url'])  

For informational purposes we can see how many records are in the two dataframes, and determine how many duplicates there were:

orig_count = acme_df.count()  
deduped_count = acme_deduped_df.count()  
print "Original count: %d\nDeduplicated count: %d\n\n" % (orig_count,deduped_count)  
print "Number of removed duplicate records: %d" % (orig_count - deduped_count)  
Original count: 97974
Deduplicated count: 96706


Number of removed duplicate records: 1268

Deriving Domain from URL

One of the sets of reference data is information about the site on which the product was viewed. To bring these sets of attributes into the main dataset we join on the domain itself. To perform this join we need to derive the domain from the URL. We can do this using Python's urlsplit function (from the urlparse module imported earlier), as seen in this example:

sample_url = 'https://www.rittmanmead.com/blog/2016/08/using-apache-drill-with-obiee-12c/'

print sample_url  
print urlsplit(sample_url).netloc  
https://www.rittmanmead.com/blog/2016/08/using-apache-drill-with-obiee-12c/
www.rittmanmead.com

We saw above that to add a column to the dataframe the withColumn function can be used. However, to add a column that's based on another column (rather than on a literal, which is what the country column added above was) we need to use the udf function. This generates the necessary Column field based on the urlsplit output for the associated url value.

First we define our own function, which simply applies urlsplit to the value passed to it:

def getDomain(value):  
    return urlsplit(value).netloc

and then a UDF based on it:

udfgetDomain = udf(getDomain)  

Finally, apply this to a third version of the dataframe:

acme_deduped_df_with_netloc = acme_deduped_df.withColumn("netloc", udfgetDomain(acme_deduped_df.url))  

Joining to Second Dataset

Having prepared the primary dataset, we'll now join it to the reference data. The source of this is currently an Oracle database. For simplicity we're working with a CSV dump of the data, but PySpark supports the option to connect to sources with JDBC, so we could query it directly if required.
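
As a hedged sketch only (not part of the processing actually used here): reading the reference data over JDBC instead of from the CSV dump would look roughly like the following, assuming the Oracle JDBC driver JAR has been made available to Spark, and using placeholder connection details:

# Hypothetical JDBC read of the sites reference data (placeholder host, table, and credentials)
sites_jdbc_df = sqlContext.read.format('jdbc').options(
    url='jdbc:oracle:thin:@//dbhost:1521/ORCLPDB',   # placeholder connection string
    dbtable='SITES',                                 # placeholder table name
    user='ref_user',                                 # placeholder credentials
    password='ref_password',
    driver='oracle.jdbc.OracleDriver'
).load()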

First we import the sites reference data CSV:

sites_file = "s3n://{}/{}".format('foobar-bucket', 'sites.csv')  
sites_df = sqlContext.read.load(sites_file,  
                                  format='com.databricks.spark.csv',
                                  header='true',
                                  inferSchema='true')

Then some light data cleansing with the filter function to remove blank SITE entries, and blank SITE_RETAIL_TYPE entries:

sites_pruned_df = sites_df.filter("NOT (SITE ='' OR SITE_RETAIL_TYPE = '')")  

Now we can do the join itself. This joins the original dataset (acme_deduped_df_with_netloc) with the sites reference data (sites_pruned_df), using a left outer join.

merged_df = acme_deduped_df_with_netloc.join(sites_pruned_df,acme_deduped_df_with_netloc.netloc == sites_pruned_df.SITE, 'left_outer')  

Using the filter function and show we can inspect the dataset for matches, and misses:

First 10 matched:

merged_df.filter(merged_df.ID.isNotNull()).select('date_launched','url','netloc','ID','SITE','SITE_RETAIL_TYPE').show(10)  

First 10 unmatched:

merged_df.filter(merged_df.ID.isNull()).select('date_launched','url','netloc','ID','SITE','SITE_RETAIL_TYPE').show(10)  

Write Back to S3

The finished dataset is written back to S3. As before, we're using the Databricks CSV writer here, but in Spark 2 we would be doing it natively:

acme_enriched_filename='acme_enriched/%s' % filename.replace('.csv','')  
full_uri = "s3n://{}/{}".format(bucket_name, acme_enriched_filename)  
print 'Writing enriched acme data to %s' % full_uri  
merged_df.write.save(path=full_uri,  
                     format='com.databricks.spark.csv',
                     header='false',
                     nullValue='null',
                     mode='overwrite')
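
For reference, a minimal sketch of the equivalent write using Spark 2's built-in CSV support (assuming the same merged_df and full_uri as above) would be roughly:

# Spark 2's built-in CSV writer replaces com.databricks.spark.csv
merged_df.write.csv(full_uri, header=False, nullValue='null', mode='overwrite')
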
Summary

With the above code written, I could process input files in a couple of minutes per 30MB file. Bear in mind two important constraints to this performance:

  1. I was working with data residing up in the Amazon Cloud, with the associated network delay in transferring to and from it

  2. The processing was taking place on a single node Spark deployment (on my laptop, under virtualisation), rather than the multiple-node configuration typically seen.

The next steps, as we'll see in the next article, were to port this code up to Amazon Elastic Map Reduce (EMR). Stay tuned!

Footnote: Notebooks FTW!


Whilst I've taken the code and written it out above more in the form of a blog post, I could have actually just posted the Notebook itself, and it wouldn't have needed much more explanation. Here it is, along with some bonus bits on using S3 from Python:

Categories: BI & Warehousing

Does Oracle E-Business Suite Require Oracle Database Enterprise Edition?

Steven Chan - Fri, 2016-12-16 02:05

Yes.  All versions of Oracle E-Business Suite require Oracle Database Enterprise Edition.  Oracle Database Enterprise Edition is a mandatory requirement for EBS 12.2, 12.1 and older EBS releases. This applies to all Oracle Database releases, including Database 12.1, 11.2, and older Database releases certified with EBS.

It is not possible to run Oracle E-Business Suite with other Oracle Database editions, including:

  • Oracle Database Standard Edition One (Database SE1)
  • Oracle Database Standard Edition (Database SE)
  • Oracle Database Express Edition (Database XE)
  • Oracle Database Personal Edition (Database PE)

Why is Oracle Database Enterprise Edition a mandatory prerequisite for EBS today?

The Oracle E-Business Suite is designed to support a wide range of deployment architectures, from small single-office instances to highly-secure, highly-available Global Single Instance deployments.  To support these architectures, the E-Business Suite uses a broad range of database features and extra cost options such as Advanced Queuing, Data Guard, Virtual Private Databases, Secure Application Roles, Fine-Grained Auditing, Advanced Security, Label Security, Materialized Views, Database Vault, Partitioning, OLAP, Advanced Compression, Connection Manager, and more.

Will Oracle Database Enterprise Edition always be a mandatory prerequisite?

It is expected that Oracle Database Enterprise Edition will be a mandatory prerequisite for all future Oracle E-Business Suite releases and Database release combinations.

Enterprise Edition is the only Database edition that includes all of those components, and it is the only Database edition certified and supported for use with E-Business Suite environments.

The preceding is intended to outline our general product direction.  It is intended for information purposes only, and may not be incorporated into any contract.  It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.  The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.


Categories: APPS Blogs

RMAN MAXSETSIZE, MAXPIECESIZE, FILESPERSET Unveil

Michael Dinh - Thu, 2016-12-15 20:14

First, any backup implementation should have a recovery point objective and a recovery time objective.

Next, there’s no right or wrong, just what fits the requirements.

FILESPERSET controls the maximum number of files in each backupset.
MAXPIECESIZE controls the maximum size of a backuppiece.
MAXSETSIZE controls the maximum size of a backupset.

Typically, if MAXSETSIZE is set, it should be equal to or greater than the size of the largest data file.
In general, it is not recommended to set MAXSETSIZE.

A backupset can contain one or more backuppieces; hence, why set a limit on MAXSETSIZE when it may be better to set a limit on MAXPIECESIZE?

The justification for filesperset=1 is faster recovery, while filesperset>1 reduces the management overhead of backups, e.g. crosscheck.

Another consideration is backup versus restore frequency.

Here’s an analogy, you are at bank withdrawing $2000 and the teller ask, how do you want the bill?
Would you like 20 $100 bills or multiple small bills 1,5,10,20 or combination of there of?

Same concept applies to backupset and filesperset.

Note: FILESPERSET and MAXOPENFILES affect multiplexing, the number of buffers, and the size of each buffer; that is outside the scope of this blog post.

In the demo below, there is only one backuppiece for each backupset, and depending on MAXSETSIZE the number of backupsets created differs.

MAX DATAFILE SIZE IS 513M

ARROW1:(SYS@HAWKA):PRIMARY> select max(bytes)/1024/1024 from v$datafile;

MAX(BYTES)/1024/1024
--------------------
                 513

ARROW1:(SYS@HAWKA):PRIMARY> select name, bytes/1024/1024 from v$datafile;

NAME                                                    BYTES/1024/1024
------------------------------------------------------- ---------------
/oradata/HAWKA/datafile/o1_mf_system_d3q57w8h_.dbf                  513
/oradata/HAWKA/datafile/o1_mf_sysaux_d3q594c1_.dbf                  257
/oradata/HAWKA/datafile/o1_mf_undotbs_d3q59m1h_.dbf                 256
/oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf                   129
/oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf                    16

ARROW1:(SYS@HAWKA):PRIMARY>

Notice the distribution and size of datafiles in backupsets.

CONFIGURE MAXSETSIZE TO UNLIMITED

PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
4 BACKUPSETS CREATED FOR LEVEL0 (2 datafiles, 1 controlfile, 1 spfile)
4 BACKUPSETS CREATED FOR LEVEL1
8 BACKUPSETS TOTAL SIZE 413680k
---------------------------------------------------------------------------
Datafiles backupset 1 (181.24M)
  2    0  Incr 1706145    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_sysaux_d3q594c1_.dbf (257M)
  3    0  Incr 1706145    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_undotbs_d3q59m1h_.dbf(256M)
  4    0  Incr 1706145    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf  (129M)

NOTICE: sum of datafiles > 513M since maxsetsize is unlimited and backuppiece < 513M
Datafiles backupset 2 (190.24M)
  1    0  Incr 1706146    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_system_d3q57w8h_.dbf (513M)
  5    0  Incr 1706146    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf  ( 16M)

CONFIGURE MAXSETSIZE TO 513 M

PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
5 BACKUPSETS CREATED FOR LEVEL0 (3 datafiles, 1 controlfile, 1 spfile)
5 BACKUPSETS CREATED FOR LEVEL1
10 BACKUPSETS TOTAL SIZE 413808k
---------------------------------------------------------------------------
NOTICE: backupset is only 3.59M and no other datafiles can be considered since it would exceed maxsetsize of 513M
Datafiles backupset 1 (3.59M) 
  3    0  Incr 1706519    2016-DEC-14 19:42:36 /oradata/HAWKA/datafile/o1_mf_undotbs_d3q59m1h_.dbf (256M)
  4    0  Incr 1706519    2016-DEC-14 19:42:36 /oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf   (129M)

Before the backup, RMAN does not know datafile usage, and it would be too time consuming to check; only datafile size is checked.
RMAN performed NULL compression, since the backupset is 189M while the datafile is 513M.

NOTICE: sum of datafiles = 513M, maxsetsize is 513M, backuppiece < 513M
Datafiles backupset 2 (189.23M)
  1    0  Incr 1706518    2016-DEC-14 19:42:36 /oradata/HAWKA/datafile/o1_mf_system_d3q57w8h_.dbf  (513M)

Datafiles backupset 3 (178.68M)
  2    0  Incr 1706521    2016-DEC-14 19:42:37 /oradata/HAWKA/datafile/o1_mf_sysaux_d3q594c1_.dbf  (257M)
  5    0  Incr 1706521    2016-DEC-14 19:42:37 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf   ( 16M)

All the details.

CONFIGURE MAXSETSIZE TO UNLIMITED

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name HAWKA are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/oradata/backup/%d_%I_%T_%U.bkp' MAXPIECESIZE 513 M MAXOPENFILES 1;
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0.4/db_1/dbs/snapcf_HAWKA.f'; # default

RMAN> backup incremental level 0 database filesperset 4 tag 'L0_MAX';
RMAN> list backup summary tag L0_MAX;


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
213     B  0  A DISK        2016-DEC-14 19:38:43 1       1       NO         L0_MAX
214     B  0  A DISK        2016-DEC-14 19:38:43 1       1       NO         L0_MAX
215     B  0  A DISK        2016-DEC-14 19:38:45 1       1       NO         L0_MAX
216     B  0  A DISK        2016-DEC-14 19:38:45 1       1       NO         L0_MAX

RMAN> list backup tag L0_MAX;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
213     Incr 0  181.24M    DISK        00:00:02     2016-DEC-14 19:38:43
        BP Key: 243   Status: AVAILABLE  Compressed: NO  Tag: L0_MAX
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_6prnfjq1_1_1.bkp
  List of Datafiles in backup set 213
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  2    0  Incr 1706145    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_sysaux_d3q594c1_.dbf 
  3    0  Incr 1706145    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_undotbs_d3q59m1h_.dbf
  4    0  Incr 1706145    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf  
  
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
214     Incr 0  190.24M    DISK        00:00:02     2016-DEC-14 19:38:43
        BP Key: 244   Status: AVAILABLE  Compressed: NO  Tag: L0_MAX
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_6qrnfjq1_1_1.bkp
  List of Datafiles in backup set 214
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  1    0  Incr 1706146    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_system_d3q57w8h_.dbf
  5    0  Incr 1706146    2016-DEC-14 19:38:41 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
215     Incr 0  80.00K     DISK        00:00:00     2016-DEC-14 19:38:45
        BP Key: 245   Status: AVAILABLE  Compressed: NO  Tag: L0_MAX
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_6srnfjq5_1_1.bkp
  SPFILE Included: Modification time: 2016-DEC-14 19:02:12
  SPFILE db_unique_name: HAWKA

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
216     Incr 0  16.08M     DISK        00:00:00     2016-DEC-14 19:38:45
        BP Key: 246   Status: AVAILABLE  Compressed: NO  Tag: L0_MAX
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_6rrnfjq5_1_1.bkp
  Control File Included: Ckp SCN: 1706149      Ckp time: 2016-DEC-14 19:38:45

RMAN> backup incremental level 1 database filesperset 8 tag 'L1_MAX';
RMAN> list backup summary tag L1_MAX;

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
217     B  1  A DISK        2016-DEC-14 19:39:14 1       1       NO         L1_MAX
218     B  1  A DISK        2016-DEC-14 19:39:15 1       1       NO         L1_MAX
219     B  1  A DISK        2016-DEC-14 19:39:15 1       1       NO         L1_MAX
220     B  1  A DISK        2016-DEC-14 19:39:15 1       1       NO         L1_MAX

RMAN> list backup tag L1_MAX;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
217     Incr 1  40.00K     DISK        00:00:02     2016-DEC-14 19:39:14
        BP Key: 247   Status: AVAILABLE  Compressed: NO  Tag: L1_MAX
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_6urnfjr0_1_1.bkp
  List of Datafiles in backup set 217
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  1    1  Incr 1706213    2016-DEC-14 19:39:12 /oradata/HAWKA/datafile/o1_mf_system_d3q57w8h_.dbf
  5    1  Incr 1706213    2016-DEC-14 19:39:12 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
218     Incr 1  48.00K     DISK        00:00:03     2016-DEC-14 19:39:15
        BP Key: 248   Status: AVAILABLE  Compressed: NO  Tag: L1_MAX
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_6trnfjr0_1_1.bkp
  List of Datafiles in backup set 218
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  2    1  Incr 1706212    2016-DEC-14 19:39:12 /oradata/HAWKA/datafile/o1_mf_sysaux_d3q594c1_.dbf
  3    1  Incr 1706212    2016-DEC-14 19:39:12 /oradata/HAWKA/datafile/o1_mf_undotbs_d3q59m1h_.dbf
  4    1  Incr 1706212    2016-DEC-14 19:39:12 /oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
219     Incr 1  80.00K     DISK        00:00:00     2016-DEC-14 19:39:15
        BP Key: 249   Status: AVAILABLE  Compressed: NO  Tag: L1_MAX
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_70rnfjr3_1_1.bkp
  SPFILE Included: Modification time: 2016-DEC-14 19:02:12
  SPFILE db_unique_name: HAWKA

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
220     Incr 1  16.08M     DISK        00:00:00     2016-DEC-14 19:39:15
        BP Key: 250   Status: AVAILABLE  Compressed: NO  Tag: L1_MAX
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_6vrnfjr3_1_1.bkp
  Control File Included: Ckp SCN: 1706215      Ckp time: 2016-DEC-14 19:39:15

RMAN> exit

oracle@arrow1:HAWKA:/home/oracle
$ du -skc /oradata/backup/HAWK_3187737370_20161214*|sort -n
48      /oradata/backup/HAWK_3187737370_20161214_6urnfjr0_1_1.bkp
56      /oradata/backup/HAWK_3187737370_20161214_6trnfjr0_1_1.bkp
96      /oradata/backup/HAWK_3187737370_20161214_6srnfjq5_1_1.bkp
96      /oradata/backup/HAWK_3187737370_20161214_70rnfjr3_1_1.bkp
16480   /oradata/backup/HAWK_3187737370_20161214_6rrnfjq5_1_1.bkp
16480   /oradata/backup/HAWK_3187737370_20161214_6vrnfjr3_1_1.bkp
185604  /oradata/backup/HAWK_3187737370_20161214_6prnfjq1_1_1.bkp 
194820  /oradata/backup/HAWK_3187737370_20161214_6qrnfjq1_1_1.bkp
413680  total
oracle@arrow1:HAWKA:/home/oracle
$

MAX DATAFILE SIZE IS 513M AND CONFIGURE MAXSETSIZE TO 513 M

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name HAWKA are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/oradata/backup/%d_%I_%T_%U.bkp' MAXPIECESIZE 513 M MAXOPENFILES 1;
CONFIGURE MAXSETSIZE TO 513 M;
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0.4/db_1/dbs/snapcf_HAWKA.f'; # default

RMAN> backup incremental level 0 database filesperset 4 tag 'L0_513';
RMAN> list backup summary tag L0_513;

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
221     B  0  A DISK        2016-DEC-14 19:42:36 1       1       NO         L0_513
222     B  0  A DISK        2016-DEC-14 19:42:37 1       1       NO         L0_513
223     B  0  A DISK        2016-DEC-14 19:42:38 1       1       NO         L0_513
224     B  0  A DISK        2016-DEC-14 19:42:38 1       1       NO         L0_513
225     B  0  A DISK        2016-DEC-14 19:42:38 1       1       NO         L0_513

RMAN> list backup tag L0_513;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
221     Incr 0  3.59M      DISK        00:00:00     2016-DEC-14 19:42:36
        BP Key: 251   Status: AVAILABLE  Compressed: NO  Tag: L0_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_72rnfk1c_1_1.bkp
  List of Datafiles in backup set 221
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  3    0  Incr 1706519    2016-DEC-14 19:42:36 /oradata/HAWKA/datafile/o1_mf_undotbs_d3q59m1h_.dbf
  4    0  Incr 1706519    2016-DEC-14 19:42:36 /oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
222     Incr 0  189.23M    DISK        00:00:01     2016-DEC-14 19:42:37
        BP Key: 252   Status: AVAILABLE  Compressed: NO  Tag: L0_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_71rnfk1c_1_1.bkp
  List of Datafiles in backup set 222
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  1    0  Incr 1706518    2016-DEC-14 19:42:36 /oradata/HAWKA/datafile/o1_mf_system_d3q57w8h_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
223     Incr 0  178.68M    DISK        00:00:01     2016-DEC-14 19:42:38
        BP Key: 253   Status: AVAILABLE  Compressed: NO  Tag: L0_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_73rnfk1d_1_1.bkp
  List of Datafiles in backup set 223
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  2    0  Incr 1706521    2016-DEC-14 19:42:37 /oradata/HAWKA/datafile/o1_mf_sysaux_d3q594c1_.dbf
  5    0  Incr 1706521    2016-DEC-14 19:42:37 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
224     Incr 0  80.00K     DISK        00:00:00     2016-DEC-14 19:42:38
        BP Key: 254   Status: AVAILABLE  Compressed: NO  Tag: L0_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_75rnfk1e_1_1.bkp
  SPFILE Included: Modification time: 2016-DEC-14 19:02:12
  SPFILE db_unique_name: HAWKA

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
225     Incr 0  16.08M     DISK        00:00:01     2016-DEC-14 19:42:38
        BP Key: 255   Status: AVAILABLE  Compressed: NO  Tag: L0_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_74rnfk1d_1_1.bkp
  Control File Included: Ckp SCN: 1706522      Ckp time: 2016-DEC-14 19:42:37

RMAN> backup incremental level 1 database filesperset 8 tag 'L1_513';
RMAN> list backup summary tag L1_513;

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
226     B  1  A DISK        2016-DEC-14 19:44:20 1       1       NO         L1_513
227     B  1  A DISK        2016-DEC-14 19:44:20 1       1       NO         L1_513
228     B  1  A DISK        2016-DEC-14 19:44:22 1       1       NO         L1_513
229     B  1  A DISK        2016-DEC-14 19:44:23 1       1       NO         L1_513
230     B  1  A DISK        2016-DEC-14 19:44:23 1       1       NO         L1_513

RMAN> list backup tag L1_513;

List of Backup Sets
===================

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
226     Incr 1  56.00K     DISK        00:00:01     2016-DEC-14 19:44:20
        BP Key: 256   Status: AVAILABLE  Compressed: NO  Tag: L1_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_77rnfk4j_1_1.bkp
  List of Datafiles in backup set 226
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  3    1  Incr 1706668    2016-DEC-14 19:44:19 /oradata/HAWKA/datafile/o1_mf_undotbs_d3q59m1h_.dbf
  4    1  Incr 1706668    2016-DEC-14 19:44:19 /oradata/HAWKA/datafile/o1_mf_users_d3q5b4gw_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
227     Incr 1  32.00K     DISK        00:00:01     2016-DEC-14 19:44:20
        BP Key: 257   Status: AVAILABLE  Compressed: NO  Tag: L1_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_76rnfk4j_1_1.bkp
  List of Datafiles in backup set 227
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  1    1  Incr 1706667    2016-DEC-14 19:44:19 /oradata/HAWKA/datafile/o1_mf_system_d3q57w8h_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
228     Incr 1  96.00K     DISK        00:00:00     2016-DEC-14 19:44:22
        BP Key: 258   Status: AVAILABLE  Compressed: NO  Tag: L1_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_78rnfk4m_1_1.bkp
  List of Datafiles in backup set 228
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  2    1  Incr 1706671    2016-DEC-14 19:44:22 /oradata/HAWKA/datafile/o1_mf_sysaux_d3q594c1_.dbf
  5    1  Incr 1706671    2016-DEC-14 19:44:22 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
229     Incr 1  80.00K     DISK        00:00:00     2016-DEC-14 19:44:23
        BP Key: 259   Status: AVAILABLE  Compressed: NO  Tag: L1_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_7arnfk4n_1_1.bkp
  SPFILE Included: Modification time: 2016-DEC-14 19:02:12
  SPFILE db_unique_name: HAWKA

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
230     Incr 1  16.08M     DISK        00:00:01     2016-DEC-14 19:44:23
        BP Key: 260   Status: AVAILABLE  Compressed: NO  Tag: L1_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_79rnfk4m_1_1.bkp
  Control File Included: Ckp SCN: 1706671      Ckp time: 2016-DEC-14 19:44:22

RMAN> list backuppiece '/oradata/backup/HAWK_3187737370_20161214_73rnfk1d_1_1.bkp';

using target database control file instead of recovery catalog

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
253     223     1   1   AVAILABLE   DISK        /oradata/backup/HAWK_3187737370_20161214_73rnfk1d_1_1.bkp

RMAN> list backupset 223;

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
223     Incr 0  178.68M    DISK        00:00:01     2016-DEC-14 19:42:38
        BP Key: 253   Status: AVAILABLE  Compressed: NO  Tag: L0_513
        Piece Name: /oradata/backup/HAWK_3187737370_20161214_73rnfk1d_1_1.bkp
  List of Datafiles in backup set 223
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  2    0  Incr 1706521    2016-DEC-14 19:42:37 /oradata/HAWKA/datafile/o1_mf_sysaux_d3q594c1_.dbf
  5    0  Incr 1706521    2016-DEC-14 19:42:37 /oradata/HAWKA/datafile/o1_mf_users_d4gohzod_.dbf

RMAN>

$ du -skc /oradata/backup/HAWK_3187737370_20161214*|sort -n
40      /oradata/backup/HAWK_3187737370_20161214_76rnfk4j_1_1.bkp
64      /oradata/backup/HAWK_3187737370_20161214_77rnfk4j_1_1.bkp
96      /oradata/backup/HAWK_3187737370_20161214_75rnfk1e_1_1.bkp
96      /oradata/backup/HAWK_3187737370_20161214_7arnfk4n_1_1.bkp
104     /oradata/backup/HAWK_3187737370_20161214_78rnfk4m_1_1.bkp
3688    /oradata/backup/HAWK_3187737370_20161214_72rnfk1c_1_1.bkp
16480   /oradata/backup/HAWK_3187737370_20161214_74rnfk1d_1_1.bkp
16480   /oradata/backup/HAWK_3187737370_20161214_79rnfk4m_1_1.bkp
182976  /oradata/backup/HAWK_3187737370_20161214_73rnfk1d_1_1.bkp
193784  /oradata/backup/HAWK_3187737370_20161214_71rnfk1c_1_1.bkp
413808  total

Dataguard Oracle 12.2: Support for Multiple Automatic Failover Targets

Yann Neuhaus - Thu, 2016-12-15 15:41

In a previous blog, we talked about Fast-Start Failover with the MaxProtection mode.
With Oracle 12.2 Oracle Data Guard now supports multiple failover targets in a fast-start failover configuration.
Previous functionality allowed for only a single fast-start failover target. If the failover target was unable to meet the requirements for fast-start failover at the time of primary failure, then an automatic failover would not occur.
Designating multiple failover targets significantly improves the likelihood that there will always be a standby suitable for automatic failover when needed.
Multiple failover targets increase high availability by making an automatic failover more likely to occur if there is a primary outage.
Indeed, FastStartFailoverTarget can now take a single site as its value:

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE1';
Property "faststartfailovertarget" updated
DGMGRL>


DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

It can also take two sites as its value (priority follows the order listed):

DGMGRL> disable Fast_start Failover;
Disabled.
DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE1,ORCL_SITE2';
Property "faststartfailovertarget" updated
DGMGRL>


DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

It can also take the keyword ANY as its value (priority is determined by Oracle):

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'ORCL_SITE1' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'ORCL_SITE2' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated

With the value set to ANY, we can see that there is a failover target as well as candidate targets. If a failover occurs, ORCL_SITE1 will be used first; if it is not available, ORCL_SITE2 will be the target.

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

From the observer we can confirm that the target is ORCL_SITE1:

[W000 12/15 14:40:25.12] This observer is no longer registered with the configuration.
[W000 12/15 14:40:25.20] FSFO target standby is ORCL_SITE1
[W000 12/15 14:40:27.66] Observer trace level is set to USER
[W000 12/15 14:40:27.66] Try to connect to the primary.
[W000 12/15 14:40:27.66] Try to connect to the primary ORCL_SITE.
[W000 12/15 14:40:27.67] The standby ORCL_SITE1 is ready to be a FSFO target
[W000 12/15 14:40:29.67] Connection to the primary restored!

Now let's simulate the case where ORCL_SITE1 is no longer available. To do this we simply shut down the ORCL_SITE1 database; the failover target should then change to ORCL_SITE2.

SQL> shutdown abort
ORACLE instance shut down.
SQL>


DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE2
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

From the observer we can see the following messages, which confirm that the failover target was switched to ORCL_SITE2:

[W000 12/15 14:54:52.00] Permission granted to the primary database for target switch.
[W000 12/15 14:54:54.03] Standby database has changed to ORCL_SITE2.
[W000 12/15 14:54:54.04] Try to connect to the primary.
[W000 12/15 14:54:54.04] Try to connect to the primary ORCL_SITE.
[W000 12/15 14:54:55.48] The primary database returned to SYNC/NOT LAGGING state with the standby database ORCL_SITE2.
[W000 12/15 14:54:55.48] Connection to the primary restored!
[W000 12/15 14:54:58.48] Disconnecting from database ORCL_SITE.
[W000 12/15 14:55:04.52] The standby ORCL_SITE2 is ready to be a FSFO target

Conclusion

We have seen that we can now have multiple targets for fast-start failover, and how to configure them.

The post Dataguard Oracle 12.2: Support for Multiple Automatic Failover Targets appeared first on Blog dbi services.

This cloud program will help you comply with new standards for revenue recognition

Linda Fishman Hoyle - Thu, 2016-12-15 14:09

A Guest Post by Sr. Director Seamus Moran, Oracle Financials Product Management

It isn't often that heavy hitters like the US domestic standard setter, the Financial Accounting Standards Board (FASB), and its international counterpart, the International Accounting Standards Board (IASB), get together and redefine a business concept as significant as revenue.

But in 2014, they succeeded in doing just that.

Together, they published a converged and identical accounting standard named ASC 606 and IFRS 15.

It literally moved the business goal posts.

The new accounting standard is called Revenue from Contracts with Customers.

This new standard repeals everything we know about revenue: all of our favorite EITFs, SOPs, FAS, and even IAS 18.

At this point, you might be tempted to click away because of the technical acronyms, but please stay with me. It’s important that you understand the compliance issues you will be facing and how Oracle’s new Revenue Management Cloud can help.

In a nutshell, the new standard abolishes deferred revenue (unearned as of the date of the financial statements) in favor of an 'accrued debt to customers for goods and services,' that is, goods and services that you are obligated to deliver in the future.

That means you are required to estimate revenue up front and adjust the accrual afterwards. It introduces performance obligation accounting, where a performance obligation is a promise to transfer a good or service.

But at the end of the day, your sales are still your sales. What’s different is when those sales are booked as revenue.

You must take up the new standard on the first day of your fiscal year after January 1, 2018.

All public companies, here and abroad, are impacted to some degree. The standard deliberately does away with revenue guidance that is specific to an industry. Instead it upholds one way of recognizing revenue across the board.

The new definition is not difficult. Here’s a quick sequence of steps that you must follow:

  1. Identify each deal at inception
  2. Identify the promises to customers in that deal
  3. Value the deal at what you expect to get for it
  4. Assign that value to the promises using your standard stand-alone selling prices as a basis
  5. Recognize the revenue at that amount when you complete the transfer to the customer
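
As a purely hypothetical illustration of steps 4 and 5: if a $1,200 deal bundles a device with a standalone selling price of $1,000 and a year of support with a standalone selling price of $500, the $1,200 is allocated pro rata, $800 to the device (recognized when the device is delivered) and $400 to the support (recognized over the year of service).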

An important part of adopting the standard is recalculating the opening balance sheet.

The opening balance sheet must reflect your performance obligation liability, instead of the deferred revenue liability with which you ended the previous year.

Therefore, you have two options to explain the resulting impact on equity to your investors: 1) restate the previous years or 2) disclose the subsequent year. The hard part, most likely, will be setting up an opening position!

Revenue Management Cloud is the answer.

At OpenWorld 2016, Oracle announced that Revenue Management Cloud is generally available as part of Oracle ERP Cloud. As would be expected from Oracle, it closely follows the new standard.

  • It first imports sales cycle data from third-party products, from E-Business Suite, and from related cloud applications, and identifies performance obligations and contracts.
  • It includes a standalone selling price engine and facility to import estimated selling prices.
  • It books the accrual when either party acts, and it books the revenue when the performance obligation to the customer is satisfied.
  • It matches the billing to the revenue process.

The new standard creates a challenge for you; Oracle has the solution.

As noted above, the standard goes into effect January 1, 2018. So the coming year is the time and opportunity to get out in front of this.

Oracle can help you accelerate the transition by facilitating both a simulation of your compliance and the buildup of your opening balance using a special secondary ledger.

Revenue Management Cloud follows the standard directly so partnering with Oracle makes it easy to embrace ASC 606 and IFRS 15. This new cloud application is designed to deliver a valuable business benefit.

You can get more information on Oracle Revenue Management Cloud here.

Q2 SAAS AND PAAS CLOUD REVENUES UP 81%, AND UP 89% IN NON-GAAP CONSTANT CURRENCY

Oracle Press Releases - Thu, 2016-12-15 13:59
Press Release
Q2 SAAS AND PAAS CLOUD REVENUES UP 81%, AND UP 89% IN NON-GAAP CONSTANT CURRENCY

Total Quarterly Cloud Revenue $1.1 Billion—First Quarter Over One Billion Dollar Mark

Redwood Shores, Calif.—Dec 15, 2016

Oracle Corporation (NYSE: ORCL) today announced fiscal 2017 Q2 results. Total Revenues were $9.0 billion, flat in U.S. dollars and up 1% in constant currency. Non-GAAP Total Revenues were $9.1 billion, up 1% in U.S. dollars and up 2% in constant currency. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $878 million, up 81% in U.S. dollars and up 83% in constant currency. Non-GAAP SaaS and PaaS revenues were $912 million, up 87% in U.S. dollars and up 89% in constant currency. Total Cloud Revenues, including infrastructure as a service (IaaS), were $1.1 billion, up 62% in U.S. dollars and up 64% in constant currency. Cloud plus On-Premise Software Revenues were $7.2 billion, up 2% in U.S. dollars and up 3% in constant currency.

Operating Income was $3.0 billion and Operating Margin was 34%. Non-GAAP Operating Income was $3.8 billion and non-GAAP Operating Margin was 42%. Net Income was $2.0 billion while non-GAAP Net Income was $2.6 billion. Earnings Per Share was $0.48, while non-GAAP Earnings Per Share was $0.61. Without the impact of the U.S. dollar strengthening compared to foreign currencies and an unforeseen Egyptian currency exchange loss, Oracle’s reported GAAP and non-GAAP Earnings Per Share would have been 2 cents higher.

Short-term deferred revenues were $7.4 billion, up 6% in U.S. dollars and up 8% in constant currency compared with a year ago. Operating cash flow on a trailing twelve-month basis was $14.2 billion, up 9% from the prior year.

“For four consecutive quarters our Cloud SaaS & PaaS revenue growth rate has increased,” said Oracle CEO, Safra Catz. “As we get bigger in the cloud, we grow faster in the cloud. Our non-GAAP constant currency SaaS and PaaS growth rate is now up to 89%. This growth rate acceleration has driven our quarterly cloud revenue over the $1 billion mark. When salesforce.com crossed the billion dollar milestone their SaaS and PaaS subscription growth rate had slowed down to 36%, even after you include all their acquisitions.”

“Oracle has now passed salesforce.com and become number one in SaaS cloud applications sales to customers with over 1,000 employees according to the latest IDC report,” said Oracle CEO, Mark Hurd. “In other words, this year we are selling more enterprise SaaS than any cloud services provider in the world. We expect to book over $2 billion in new annually recurring cloud business this year alone. And, with the acquisition of NetSuite, we plan on being the #1 cloud applications service provider for companies with less than 1,000 employees as well.”

“Our Database as a Service cloud revenue was $100 million for the quarter, driving growth in our overall database business,” said Larry Ellison, Oracle Chairman and CTO. “We expect our Database as a Service and IaaS businesses will grow even faster than our skyrocketing SaaS business. A lot of people were taken by surprise when IDC ranked Oracle #1 in Enterprise SaaS, surpassing salesforce.com and overcoming their fifteen year head-start. Stay tuned. More surprises coming. I think we’re going to do even better with IaaS and the Oracle database in the cloud.”

The Board of Directors also declared a quarterly cash dividend of $0.15 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on January 5, 2017, with a payment date of January 26, 2017.

Q2 Fiscal 2017 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast of this event, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q2 results and Fiscal 2017 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 34339280.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

"Safe Harbor" Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding our expectations of future revenue growth and bookings in our cloud business overall, and specifically our IaaS and Database as a Service businesses, and our expectations regarding the positive effects of the acquisition of NetSuite on our business are all "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud computing strategy, including our Oracle Cloud SaaS, PaaS, IaaS and data as a service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, or to enhance and improve our products and support services in a timely manner or to position and/or price our products and services to meet market demand, customers may not buy new software licenses, cloud software subscriptions or hardware systems products or purchase or renew support contracts. (3) If the security measures for our software, hardware, services or Oracle Cloud offerings are compromised or if such offerings contain significant coding, manufacturing or configuration errors, we may experience reputational harm, legal claims and financial exposure. (4) We may fail to achieve our financial forecasts due to such factors as delays or size reductions in transactions, fewer large transactions in a particular quarter, fluctuations in currency exchange rates, delays in delivery of new products or releases or a decline in our renewal rates for support contracts. (5) Our international sales and operations subject us to additional risks that can adversely affect our operating results, including risks relating to foreign currency gains and losses. (6) Economic, geopolitical and market conditions, including the continued slow economic recovery in the U.S. and other parts of the world, can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) We have an active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (SEC) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 15, 2016. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Attend Oracle Code – an event for developers, by developers!

OTN TechBlog - Thu, 2016-12-15 13:45



Registration for the first two cities, San Francisco and New York, is now open. Oracle Code will feature tech experts discussing the latest insights, technology, and innovation; educational sessions on developing software in Java, Node.js, and other programming languages; and a hub where you can interact and exchange ideas with your peers.

Register today!

The Oracle Code Call for Papers is also open. Submit your proposal today!

HelloSign and Oracle Documents Cloud for the Rentals Industry

WebCenter Team - Thu, 2016-12-15 13:36

Authored by: Sarah Gabot, Demand Generation Manager, HelloSign

Landlords and property managers have to deal with a lot of paperwork. With residents moving in and out of units, the papers that each tenant signs can quickly stack up for anyone managing multiple renters. 

The great news is that, since the ESIGN Act of 2000, electronically signed documents have the same legal weight as a handwritten signature.

For anyone managing rentals, coupling Oracle Documents Cloud Service with HelloSign can be the perfect alternative to managing stacks of paperwork for your rental properties. Simply upload the documents into Oracle Documents Cloud. When you want to send a document out to a tenant for signature, you can do it straight from Oracle Documents Cloud using HelloSign for Oracle Documents Cloud.

Using Oracle Documents Cloud with HelloSign is perfect for: 
  • Rental leases
  • Renewals
  • Rental applications
  • Maintenance records
  • Guarantor agreements
Benefits of Moving to Oracle Documents Cloud and HelloSign

As a property manager or landlord, you’ll experience less clutter around your office space once you cut back on the amount of paper you use.

Some lesser-known benefits of moving to cloud storage with eSignatures include:
  • Signing can happen remotely. This can be great for anyone who has a cosigner or is trying to sign an agreement from afar. 
  • Better applicant experience. By allowing your renters to sign their documents electronically instead of having to shuffle through papers or sign in person, you’re providing them with a better experience overall. It also makes you look more professional and modern.
  • Faster turnaround. When you let your tenants sign their rental documents online, filling out the paperwork becomes faster and easier.
  • Increased paperwork accuracy. Isn’t it frustrating when someone accidentally misses a field on a form? HelloSign allows document preparers to create “required fields” that prevent signers from sending back the document without the field filled out. This prevents you from receiving incomplete paperwork. 
  • Save money. Every time you print copies of your agreements, applications, and records, you’re spending money on materials: paper, paper clips, folders, staples, etc. When you move to the cloud, you don’t have to worry about buying any of these items. 
Start Using HelloSign for Oracle Documents Cloud 

Want to start using HelloSign and Oracle Documents Cloud to help manage your rental properties? Learn more about how it works in our recent blog post or email us at oracle-sales@hellosign.com for more info. 

java.lang.SecurityException: The jurisdiction policy files are not signed by a trusted signer

Learn DB Concepts with me... - Thu, 2016-12-15 13:18


I was trying to install OID (Oracle Internet Directory) and got this error:

Problem:




        at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
        at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
        at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.SecurityException: Can not initialize cryptographic mechanism
        at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:88)
        ... 31 more
Caused by: java.lang.SecurityException: The jurisdiction policy files are not signed by a trusted signer!
        at javax.crypto.JarVerifier.verifyPolicySigned(JarVerifier.java:328)
        at javax.crypto.JceSecurity.loadPolicies(JceSecurity.java:317)
        at javax.crypto.JceSecurity.setupJurisdictionPolicies(JceSecurity.java:262)
        at javax.crypto.JceSecurity.access$000(JceSecurity.java:48)
        at javax.crypto.JceSecurity$1.run(JceSecurity.java:80)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:77)



Cause:

My current version of Java was 1.8.*, which is not fully supported.
In this case JDK 1.8.0_111 is installed on all nodes in the cluster, but the JCE local policy files from version 6 were being used for AES 256 Kerberos encryption. The JCE policy files must be in sync with the JDK version.
      
[oracle@linux06 jdk1.8.0_111]$ cd ..
[oracle@linux06 java]$ ls
default  jdk1.8.0_111  latest
[oracle@linux06 java]$ cd default/
[oracle@linux06 default]$ ls
bin        javafx-src.zip  man          THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT  jre             README.html  THIRDPARTYLICENSEREADME.txt
db         lib             release
include    LICENSE         src.zip
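
Before applying the fix (and again afterwards to verify it), a quick check of the JCE key-length limit shows which policy files are active. This is a minimal sketch using the standard javax.crypto API; the class name JceCheck is just for illustration:

import javax.crypto.Cipher;

public class JceCheck {
    public static void main(String[] args) throws Exception {
        // Maximum AES key length permitted by the active jurisdiction policy files.
        int maxKeyLength = Cipher.getMaxAllowedKeyLength("AES");

        // 128 means the default (restricted) policy files are active;
        // 2147483647 (Integer.MAX_VALUE) means the unlimited policy files are installed.
        System.out.println("Max allowed AES key length: " + maxKeyLength);
    }
}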


Solution:

Download the JCE Unlimited Strength Jurisdiction Policy Files for your JDK version:

For Java 6 use:
http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html

For Java 7 use:
http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html

For Java 8 use:
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html

********************    ********************    ********************
Update Java with the new unlimited jurisdiction policy files:
********************    ********************    ********************

After downloading and unzipping:

[oracle@linux06 JCE]$ unzip jce_policy-8.zip
Archive:  jce_policy-8.zip
   creating: UnlimitedJCEPolicyJDK8/
  inflating: UnlimitedJCEPolicyJDK8/local_policy.jar
  inflating: UnlimitedJCEPolicyJDK8/README.txt
  inflating: UnlimitedJCEPolicyJDK8/US_export_policy.jar
[oracle@linux06 JCE]$ ls -ll
total 16
-rw-rw-r--. 1 oracle oracle 8409 Dec 14 10:39 jce_policy-8.zip
drwxrwxr-x. 2 oracle oracle 4096 Dec 20  2013 UnlimitedJCEPolicyJDK8
[oracle@linux06 JCE]$ pwd
/u01/app/SFTW/JCE
[oracle@linux06 JCE]$ ls
jce_policy-8.zip  UnlimitedJCEPolicyJDK8
[oracle@linux06 JCE]$ cd UnlimitedJCEPolicyJDK8/
[oracle@linux06 UnlimitedJCEPolicyJDK8]$ ls
local_policy.jar  README.txt  US_export_policy.jar

********************    ********************    ********************
As the root user, back up and replace the files (US_export_policy.jar & local_policy.jar):
********************    ********************    ********************

[oracle@linux06 security]$ su root
Password:
[root@linux06 security]# ls
blacklist          java.policy    local_policy.jar
blacklisted.certs  java.security  trusted.libraries
cacerts            javaws.policy  US_export_policy.jar
[root@linux06 security]# cd /usr/java/default/jre/lib/security
[root@linux06 security]# mv US_export_policy.jar US_export_policy.jar_bak
[root@linux06 security]# mv local_policy.jar local_policy.jar_bak
[root@linux06 security]# ls -ll
total 164
-rw-r--r--. 1 root root   4054 Sep 22 18:23 blacklist
-rw-r--r--. 1 root root   1273 Sep 22 18:23 blacklisted.certs
-rw-r--r--. 1 root root 112860 Sep 22 18:23 cacerts
-rw-r--r--. 1 root root   2466 Sep 22 18:23 java.policy
-rw-r--r--. 1 root root  27358 Sep 22 18:23 java.security
-rw-r--r--. 1 root root     98 Sep 22 18:23 javaws.policy
-rw-r--r--. 1 root root   3405 Sep 22 18:35 local_policy.jar_bak
-rw-r--r--. 1 root root      0 Sep 22 18:23 trusted.libraries
-rw-r--r--. 1 root root   2920 Sep 22 18:35 US_export_policy.jar_bak
[root@linux06 security]# pwd
/usr/java/default/jre/lib/security
[root@linux06 security]# cp /u01/app/SFTW/JCE/UnlimitedJCEPolicyJDK8/US_export_policy.jar /usr/java/default/jre/lib/security
[root@linux06 security]# cp /u01/app/SFTW/JCE/UnlimitedJCEPolicyJDK8/local_policy.jar /usr/java/default/jre/lib/security
[root@linux06 security]# ls -ll
total 172
-rw-r--r--. 1 root root   4054 Sep 22 18:23 blacklist
-rw-r--r--. 1 root root   1273 Sep 22 18:23 blacklisted.certs
-rw-r--r--. 1 root root 112860 Sep 22 18:23 cacerts
-rw-r--r--. 1 root root   2466 Sep 22 18:23 java.policy
-rw-r--r--. 1 root root  27358 Sep 22 18:23 java.security
-rw-r--r--. 1 root root     98 Sep 22 18:23 javaws.policy
-rw-r--r--. 1 root root   3035 Dec 14 10:47 local_policy.jar
-rw-r--r--. 1 root root   3405 Sep 22 18:35 local_policy.jar_bak
-rw-r--r--. 1 root root      0 Sep 22 18:23 trusted.libraries
-rw-r--r--. 1 root root   3023 Dec 14 10:46 US_export_policy.jar
-rw-r--r--. 1 root root   2920 Sep 22 18:35 US_export_policy.jar_bak
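
With the unlimited policy jars in place, re-running the JceCheck sketch above should now print 2147483647 (Integer.MAX_VALUE) instead of 128, and the installer should no longer fail with the SecurityException.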
Categories: DBA Blogs

DRCP Connection Broker Info

Tom Kyte - Thu, 2016-12-15 09:06
Hi Tom, I'm a java developer, actually we are planning to implement DRCP for upcoming project. Regarding Database Resident Connection Pooling (DRCP) Actually UCP documentation provided the information like "how we can connect to Connection Br...
Categories: DBA Blogs

Rename a table with append

Tom Kyte - Thu, 2016-12-15 09:06
Hi, I have table a and say my user1 is inserting data into table a so my table a is having 100 rows I have table b and say my user2 is inserting data into table b so my table b is having 50 rows now I have user3 and I have table ab w...
Categories: DBA Blogs

Partitioning Decisions

Tom Kyte - Thu, 2016-12-15 09:06
Hi, we have table with below structure CREATE TABLE NSK_DETAILS ( ITEM VARCHAR2(50 CHAR), LOC VARCHAR2(50 CHAR), COUNTRY VARCHAR2(50 CHAR), SYS_DATE DATE ); I have data categorized into two country types New Zealand and Australia....
Categories: DBA Blogs

Identifying latest partition 2

Tom Kyte - Thu, 2016-12-15 09:06
Hi Chris, I tried using one of your suggestions as sub-query but seems the " partition for " does not support it. select partition_name from ( select distinct 'SCHEMA.'||table_name from dba_tab_partitions where table_owner= 'SCHEMA' and tab...
Categories: DBA Blogs

Which feature/option could be used for a specific database event?

Tom Kyte - Thu, 2016-12-15 09:06
Hi, the system shall log database security events in accordance with the Australian Government Information Security Manual (ISM:2015) Control 0987 event list: Database logging requirement Access to particularly sensitive information Addition of...
Categories: DBA Blogs

dequeue delay

Tom Kyte - Thu, 2016-12-15 09:06
Hello Tom, I have the folloging problem. My application accepts messages into queue and than dispatches (using subscriber agent) them to other system. Incomming messages can come in batches of 100 pieces in a short time and I want to dispatch them...
Categories: DBA Blogs

UTL_FILE package usage error

Tom Kyte - Thu, 2016-12-15 09:06
hi tom, I am using utl_file package to create a sample file in the respective directory and i am using oracle 11g in windows environment. Query: CREATE OR REPLACE DIRECTORY ORA_DIR AS 'C:\app\santh\product\11.2.0\dbhome_1'; ...
Categories: DBA Blogs

Default order of the child rows of a parent in a hierarchical query

Tom Kyte - Thu, 2016-12-15 09:06
Hi Tom!: In a heap table with hierarchical data, is it possible to affirm something on the default order of the child rows of a parent in a hierarchical query when the clause 'order siblings by' is not used? Thanks!
Categories: DBA Blogs

Oracle GoldenGate for Big Data 12.3.0.1 is Released

What’s new in Oracle GoldenGate for Big Data 12.3.0.1? · Generic JDBC Targets - Amazon Redshift & IBM Netezza certified. Oracle GoldenGate for Big Data 12.3.0.1 can...

We share our skills to maximize your revenue!
Categories: DBA Blogs
