Feed aggregator

FBDA -- 1 : Testing Flashback Data Archive in 12c (NonCDB)

Hemant K Chitale - Sat, 2016-04-02 10:53
Note : At the bottom of this post, you'll find links to more (subsequent) posts on this topic.

Some testing I'd done with Flashback Data Archive (henceforth called FBDA in this and subsequent posts, if any) in 11.2.0.4 left me with uncertainty about the automatic purging of data beyond the Retention Period.  I might return to testing 11.2.0.4, but here I shall begin testing in 12.1.0.2  (NonCDB).

Setting up FBDA :

[oracle@ora12102 ~]$ sqlplus system/oracle

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 2 23:23:53 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:20:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create tablespace fbda ;

Tablespace created.

SQL> create flashback archive fbda tablespace fbda retention 3 day;

Flashback archive created.

SQL> create tablespace hemant;

Tablespace created.

SQL> create user hemant identified by hemant
2 default tablespace hemant;

User created.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant create session to hemant;

Grant succeeded.

SQL> alter user hemant quota unlimited on hemant;

User altered.

SQL> alter user hemant quota unlimited on fbda;

User altered.

SQL> grant flashback archive administer to hemant;

Grant succeeded.

SQL> grant flashback archive on fbda to hemant;

Grant succeeded.

SQL>
SQL> connect / as sysdba
Connected.
SQL> grant execute on dbms_flashback_archive to hemant;

Grant succeeded.

SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table test_fbda (id_column number, data_column varchar2(15), date_inserted date) tablespace hemant;

Table created.

SQL> alter table test_fbda flashback archive fbda;

Table altered.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL>


Note that the Flashback Archive history table corresponding to TEST_FBDA doesn't get created immediately.

SQL> connect hemant/hemant
Connected.
SQL> insert into test_fbda
2 select rownum , to_char(rownum), trunc(sysdate)
3 from dual connect by level < 1001;

1000 rows created.

SQL> commit;

Commit complete.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> select flashback_archive_name, retention_in_days, status
2 from user_flashback_archive;

FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
RETENTION_IN_DAYS STATUS
----------------- -------
FBDA
3


SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables;

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA
FBDA
SYS_FBA_HIST_93250 ENABLED


SQL>
SQL> !sleep 300

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL>
SQL> select object_id
2 from user_objects
3 where object_name = 'TEST_FBDA'
4 and object_type = 'TABLE'
5
SQL> /

OBJECT_ID
----------
93250

SQL>


So, it took some time for the Flashback Archive history table (identified on the basis of the OBJECT_ID) to appear.  The background fbda process seems to run (wake up) every 5 minutes, although it may wake up more frequently if there is more activity in the database.

SQL> select trunc(date_inserted), count(*)
2 from test_fbda
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 1000

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL> update test_fbda
2 set data_column = data_column
3 where rownum < 451;

450 rows updated.

SQL> commit;

Commit complete.

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL>
SQL> !sleep 180

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 450

SQL>
SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted
4 order by 1;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
DATE_INSE COUNT(*)
--------- ----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
02-APR-16 450


SQL>


Notice that not all 1000 rows got copied to the FBDA.  Only the 450 rows that I updated were copied in, and they are tracked by SCN and Timestamp.  (The "DATE_INSERTED" column is my own date column; Oracle wouldn't use that column to track DML dates for rows, as the values in it are controlled by me -- the application or developer -- not Oracle).

Note :  The History Table is not supposed to be directly queried in the manner I have shown here.
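The supported route to the historical data is a Flashback Query against the base table.  As a quick sketch (the one-hour offset is only an example), something like this lets Oracle decide when to read from the History Table :

SQL> select id_column, data_column, date_inserted
  2  from test_fbda as of timestamp (systimestamp - interval '1' hour)
  3  where id_column = 1;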

Tomorrow :  More Rows, and some DDLs as well.

Post 2 here.  It covers some of the architectural components and support for ADD / DROP column DDL.

Post 3 here.  It shows support for TRUNCATEs.

Post 4 here.  On Partitions and Indexes.

Post 5 here.  On (Auto)Purging.

Post 6 here.  On Bug Notes

Post 7 here.  Maintaining the Partitioned Source Table
.
.
.


Categories: DBA Blogs

Welcome New Oracle ACE Program Participants!

OTN TechBlog - Fri, 2016-04-01 20:03

Congratulations to our newest Oracle ACEs!  The Oracle ACE Program recognizes individuals for their outstanding contribution to the Oracle community. With over 500 participants in more than 60 countries around the globe, the ACE program features three tiers: Associate, ACE and ACE Director, to support community advocates at all stages of their career. Read more about the program here.

Oracle ACEs initiated in March:

Oracle ACE Directors:

Gokhan Atil

John Booth

Ludovico Caldara

Celvin Kattookaran

Carlos Sierra

Oracle ACEs:

Jérôme Françoisse

Anju Garg

Laurent Leturgez

Emrah Mete

Anuj Mohan

Robert van Molken

Oracle ACE Associates:

Elise Valin-Raki


sql for first day of month and last day of month

Learn DB Concepts with me... - Fri, 2016-04-01 20:00

select SYSDATE ,
last_day(sysdate) as LAST_DATE_CURR_MNTH,
ADD_MONTHS(last_day(sysdate),-1) as PREVIOUS_MON_LAST_DATE,
last_day(sysdate)+1 as NEXT_MON_FIRST_DATE,
ADD_MONTHS(last_day(sysdate),+1) as NEXT_MON_LAST_DATE,
ADD_MONTHS(last_day(sysdate),+5) as LAST_DATE_OF_5TH_MON,
ADD_MONTHS(last_day(sysdate),+5) +1 as FIRST_DATE_IN_6TH_MON_AFTR_NOW  
from dual;

"SYSDATE"    "LAST_DATE_CURR_MNTH"    "PREVIOUS_MON_LAST_DATE"    "NEXT_MON_FIRST_DATE"    "NEXT_MON_LAST_DATE"    "LAST_DATE_OF_5TH_MON"   
-----------  ---------------------   ------------------------    ---------------------   --------------------    -----------------------
"FIRST_DATE_IN_6TH_MON_AFTR_NOW"
-----------------------

01-APR-16        30-APR-16                31-MAR-16                    01-MAY-16                31-MAY-16            30-SEP-16   
-----------------------
01-OCT-16
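
The title also mentions the first day of the month; a possible companion query (not part of the original post) gets the first day of the current and the next month with TRUNC:

select trunc(sysdate,'MM') as FIRST_DATE_CURR_MNTH,
       add_months(trunc(sysdate,'MM'),1) as FIRST_DATE_NEXT_MNTH
from dual;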
Categories: DBA Blogs

compute as a service

Pat Shuff - Fri, 2016-04-01 14:42
In an ongoing learning journey of trying to understand cloud services, I got accounts on the Amazon cloud, Azure cloud, and Oracle cloud. I thought I would start with the basics and grow from there. As an exercise, let's create a Linux server with no software installed on each of the three platforms. Apart from creating an account on all three platforms (which was non-trivial), creation of a compute server on each platform was relatively simple.

Amazon Web Services

The initial look and feel of the console starts the experience. It does show what the three companies are focused on. Let's start with Amazon (it is first in the alphabet and I had to pick something). The console lists a wide variety of services and things that you can purchase. Without doing research I would not have known that S3 stands for storage and EC2 stands for compute.

I get what a virtual server in the cloud is but how does that differ from a docker container and why should I care? Why should I care about managing Web Apps if I am just looking for raw compute? Why do I want to run code outside of a virtual machine? Which one should I choose? We are not going to go into depth on any of these subjects. If we are just looking at running a Linux instance, the simple EC2 should be adequate. We can install Docker as a package in our Linux instance to help us control how much of a processor is allocated to a service or program. We can install applications like Tomcat or WebLogic to run Web Apps. Linux gives us the foundation to do all of this with packages. Lambda is a totally different beast in that I can run code snippets to do things like voice command interpretation for an Amazon Echo or asynchronous events from devices and launch web sites or REST apis without having to install, manage, and configure an operating system. The rest of the world calls this a Node.js function and offers it as a separate service as well. I realize that I am oversimplifying this but you have to know what you are trying to accomplish before you start to create your first compute instance in the cloud.

Microsoft Azure Services

The Azure services are a little different in that they focus more on the user creation of virtual machines, SQL server, and some app services. Creation of a virtual image is relatively easy and it makes sense what you are doing. The console is relatively simple and clean with more options on the second page instead of the first page as is done with Amazon.

As you click on the Add button for Virtual Image, you get an expanded set of operating system options and configurations.

Note that you can search for Oracle Linux and get a listing of various versions of the database. The virtual machine is easy to configure and create using the portal. If, however, you want to configure and create this via a command line, you need to download the PowerShell and run everything inside the application. The command line is Microsoft specific and difficult to port and migrate to other services. With Amazon and Oracle you can easily use REST API calls to provision and create services. Microsoft makes it a little more difficult to script generically, but you can easily do this in their shell and language.

Oracle Cloud Services

The Oracle cloud compute services are new to the market. In the past, compute services have been sold in bundles of 500 processors, but have recently been made available in single processor consumption models. The cloud console has a different look and feel because the focus of the cloud services is more on the PaaS layer and less on the compute and storage layers.

Note that the screen shot starts with the storage and compute services but scrolling down shows database, java, SOA, and more PaaS layers.

To create a virtual image, you need to click on the compute cloud service - Service Console and it will allow you to create an instance. The operating system selection is not as graphical or user friendly as the Microsoft interface but does list a variety of operating system options and configurations.

In conclusion, all three of these cloud consoles allow you to create a virtual image. In the next blog entry we will walk through the steps needed to create a Linux 6 instance on each of the three cloud platforms. We will not talk about how to create accounts. We will assume that you can find account setup and creation on your own. All three sites offer "try me" services that give at least 30-day evaluations. The eventual recommendation will be to use services like bitnami.com that take public domain services like LAMP servers, Wiki engines, blog servers, and other public domain tools. The Bitnami site allows you to select a pre-configured instance and provision it into all three of these cloud services along with a few other cloud providers.

Best practice for setting up MySQL replication filters

Pythian Group - Fri, 2016-04-01 14:23

It is not uncommon that we need to filter out some DBs or tables while setting up replication. It is important to understand how MySQL evaluates and processes the replication filtering rules, to avoid conflicts or confusion while setting them up. The purpose of this blog is to illustrate the rules and provide some suggestions for best practice.

MySQL provides 3 levels of filters for setting up replication: binary log, DB and table. The binlog filters apply on the master to control how changes are logged; since MySQL replication is based on the binlog, they form the first-level filter and have the highest priority. The DB-level and Table-level filters apply on the slaves, and since each table belongs to a schema, the DB-level filters have higher priority than the Table-level ones. Within the Table-level filters, MySQL evaluates the options in this order: --replicate-do-table, --replicate-ignore-table, --replicate-wild-do-table, --replicate-wild-ignore-table.

Based on that, we have the following suggestions for setting up MySQL replication filter as best practice:

I) Do not set up any binlog-level filters unless you really need to and can afford to lose the chance of having an extra full copy of data changes for the master.

II) In DB-level filters, use either one or none of the two options: --replicate-do-db or --replicate-ignore-db. Never use both at the same time.

III) While using binlog_format='statement' OR 'mixed' (in mixed mode, if a transaction is deterministic then it will be stored in statement format) with --replicate-do-db or --replicate-ignore-db set up on the slaves, make sure you never make changes to tables outside the default database on the master; otherwise you might lose the changes on the slave because the default database does not match.

IV) In Table-level filters, use only one of the 4 options, or use the following combination: --replicate-ignore-table and --replicate-wild-do-table, to avoid conflicts and confusion (see the my.cnf sketch after the Galera note below).

MariaDB replication filters within a Galera cluster should be used with caution. As a general rule, except for InnoDB DML updates, the following replication filters are not honored in a Galera cluster: binlog-do-db, binlog-ignore-db, replicate-wild-do-db, replicate-wild-ignore-db. However, the replicate-do-db and replicate-ignore-db filters are honored for DDL and DML for both InnoDB & MyISAM engines, and they might create discrepancies and replication may abort (see MDEV-421, MDEV-6229, and https://mariadb.com/kb/en/mariadb/mariadb-galera-cluster-known-limitations/). For slaves replicating from the cluster, the rules are similar to the normal replication settings above.
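
As a concrete illustration of suggestion IV), a slave's my.cnf could combine those two table-level options as below. This is only a sketch; the schema and table names are hypothetical and not taken from the examples later in this post:

# slave my.cnf -- hypothetical names
replicate-wild-do-table = shop.%
replicate-ignore-table  = shop.debug_log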

Here are the details/reasons:

1) Binlog-level filters

A) How MySQL processes the Binlog-level filters

There are 2 options for setting the binlog filter on the master: --binlog-do-db and --binlog-ignore-db. MySQL will check --binlog-do-db first; if it is set, MySQL will apply this one and ignore --binlog-ignore-db. If --binlog-do-db is NOT set, then MySQL will check --binlog-ignore-db. If both of them are empty, it will log changes for all DBs.

See the examples below. In scenario 1) no binlog-level filters are set, so all changes were logged. In scenario 2) --binlog-do-db and --binlog-ignore-db are both set to m_test; changes on the DB m_test were logged and changes on the DB test were NOT logged. In scenario 3) only --binlog-ignore-db is set to m_test, so changes on the DB m_test were NOT logged and changes on the DB test were logged.
Scenario 1) --binlog-do-db and --binlog-ignore-db are NOT set:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000003 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> show binlog events in "vm-01-bin.000003" from 120;
Empty set (0.00 sec)

mysql> insert into t1(id,insert_time) values(10,now());
Query OK, 1 row affected (0.05 sec)

mysql> show binlog events in "vm-01-bin.000003" from 120;
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                          |
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
| vm-01-bin.000003 | 120 | Query      |         1 |         211 | BEGIN                                                         |
| vm-01-bin.000003 | 211 | Query      |         1 |         344 | use `m_test`; insert into t1(id,insert_time) values(10,now()) |
| vm-01-bin.000003 | 344 | Xid        |         1 |         375 | COMMIT /* xid=17 */                                           |
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
3 rows in set (0.00 sec)

Scenario 2) --binlog-do-db=m_test and --binlog-ignore-db=m_test:

-- insert into tables of DB m_test was logged

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000004 |      656 | m_test       | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

 

mysql> use m_test

mysql> insert into t1(insert_time) values(now());
Query OK, 1 row affected (0.02 sec)

mysql> show binlog events in "vm-01-bin.000004" from 656;
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                     |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
| vm-01-bin.000004 | 656 | Query      |         1 |         747 | BEGIN                                                    |
| vm-01-bin.000004 | 747 | Intvar     |         1 |         779 | INSERT_ID=13                                             |
| vm-01-bin.000004 | 779 | Query      |         1 |         906 | use `m_test`; insert into t1(insert_time) values(now()) |
| vm-01-bin.000004 | 906 | Xid        |         1 |         937 | COMMIT /* xid=26 */                                      |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
4 rows in set (0.00 sec)

-- insert into tables of DB test was NOT logged

mysql> use test;

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000004 |      937 | m_test       | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+

mysql> insert into t1(`a`) values('ab');
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000004" from 937;
Empty set (0.00 sec)

 

Scenario 3) --binlog-do-db not set and --binlog-ignore-db=m_test:

mysql> use m_test
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000005 |      120 |              | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+

mysql> insert into t1(insert_time) values(now());
Query OK, 1 row affected (0.01 sec)

mysql> show binlog events in "vm-01-bin.000005" from 120;
Empty set (0.00 sec)

mysql> use test
mysql> insert into t1(`a`) values('ba');
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000005" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                         |
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
| vm-01-bin.000005 | 120 | Query      |         1 |         199 | BEGIN                                        |
| vm-01-bin.000005 | 199 | Query      |         1 |         305 | use `test`; insert into t1(`a`) values('ba') |
| vm-01-bin.000005 | 305 | Xid        |         1 |         336 | COMMIT /* xid=22 */                          |
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
3 rows in set (0.00 sec)

 

B) Best practice for setting up the Binlog-level filters

So, for the Binlog-level filter, use at most one of the 2 options: --binlog-do-db to make MySQL log changes only for the DBs in the list, OR --binlog-ignore-db to make MySQL log changes only for the DBs NOT in the list. Or leave both of them empty to log changes for all the DBs.

However, we usually recommend NOT setting up any binlog-level filters. The reason is that logging changes for all DBs and setting up filters only on the slaves achieves the same purpose, while leaving us an extra full copy of data changes on the master in case we need it for recovery.

 

2) DB-level filters

A) How MySQL processes the DB-level filters

There are 2 options for setting DB-level filters: --replicate-do-db or --replicate-ignore-db. MySQL processes these two filters in a similar way to the Binlog-level filters; the difference is that they apply ONLY on the slaves and so affect how the slaves replicate from their master. MySQL will check --replicate-do-db first; if it is set, it will replicate the DBs in the list and ignore --replicate-ignore-db. If --replicate-do-db is NOT set, then MySQL will check --replicate-ignore-db and replicate all the DBs except for the ones in this list. If both of them are empty, it will replicate all the DBs. You can find the process in the chart at http://dev.mysql.com/doc/refman/5.7/en/replication-rules-db-options.html

There is a trick for DB-level filters, though, if binlog_format is set to statement or mixed. (binlog_format=mixed also applies here because, in mixed-mode replication, a deterministic transaction is resolved to statement format, which is equivalent to statement mode.) Since "With statement-based replication, the default database is checked for a match" (http://dev.mysql.com/doc/refman/5.7/en/replication-rules-db-options.html), if you set up --replicate-do-db and you update a table outside the default database on the master, the update statement will not be replicated if the default database you are running the command from is not in --replicate-do-db. For example, there are 2 DBs on the master, you set binlog_format='statement' OR 'mixed' and set --replicate-do-db=DB1 on the slave. When you execute the following commands: use DB2; update DB1.t1 ... the update command will not be executed on the slave. To get the update statement replicated to the slave, you need to do: use DB1; update t1 ...

For example: with binlog_format=statement or binlog_format=mixed, we insert into m_test.t1 in two ways, one using m_test as the default DB and the other using test as the default DB; both changes are logged on the master. But on the slave, after it caught up, only the insert with default DB m_test was replicated, and the insert with default DB test was NOT replicated, as shown below:
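
To restate that rule in code form (a minimal sketch; the column c1 is hypothetical and not part of the test tables used below):

-- slave is configured with --replicate-do-db=DB1; master uses binlog_format=STATEMENT
-- (or MIXED resolving to statement)
use DB2;
update DB1.t1 set c1 = c1;   -- NOT applied on the slave: the default database is DB2
use DB1;
update t1 set c1 = c1;       -- applied on the slave: the default database is DB1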

Scenario 1) binlog_format=statement

In master: insert into m_test.t1 in two ways, one using m_test as the default DB and the other using test as the default DB; both changes are logged

mysql> use m_test

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> delete from t1;

Query OK, 16 rows affected (0.02 sec)

 

mysql> select * from m_test.t1;

Empty set (0.00 sec)

 

mysql> use m_test

Database changed

mysql> insert into m_test.t1(insert_time) values(now());

Query OK, 1 row affected (0.04 sec)

 

mysql> use test;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> insert into m_test.t1(insert_time) values(now());

Query OK, 1 row affected (0.03 sec)

 

mysql> show binlog events in "vm-01-bin.000006" from 654;
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos  | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000006 |  654 | Xid        |         1 |         685 | COMMIT /* xid=39 */                                            |
| vm-01-bin.000006 |  685 | Query      |         1 |         768 | BEGIN                                                          |
| vm-01-bin.000006 |  768 | Query      |         1 |         860 | use `m_test`; delete from t1                                   |
| vm-01-bin.000006 |  860 | Xid        |         1 |         891 | COMMIT /* xid=48 */                                            |
| vm-01-bin.000006 |  891 | Query      |         1 |         982 | BEGIN                                                          |
| vm-01-bin.000006 |  982 | Intvar     |         1 |        1014 | INSERT_ID=17                                                   |
| vm-01-bin.000006 | 1014 | Query      |         1 |        1148 | use `m_test`; insert into m_test.t1(insert_time) values(now()) |
| vm-01-bin.000006 | 1148 | Xid        |         1 |        1179 | COMMIT /* xid=52 */                                            |
| vm-01-bin.000006 | 1179 | Query      |         1 |        1268 | BEGIN                                                          |
| vm-01-bin.000006 | 1268 | Intvar     |         1 |        1300 | INSERT_ID=18                                                   |
| vm-01-bin.000006 | 1300 | Query      |         1 |        1432 | use `test`; insert into m_test.t1(insert_time) values(now())   |
| vm-01-bin.000006 | 1432 | Xid        |         1 |        1463 | COMMIT /* xid=60 */                                            |
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
12 rows in set (0.00 sec)

 

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 18 | 2016-03-20 15:00:01 |
+----+---------------------+
2 rows in set (0.00 sec)

 

In slave: after it caught up, only the first insert (default DB is m_test) was replicated to the slave, and the second insert (default DB is test) was NOT replicated

mysql> show slave status\G

*************************** 1. row ***************************

              Slave_IO_State: Waiting for master to send event

                 Master_Host: 10.0.2.6

                 Master_User: repl

                 Master_Port: 3306

               Connect_Retry: 10

             Master_Log_File: vm-01-bin.000006

         Read_Master_Log_Pos: 1463

              Relay_Log_File: ewang-vm-03-relay-bin.000017

               Relay_Log_Pos: 1626

       Relay_Master_Log_File: vm-01-bin.000006

            Slave_IO_Running: Yes

           Slave_SQL_Running: Yes

             Replicate_Do_DB: m_test

         Replicate_Ignore_DB:

          Replicate_Do_Table:

      Replicate_Ignore_Table:

     Replicate_Wild_Do_Table:

 Replicate_Wild_Ignore_Table:

                  Last_Errno: 0

                  Last_Error:

                Skip_Counter: 0

         Exec_Master_Log_Pos: 1463

             Relay_Log_Space: 1805

             Until_Condition: None

              Until_Log_File:

               Until_Log_Pos: 0

          Master_SSL_Allowed: No

          Master_SSL_CA_File:

          Master_SSL_CA_Path:

             Master_SSL_Cert:

           Master_SSL_Cipher:

              Master_SSL_Key:

       Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

               Last_IO_Errno: 0

               Last_IO_Error:

              Last_SQL_Errno: 0

              Last_SQL_Error:

 Replicate_Ignore_Server_Ids:

            Master_Server_Id: 1

                 Master_UUID: a22b3fb2-5e70-11e5-b55a-0800279d00c5

            Master_Info_File: /mysql/data/master.info

                   SQL_Delay: 0

         SQL_Remaining_Delay: NULL

     Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it

          Master_Retry_Count: 86400

                 Master_Bind:

     Last_IO_Error_Timestamp:

    Last_SQL_Error_Timestamp:

              Master_SSL_Crl:

          Master_SSL_Crlpath:

          Retrieved_Gtid_Set:

           Executed_Gtid_Set:

               Auto_Position: 0

1 row in set (0.00 sec)

 

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
+----+---------------------+
1 row in set (0.00 sec)

 

Scenario 2) binlog_format=mixed

In master:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+
1 row in set (0.00 sec)

 

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000007 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

 

mysql> use m_test

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> insert into m_test.t1(insert_time) values(now());

Query OK, 1 row affected (0.04 sec)

 

mysql> use test;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> insert into m_test.t1(insert_time) values(now());

Query OK, 1 row affected (0.04 sec)

 

mysql> show binlog events in "vm-01-bin.000007" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000007 | 120 | Query      |         1 |         211 | BEGIN                                                          |
| vm-01-bin.000007 | 211 | Intvar     |         1 |         243 | INSERT_ID=19                                                   |
| vm-01-bin.000007 | 243 | Query      |         1 |         377 | use `m_test`; insert into m_test.t1(insert_time) values(now()) |
| vm-01-bin.000007 | 377 | Xid        |         1 |         408 | COMMIT /* xid=45 */                                            |
| vm-01-bin.000007 | 408 | Query      |         1 |         497 | BEGIN                                                          |
| vm-01-bin.000007 | 497 | Intvar     |         1 |         529 | INSERT_ID=20                                                   |
| vm-01-bin.000007 | 529 | Query      |         1 |         661 | use `test`; insert into m_test.t1(insert_time) values(now())   |
| vm-01-bin.000007 | 661 | Xid        |         1 |         692 | COMMIT /* xid=53 */                                            |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
8 rows in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 18 | 2016-03-20 15:00:01 |
| 19 | 2016-03-20 15:09:14 |
| 20 | 2016-03-20 15:09:25 |
+----+---------------------+
4 rows in set (0.00 sec)

 

In slave:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+
1 row in set (0.00 sec)

 

mysql> show slave status\G

*************************** 1. row ***************************

              Slave_IO_State: Waiting for master to send event

                 Master_Host: 10.0.2.6

                 Master_User: repl

                 Master_Port: 3306

               Connect_Retry: 10

             Master_Log_File: vm-01-bin.000007

         Read_Master_Log_Pos: 692

              Relay_Log_File: ewang-vm-03-relay-bin.000023

               Relay_Log_Pos: 855

       Relay_Master_Log_File: vm-01-bin.000007

            Slave_IO_Running: Yes

           Slave_SQL_Running: Yes

             Replicate_Do_DB: m_test

         Replicate_Ignore_DB:

          Replicate_Do_Table:

      Replicate_Ignore_Table:

     Replicate_Wild_Do_Table:

 Replicate_Wild_Ignore_Table:

                  Last_Errno: 0

                  Last_Error:

                Skip_Counter: 0

         Exec_Master_Log_Pos: 692

             Relay_Log_Space: 1034

             Until_Condition: None

              Until_Log_File:

               Until_Log_Pos: 0

          Master_SSL_Allowed: No

          Master_SSL_CA_File:

          Master_SSL_CA_Path:

             Master_SSL_Cert:

           Master_SSL_Cipher:

              Master_SSL_Key:

       Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

               Last_IO_Errno: 0

               Last_IO_Error:

              Last_SQL_Errno: 0

              Last_SQL_Error:

 Replicate_Ignore_Server_Ids:

            Master_Server_Id: 1

                 Master_UUID: a22b3fb2-5e70-11e5-b55a-0800279d00c5

            Master_Info_File: /mysql/data/master.info

                   SQL_Delay: 0

         SQL_Remaining_Delay: NULL

     Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it

          Master_Retry_Count: 86400

                 Master_Bind:

     Last_IO_Error_Timestamp:

    Last_SQL_Error_Timestamp:

              Master_SSL_Crl:

          Master_SSL_Crlpath:

          Retrieved_Gtid_Set:

           Executed_Gtid_Set:

               Auto_Position: 0

1 row in set (0.00 sec)

 

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 19 | 2016-03-20 15:09:14 |
+----+---------------------+
2 rows in set (0.00 sec)

 

B) Best practice for setting up the DB-level filters

Use either one or none of the two options: --replicate-do-db or --replicate-ignore-db. Never use both at the same time.

If you use binlog_format='statement' OR 'mixed' and set up --replicate-do-db or --replicate-ignore-db on the slaves, make sure you never make changes to tables outside the default database; otherwise data discrepancies are to be expected on the slaves.
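
For instance, a slave that should only apply changes for a single schema could set just the one option (a sketch; the schema name is hypothetical):

# slave my.cnf -- hypothetical schema name
replicate-do-db = shop
# Using binlog_format=ROW on the master also sidesteps the default-database
# pitfall above, because row events carry the actual schema of the changed table.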

 

3) Table-level filters

There are 4 options for setting Table-level filters: --replicate-do-table, --replicate-ignore-table, --replicate-wild-do-table and --replicate-wild-ignore-table. MySQL evaluates the options in that order. You can find the process in the chart at http://dev.mysql.com/doc/refman/5.6/en/replication-rules-table-options.html

The chart shows that MySQL will first check --replicate-do-table; the tables listed there will be replicated and so won't be ignored by the later options such as --replicate-ignore-table or --replicate-wild-ignore-table. Then MySQL will check --replicate-ignore-table; the tables listed there will be ignored even if they show up in the later option --replicate-wild-do-table. The lowest priority belongs to --replicate-wild-ignore-table.

B) Best practice for setting up the Table-level filters

Because of the priorities of the 4 Table-level options, and to avoid confusion and conflicts, we suggest using only one of the 4 options, or using the following two together: --replicate-ignore-table and --replicate-wild-do-table, so that it is clear that the tables in --replicate-ignore-table will be ignored and the tables in --replicate-wild-do-table will be replicated.

 

Categories: DBA Blogs

Deploying your Oracle MAF Apps on Windows Platform

As you may already know, Oracle Mobile Application Framework (MAF) 2.3 has been released, and one of the notable features is support for the Universal Windows Platform (UWP). This means that starting...

Categories: DBA Blogs

What Are Your Options For Migrating Enterprise Applications to the Cloud?

Pythian Group - Fri, 2016-04-01 09:16

Migrating your enterprise applications from on-premises infrastructure to the public cloud is attractive for a number of reasons. It eliminates the costs and complexities of provisioning hardware and managing servers, storage devices, and network infrastructure; it gives you more compute capacity per dollar without upfront capital investment; and you gain opportunities for innovation through easier access to new technologies, such as advanced analytical capabilities.

So how do you get there?

You have a few options. At one end of the spectrum, you could simply wait and rationalize, making continuous incremental changes to gain efficiencies. This is obviously a “slow burn” approach. In the middle is a “lift-and-shift” from your current environment into the public cloud. And at the far extreme, you could plunge right in and re-architect your applications—a costly and probably highly complex task.

 

In fact, a true migration “strategy” will involve elements of each of these. For example, you could perform short-term optimizations and migrations on a subset of applications that are ready for the cloud, while transforming the rest of your application stack over the longer term.

 

What to expect from the major public cloud platforms

There are three leading public cloud platforms: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). As Google doesn’t seem to be driving customers to lift-and-shift their applications to GCP, I’m going to focus on AWS and Azure as potential cloud destinations and, for specificity, take Oracle enterprise databases as the use case.

 

Amazon Web Services

You have two options for migrating Oracle databases to the AWS cloud: infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

 

Deploying Oracle applications in AWS IaaS is much like deploying them on your in-house infrastructure. You don’t get flexible licensing options, but you do have the ability to easily allocate more or less capacity as needed for CPU, memory, and storage. However, because AWS IaaS is virtualized infrastructure, you may experience slower performance due to suboptimal CPU core allocation or processor caches. You’ll also have less flexibility with instance sizes, network topology, storage performance tiers, and the like.

 

AWS Relational Database Service (RDS) for Oracle is a managed PaaS offering where, in addition to giving you the benefits of IaaS, Amazon takes on major DBA and system administrator tasks including provisioning, upgrades, backups, and multi-availability zone replication. This significantly simplifies your operations—but also results in less control over areas such as configuration, patching, and maintenance windows. AWS RDS for Oracle can also be used with a pay-as-you-go licensing model included in the hourly rate.

 

Microsoft Azure

Azure does not have a managed offering for Oracle databases, so the only way to run Oracle Database on Azure is through its IaaS platform. The benefits are very similar to AWS IaaS, but Azure offers additional licensing options (with Windows-based license-included images) and its instances are billed by the minute rather than by the hour. What’s important to keep in mind is that Azure is not as broadly adopted as AWS and offers less flexibility for storage performance tiers and instance sizes. Oracle Database software running on Windows is also not as common as running on Linux.

 

For more in-depth technical details on these options, I encourage you to read our white paper, Migrating Oracle Databases to Cloud. My next blog in this series will look at one other option not discussed here: migrating to Oracle Cloud.


Categories: DBA Blogs

CHANGE STANDBY DATABASE PROTECTION MODE

Learn DB Concepts with me... - Fri, 2016-04-01 09:00
SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

SQL> show parameter log_archive_dest_2

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2             string     SERVICE=ORCLSTB1 NOAFFIRM ASYN
                         C VALID_FOR=(ONLINE_LOGFILES,P
                         RIMARY_ROLE) DB_UNIQUE_NAME=OR
                         CLSTB1
log_archive_dest_20             string
log_archive_dest_21             string
log_archive_dest_22             string
log_archive_dest_23             string
log_archive_dest_24             string
log_archive_dest_25             string
log_archive_dest_26             string

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27             string
log_archive_dest_28             string
log_archive_dest_29             string
SQL> show parameter db_unique_name

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                 string     ORCL
SQL> show parameter log_archive_config

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_config             string     dg_config=(ORCL,ORCLSTB1,ORCLS
                         TB2)
SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

System altered.

SQL> show parameter log_archive_dest_2

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2             string     SERVICE=ORCLSTB1 NOAFFIRM ASYN
                         C VALID_FOR=(ONLINE_LOGFILES,P
                         RIMARY_ROLE) DB_UNIQUE_NAME=OR
                         CLSTB1
log_archive_dest_20             string
log_archive_dest_21             string
log_archive_dest_22             string
log_archive_dest_23             string
log_archive_dest_24             string
log_archive_dest_25             string
log_archive_dest_26             string

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27             string
log_archive_dest_28             string
log_archive_dest_29             string

SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

System altered.


SQL> alter database set standby database to maximize availability;

Database altered.

NOTE: You don't need to shut down your instance when you are changing the protection mode from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY, but you do need to if you are going to MAXIMUM PROTECTION.

SQL> alter system switch logfile;

System altered.

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY

SQL> archive log list;
Database log mode           Archive Mode
Automatic archival           Enabled
Archive destination           /u01/app/oracle/oraarch/
Oldest online log sequence     239
Next log sequence to archive   241
Current log sequence           241
SQL> select group#,bytes/1024/1024 from v$standby_log;

    GROUP# BYTES/1024/1024
---------- ---------------
     4        52
     5        52
     6        52
     7        52


SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
     1          240

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY



SQL> alter system switch logfile;

System altered.
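
For completeness, reverting to MAXIMUM PERFORMANCE would be the mirror image of the steps above; a sketch, reusing the ASYNC destination string shown earlier:

SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';
SQL> alter database set standby database to maximize performance;
SQL> select protection_mode from v$database;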

Categories: DBA Blogs

Install Oracle Real-Time Integration Business Insight

Darwin IT - Fri, 2016-04-01 08:50
Yes, Oracle FMW Integration Insight is available, as I wrote in an earlier post. You can download it here.
But of course we're very curious about how to install it. Do I have to unzip it into my FMW_HOME? Is there a nice Oracle Installer that I can run silently? No, none of that: it comes as a set of OPatch patches on SOASuite 12.2.1:
  1. p22189824_122100_Generic.zip: OPatch containing Oracle Real-Time Integration Business Insight 12.2.1.0.0
  2. p22655174_122100_Generic.zip: OPatch containing updates to SOA and BAM 12.2.1.0.0 
  3. p22659236_122100_Generic.zip: OPatch containing updates to Service Bus 12.2.1.0.0
Following the README.txt in the zip, the correct order is to first install ORIBI, then patch SOA & BPM, and then SB.


Earlier I wrote about installing BPM QuickStart under Linux. Based on that I created a script to install SOASuite. Maybe I should write about that in another post. I haven't tried whether it's possible to install this in a SOA or BPM QuickStart, but I did it in a full FMW installation that I built using my scripts. So I scripted this installation following the scripting work I've done earlier.


For this setup I have 2 folders:
  • scripts: with the scripts.
  • ofm_integration_insight_1221: with the downloaded ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip
The scripts folder contains two scripts:
fmw12c_env.sh:
#!/bin/bash
echo set Fusion MiddleWare 12cR2 environment
export JAVA_HOME=/usr/java/jdk1.8.0_74
export FMW_HOME=/u01/app/oracle/FMW12210
export SOA_HOME=$FMW_HOME/soa
export OSB_HOME=$FMW_HOME/osb
export MFT_HOME=$FMW_HOME/mft

This provides the settings for the FMW_HOME and the JAVA_HOME, and the product homes I needed for my SOA/BPM Suite installation (I definitely should write that down!).

The actual install script is installOII.sh:
#!/bin/bash
. $PWD/fmw12c_env.sh
#
export CD=$PWD
export OII_INSTALL_HOME=$CD/../ofm_integration_insight_1221
export OII_INSTALL_ZIP=ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip
export OPATCH_SOABPM_ZIP=p22655174_122100_Generic.zip #OPatch containing updates to SOA and BAM 12.2.1.0.0
export OPATCH_SOABPM_NR=22655174
export OPATCH_OSB_ZIP=p22659236_122100_Generic.zip #OPatch containing updates to Service Bus 12.2.1.0.0
export OPATCH_OSB_NR=22659236
export OPATCH_OII_ZIP=p22189824_122100_Generic.zip #OPatch containing Oracle Real-Time Integration Business Insight 12.2.1.0.0
export OPATCH_OII_NR=22189824
export PATCHES_HOME=$FMW_HOME/OPatch/patches
export ORACLE_HOME=$FMW_HOME
# Unzip OII Install zip
if [ ! -f "$OII_INSTALL_HOME/$OPATCH_OII_ZIP" ]; then
if [ -f "$OII_INSTALL_HOME/$OII_INSTALL_ZIP" ]; then
echo Unzip $OII_INSTALL_HOME/$OII_INSTALL_ZIP to $OII_INSTALL_HOME
unzip $OII_INSTALL_HOME/$OII_INSTALL_ZIP -d $OII_INSTALL_HOME
else
echo $OII_INSTALL_HOME/$OII_INSTALL_ZIP does not exist
fi
fi
#
echo Check zips
cd $OII_INSTALL_HOME
md5sum -c patches.MD5
cd $CD
#
# Check patches folder
if [ ! -d "$PATCHES_HOME" ]; then
mkdir $PATCHES_HOME
else
echo $PATCHES_HOME available
fi
#
#Unzip OII patch
if [ ! -d "$PATCHES_HOME/$OPATCH_OII_NR" ]; then
if [ -f "$OII_INSTALL_HOME/$OPATCH_OII_ZIP" ]; then
echo Unzip $OII_INSTALL_HOME/$OPATCH_OII_ZIP to $PATCHES_HOME
unzip $OII_INSTALL_HOME/$OPATCH_OII_ZIP -d $PATCHES_HOME
echo Apply OII Patch
cd $PATCHES_HOME/$OPATCH_OII_NR
$ORACLE_HOME/OPatch/opatch apply
else
echo $OII_INSTALL_HOME/$OPATCH_OII_ZIP does not exist!
fi
else
echo OII Patch $PATCHES_HOME/$OPATCH_OII_NR already available
fi
cd $CD
#Unzip SOA&BPM patch
if [ ! -d "$PATCHES_HOME/$OPATCH_SOABPM_NR" ]; then
if [ -f "$OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP" ]; then
echo Unzip $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP to $PATCHES_HOME
unzip $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP -d $PATCHES_HOME
echo Apply SOA BPM Patch
cd $PATCHES_HOME/$OPATCH_SOABPM_NR
$ORACLE_HOME/OPatch/opatch apply
else
echo $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP does not exist!
fi
else
echo SOA-BPM Patch $PATCHES_HOME/$OPATCH_SOABPM_NR already available
fi
cd $CD
#Unzip OSB patch
if [ ! -d "$PATCHES_HOME/$OPATCH_OSB_NR" ]; then
if [ -f "$OII_INSTALL_HOME/$OPATCH_OSB_ZIP" ]; then
echo Unzip $OII_INSTALL_HOME/$OPATCH_OSB_ZIP to $PATCHES_HOME
unzip $OII_INSTALL_HOME/$OPATCH_OSB_ZIP -d $PATCHES_HOME
echo Apply OSB Patch
cd $PATCHES_HOME/$OPATCH_OSB_NR
$ORACLE_HOME/OPatch/opatch apply
else
echo $OII_INSTALL_HOME/$OPATCH_OSB_ZIP does not exist!
fi
else
echo OSB Patch $PATCHES_HOME/$OPATCH_OSB_NR already available
fi
cd $CD
echo Finished installing Oracle Fusion MiddleWare Integration Insight


The script first unzips the downloaded ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip into:
  • p22189824_122100_Generic.zip
  • p22655174_122100_Generic.zip
  • p22659236_122100_Generic.zip
  • patches.MD5
  • README.txt
Then it performs 'md5sum -c patches.MD5' to check the zips, but it ignores the results and just prints them.
Then, for each patch, it checks whether the patch is already unzipped in the FMW_HOME/OPatch/patches folder. If so, it just assumes that it's applied as well. If not, the patch zip is unzipped in the patches folder and then it performs opatch apply.
Opatch will ask if you want to proceed (answer with 'y') and if the system is ready to be patched (again answer with 'y'). For the SB Patch (the last in the list) it will look like:
Apply OSB Patch
Oracle Interim Patch Installer version 13.3.0.0.0
Copyright (c) 2016, Oracle Corporation. All rights reserved.


Oracle Home : /u01/app/oracle/FMW12210
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/FMW12210/oraInst.loc
OPatch version : 13.3.0.0.0
OUI version : 13.3.0.0.0
Log file location : /u01/app/oracle/FMW12210/cfgtoollogs/opatch/22659236_Apr_01_2016_09_37_29/apply2016-04-01_09-37-22AM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/FMW12210"

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 22659236

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/FMW12210')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '22659236' to OH '/u01/app/oracle/FMW12210'

Patching component oracle.osb.server, 12.2.1.0.0...

Patching component oracle.osb.server, 12.2.1.0.0...
Patch 22659236 successfully applied.
Log file location: /u01/app/oracle/FMW12210/cfgtoollogs/opatch/22659236_Apr_01_2016_09_37_29/apply2016-04-01_09-37-22AM_1.log

OPatch succeeded.
Finished installing Oracle Fusion MiddleWare Integration Insight

If you have a home with only OSB or only SOA-BPM adapt the script yourself to not patch the not-installed product.

Oh, I did not check on the install; for now I assume it worked.  Next step for me is to describe the SOA/BPM install and check out the Integration Insight product.

Set Operations

Jonathan Lewis - Fri, 2016-04-01 08:20

A recent post on the OTN database forum highlights a couple of important ideas for optimising SQL. They are: (a) is there a logically equivalent way of stating the SQL and (b) is there a different "natural language" way of posing the problem.

The posting starts with a query, part of an execution plan, and a request to “get rid of the tablescan”. I guessed originally that the query came from an 11g instance, and the OP gave us some code to create the tables and indexes, so I’ve modelled the tables to get the indicated plan (then filled in the original numbers). This is the query, and my cosmetically adjusted version of the plan output that the OP probably got:


SELECT a.hotel_code
  FROM lf_hotel_temp a
WHERE a.service_id = : p_service_id
       AND (NOT EXISTS (SELECT *
          FROM lf_ts_roomtype_properties b
         WHERE a.hotel_code = b.hotel_code)
        or NOT EXISTS (SELECT *
          FROM lf_gta_roomtype_properties b
         WHERE a.hotel_code = b.hotel_code) 
       or  NOT EXISTS (SELECT *
          FROM lf_hb_roomtype_properties b
         WHERE a.hotel_code = b.hotel_code))

-------------------------------------------------------------------------------
| Id  | Operation          | Name                     | Rows  |  Bytes | Cost |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                          | 12613 | 113517 |  135 |
|*  1 |  FILTER            |                          |       |        |      |
|*  2 |   TABLE ACCESS FULL| LF_HOTEL_TEMP            | 88433 | 795897 |  135 |
|*  3 |   INDEX RANGE SCAN | LF_TS_ROOMTYPE_PROP_IDX  |     1 |      7 |    1 |
|*  4 |   INDEX RANGE SCAN | LF_GTA_ROOMTYPE_PROP_IDX |     1 |      9 |    1 |
|*  5 |   INDEX RANGE SCAN | LF_HB_ROOMTYPE_PROP_IDX  |     2 |     14 |    3 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( NOT EXISTS (SELECT 0 FROM "LF_TS_ROOMTYPE_PROPERTIES" "B" WHERE
              "B"."HOTEL_CODE"=:B1) OR  NOT EXISTS (SELECT 0 FROM "LF_GTA_ROOMTYPE_PROPERTIES" "B"
              WHERE "B"."HOTEL_CODE"=:B2) OR  NOT EXISTS (SELECT 0 FROM "LF_HB_ROOMTYPE_PROPERTIES"
              "B" WHERE "B"."HOTEL_CODE"=:B3))
   2 - filter("A"."SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
   3 - access("B"."HOTEL_CODE"=:B1)
   4 - access("B"."HOTEL_CODE"=:B1)
   5 - access("B"."HOTEL_CODE"=:B1)

We were told in the original posting that there's a primary key on lf_hotel_temp declared on (hotel_code, service_id), and we were given the definitions, sizes, and index declarations of all the tables in a follow-up posting. It turns out that lf_hotel_temp consists of just those two columns and holds 278,000 rows: the optimizer's estimate for the number of rows identified by a single service_id is over 88,000, and the nature of the query tells us that the optimizer would have to examine every one of those rows to check if it satisfied any of the three subqueries.

So how might Oracle access the rows ?  Given that the only columns used will all be in the primary key index (which implies not null constraints) there are four basic options: tablescan, index fast full scan, index full scan, and index skip scan. Given the most likely data content (i.e. lots of different hotel_codes), we can assume the skip scan would be a very bad idea. We can be sure that an index fast full scan will be lower cost than an index full scan – for anything except tiny indexes. Ultimately the question is really “why a tablescan instead of an index fast full scan?”. As I pointed out, though, the table consists of just those two columns – which means it’s perfectly reasonable for the index to be larger than the table as each entry of the index will consist of the two columns AND a rowid.
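
As a quick experiment (not something from the original thread), you could test that question directly by hinting the driving table with index_ffs() -- using the primary key index name that appears in the 12c plan further down -- and checking whether the optimizer accepts it:

SELECT /*+ index_ffs(a lf_hotel_temp_pk) */
       a.hotel_code
  FROM lf_hotel_temp a
 WHERE a.service_id = :p_service_id
   AND (NOT EXISTS (SELECT * FROM lf_ts_roomtype_properties  b WHERE a.hotel_code = b.hotel_code)
     or NOT EXISTS (SELECT * FROM lf_gta_roomtype_properties b WHERE a.hotel_code = b.hotel_code)
     or NOT EXISTS (SELECT * FROM lf_hb_roomtype_properties  b WHERE a.hotel_code = b.hotel_code));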

The first interesting bit

The question of why the access to lf_hotel_temp was by tablescan rather than some indexed method isn’t really interesting. The interesting bit is how (in principle) we might make the plan more efficient (if it really needs it); and this leads to two key, and general purpose, observations. As Andrew Sayer pointed out on the thread, we have a compound predicate:

    (not exists A OR not exists B OR not exists C)

and this is logically equivalent to

   not (exists A AND exists B AND exists C)

If we rewrite the query to reflect this equivalence, could the optimizer find a different, better way of executing it?


select  /*+ dynamic_sampling(0) */
        a.hotel_code
from    lf_hotel_temp a
where
        a.service_id = :p_service_id
and     not(
                exists (
                        select  null
                        from    lf_ts_roomtype_properties ts
                        where   ts.hotel_code = a.hotel_code
                )
            and exists (
                        select  null
                        from    lf_gta_roomtype_properties gta
                        where   gta.hotel_code = a.hotel_code
                )
            and exists (
                        select  null
                        from    lf_hb_roomtype_properties hb
                        where   hb.hotel_code = a.hotel_code
                )
        )
;

Of course, I didn’t have the original data; so I copied the DDL supplied in the OTN thread and added a little DML to insert a few rows in the tables. The data I used looked like this:


insert into lf_hotel_temp (hotel_code, service_id) values ('A',1);
insert into lf_hotel_temp (hotel_code, service_id) values ('B',1);
insert into lf_hotel_temp (hotel_code, service_id) values ('C',1);
insert into lf_hotel_temp (hotel_code, service_id) values ('D',1);

-- insert into lf_ts_roomtype_properties values ( 'A','x','x',0,1,'x');
insert into lf_ts_roomtype_properties values ( 'B','x','x',0,1,'x');
insert into lf_ts_roomtype_properties values ( 'C','x','x',0,1,'x');
insert into lf_ts_roomtype_properties values ( 'D','x','x',0,1,'x');

-- insert into lf_gta_roomtype_properties values ( 'A','x','x',0,1,'x');
-- insert into lf_gta_roomtype_properties values ( 'B','x','x',0,1,'x');
insert into lf_gta_roomtype_properties values ( 'C','x','x',0,1,'x');
insert into lf_gta_roomtype_properties values ( 'D','x','x',0,1,'x');

-- insert into lf_hb_roomtype_properties values ( 'A','x','x',0,1,'x');
-- insert into lf_hb_roomtype_properties values ( 'B','x','x',0,1,'x');
-- insert into lf_hb_roomtype_properties values ( 'C','x','x',0,1,'x');
insert into lf_hb_roomtype_properties values ( 'D','x','x',0,1,'x');
commit;

It’s possible that with different data volumes you’d get different execution plans, but in 11g the optimizer transformed my query back into the original form – in other words it recognised the equivalence of “not (A and B and C)” and rewrote it as “(not A or not B or not C)”!

However, I also have 12c available, and I had created a script to build a model, so I ran the test on 12c. Both versions of the query produced the following plan:


----------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                            |     1 |  2027 |     8  (13)| 00:00:01 |
|*  1 |  HASH JOIN RIGHT ANTI |                            |     1 |  2027 |     8  (13)| 00:00:01 |
|   2 |   VIEW                | VW_SQ_1                    |    82 |   984 |     6   (0)| 00:00:01 |
|*  3 |    HASH JOIN SEMI     |                            |    82 |  2952 |     6   (0)| 00:00:01 |
|*  4 |     HASH JOIN         |                            |    82 |  1968 |     4   (0)| 00:00:01 |
|   5 |      TABLE ACCESS FULL| LF_GTA_ROOMTYPE_PROPERTIES |    82 |   984 |     2   (0)| 00:00:01 |
|   6 |      TABLE ACCESS FULL| LF_HB_ROOMTYPE_PROPERTIES  |    82 |   984 |     2   (0)| 00:00:01 |
|   7 |     TABLE ACCESS FULL | LF_TS_ROOMTYPE_PROPERTIES  |    82 |   984 |     2   (0)| 00:00:01 |
|*  8 |   INDEX FULL SCAN     | LF_HOTEL_TEMP_PK           |   101 |   198K|     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("VW_COL_1"="A"."HOTEL_CODE")
   3 - access(SYS_OP_MAP_NONNULL("HB"."HOTEL_CODE")=SYS_OP_MAP_NONNULL("TS"."HOTEL_CODE"))
   4 - access(SYS_OP_MAP_NONNULL("HB"."HOTEL_CODE")=SYS_OP_MAP_NONNULL("GTA"."HOTEL_CODE"))
   8 - access("A"."SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
       filter("A"."SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))

Ignore the numbers (I hadn’t collected stats, which is why I added the /*+ dynamic_sampling(0) */ hint – with stats in place 12c produced the FILTER plan that 11g had produced); the key feature is that Oracle has managed to transform my three filter subqueries into a single join subquery and then transformed the resulting subquery into an anti-join. It’s a pretty amazing transformation – the optimizer did it automatically in 12c, but if you are aware of the logical equivalence you may find cases where you can turn “OR”s into “AND”s and help the optimizer to find transformations that it can’t find automatically.

The second interesting bit

If you think about the meaning behind the query (prompted, perhaps, by the logical equivalence described above) you might rephrase the question as “find me the hotel codes that fail to appear in all three related tables” – in English this is ambiguous and open to catastrophic mis-interpretation, so you might have another go and say “find me the hotel codes that appear in every one of the three related tables – those are the hotel codes I don’t want”. This latter expression, of course, is exactly what Oracle is doing by joining the three tables and then doing the “not exists”/anti-join against the result. Obviously you could translate the new English form into SQL by hand, with a three table join in a “not exists” subquery, as sketched below.
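That hand translation might look something like this (my own illustration, not a query from the original thread, reusing the hint and bind variable from the earlier examples):

select  /*+ dynamic_sampling(0) */
        a.hotel_code
from    lf_hotel_temp a
where
        a.service_id = :p_service_id
and     not exists (
                select  null
                from    lf_ts_roomtype_properties  ts,
                        lf_gta_roomtype_properties gta,
                        lf_hb_roomtype_properties  hb
                where   ts.hotel_code  = a.hotel_code
                and     gta.hotel_code = a.hotel_code
                and     hb.hotel_code  = a.hotel_code
        )
;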

I actually took a different approach (which might, or might not, be efficient – depending on the actual data and indexes).  I translated the new English statement into the following:


select  /*+ dynamic_sampling(0) */
        hotel_code
from    lf_hotel_temp
where   service_id = :p_service_id
minus   (
        select  hotel_code
        from    lf_ts_roomtype_properties
        where   hotel_code is not null
        intersect
        select  hotel_code
        from    lf_gta_roomtype_properties
        where   hotel_code is not null
        intersect
        select  hotel_code
        from    lf_hb_roomtype_properties
        where   hotel_code is not null
        )
;

The three-way intersection gets me the list of hotels that appear in all three tables; the minus operator takes the list of hotels with the correct service_id and eliminates from it the hotels that appear in the intersection – giving me the result I want.

For my tiny data set, this is the plan I got:

--------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                          |     1 |  2159 |     8  (50)| 00:00:01 |
|   1 |  MINUS                |                          |       |       |            |          |
|   2 |   SORT UNIQUE NOSORT  |                          |     1 |  2015 |     2  (50)| 00:00:01 |
|*  3 |    INDEX FULL SCAN    | LF_HOTEL_TEMP_PK         |     1 |  2015 |     1   (0)| 00:00:01 |
|   4 |   INTERSECTION        |                          |       |       |            |          |
|   5 |    INTERSECTION       |                          |       |       |            |          |
|   6 |     SORT UNIQUE NOSORT|                          |     4 |    48 |     2  (50)| 00:00:01 |
|*  7 |      INDEX FULL SCAN  | LF_TS_ROOMTYPE_PROP_IDX  |     4 |    48 |     1   (0)| 00:00:01 |
|   8 |     SORT UNIQUE NOSORT|                          |     4 |    48 |     2  (50)| 00:00:01 |
|*  9 |      INDEX FULL SCAN  | LF_GTA_ROOMTYPE_PROP_IDX |     4 |    48 |     1   (0)| 00:00:01 |
|  10 |    SORT UNIQUE NOSORT |                          |     4 |    48 |     2  (50)| 00:00:01 |
|* 11 |     INDEX FULL SCAN   | LF_HB_ROOMTYPE_PROP_IDX  |     4 |    48 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
       filter("SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
   7 - filter("HOTEL_CODE" IS NOT NULL)
   9 - filter("HOTEL_CODE" IS NOT NULL)
  11 - filter("HOTEL_CODE" IS NOT NULL)

Important note: I am not claiming that this use of set operators will be more efficient than a filter subquery or anti-join/semi-join approach; performance ultimately depends on the volume and patterns in the data combined with the available indexing. In this case you can almost see the classic performance compromise that we often see in Oracle – even in a trade-off as simple as choosing between a hash join and a nested loop join – should we operate this query as a tiny number of “bulk” operations, or as a (potentially) large number of tiny, high-precision operations?

If the original query was spending all its time on CPU running lots of subqueries, or doing lots of single block random I/Os because of the random ordering of the subqueries, then perhaps a couple of brute force “db file parallel read” index full scans would be a friendlier use of the available resources, run more quickly, and have less impact on every other user.
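If you want to see which style wins on your own data, one simple check (not part of the original post) is to run each version of the query with rowsource execution statistics enabled and compare where the time and buffer gets actually go, for example:

set serveroutput off
alter session set statistics_level = all;

-- run the version of the query you want to test here, then report on its last execution:
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));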

 


Links for 2016-03-31 [del.icio.us]

Categories: DBA Blogs

I am Oracle ACE

Oracle in Action - Fri, 2016-04-01 00:49

RSS content

It gives me immense pleasure to share the news that I have been honored with the prestigious Oracle ACE award. I am grateful to the Oracle ACE Program for accepting my nomination. I would like to thank Murali Vallath Sir, who nominated me for this award. I am also thankful to my family members, without whose support and motivation this would not have been possible. Thanks a lot to all the readers of my blog, whose comments and suggestions have helped me to learn and to share whatever little knowledge I have.

I will do my best to participate in the Oracle ACE program.

 

 





Categories: DBA Blogs

Catch Me at Collaborate in Las Vegas

David Haimes - Fri, 2016-04-01 00:43

The OAUG Collaborate conference kicks off in Las Vegas on Sunday April 10th.  This conference is organized by Oracle user groups and is packed with content from customers and partners, and Oracle has some sessions too.  If you follow this blog you will know I am a big fan of the conference; I learn a lot from our customers and try to give back as much as I can by sharing information and answering questions.  The twitter conversations are already starting; follow #C16LV now and certainly during conference week to see what people are talking about.

I will be presenting a session on Accounting Hub Reporting Cloud Service for Oracle E-Business Suite

1:00 PM–2:00 PM Apr 12, 2016 – South Pacific I

Register now for that one; I will leave plenty of time for questions because this cloud service is generating a lot of buzz.

I will also be at the OAUG GL SIG Meeting 

3:00 PM–4:00 PM Apr 13, 2016 – South Seas J

I’m also planning to attend Meet the Experts: Oracle E-Business Suite Financials 

2:15 PM–3:15 PM Apr 12, 2016 – Breakers G

I will help take questions; you can decide whether that qualifies me as an expert or not.

I’ll also be attending Oracle E-Business Suite and Oracle Cloud Solutions: Update, Strategy and Roadmap
Nadia Bendjedou, Sr. Director, Product Strategy, Oracle
Monday, April 11, 2:00 – 3:00 PM – Banyan A

I am combing through the sessions to identify others that I want to go to and learn from. I will share those later, so watch this space.


Categories: APPS Blogs

The Oracle ACE Program : My 10 Year Anniversary

Tim Hall - Fri, 2016-04-01 00:02

On 1st April 2006 I got an email telling me I was an Oracle ACE, so either this is the most drawn out April Fools’ Day joke ever, or I’ve been in the Oracle ACE Program for 10 years. Wow!

The numbers look something like this.

  • Nearly 21 years working with Oracle technology. (August 1995)
  • Nearly 16 years doing my website. (Original name: 03 July 2000 or Current name: 31 August 2001)
  • Nearly 11 years blogging. (15 June 2005)
  • 10 years on the Oracle ACE Program. (01 April 2006)
  • Nearly 10 months making videos. (6 July 2015)

It’s quite scary when you look back at it.

Got anyone who needs April Fooling?

RDBMS Insight - Thu, 2016-03-31 20:15

Do you have a sql*plus user who really needs an April Fool’s joke played on them? With a little editing to their glogin.sql, every sql*plus session will exit with what appears to be a pseudo-random TNS error.

(Note: assumes a *nix environment that has sed, grep, awk installed and oerr properly working.)

[oracle@localhost ~]$ cd $ORACLE_HOME/sqlplus/admin
[oracle@localhost admin]$ mv glogin.sql glogin.old
[oracle@localhost admin]$ vim glogin.sql
 
--Added by APRIL FOOL
 
set echo off
set feedback off
set verify off
set head off
col the_err new_value myerrno
col the_user new_value uname
set termout off
-- build a plausible-looking ORA-125xx error number from the current seconds,
-- remapping a few values (presumably so the number always matches a real TNS message)
select 
'125' || decode( to_char(sysdate,'SS'),'01','61','06','64','07',
'66','17','69','30','70','53','71','59','82', to_char(sysdate,'SS')) 
as the_err
from dual;
select user as the_user from dual;
set termout on
clear columns
prompt ERROR:
-- look up the real message text for that number and format it like a genuine ORA- error
host oerr ora &myerrno | grep -v '//' | sed 's/"//g' | awk -F, '{print "ORA-" $1 ": " $3}'
prompt SP2-0751: Unable to connect to Oracle.  Exiting SQL*Plus
-- exit the session, passing the fake error number as the exit code
exit &myerrno
 
--End added by APRIL FOOL
 
:wq
 
[oracle@localhost admin]$ sqlplus "/ as sysdba"
 
SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 31 17:50:39 2016
 
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
ERROR:
ORA-12539:  TNS:buffer over- or under-flow
 
SP2-0751: Unable to connect to Oracle.	Exiting SQL*Plus
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
[oracle@localhost admin]$ sqlplus "/ as sysdba"
 
SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 31 17:50:52 2016
 
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
ERROR:
ORA-12552:  TNS:operation was interrupted
 
SP2-0751: Unable to connect to Oracle.	Exiting SQL*Plus
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
[oracle@localhost admin]$ sqlplus "/ as sysdba"
 
SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 31 17:51:02 2016
 
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
ERROR:
ORA-12502:  TNS:listener received no CONNECT_DATA from client
 
SP2-0751: Unable to connect to Oracle.	Exiting SQL*Plus
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Disclaimer – use at your own risk to workplace harmony!

Categories: DBA Blogs

5 Phases for Migrating to a Cloud Platform

Pythian Group - Thu, 2016-03-31 14:11

Businesses today are increasingly looking to migrate to the cloud to realize lower costs and increase software velocity. They are now asking themselves “when” they should migrate rather than “if” they should, and with many vendors and solutions in the market, it can be difficult to take the first steps in creating a cloud strategy.

In our latest on-demand webinar, Chris Presley, Solution Architect at Pythian, and Jim Bowyer, Solution Architect at Azure-Microsoft Canada, discuss a five phase framework for cloud transformations, and the benefits of migrating to the cloud with Microsoft Azure.

The five phase framework helps businesses understand the journey to successfully migrate current applications to a cloud platform. Here is a snapshot of the five phases:

 

1. Assessment: Analysis and Planning

A majority of the time investment should be upfront in assessment and preparation because it sets the stage for the actual development and migration, resulting in faster projects, lower costs, and less risk.

In this phase, businesses want to begin understanding the performance and user characteristics of their applications, along with any other information that will be important during the transformation, such as regulatory, compliance, and legal requirements.

 

2. Preparation: POC, Validation and Final Road Map

The preparation phase is meant to help understand what the rest of the migration is going to look like.

While beneficial in any project, proof of concepts (POCs) are increasingly simple to create and are a great strength when leveraging the cloud. POCs are used to show some functionality and advantage early so you can get everyone – especially business owners – excited about the migration.

 

3. Build: Construct Infrastructure

Once the expectations around the final migration road map are developed, the infrastructure can be built. Jim notes that it is important to start thinking about automation during this phase, and Chris agrees, in particular about developing an automated test bed to help smooth out the migration.

 

4. Migration: Execute Transformation

The migration activity for cloud environments is very short. By this stage, if the planning and preparation has been done properly, “flicking the light switch” to the new environment should be seamless and feel like the easiest part.

Chris talks about creating both detailed success and rollback criteria and how they are both crucial for success in the migration phase. Jim mentions that Microsoft Azure provides a variety of tools to help make rollbacks easier and safer.

 

5. Optimization: IaaS Enhancements

Continually transforming and enhancing after the migration is complete is important for increasing software velocity, which is why businesses migrate to the cloud in the first place. While a piece of functionality may not be available today, it may be available tomorrow.

By going back to iterate and take advantage of new functionalities, businesses are able to squeeze out more improvements and create opportunities for differentiation.

 

Learn More

To learn about these five cloud transformation phases in more depth, and how to leverage the cloud with Microsoft Azure, download our free on-demand webinar.


Categories: DBA Blogs

Last Chance! 27 Technical Sessions Delivered by Oracle and Community Experts

OTN TechBlog - Thu, 2016-03-31 12:08

Register today for the last Virtual Technology Network event this spring - April 5, 2016 - 9:30 - 13:30 BST / 10:30 - 14:30 CEST.

Get insights, hands-on experience and expert views on a wide mix of topics across Database, Java Development, Middleware and Systems.
We are looking forward to meeting you at the Oracle Technology Network Virtual Technology Summit next Tuesday morning (BST) for a dense mix of technology sessions. View the full agenda and all session abstracts here.

Register today!


Working with MCS Offline? Need some hints and tips?

Angelo Santagata - Thu, 2016-03-31 11:40

Mobile Cloud Service

Another collection of videos on MCS was highlighted to me today, covering how to do offline data sync. Awesome stuff. One note, however: the videos are iOS-based today, but I'm told the Android versions are en route. These videos provide an end-to-end discussion of how to work with these capabilities, not just the APIs themselves. Very nice.
Also remember to read the docs, which cover both Android and iOS.
