
Feed aggregator

Introducing the SOLID design principles

Andrew Clarke - Sun, 2016-04-03 11:55
PL/SQL programming standards tend to focus on layout (case of keywords, indentation, etc), naming conventions, and implementation details (such as use of cursors).  These are all important things, but they don't address questions of design. How easy is it to use the written code?  How easy is it to test? How easy will it be to maintain? Is it robust? Is it secure?

Simply put, there are no agreed design principles for PL/SQL. So it's hard to define what makes a well-designed PL/SQL program.
The SOLID principles
It's different for object-oriented programming. OOP has more design principles, paradigms and patterns than you can shake a stick at. Perhaps the best known are the SOLID principles, which were first mooted by Robert C. Martin, AKA Uncle Bob, back in 1995 (although it was Michael Feathers who coined the acronym).

Although Martin put these principles together for Object-Oriented code, they draw on a broader spectrum of programming practice. So they are transferable, or at least translatable, to other forms of modular programming. For instance, PL/SQL.
Single Responsibility Principle
This is the foundation stone of modular programming: a program unit should do only one thing. Modules which do only one thing are easier to understand, easier to test and generally more versatile. Higher-level procedures can be composed of lower-level ones. Sometimes it can be hard to define what "one thing" means in a given context, but some of the other principles provide clarity. Martin's formulation is that there should be just one axis of change: just one set of requirements which, if modified or added to, would lead to a change in the package.
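As a hypothetical sketch (all names here are invented), each package below has a single axis of change:

-- A minimal sketch, assuming an EMPLOYEES domain. EMP_ADMIN changes only
-- when employee-maintenance requirements change; EMP_PAYROLL changes only
-- when payroll rules change.
create or replace package emp_admin as
  procedure hire_employee (p_emp_id in number, p_name in varchar2);
  procedure terminate_employee (p_emp_id in number);
end emp_admin;
/

create or replace package emp_payroll as
  function monthly_salary (p_emp_id in number) return number;
end emp_payroll;
/
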
Open/closed Principle
The slightly obscure name conceals a straightforward proposal: program units are closed to modification but open to extension. If we need to add new functionality to a package, we create a new procedure rather than modifying an existing one. (Bertrand Meyer, the father of Design by Contract programming, originally proposed it; in OO programming this principle is implemented through inheritance or polymorphism.) Clearly we must still fix bugs in existing code. Nor does it rule out refactoring: we can tune the implementation provided we don't change the behaviour. This principle mainly applies to published program units, ones referenced by other programs in Production. The principle can be looser when the code is being used within the same project, because we can negotiate changes with our colleagues.
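Continuing the hypothetical emp_admin example above (a sketch, not a prescription), extension looks like this:

-- The published signatures stay untouched; the new requirement is met
-- by adding a procedure rather than changing an existing one.
create or replace package emp_admin as
  procedure hire_employee (p_emp_id in number, p_name in varchar2);
  procedure terminate_employee (p_emp_id in number);
  -- added by extension for a new requirement:
  procedure hire_contractor
    (p_emp_id in number, p_name in varchar2, p_agency in varchar2);
end emp_admin;
/
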
Liskov Substitution Principle
This is a real Computer Science-y one, good for dropping into code reviews. Named for Barbara Liskov, it defines rules for behavioural sub-typing. If a procedure has a parameter defined as a base type, it must be able to take an instance of any sub-type without changing the behaviour of the program. So a procedure which uses IS OF to test the type of a passed parameter and do something different is violating Liskov Substitution. Obviously we don't make much use of inheritance in PL/SQL programming, so this principle is less relevant than in other programming paradigms.
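A contrived sketch of the violation (the types are invented for illustration):

-- payment_t and card_payment_t are hypothetical. The IS OF test below
-- makes settle() behave differently depending on the concrete sub-type,
-- which is exactly what Liskov Substitution forbids.
create or replace type payment_t as object
( amount number )
not final;
/
create or replace type card_payment_t under payment_t
( card_no varchar2(19) );
/
create or replace procedure settle (p_payment in payment_t) is
begin
  if p_payment is of (card_payment_t) then
    null; -- sub-type-specific branch: the violation
  end if;
end settle;
/
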
Interface Segregation Principle
This principle is about designing fine-grained interfaces. It is an extension of the Single Responsibility Principle. Instead of building one huge package which contains all the functions relating to a domain, build several smaller, more cohesive packages. For example, Oracle's Advanced Queuing subsystem comprises five packages, to manage different aspects of AQ: users who write to or read from queues have DBMS_AQ; users who manage queues and subscribers have DBMS_AQADM.
Dependency Inversion Principle
Interactions between programs should be through abstract interfaces rather than concrete ones. Abstraction means the implementation of one side of the interface can change without changing the other side. PL/SQL doesn't support abstract objects in the way that, say, Java does. To a certain extent Package Specifications provide a layer of abstraction, but there can only be one concrete implementation. Using Types to pass data between procedures is an interesting idea, which we can use to decouple data providers and data consumers in a useful fashion.
Applicability of SOLID principles in PL/SQL
So it seems we can apply SOLID practices to PL/SQL. True, some principles fit better than others. But we have something we might use to distinguish good design from bad when it comes to PL/SQL interfaces.

The SOLID principles apply mainly to individual modules. Is there something similar we can use for designing module groups? Why, yes there is. I'm glad you asked.

Part of the Designing PL/SQL Programs series

Introducing the RCCASS design principles

Andrew Clarke - Sun, 2016-04-03 11:54
Robert C. Martin actually defined eleven principles for OOP. The first five, the SOLID principles, relate to individual classes. The other six, the RCCASS principles, deal with the design of packages (in the C++ or Java sense, i.e. libraries). They are far less well known than the first five. There are two reasons for this:

  • Unlike "SOLID", "RCCASS" is awkward to say and doesn't form a neat mnemonic. 
  • Programmers are far less interested in software architecture. 

Software architecture tends to be an alien concept in PL/SQL. Usually a codebase of packages simply accretes over the years, like a coral reef. Perhaps the RCCASS principles can help change that.
The RCCASS Principles
Reuse Release Equivalency Principle
The Reuse Release Equivalency Principle states that the unit of release matches the unit of reuse: the parts of the program unit which are consumed by other programs. Basically, the unit of release defines the scope of regression testing for consuming applications. It's an ill-mannered release which forces projects to undertake unnecessary regression testing. Cohesive program units allow consumers to regression test only the functionality they actually use. It's less of a problem for PL/SQL because (unlike C++ libraries or Java jars) the unit of release can have a very low level of granularity: individual packages or stored procedures.
Common Reuse Principle
The Common Reuse Principle supports the definition of cohesive program units. Functions which share a dependency belong together, because they are likely to be used together. For instance, procedures which maintain the Employees table should be co-located in one package (or a group of related packages). They will share sub-routines, constants and exceptions. Packaging related procedures together makes the package easier to write and easier for calling programs to use.
Common Closure Principle
The Common Closure Principle also supports the definition of cohesive program units. Functions which share a dependency belong together, because they have a common axis of change. Common Closure helps to minimise the number of program units affected by a change. For instance, programs which use the Employees table may need to change if the structure of the table changes. All the changes must be released together: table, PL/SQL, types, etc.
Acyclic Dependencies Principle
Avoid cyclic dependencies between program units: if package A depends on package B then B must not have a dependency on A. Cyclic dependencies make an application hard to use and harder to deploy. The dependency graph shows the order in which objects must be built. Designing a dependency graph upfront is futile, but we can keep to rough guidelines. Higher-level packages implementing business rules tend to depend on generic routines, which in turn tend to depend on low-level utilities. There should be no application logic in those lower-level routines. If SALES requires a special logging implementation then that should be handled in the SALES subsystem, not in the standard logging package.
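As a quick sanity check, two-package cycles can be spotted in the standard data dictionary (a sketch; longer cycles would need a recursive query):

-- Find pairs of packages whose bodies reference each other's specifications.
select d1.name as pkg_a, d1.referenced_name as pkg_b
from   user_dependencies d1
join   user_dependencies d2
  on   d2.name = d1.referenced_name
 and   d2.referenced_name = d1.name
where  d1.type = 'PACKAGE BODY' and d1.referenced_type = 'PACKAGE'
and    d2.type = 'PACKAGE BODY' and d2.referenced_type = 'PACKAGE'
and    d1.name < d1.referenced_name;
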
Stable Dependencies Principle
Any change to the implementation of a widely used program unit will generate regression tests for all the programs which call it. At the most extreme, a change to a logging routine could affect all the other programs in our application. As with the Open/Closed Principle, we still need to fix bugs. But new features should be introduced by extension, not modification. And refactoring of low-level dependencies must not be done on a whim.
Stable Abstractions Principle
Abstractions are dependencies, especially when we're talking about PL/SQL. So this principle is quite similar to the Stable Dependencies Principle. The key difference is that it relates to the definition of interfaces rather than implementations. A change to the signature of a logging routine could require code changes in all the other programs in the application. Obviously this is even more inconvenient than enforced regression testing. Avoid changing the signature of a public procedure or the projection of a public view. Again, extension rather than modification is the preferred approach.
Applicability of RCCASS principles in PL/SQL
The focus of these principles is the stability of a shared codebase, and minimising the impact of change on the consumers of our code. This is vital in large projects, where communication between teams is often convoluted. It is even more important for open source or proprietary libraries.

We can apply the Common Reuse Principle and the Common Closure Principle to define the scope of the Reuse Release Equivalency Principle, and hence define the boundaries of a sub-system (whisper it: schema). Likewise we can apply the Stable Dependencies Principle and the Stable Abstractions Principle to enforce the Acyclic Dependencies Principle and build stable PL/SQL libraries. So the RCCASS principles offer some useful pointers towards a stable PL/SQL software architecture.

Part of the Designing PL/SQL Programs series

The Dependency Inversion Principle: a practical example

Andrew Clarke - Sun, 2016-04-03 11:54
These design principles may seem rather academic, so let's look at a real-life demonstration of how applying the Dependency Inversion Principle leads to an improved software design.

Here is a simplified version of an ETL framework which uses SQL Types in a similar fashion to the approach described in my blog post here. The loading process is defined using an abstract non-instantiable Type like this:
create or replace type load_t force as object
( txn_date date
, tgt_name varchar2(30)
, member function load return number
, final member function get_tgt return varchar2
)
not final not instantiable;
/

create or replace type body load_t as
member function load return number
is
begin
return 0;
end load;
final member function get_tgt return varchar2
is
begin
return self.tgt_name;
end get_tgt;
end;
/


The concrete behaviour for each target table in the ABC feed is defined by sub-types like this:
create or replace type load_tgt1_t under load_t
( overriding member function load return number
, constructor function load_tgt1_t
(self in out nocopy load_tgt1_t
, txn_date date)
return self as result
)
;
/
create or replace type body load_tgt1_t as
overriding member function load return number
is
begin
insert into tgt1 (col1, col2)
select to_number(col_a), col_b
from stg_abc stg
where stg.txn_date = self.txn_date;
return sql%rowcount;
end load;
constructor function load_tgt1_t
(self in out nocopy load_tgt1_t
, txn_date date)
return self as result
is
begin
self.txn_date := txn_date;
self.tgt_name := 'TGT1';
return;
end load_tgt1_t;
end;
/
This approach is neat because ETL is a fairly generic process: the mappings and behaviour for a particular target table are specific but the shape of the loading process is the same for any and all target tables. So we can build a generic PL/SQL procedure to handle them. This simplistic example does some logging, loops through a set of generic objects and, through the magic of polymorphism, calls a generic method which executes specific code for each target table:
    procedure load  
(p_txn_date in date
, p_load_set in sys_refcursor)
is
type loadset_r is record (
tgtset load_t
);
lrecs loadset_r;
load_count number;
begin
logger.logm('LOAD START::txn_date='||to_char(p_txn_date,'YYYY-MM-DD'));
loop
fetch p_load_set into lrecs;
exit when p_load_set%notfound;
logger.logm(lrecs.tgtset.get_tgt()||' start');
load_count := lrecs.tgtset.load();
logger.logm(lrecs.tgtset.get_tgt()||' loaded='||to_char(load_count));
end loop;
logger.logm('LOAD FINISH');
end load;

So far, so abstract. The catch is the procedure which instantiates the objects:
    procedure load_abc_from_stg  
(p_txn_date in date)
is
rc sys_refcursor;
begin
open rc for
select load_tgt1_t(p_txn_date) from dual union all
select load_tgt2_t(p_txn_date) from dual;
load(p_txn_date, rc);
end load_abc_from_stg;

On casual inspection it doesn't seem problematic but the call to the load() procedure gives the game away. Both procedures are in the same package:
create or replace package loader as
procedure load
(p_txn_date in date
, p_load_set in sys_refcursor);
procedure load_abc_from_stg
(p_txn_date in date);
end loader;
/

So the package mixes generic and concrete functionality. What makes this a problem? After all, it's all ETL, so doesn't the package follow the Single Responsibility Principle? Well, up to a point. But if we want to add a new table to the ABC feed we need to update the LOADER package. Likewise, if we want to add a new feed, DEF, we need to update the LOADER package. So it breaks the Stable Abstractions Principle. It also creates dependency problems, because the abstract load() process has dependencies on higher-level modules. We can't deploy the LOADER package without deploying objects for all the feeds.

Applying the Dependency Inversion Principle
The solution is to extract the load_abc_from_stg() procedure into a concrete package of its own. To make this work we need to improve the interface between the load() procedure and the programs which call it. Both sides of the interface should depend on a shared abstraction.

The LOADER package is now properly generic:
create or replace package loader authid current_user as
type loadset_r is record (
tgtset load_t
);
type loadset_rc is ref cursor return loadset_r;
procedure load
(p_txn_date in date
, p_load_set in loadset_rc);
end loader;
/
The loadset_r type has moved into the package specification, and defines a strongly-typed ref cursor. The load() procedure uses the strongly-typed ref cursor.

Similarly the LOAD_ABC package is wholly concrete:
create or replace package loader_abc as
procedure load_from_stg
(p_txn_date in date);
end loader_abc;
/

create or replace package body loader_abc as
procedure load_from_stg
(p_txn_date in date)
is
rc loader.loadset_rc;
begin
open rc for
select load_tgt1_t(p_txn_date) from dual union all
select load_tgt2_t(p_txn_date) from dual;
loader.load(p_txn_date, rc);
end load_from_stg;
end loader_abc;
/
Both package bodies now depend on abstractions: the strongly-typed ref cursor in the LOADER specification and the LOAD_T SQL Type. These should change much less frequently than the tables in the feed or even the loading process itself. This is the Dependency Inversion Principle in action.

Separating generic and concrete functionality into separate packages produces a more stable application. Users of a feed package are shielded from changes in other feeds. The LOADER package relies on strongly-typed abstractions. Consequently we can code a new feed package which can call loader.load() without peeking into that procedure's implementation to see what it's expecting.
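To sketch that out (with invented names: a hypothetical DEF feed whose tables have their own load_tgt3_t sub-type of load_t), a new feed package would depend only on the published abstractions:

create or replace package loader_def as
  procedure load_from_stg (p_txn_date in date);
end loader_def;
/
create or replace package body loader_def as
  procedure load_from_stg (p_txn_date in date)
  is
    rc loader.loadset_rc;
  begin
    open rc for
      select load_tgt3_t(p_txn_date) from dual;
    loader.load(p_txn_date, rc);
  end load_from_stg;
end loader_def;
/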

Part of the Designing PL/SQL Programs series

Easy Way to Create pfile from spfile

Michael Dinh - Sun, 2016-04-03 10:50

I learned something new today; so excited as this will make automation so much easier.

create pfile='/tmp/init@.ora' from spfile;

The @ is automatically substituted with the SID.

oracle@arrow:tiger:/tmp
$ sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Apr 3 09:45:49 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@tiger):PRIMARY> set time on
09:45:52 ARROW:(SYS@tiger):PRIMARY> !ls -l /tmp/init*
ls: cannot access /tmp/init*: No such file or directory

09:45:59 ARROW:(SYS@tiger):PRIMARY> create pfile='/tmp/init@.ora' from spfile;

File created.

09:46:04 ARROW:(SYS@tiger):PRIMARY> !ls -l /tmp/init*
-rw-r--r--. 1 oracle oinstall 1165 Apr  3 09:46 /tmp/inittiger.ora

09:46:10 ARROW:(SYS@tiger):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@arrow:tiger:/tmp
$
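
As a hedged aside, the same family of commands works in other directions too, which can help with the same kind of automation (verify on your own version; FROM MEMORY needs 11g or later):

-- Rebuild the spfile from an edited pfile:
create spfile from pfile='/tmp/inittiger.ora';

-- Dump the current in-memory parameter settings to a pfile:
create pfile='/tmp/inittiger_mem.ora' from memory;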


FBDA -- 2 : FBDA Archive Table Structure

Hemant K Chitale - Sun, 2016-04-03 10:10
Following up on my earlier post, I look at the FBDA Archive Tables.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 3 23:26:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:32:30 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL>
SQL> desc sys_fba_hist_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> desc sys_fba_ddl_colmap_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
COLUMN_NAME VARCHAR2(255)
TYPE VARCHAR2(255)
HISTORICAL_COLUMN_NAME VARCHAR2(255)

SQL> desc sys_fba_tcrv_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OP VARCHAR2(1)

SQL>


The HIST table is the History table for my active table. It adds columns that track the Rowid, the Start SCN and End SCN for a range of rows copied into the History Table, the Transaction Identifier and the Operation, followed by the actual columns of the active table.
The DDL_COLMAP table seems to track Column Mappings. See the example below.
The TCRV table seems to be tracking Transactions?

Let's try some DDL to ADD and DROP columns to the active table.

SQL> alter table test_fbda add (new_col_1 varchar2(5));

Table altered.

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
NEW_COL_1 VARCHAR2(5)

SQL> desc sys_fba_93250
ERROR:
ORA-04043: object sys_fba_93250 does not exist


SQL> desc sys_fba_hist_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
NEW_COL_1 VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250
2 /

STARTSCN ENDSCN XID O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
1697151
ID_COLUMN
NUMBER
ID_COLUMN

1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

1697151
DATE_INSERTED
DATE
DATE_INSERTED

1728713
NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL>


The new column added to the active table is also now reflected in the History Table. The DDL_COLMAP table shows the effective start of each column (notice the STARTSCN mapped to each COLUMN_NAME).

Let's set some values in the new column and see if they appear in the History Table.

SQL> update test_fbda set new_col_1 = 'New'
2 where id_column < 6;

5 rows updated.

SQL> commit;

Commit complete.

SQL> select id_column, new_col_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
2 from sys_fba_hist_93250
3 where id_column < 6
4 order by 1,3;

ID_COLUMN NEW_C
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM


SQL>


The rows copied into the History Table are *prior* image rows (copied from the Undo Area). The STARTSCN and ENDSCN are from *yesterday* (02-April).

Let me DROP the new column.

SQL> alter table test_fbda drop (new_col_1);

Table altered.

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> desc sys_fba_hist_93250;
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
D_1729869_NEW_COL_1 VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250;

STARTSCN ENDSCN XID O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
1697151
ID_COLUMN
NUMBER
ID_COLUMN

1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

1697151
DATE_INSERTED
DATE
DATE_INSERTED

1728713 1729869
D_1729869_NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL>


The dropped column is no longer in the active table and has been renamed in the History Table. (The data in the column has to be preserved, but the column is renamed.) Notice how the DDL_COLMAP table now shows an ENDSCN for this column, along with the new (renamed) column name as it appears in the History Table. The new column name seems to include the SCN (the ENDSCN?).

Let's confirm what data is now present in the History Table (remember: our earlier query showed the pre-update image for this column).

SQL> select id_column, D_1729869_NEW_COL_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
2 from sys_fba_hist_93250
3 where (id_column < 6 OR D_1729869_NEW_COL_1 is not null)
4 order by 1,3;

ID_COLUMN D_172
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

1
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

1 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

2
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

2 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

3
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

3 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

4
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

4 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

5
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

5 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM


15 rows selected.

SQL>
SQL> select scn_to_timestamp(1729869) from dual;

SCN_TO_TIMESTAMP(1729869)
---------------------------------------------------------------------------
03-APR-16 11.45.27.000000000 PM

SQL>


Why do we now have 3 rows in the History Table for each row in the active table? Take ID_COLUMN=1. The first row -- for the time range 02-Apr 11:32pm to 02-Apr 11:46pm -- is as of yesterday, the same row we saw in the History Table after the update on the active table. The second row preserves the row for the time range 02-Apr 11:46pm to 03-Apr 11:41pm, to support AS OF queries up to the time of the UPDATE. The third row, for the time range 03-Apr 11:41pm to 03-Apr 11:45pm, presents the UPDATEd value ('New') in the column up to the last transaction updating it before the column was dropped at 03-Apr 11:45:27pm.

Thus, Oracle maintains multiple versions of the same row, including versions for DROPped columns, in the History Table.

Note: The History Table is not supposed to be queried directly in the manner I have shown here. The proper query against the active table would be an AS OF query, which is automatically rewritten / redirected to "hit" the History Table when necessary.
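
For instance, a sketch of the supported access path (the timestamp offset here is arbitrary):

-- Flashback query on the active table; Oracle decides whether to serve
-- it from undo or to redirect it to SYS_FBA_HIST_93250.
select id_column, data_column
from test_fbda
as of timestamp (systimestamp - interval '1' day)
where id_column < 6;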

What about the third table -- the TCRV table?

SQL> l
1 select scn_to_timestamp(startscn), op , count(*)
2 from sys_fba_tcrv_93250
3 group by scn_to_timestamp(startscn), op
4* order by 2,1
SQL> /

SCN_TO_TIMESTAMP(STARTSCN) O
--------------------------------------------------------------------------- -
COUNT(*)
----------
03-APR-16 11.45.24.000000000 PM U
1000


SQL>
SQL> select count(distinct(rid)) from sys_fba_tcrv_93250;

COUNT(DISTINCT(RID))
--------------------
1000

SQL>



It shows 1000 rows as having been UPDATEd? (Does OP='U' mean 'UPDATE'?) We do know that ADD and DROP column are changes to the table. But are they UPDATEs?

Next post: continuing with DML operations (more rows, some updates). We'll see if we can decipher anything from the TCRV table as well. (This changed to showing support for TRUNCATEs.)
.
.
.



Categories: DBA Blogs

LittleArduinoProjects#198 Hall Effect Switch

Paul Gallagher - Sun, 2016-04-03 04:53
Hall-effect sensors generally support one of three modes:
  • switch - turn on in the presence of a magnetic field of a specified polarity
  • latch - turn on in the presence of a magnetic field, and stay on until exposed to the reverse polarity
  • linear sensor - output is proportional to the magnetic field strength
The components I have for testing are marked "44E/938" and are nominally compatible with the A3144 or OH3144. These are switches that turn on when facing a south magnetic polarity.

The chip is rated for continuous output current of 25mA, so that is sufficient to drive an LED (as demonstrated here). For other switching applications, the output can be used to switch a transistor or pull a microcontroller output low.

Testing with some neodymium magnets, I get a strong full-on when the south pole of the magnets is within 23mm directly in front of the chip. The output remains on until I pull back to over 40mm.

A common use for Hall-effect sensors is to detect and measure rotation. A good demonstration of this is to sequence a PoV display as demonstrated in Great Scott's latest video - HACKED!: Old Fan becomes a POV Display

As always, all notes, schematics and code are in the Little Arduino Projects repo on GitHub.

FBDA -- 1 : Testing Flashback Data Archive in 12c (NonCDB)

Hemant K Chitale - Sat, 2016-04-02 09:53
Note : At the bottom of this post, you'll find links to more (subsequent) posts on this topic.

Some testing I'd done with Flashback Data Archive (henceforth called FBDA in this and subsequent posts, if any) in 11.2.0.4 left me with uncertainty about the automatic purging of data beyond the Retention Period.  I might return to testing 11.2.0.4, but here I shall begin testing in 12.1.0.2  (NonCDB).

Setting up FBDA :

[oracle@ora12102 ~]$ sqlplus system/oracle

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 2 23:23:53 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:20:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create tablespace fbda ;

Tablespace created.

SQL> create flashback archive fbda tablespace fbda retention 3 day;

Flashback archive created.

SQL> create tablespace hemant;

Tablespace created.

SQL> create user hemant identified by hemant
2 default tablespace hemant;

User created.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant create session to hemant;

Grant succeeded.

SQL> alter user hemant quota unlimited on hemant;

User altered.

SQL> alter user hemant quota unlimited on fbda;

User altered.

SQL> grant flashback archive administer to hemant;

Grant succeeded.

SQL> grant flashback archive on fbda to hemant;

Grant succeeded.

SQL>
SQL> connect / as sysdba
Connected.
SQL> grant execute on dbms_flashback_archive to hemant;

Grant succeeded.

SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table test_fbda (id_column number, data_column varchar2(15), date_inserted date) tablespace hemant;

Table created.

SQL> alter table test_fbda flashback archive fbda;

Table altered.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL>


Note the Flashback Archive history table corresponding to TEST_FBDA doesn't get created immediately.

SQL> connect hemant/hemant
Connected.
SQL> insert into test_fbda
2 select rownum , to_char(rownum), trunc(sysdate)
3 from dual connect by level < 1001;

1000 rows created.

SQL> commit;

Commit complete.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> select flashback_archive_name, retention_in_days, status
2 from user_flashback_archive;

FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
RETENTION_IN_DAYS STATUS
----------------- -------
FBDA
3


SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables;

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA
FBDA
SYS_FBA_HIST_93250 ENABLED


SQL>
SQL> !sleep 300

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL>
SQL> select object_id
2 from user_objects
3 where object_name = 'TEST_FBDA'
4 and object_type = 'TABLE'
5
SQL> /

OBJECT_ID
----------
93250

SQL>


So, it took some time for the flashback archive history table (identified on the basis of the OBJECT_ID) to appear. The background fbda process seems to run (wake up) every 5 minutes, although it may wake up more frequently if there is more activity in the database.

SQL> select trunc(date_inserted), count(*)
2 from test_fbda
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 1000

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL> update test_fbda
2 set data_column = data_column
3 where rownum < 451;

450 rows updated.

SQL> commit;

Commit complete.

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL>
SQL> !sleep 180

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 450

SQL>
SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted
4 order by 1;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
DATE_INSE COUNT(*)
--------- ----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
02-APR-16 450


SQL>


Notice that not all 1000 rows got copied to the History Table. Only the 450 rows that I updated were copied in. They are tracked by SCN and timestamp. (The DATE_INSERTED column is my own date column; Oracle wouldn't be using that column to track DML dates for rows, as the values in that column are controlled by me -- the application or developer -- not Oracle.)

Note :  The History Table is not supposed to be directly queried in the manner I have shown here.

Tomorrow :  More Rows, and some DDLs as well.

Post 2 here.  It covers some of the architectural components and support for ADD / DROP column DDL.

Post 3 here.  It shows support for TRUNCATEs.

Post 4 here.  On Partitions and Indexes.

Post 5 here.  On (Auto)Purging.

Post 6 here.  On Bug Notes
.
.
.


Categories: DBA Blogs

Undepersonalized Teaching vs. Learnification

Michael Feldstein - Sat, 2016-04-02 09:43

By Michael Feldstein

Amy Collier was kind enough to post the video and notes from a recent keynote she gave. (For those of you who don’t know Amy, she is the Associate Provost for Digital Learning at Middlebury College and well worth following. She doesn’t blog that often, but when she does, she has interesting things to say.) A central element of her talk was on the “learnification” of education; that is, how the teacher disappears from the conversation about “good” education and the whole thing gets reduced to learners gobbling up little learning objects to get their competency level-up, like a human game of Pac-Man. This is one reason why Phil and I felt it was important to reframe “personalized learning” as a set of teaching practices that we called “undepersonalized teaching” in our recent EDUCAUSE Review article. When we visited actual classes (including Middlebury’s) and saw what actual teachers were doing as part of our e-Literate TV case study series, it was clear that there were thoughtful teachers using these tools in interesting ways that enabled them to spend more time on the high-value aspects of their teaching craft. But the student-as-Pac-Man story is the one that gets heard.

One additional point about the learnification of education, which Amy briefly hints at in her keynote but which you will miss if you only read her notes, is that “learnification” is not something that only comes from the “neo-liberal”[1] emphasis on measurement of learning outcomes, career readiness, scale, and student as consumer. The Gert Biesta paper that she references actually pins some of the blame on constructivism, which very often gets flattened into some version of “Hey, teacher, leave them kids alone!” In its historical context, constructivism is a useful antidote to the Pac-Man reductivism of the prevailing educational theory of the time, but like the modern, mechanized variant that it is most frequently deployed to counter today, it tends to de-emphasize the role of the teacher to the point where that role all but disappears.

The picture that I have in my own head for the role of the teacher involves a kind of a bell curve for formal education. At the left-hand tail is the basic stuff that people need to know in order to start reasoning and learning for themselves—facts, basic concepts, habits of mind and of study, and so on. The specifics of what goes into that category are somewhat up for grabs and depend a lot on the student’s long-term goals. But whatever they are, many of them lend themselves to the sort of learning objectification/adaptive learning approach just fine. On the far end of the bell curve of formal education (but hopefully in the fat part of bell curve of normal adult life) is the complex reasoning that students apply on their own to synthesize what they are seeing into their own individual perspectives on the world. It’s the sort of thing that one hopes goes on in an undergraduate’s capstone or senior thesis project (and often sooner for different students in different classes). At that point, the training wheels of formal education come off.

The fat middle part of the curve is where good teachers should be spending most of their time. It’s the work of getting the students ready to ride their bikes without the training wheels. The “not yetness” that Amy talks about in her keynote (and elsewhere) is basically about taking your hand off the kid’s shoulder to give her a few seconds to ride on her own. It’s the moment when she realizes that she is free to steer, to ride ahead, and yes, maybe to fall. You catch her if you see her in danger of falling badly enough to hurt herself, but not before. It’s the important how part of Vygotsky’s zone of proximal development.

The shape of the curve changes somewhat depending on the student population, but a classic bell curve image fits well both with the typical student progression in a four-year college as well as within the work to be done in a typical 100- or 200-level course. Most of the work of college should be about coaching students on how to think for themselves. Unfortunately, teachers face some practical problems in getting to what is supposed to be the fat part of the curve. The first one is that some students get stuck on the foundational stuff—different students in different places—and the standard lecture-format class doesn’t give teachers much opportunity to see where they are stuck and help them to get unstuck. As a result, teachers either have to deal with the “slowest car on the freeway” problem in their class or else decide to let more students fail rather than taking the time to help them get unstuck. (Or both. If you slow down too much, the students who understand most of the basics tune out and can miss the few critical pieces that they still need to learn.) Second, we have trained students through that very same lecture format that the basics are not something that they should even try to master for themselves. Basics are transmitted during class time as lectures.

Personalized learning can be seen as a set of techniques designed to fix those two problems through the use of supporting technology. Proper use of adaptive “skill and drill” platforms, student retention early warning systems, and the like can actually free up the teacher to create that environment of “not-yetness” in the classroom. It’s not either/or but both/and.


  1. Honestly, I still don’t really know what that word means, other than “bad” and “vaguely associated with people who have money.”

The post Undepersonalized Teaching vs. Learnification appeared first on e-Literate.

Welcome New Oracle ACE Program Participants!

OTN TechBlog - Fri, 2016-04-01 19:03

Congratulations to our newest Oracle ACEs!  The Oracle ACE Program recognizes individuals for their outstanding contribution to the Oracle community. With over 500 participants in more than 60 countries around the globe, the ACE program features three tiers: Associate, ACE and ACE Director, to support community advocates at all stages of their career. Read more about the program here.

Oracle ACEs initiated in March:

Oracle ACE Directors:

Gokhan Atil

John Booth

Ludovico Caldara

Celvin Kattookaran

Carlos Sierra

Oracle ACEs:

Jérôme Françoisse

Anju Garg

Laurent Leturgez

Emrah Mete

Anuj Mohan

Robert van Molken

Oracle ACE Associates:

Elise Valin-Raki


sql for first day of month and last day of month

Learn DB Concepts with me... - Fri, 2016-04-01 19:00

select SYSDATE ,
last_day(sysdate) as LAST_DATE_CURR_MNTH,
ADD_MONTHS(last_day(sysdate),-1) as PREVIOUS_MON_LAST_DATE,
last_day(sysdate)+1 as NEXT_MON_FIRST_DATE,
ADD_MONTHS(last_day(sysdate),+1) as NEXT_MON_LAST_DATE,
ADD_MONTHS(last_day(sysdate),+5) as LAST_DATE_OF_5TH_MON,
ADD_MONTHS(last_day(sysdate),+5) +1 as FIRST_DATE_IN_6TH_MON_AFTR_NOW  
from dual;

"SYSDATE"    "LAST_DATE_CURR_MNTH"    "PREVIOUS_MON_LAST_DATE"    "NEXT_MON_FIRST_DATE"    "NEXT_MON_LAST_DATE"    "LAST_DATE_OF_5TH_MON"   
-----------  ---------------------   ------------------------    ---------------------   --------------------    -----------------------
"FIRST_DATE_IN_6TH_MON_AFTR_NOW"
-----------------------

01-APR-16        30-APR-16                31-MAR-16                    01-MAY-16                31-MAY-16            30-SEP-16   
-----------------------
01-OCT-16
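
The first day of the current month is not in the query above; it comes from TRUNC, which pairs naturally with the functions already used:

select trunc(sysdate,'MM') as FIRST_DATE_CURR_MNTH,
       trunc(add_months(sysdate,-1),'MM') as PREV_MON_FIRST_DATE
from dual;

FIRST_DATE_CURR_MNTH   PREV_MON_FIRST_DATE
--------------------   -------------------
01-APR-16              01-MAR-16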
Categories: DBA Blogs

compute as a service

Pat Shuff - Fri, 2016-04-01 13:42
In an ongoing learning journey of trying to understand cloud services, I got accounts on the Amazon cloud, the Azure cloud, and the Oracle cloud. I thought I would start with the basics and grow from there. As an exercise, let's create a Linux server with no software installed on each of the three platforms. Apart from creating an account on all three platforms (which was non-trivial), creation of a compute server on each platform was relatively simple.

Amazon Web Services

The initial look and feel of the console starts the experience. It does show what the three companies are focused on. Let's start with Amazon (it is first in the alphabet and I had to pick something). The console lists a wide variety of services and things that you can purchase. Without doing research I would not have known that S3 stands for storage and EC2 stands for compute.

I get what a virtual server in the cloud is but how does that differ from a docker container and why should I care? Why should I care about managing Web Apps if I am just looking for raw compute? Why do I want to run code outside of a virtual machine? Which one should I choose? We are not going to go into depth on any of these subjects. If we are just looking at running a Linux instance, the simple EC2 should be adequate. We can install Docker as a package in our Linux instance to help us control how much of a processor is allocated to a service or program. We can install applications like Tomcat or WebLogic to run Web Apps. Linux gives us the foundation to do all of this with packages. Lambda is a totally different beast in that I can run code snippets to do things like voice command interpretation for an Amazon Echo or asynchronous events from devices and launch web sites or REST apis without having to install, manage, and configure an operating system. The rest of the world calls this a Node.js function and offers it as a separate service as well. I realize that I am oversimplifying this but you have to know what you are trying to accomplish before you start to create your first compute instance in the cloud.

Microsoft Azure Services

The Azure services are a little different in that they focus more on user creation of virtual machines, SQL Server, and some app services. Creation of a virtual image is relatively easy and it makes sense what you are doing. The console is relatively simple and clean, with more options on the second page instead of the first page as is done with Amazon.

As you click on the Add button for Virtual Image you get an expanded set of operation system options and configurations.

Note that you can search for Oracle Linux and get a listing of various versions of the database. The virtual machine is easy to configure and create using the portal. If, however, you want to configure and create this via a command line, you need to download PowerShell and run everything inside that application. The command line is Microsoft-specific and difficult to port or migrate to other services. With Amazon and Oracle you can easily use REST API calls to provision and create services. Microsoft makes it a little more difficult to script generically, but you can easily do this in their shell and language.

Oracle Cloud Services

The Oracle cloud compute services are new to the market. In the past, compute services have been sold in bundles of 500 processors, but they have recently been made available in a single-processor consumption model. The cloud console has a different look and feel because the focus of the cloud services is more on the PaaS layer and less on the compute and storage layers.

Note that the screen shot starts with the storage and compute services but scrolling down shows database, java, SOA, and more PaaS layers.

To create a virtual image, you need to click on the compute cloud service's Service Console, which will allow you to create an instance. The operating system selection is not as graphical or user-friendly as the Microsoft interface, but it does list a variety of operating system options and configurations.

In conclusion, all three of these cloud consoles allow you to create a virtual image. In the next blog entry we will walk through the steps needed to create a Linux 6 instance on each of the three cloud platforms. We will not talk about how to create accounts; we will assume that you can find account setup and creation on your own. All three sites offer "try me" services that give at least 30-day evaluations. The eventual recommendation will be to use services like bitnami.com, which takes public domain services like LAMP servers, wiki engines, blog servers, and other public domain tools. The Bitnami site allows you to select a pre-configured instance and provision it into all three of these cloud services, along with a few other cloud providers.

Best practice for setting up MySQL replication filters

Pythian Group - Fri, 2016-04-01 13:23

It is not uncommon that we need to filter out some DBs or tables while setting up replication. It is important to understand how MySQL evaluates/processes the replication filtering rules, to avoid conflicts or confusion while setting them up. The purpose of this blog is to illustrate the rules and provide some suggestions for best practice.

MySQL provides 3 levels of filters for setting up replication: binary log, DB and table. The binlog filters apply on the master and control how changes are logged. Since MySQL replication is based on the binlog, this is the first-level filter and has the highest priority. The DB-level and table-level filters apply on the slaves; since each table belongs to a schema, the DB-level filters have higher priority than the table-level ones. Inside the table-level filters, MySQL evaluates the options in the order: --replicate-do-table, --replicate-ignore-table, --replicate-wild-do-table, --replicate-wild-ignore-table.

Based on that, we have the following suggestions for setting up MySQL replication filter as best practice:

I) Do not set up any binlog-level filters unless you really need to and can afford losing the chance of having an extra full copy of data changes for the master.

II) In DB-level filters, use either one or none of the two options: --replicate-do-db or --replicate-ignore-db. Never use both at the same time.

III) While using binlog_format='statement' OR 'mixed' (in mixed mode, if a transaction is deterministic then it will be stored in statement format) and setting up --replicate-do-db or --replicate-ignore-db on slaves, make sure you never make changes on tables across the default database on the master, otherwise you might lose the changes on the slave due to the default database not matching.

IV) In table-level filters, use only one of the options, or the following combination of two: --replicate-ignore-table together with --replicate-wild-do-table, to avoid conflicts and confusion.

For MariaDB replication filters within a Galera cluster, filters should be used with caution. As a general rule, except for InnoDB DML updates, the following replication filters are not honored in a Galera cluster: binlog-do-db, binlog-ignore-db, replicate-wild-do-db, replicate-wild-ignore-db. However, the replicate-do-db and replicate-ignore-db filters are honored for DDL and DML for both the InnoDB and MyISAM engines; as they might create discrepancies, replication may abort (see MDEV-421, MDEV-6229, and https://mariadb.com/kb/en/mariadb/mariadb-galera-cluster-known-limitations/). For slaves replicating from the cluster, the rules are similar to the normal replication settings above.
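
As an illustration of suggestion IV (a sketch only: CHANGE REPLICATION FILTER needs MySQL 5.7+, the table names are invented, and the slave SQL thread must be stopped first):

STOP SLAVE SQL_THREAD;
CHANGE REPLICATION FILTER
  REPLICATE_IGNORE_TABLE = (db1.audit_log),
  REPLICATE_WILD_DO_TABLE = ('db1.%');
START SLAVE SQL_THREAD;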

Here are the details/reasons:

1) Binlog-level filters

A) How MySQL processes the binlog-level filters

There are 2 options for setting a binlog filter on the master: --binlog-do-db and --binlog-ignore-db. MySQL will check --binlog-do-db first; if it is set, MySQL will apply it and ignore --binlog-ignore-db. If --binlog-do-db is NOT set, then MySQL will check --binlog-ignore-db. If both of them are empty, it will log changes for all DBs.

See the examples below. In scenario 1) no binlog-level filters are set, so all changes were logged. In scenario 2) --binlog-do-db and --binlog-ignore-db are both set to m_test; changes on the DB m_test were logged and changes on the DB test were NOT logged. In scenario 3) only --binlog-ignore-db is set to m_test, so changes on the DB m_test were NOT logged and changes on the DB test were logged.
scenario 1) --binlog-do-db and --binlog-ignore-db are NOT set:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000003 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> show binlog events in "vm-01-bin.000003" from 120;
Empty set (0.00 sec)

mysql> insert into t1(id,insert_time) values(10,now());
Query OK, 1 row affected (0.05 sec)

mysql> show binlog events in "vm-01-bin.000003" from 120;
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                          |
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
| vm-01-bin.000003 | 120 | Query      |         1 |         211 | BEGIN                                                         |
| vm-01-bin.000003 | 211 | Query      |         1 |         344 | use `m_test`; insert into t1(id,insert_time) values(10,now()) |
| vm-01-bin.000003 | 344 | Xid        |         1 |         375 | COMMIT /* xid=17 */                                           |
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
3 rows in set (0.00 sec)

scenario 2) --binlog-do-db=m_test and --binlog-ignore-db=m_test:

-- insert into tables of DB m_test was logged

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000004 |      656 | m_test       | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> use m_test

mysql> insert into t1(insert_time) values(now());
Query OK, 1 row affected (0.02 sec)

mysql> show binlog events in "vm-01-bin.000004" from 656;
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                     |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
| vm-01-bin.000004 | 656 | Query      |         1 |         747 | BEGIN                                                    |
| vm-01-bin.000004 | 747 | Intvar     |         1 |         779 | INSERT_ID=13                                             |
| vm-01-bin.000004 | 779 | Query      |         1 |         906 | use `m_test`; insert into t1(insert_time) values(now()) |
| vm-01-bin.000004 | 906 | Xid        |         1 |         937 | COMMIT /* xid=26 */                                      |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------+
4 rows in set (0.00 sec)

-- insert into tables of DB test was NOT logged

mysql> use test;

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000004 |      937 | m_test       | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+

mysql> insert into t1(`a`) values('ab');
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000004" from 937;
Empty set (0.00 sec)


scenario 3) --binlog-do-db is NOT set and --binlog-ignore-db=m_test:

mysql> use m_test
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000005 |      120 |              | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+
mysql> insert into t1(insert_time) values(now());
Query OK, 1 row affected (0.01 sec)

mysql> show binlog events in "vm-01-bin.000005" from 120;
Empty set (0.00 sec)

mysql> use test
mysql> insert into t1(`a`) values('ba');
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000005" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                         |
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
| vm-01-bin.000005 | 120 | Query      |         1 |         199 | BEGIN                                        |
| vm-01-bin.000005 | 199 | Query      |         1 |         305 | use `test`; insert into t1(`a`) values('ba') |
| vm-01-bin.000005 | 305 | Xid        |         1 |         336 | COMMIT /* xid=22 */                          |
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
3 rows in set (0.00 sec)


B) Best practice for setting up the binlog-level filters

So, for the binlog-level filter, use either one (and ONLY one) of the 2 options: --binlog-do-db to make MySQL log changes for the DBs in the list, OR --binlog-ignore-db to make MySQL log changes for the DBs NOT in the list. Or leave both of them empty to log changes for all the DBs.

However, we usually recommend NOT setting up any binlog-level filters. The reason is that logging changes for all DBs and setting up filters only on the slaves achieves the same purpose, while leaving us an extra full copy of the data changes for the master, in case we need that for recovery.

 

2) DB-level filters

A) How MySQL processes the DB-level filters

There are 2 options for setting DB-level filters: --replicate-do-db and --replicate-ignore-db. MySQL processes these two filters in a similar way to the binlog-level filters; the difference is that they ONLY apply on the slaves, and so affect how the slaves replicate from their master. MySQL will check --replicate-do-db first; if it is set, it will replicate the DBs in the list and ignore --replicate-ignore-db. If --replicate-do-db is NOT set, then MySQL will check --replicate-ignore-db and replicate all the DBs except for the ones in this list. If both of them are empty, it will replicate all the DBs. You can find the process in the chart at http://dev.mysql.com/doc/refman/5.7/en/replication-rules-db-options.html

There is a trick with DB-level filters, though, if binlog_format is set to statement or mixed. (binlog_format=mixed applies here too, because in mixed-mode replication a deterministic transaction is resolved to statement format, which behaves the same as statement mode.) Since "With statement-based replication, the default database is checked for a match" (http://dev.mysql.com/doc/refman/5.7/en/replication-rules-db-options.html), if you set up --replicate-do-db and update a table outside the default database on the master, the update statement will not be replicated when the default database you are running the command from is not in --replicate-do-db. For example: there are 2 DBs on the master, you set binlog_format='statement' OR 'mixed', and you set --replicate-do-db=DB1 on the slave. If you execute: use DB2; update DB1.t1 … this update will not be executed on the slave. To make the update replicate to the slave, you need to run: use DB1; update t1 …
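Distilled to its essence (a sketch with the hypothetical databases DB1 and DB2 from the paragraph above, a hypothetical column c, and --replicate-do-db=DB1 set on the slave):

-- On the master, with binlog_format = STATEMENT or MIXED:
use DB2;
update DB1.t1 set c = 1 where id = 1;  -- default DB is DB2: NOT replicated to the slave
use DB1;
update t1 set c = 1 where id = 1;      -- default DB is DB1: replicated to the slave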

For example: with binlog_format=statement or binlog_format=mixed, we insert into m_test.t1 in two ways, once with m_test as the default DB and once with test as the default DB. Both changes are logged on the master. But on the slave, after it caught up, only the insert with default DB m_test was replicated; the insert with default DB test was NOT. As shown below:

Scenario 1) binlog_format=statement

On the master: insert into m_test.t1 in both ways (default DB m_test, then default DB test); both changes are logged:

mysql> use m_test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> delete from t1;
Query OK, 16 rows affected (0.02 sec)

mysql> select * from m_test.t1;
Empty set (0.00 sec)

mysql> use m_test
Database changed
mysql> insert into m_test.t1(insert_time) values(now());
Query OK, 1 row affected (0.04 sec)

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> insert into m_test.t1(insert_time) values(now());
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000006" from 654;
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos  | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000006 |  654 | Xid        |         1 |         685 | COMMIT /* xid=39 */                                            |
| vm-01-bin.000006 |  685 | Query      |         1 |         768 | BEGIN                                                          |
| vm-01-bin.000006 |  768 | Query      |         1 |         860 | use `m_test`; delete from t1                                   |
| vm-01-bin.000006 |  860 | Xid        |         1 |         891 | COMMIT /* xid=48 */                                            |
| vm-01-bin.000006 |  891 | Query      |         1 |         982 | BEGIN                                                          |
| vm-01-bin.000006 |  982 | Intvar     |         1 |        1014 | INSERT_ID=17                                                   |
| vm-01-bin.000006 | 1014 | Query      |         1 |        1148 | use `m_test`; insert into m_test.t1(insert_time) values(now()) |
| vm-01-bin.000006 | 1148 | Xid        |         1 |        1179 | COMMIT /* xid=52 */                                            |
| vm-01-bin.000006 | 1179 | Query      |         1 |        1268 | BEGIN                                                          |
| vm-01-bin.000006 | 1268 | Intvar     |         1 |        1300 | INSERT_ID=18                                                   |
| vm-01-bin.000006 | 1300 | Query      |         1 |        1432 | use `test`; insert into m_test.t1(insert_time) values(now())   |
| vm-01-bin.000006 | 1432 | Xid        |         1 |        1463 | COMMIT /* xid=60 */                                            |
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
12 rows in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 18 | 2016-03-20 15:00:01 |
+----+---------------------+
2 rows in set (0.00 sec)

 

On the slave, after it caught up: only the first insert (default DB m_test) was replicated; the insert with default DB test was NOT replicated:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.2.6
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: vm-01-bin.000006
          Read_Master_Log_Pos: 1463
               Relay_Log_File: ewang-vm-03-relay-bin.000017
                Relay_Log_Pos: 1626
        Relay_Master_Log_File: vm-01-bin.000006
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: m_test
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1463
              Relay_Log_Space: 1805
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: a22b3fb2-5e70-11e5-b55a-0800279d00c5
             Master_Info_File: /mysql/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
+----+---------------------+
1 row in set (0.00 sec)

 

Scenario 2) binlog_format=mixed

In master:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+
1 row in set (0.00 sec)

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000007 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> use m_test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> insert into m_test.t1(insert_time) values(now());
Query OK, 1 row affected (0.04 sec)

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> insert into m_test.t1(insert_time) values(now());
Query OK, 1 row affected (0.04 sec)

mysql> show binlog events in "vm-01-bin.000007" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000007 | 120 | Query      |         1 |         211 | BEGIN                                                          |
| vm-01-bin.000007 | 211 | Intvar     |         1 |         243 | INSERT_ID=19                                                   |
| vm-01-bin.000007 | 243 | Query      |         1 |         377 | use `m_test`; insert into m_test.t1(insert_time) values(now()) |
| vm-01-bin.000007 | 377 | Xid        |         1 |         408 | COMMIT /* xid=45 */                                            |
| vm-01-bin.000007 | 408 | Query      |         1 |         497 | BEGIN                                                          |
| vm-01-bin.000007 | 497 | Intvar     |         1 |         529 | INSERT_ID=20                                                   |
| vm-01-bin.000007 | 529 | Query      |         1 |         661 | use `test`; insert into m_test.t1(insert_time) values(now())   |
| vm-01-bin.000007 | 661 | Xid        |         1 |         692 | COMMIT /* xid=53 */                                            |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
8 rows in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 18 | 2016-03-20 15:00:01 |
| 19 | 2016-03-20 15:09:14 |
| 20 | 2016-03-20 15:09:25 |
+----+---------------------+
4 rows in set (0.00 sec)

 

In slave:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+
1 row in set (0.00 sec)

 

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.2.6
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: vm-01-bin.000007
          Read_Master_Log_Pos: 692
               Relay_Log_File: ewang-vm-03-relay-bin.000023
                Relay_Log_Pos: 855
        Relay_Master_Log_File: vm-01-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: m_test
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 692
              Relay_Log_Space: 1034
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: a22b3fb2-5e70-11e5-b55a-0800279d00c5
             Master_Info_File: /mysql/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 19 | 2016-03-20 15:09:14 |
+----+---------------------+
2 rows in set (0.00 sec)

 

B) Best practice for setting up the DB-level filters

Use either one or none of the two options: --replicate-do-db or --replicate-ignore-db. Never use both at the same time.

If you use binlog_format=statement or mixed and set --replicate-do-db or --replicate-ignore-db on the slaves, make sure you never change tables from outside the current default database; otherwise, data discrepancies on the slaves are to be expected.
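As a sketch, the slave's my.cnf for the setup tested above would carry exactly one of these lines (m_test is the database from this test; pick one, never both):

[mysqld]
# Replicate only m_test:
replicate-do-db = m_test
# ... OR replicate everything except m_test:
# replicate-ignore-db = m_test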

 

3) Table-level filters

A) How MySQL processes the Table-level filters

There are 4 options for setting table-level filters: --replicate-do-table, --replicate-ignore-table, --replicate-wild-do-table and --replicate-wild-ignore-table. MySQL evaluates the options in that order. You can find the process in the chart at http://dev.mysql.com/doc/refman/5.6/en/replication-rules-table-options.html

 

As the chart shows, MySQL first checks --replicate-do-table: tables listed there are replicated, and so cannot be ignored by the lower-priority options such as --replicate-ignore-table or --replicate-wild-ignore-table. MySQL then checks --replicate-ignore-table: tables listed there are ignored even if they show up in --replicate-wild-do-table. The lowest priority belongs to --replicate-wild-ignore-table.

B) Best practice for setting up the Table-level filters

Due to the priorities of the 4 table-level options, and to avoid confusion and conflicts, we suggest using only one of the 4 options, or else combining --replicate-ignore-table with --replicate-wild-do-table. That way it is clear that the tables in --replicate-ignore-table will be ignored and the tables matching --replicate-wild-do-table will be replicated, as sketched below.
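For instance, a sketch of that suggested combination in the slave's my.cnf (the table name t_scratch is hypothetical; m_test is the database used in the tests above):

[mysqld]
# Ignored no matter what: replicate-ignore-table has the higher priority
replicate-ignore-table = m_test.t_scratch
# Everything else in m_test is replicated
replicate-wild-do-table = m_test.%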

 

Categories: DBA Blogs

Deploying your Oracle MAF Apps on Windows Platform

As you may already know, Oracle Mobile Application Framework (MAF) 2.3 has been released. One of its notable features is support for the Universal Windows Platform (UWP). This means that starting...

Categories: DBA Blogs

What Are Your Options For Migrating Enterprise Applications to the Cloud?

Pythian Group - Fri, 2016-04-01 08:16

Migrating your enterprise applications from on-premises infrastructure to the public cloud is attractive for a number of reasons. It eliminates the costs and complexities of provisioning hardware and managing servers, storage devices, and network infrastructure; it gives you more compute capacity per dollar without upfront capital investment; and you gain opportunities for innovation through easier access to new technologies, such as advanced analytical capabilities.

So how do you get there?

You have a few options. At one end of the spectrum, you could simply wait and rationalize, making continuous incremental changes to gain efficiencies. This is obviously a “slow burn” approach. In the middle is a “lift-and-shift” from your current environment into the public cloud. And at the far extreme, you could plunge right in and re-architect your applications—a costly and probably highly complex task.

 

In fact, a true migration “strategy” will involve elements of each of these. For example, you could perform short-term optimizations and migrations on a subset of applications that are ready for the cloud, while transforming the rest of your application stack over the longer term.

 

What to expect from the major public cloud platforms

There are three leading public cloud platforms: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). As Google doesn’t seem to be driving customers to lift-and-shift their applications to GCP, I’m going to focus on AWS and Azure as potential cloud destinations and, for specificity, take Oracle enterprise databases as the use case.

 

Amazon Web Services

You have two options for migrating Oracle databases to the AWS cloud: infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

 

Deploying Oracle applications in AWS IaaS is much like deploying them on your in-house infrastructure. You don’t get flexible licensing options, but you do have the ability to easily allocate more or less capacity as needed for CPU, memory, and storage. However, because AWS IaaS is virtualized infrastructure, you may experience slower performance due to suboptimal CPU core allocation or processor caches. You’ll also have less flexibility with instance sizes, network topology, storage performance tiers, and the like.

 

AWS Relational Database Service (RDS) for Oracle is a managed PaaS offering where, in addition to giving you the benefits of IaaS, Amazon takes on major DBA and system administrator tasks including provisioning, upgrades, backups, and multi-availability zone replication. This significantly simplifies your operations—but also results in less control over areas such as configuration, patching, and maintenance windows. AWS RDS for Oracle can also be used with a pay-as-you-go licensing model included in the hourly rate.

 

Microsoft Azure

Azure does not have a managed offering for Oracle databases, so the only way to run Oracle Database on Azure is through its IaaS platform. The benefits are very similar to AWS IaaS, but Azure offers additional licensing options (with Windows-based license-included images) and its instances are billed by the minute rather than by the hour. What’s important to keep in mind is that Azure is not as broadly adopted as AWS and offers less flexibility for storage performance tiers and instance sizes. Oracle Database software running on Windows is also not as common as running on Linux.

 

For more in-depth technical details on these options, I encourage you to read our white paper, Migrating Oracle Databases to Cloud. My next blog in this series will look at one other option not discussed here: migrating to Oracle Cloud.


Categories: DBA Blogs

CHANGE STANDBY DATABASE PROTECTION MODE

Learn DB Concepts with me... - Fri, 2016-04-01 08:00
SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

SQL> show parameter log_archive_dest_2

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2             string     SERVICE=ORCLSTB1 NOAFFIRM ASYN
                         C VALID_FOR=(ONLINE_LOGFILES,P
                         RIMARY_ROLE) DB_UNIQUE_NAME=OR
                         CLSTB1
log_archive_dest_20             string
log_archive_dest_21             string
log_archive_dest_22             string
log_archive_dest_23             string
log_archive_dest_24             string
log_archive_dest_25             string
log_archive_dest_26             string

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27             string
log_archive_dest_28             string
log_archive_dest_29             string
SQL> show parameter db_unique_name

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                 string     ORCL
SQL> show parameter log_archive_config

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_config             string     dg_config=(ORCL,ORCLSTB1,ORCLS
                         TB2)
SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

System altered.

SQL> show parameter log_archive_dest_2

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2             string     SERVICE=ORCLSTB1 NOAFFIRM ASYN
                         C VALID_FOR=(ONLINE_LOGFILES,P
                         RIMARY_ROLE) DB_UNIQUE_NAME=OR
                         CLSTB1
log_archive_dest_20             string
log_archive_dest_21             string
log_archive_dest_22             string
log_archive_dest_23             string
log_archive_dest_24             string
log_archive_dest_25             string
log_archive_dest_26             string

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27             string
log_archive_dest_28             string
log_archive_dest_29             string

SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

System altered.


SQL> alter database set standby database to maximize availability;

Database altered.

NOTE: You don't need to shut down your instance when you are changing the protection mode from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY, but you do need to if you are going to MAXIMUM PROTECTION.
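For completeness, a sketch of the change to MAXIMUM PROTECTION (assuming log_archive_dest_2 already uses SYNC AFFIRM and standby redo logs are in place; check your Data Guard configuration first):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database set standby database to maximize protection;
SQL> alter database open;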

SQL> alter system switch logfile;

System altered.

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY

SQL> archive log list;
Database log mode           Archive Mode
Automatic archival           Enabled
Archive destination           /u01/app/oracle/oraarch/
Oldest online log sequence     239
Next log sequence to archive   241
Current log sequence           241
SQL> select group#,bytes/1024/1024 from v$standby_log;

    GROUP# BYTES/1024/1024
---------- ---------------
     4        52
     5        52
     6        52
     7        52


SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
     1          240

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY



SQL> alter system switch logfile;

System altered.

Categories: DBA Blogs

Install Oracle Real-Time Integration Business Insight

Darwin IT - Fri, 2016-04-01 07:50
Yes, Oracle FMW Integration Insight is available, as I wrote in an earlier post. You can download it here.
But of course we're very curious about how to install it. Do I have to unzip it into my FMW_HOME? Is there a nice Oracle Installer that I can run silently? No, none of that: it comes as a set of OPatch patches on top of SOASuite 12.2.1:
  1. p22189824_122100_Generic.zip: OPatch containing Oracle Real-Time Integration Business Insight 12.2.1.0.0
  2. p22655174_122100_Generic.zip: OPatch containing updates to SOA and BAM 12.2.1.0.0 
  3. p22659236_122100_Generic.zip: OPatch containing updates to Service Bus 12.2.1.0.0
Following the README.txt in the zip, the correct order is to first install ORIBI, then patch SOA & BPM, and then Service Bus.


Earlier I wrote about installing BPM QuickStart under Linux. Based on that I created a script to install SOASuite; maybe I should write about that in another post. I haven't tried whether it's possible to install this in a SOA or BPM QuickStart, but I did it in a full FMW installation that I built using my scripts. So I scripted this installation following the scripting work I've done earlier.


For this setup I have 2 folders:
  • scripts: with the scripts.
  • ofm_integration_insight_1221: with the downloaded ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip
The scripts folder contains two scripts:
fmw12c_env.sh:
#!/bin/bash
echo set Fusion MiddleWare 12cR2 environment
export JAVA_HOME=/usr/java/jdk1.8.0_74
export FMW_HOME=/u01/app/oracle/FMW12210
export SOA_HOME=$FMW_HOME/soa
export OSB_HOME=$FMW_HOME/osb
export MFT_HOME=$FMW_HOME/mft

This provides the settings for FMW_HOME and JAVA_HOME, plus the product homes I needed for my SOA/BPM Suite installation (I definitely should write that down!).

The actual install script is installOII.sh:
#!/bin/bash
. $PWD/fmw12c_env.sh
#
export CD=$PWD
export OII_INSTALL_HOME=$CD/../ofm_integration_insight_1221
export OII_INSTALL_ZIP=ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip
export OPATCH_SOABPM_ZIP=p22655174_122100_Generic.zip # OPatch containing updates to SOA and BAM 12.2.1.0.0
export OPATCH_SOABPM_NR=22655174
export OPATCH_OSB_ZIP=p22659236_122100_Generic.zip    # OPatch containing updates to Service Bus 12.2.1.0.0
export OPATCH_OSB_NR=22659236
export OPATCH_OII_ZIP=p22189824_122100_Generic.zip    # OPatch containing Oracle Real-Time Integration Business Insight 12.2.1.0.0
export OPATCH_OII_NR=22189824
export PATCHES_HOME=$FMW_HOME/OPatch/patches
export ORACLE_HOME=$FMW_HOME
# Unzip the OII install zip if the patch zips aren't extracted yet
if [ ! -f "$OII_INSTALL_HOME/$OPATCH_OII_ZIP" ]; then
  if [ -f "$OII_INSTALL_HOME/$OII_INSTALL_ZIP" ]; then
    echo Unzip $OII_INSTALL_HOME/$OII_INSTALL_ZIP to $OII_INSTALL_HOME
    unzip $OII_INSTALL_HOME/$OII_INSTALL_ZIP -d $OII_INSTALL_HOME
  else
    echo $OII_INSTALL_HOME/$OII_INSTALL_ZIP does not exist
  fi
fi
#
echo Check zips
cd $OII_INSTALL_HOME
md5sum -c patches.MD5
cd $CD
#
# Create the patches folder if needed
if [ ! -d "$PATCHES_HOME" ]; then
  mkdir $PATCHES_HOME
else
  echo $PATCHES_HOME available
fi
#
# Unzip and apply the OII patch
if [ ! -d "$PATCHES_HOME/$OPATCH_OII_NR" ]; then
  if [ -f "$OII_INSTALL_HOME/$OPATCH_OII_ZIP" ]; then
    echo Unzip $OII_INSTALL_HOME/$OPATCH_OII_ZIP to $PATCHES_HOME
    unzip $OII_INSTALL_HOME/$OPATCH_OII_ZIP -d $PATCHES_HOME
    echo Apply OII Patch
    cd $PATCHES_HOME/$OPATCH_OII_NR
    $ORACLE_HOME/OPatch/opatch apply
  else
    echo $OII_INSTALL_HOME/$OPATCH_OII_ZIP does not exist!
  fi
else
  echo OII Patch $PATCHES_HOME/$OPATCH_OII_NR already available
fi
cd $CD
# Unzip and apply the SOA & BPM patch
if [ ! -d "$PATCHES_HOME/$OPATCH_SOABPM_NR" ]; then
  if [ -f "$OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP" ]; then
    echo Unzip $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP to $PATCHES_HOME
    unzip $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP -d $PATCHES_HOME
    echo Apply SOA BPM Patch
    cd $PATCHES_HOME/$OPATCH_SOABPM_NR
    $ORACLE_HOME/OPatch/opatch apply
  else
    echo $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP does not exist!
  fi
else
  echo SOA-BPM Patch $PATCHES_HOME/$OPATCH_SOABPM_NR already available
fi
cd $CD
# Unzip and apply the OSB patch
if [ ! -d "$PATCHES_HOME/$OPATCH_OSB_NR" ]; then
  if [ -f "$OII_INSTALL_HOME/$OPATCH_OSB_ZIP" ]; then
    echo Unzip $OII_INSTALL_HOME/$OPATCH_OSB_ZIP to $PATCHES_HOME
    unzip $OII_INSTALL_HOME/$OPATCH_OSB_ZIP -d $PATCHES_HOME
    echo Apply OSB Patch
    cd $PATCHES_HOME/$OPATCH_OSB_NR
    $ORACLE_HOME/OPatch/opatch apply
  else
    echo $OII_INSTALL_HOME/$OPATCH_OSB_ZIP does not exist!
  fi
else
  echo OSB Patch $PATCHES_HOME/$OPATCH_OSB_NR already available
fi
cd $CD
echo Finished installing Oracle Fusion MiddleWare Integration Insight


The script first unzips the downloaded ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip into:
  • p22189824_122100_Generic.zip
  • p22655174_122100_Generic.zip
  • p22659236_122100_Generic.zip
  • patches.MD5
  • README.txt
Then it runs 'md5sum -c patches.MD5' to check the zips, but it ignores the results and just prints them.
Then, for each patch, it checks whether the patch is already unzipped in the FMW_HOME/OPatch/patches folder. If so, it just assumes that it's applied as well. If not, the patch zip is unzipped into the patches folder, and then it performs opatch apply.
OPatch will ask if you want to proceed (answer with 'y') and if the system is ready to be patched (again, answer with 'y'). For the SB patch (the last in the list) it will look like:
Apply OSB Patch
Oracle Interim Patch Installer version 13.3.0.0.0
Copyright (c) 2016, Oracle Corporation. All rights reserved.


Oracle Home : /u01/app/oracle/FMW12210
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/FMW12210/oraInst.loc
OPatch version : 13.3.0.0.0
OUI version : 13.3.0.0.0
Log file location : /u01/app/oracle/FMW12210/cfgtoollogs/opatch/22659236_Apr_01_2016_09_37_29/apply2016-04-01_09-37-22AM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/FMW12210"

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 22659236

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/FMW12210')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '22659236' to OH '/u01/app/oracle/FMW12210'

Patching component oracle.osb.server, 12.2.1.0.0...

Patching component oracle.osb.server, 12.2.1.0.0...
Patch 22659236 successfully applied.
Log file location: /u01/app/oracle/FMW12210/cfgtoollogs/opatch/22659236_Apr_01_2016_09_37_29/apply2016-04-01_09-37-22AM_1.log

OPatch succeeded.
Finished installing Oracle Fusion MiddleWare Integration Insight

If you have a home with only Service Bus or only SOA/BPM, adapt the script yourself so that it doesn't patch the product that isn't installed.

Oh, I did not verify the install; for now I assume it worked. My next step is to describe the SOA/BPM install and to check out the Integration Insight product.

Set Operations

Jonathan Lewis - Fri, 2016-04-01 07:20

A recent post on the OTN database forum highlights a couple of important ideas for optimising SQL. These are: (a) is there a logically equivalent way of stating the SQL, and (b) is there a different "natural language" way of posing the problem?

The posting starts with a query, part of an execution plan, and a request to “get rid of the tablescan”. I guessed originally that the query came from an 11g instance, and the OP gave us some code to create the tables and indexes, so I’ve modelled the tables to get the indicated plan (then filled in the original numbers). This is the query, and my cosmetically adjusted version of the plan output that the OP probably got:


SELECT a.hotel_code
  FROM lf_hotel_temp a
WHERE a.service_id = :p_service_id
       AND (NOT EXISTS (SELECT *
          FROM lf_ts_roomtype_properties b
         WHERE a.hotel_code = b.hotel_code)
        or NOT EXISTS (SELECT *
          FROM lf_gta_roomtype_properties b
         WHERE a.hotel_code = b.hotel_code) 
       or  NOT EXISTS (SELECT *
          FROM lf_hb_roomtype_properties b
         WHERE a.hotel_code = b.hotel_code))

-------------------------------------------------------------------------------
| Id  | Operation          | Name                     | Rows  |  Bytes | Cost |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                          | 12613 | 113517 |  135 |
|*  1 |  FILTER            |                          |       |        |      |
|*  2 |   TABLE ACCESS FULL| LF_HOTEL_TEMP            | 88433 | 795897 |  135 |
|*  3 |   INDEX RANGE SCAN | LF_TS_ROOMTYPE_PROP_IDX  |     1 |      7 |    1 |
|*  4 |   INDEX RANGE SCAN | LF_GTA_ROOMTYPE_PROP_IDX |     1 |      9 |    1 |
|*  5 |   INDEX RANGE SCAN | LF_HB_ROOMTYPE_PROP_IDX  |     2 |     14 |    3 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( NOT EXISTS (SELECT 0 FROM "LF_TS_ROOMTYPE_PROPERTIES" "B" WHERE
              "B"."HOTEL_CODE"=:B1) OR  NOT EXISTS (SELECT 0 FROM "LF_GTA_ROOMTYPE_PROPERTIES" "B"
              WHERE "B"."HOTEL_CODE"=:B2) OR  NOT EXISTS (SELECT 0 FROM "LF_HB_ROOMTYPE_PROPERTIES"
              "B" WHERE "B"."HOTEL_CODE"=:B3))
   2 - filter("A"."SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
   3 - access("B"."HOTEL_CODE"=:B1)
   4 - access("B"."HOTEL_CODE"=:B1)
   5 - access("B"."HOTEL_CODE"=:B1)

We were told in the original posting that there's a primary key on lf_hotel_temp declared on (hotel_code, service_id), and we were given the definitions, sizes, and index declarations of all the tables in a follow-up posting. It turns out that lf_hotel_temp consists of just those two columns and holds 278,000 rows: the optimizer's estimate for the number of rows identified by a single service_id is over 88,000, and the nature of the query tells us that the optimizer would have to examine every one of those rows to check if it satisfied any of the three subqueries.

So how might Oracle access the rows? Given that the only columns used will all be in the primary key index (which implies not null constraints) there are four basic options: tablescan, index fast full scan, index full scan, and index skip scan. Given the most likely data content (i.e. lots of different hotel_codes), we can assume the skip scan would be a very bad idea. We can be sure that an index fast full scan will be lower cost than an index full scan, for anything except tiny indexes. Ultimately the question is really "why a tablescan instead of an index fast full scan?". As I pointed out, though, the table consists of just those two columns, which means it's perfectly reasonable for the index to be larger than the table as each entry of the index will consist of the two columns AND a rowid.
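If you wanted to test that trade-off, one option is to hint the alternative. A sketch (the original query, with an index_ffs() hint naming the primary key index that appears in the plans below):

select  /*+ index_ffs(a lf_hotel_temp_pk) */
        a.hotel_code
from    lf_hotel_temp a
where   a.service_id = :p_service_id
and     (   not exists (select null from lf_ts_roomtype_properties  b where b.hotel_code = a.hotel_code)
         or not exists (select null from lf_gta_roomtype_properties b where b.hotel_code = a.hotel_code)
         or not exists (select null from lf_hb_roomtype_properties  b where b.hotel_code = a.hotel_code)
        )
;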

The first interesting bit

The question of why the access to lf_hotel_temp was by tablescan rather than some indexed method isn’t really interesting. The interesting bit is how (in principle) we might make the plan more efficient (if it really needs it); and this leads to two key, and general purpose, observations. As Andrew Sayer pointed out on the thread, we have a compound predicate:

    (not exists A OR not exists B OR not exists C)

and this is logically equivalent to

   not (exists A AND exists B AND exists C)

If we rewrite the query to reflect this equivalence could the optimizer find a different, better way of executing it:


select  /*+ dynamic_sampling(0) */
        a.hotel_code
from    lf_hotel_temp a
where
        a.service_id = :p_service_id
and     not(
                exists (
                        select  null
                        from    lf_ts_roomtype_properties ts
                        where   ts.hotel_code = a.hotel_code
                )
            and exists (
                        select  null
                        from    lf_gta_roomtype_properties gta
                        where   gta.hotel_code = a.hotel_code
                )
            and exists (
                        select  null
                        from    lf_hb_roomtype_properties hb
                        where   hb.hotel_code = a.hotel_code
                )
        )
;

Of course, I didn’t have the original data; so I copied the DDL supplied in the OTN thread and added a little DML to insert a few rows in the tables. The data I used looked like this:


insert into lf_hotel_temp (hotel_code, service_id) values ('A',1);
insert into lf_hotel_temp (hotel_code, service_id) values ('B',1);
insert into lf_hotel_temp (hotel_code, service_id) values ('C',1);
insert into lf_hotel_temp (hotel_code, service_id) values ('D',1);

-- insert into lf_ts_roomtype_properties values ( 'A','x','x',0,1,'x');
insert into lf_ts_roomtype_properties values ( 'B','x','x',0,1,'x');
insert into lf_ts_roomtype_properties values ( 'C','x','x',0,1,'x');
insert into lf_ts_roomtype_properties values ( 'D','x','x',0,1,'x');

-- insert into lf_gta_roomtype_properties values ( 'A','x','x',0,1,'x');
-- insert into lf_gta_roomtype_properties values ( 'B','x','x',0,1,'x');
insert into lf_gta_roomtype_properties values ( 'C','x','x',0,1,'x');
insert into lf_gta_roomtype_properties values ( 'D','x','x',0,1,'x');

-- insert into lf_hb_roomtype_properties values ( 'A','x','x',0,1,'x');
-- insert into lf_hb_roomtype_properties values ( 'B','x','x',0,1,'x');
-- insert into lf_hb_roomtype_properties values ( 'C','x','x',0,1,'x');
insert into lf_hb_roomtype_properties values ( 'D','x','x',0,1,'x');
commit;

It’s possible that with different data volumes you’d get different execution plans, but in 11g the optimizer transformed my query back into the original form – in other words it recognised the equivalence of “not (A and B and C)” and rewrote it as “(not A or not B or not C)” !

However, I also have 12c available, and I had created a script to build a model, so I ran the test on 12c. Both versions of the query produced the following plan:


----------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                            |     1 |  2027 |     8  (13)| 00:00:01 |
|*  1 |  HASH JOIN RIGHT ANTI |                            |     1 |  2027 |     8  (13)| 00:00:01 |
|   2 |   VIEW                | VW_SQ_1                    |    82 |   984 |     6   (0)| 00:00:01 |
|*  3 |    HASH JOIN SEMI     |                            |    82 |  2952 |     6   (0)| 00:00:01 |
|*  4 |     HASH JOIN         |                            |    82 |  1968 |     4   (0)| 00:00:01 |
|   5 |      TABLE ACCESS FULL| LF_GTA_ROOMTYPE_PROPERTIES |    82 |   984 |     2   (0)| 00:00:01 |
|   6 |      TABLE ACCESS FULL| LF_HB_ROOMTYPE_PROPERTIES  |    82 |   984 |     2   (0)| 00:00:01 |
|   7 |     TABLE ACCESS FULL | LF_TS_ROOMTYPE_PROPERTIES  |    82 |   984 |     2   (0)| 00:00:01 |
|*  8 |   INDEX FULL SCAN     | LF_HOTEL_TEMP_PK           |   101 |   198K|     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("VW_COL_1"="A"."HOTEL_CODE")
   3 - access(SYS_OP_MAP_NONNULL("HB"."HOTEL_CODE")=SYS_OP_MAP_NONNULL("TS"."HOTEL_CODE"))
   4 - access(SYS_OP_MAP_NONNULL("HB"."HOTEL_CODE")=SYS_OP_MAP_NONNULL("GTA"."HOTEL_CODE"))
   8 - access("A"."SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
       filter("A"."SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))

Ignore the numbers (I hadn't collected stats, which is why I added the /*+ dynamic_sampling(0) */ hint; with stats in place 12c produced the FILTER plan that 11g had produced). The key feature is that Oracle has managed to transform my three filter subqueries into a single join subquery and then transformed the resulting subquery into an anti-join. It's a pretty amazing transformation: the optimizer did it automatically in 12c, but if you are aware of the logical equivalence then you may find cases where you can turn "OR"s into "AND"s and help the optimizer to find transformations that it can't find automatically.

The second interesting bit

If you think about the meaning behind the query (prompted, perhaps, by the logical equivalence described above) you might rephrase the question as "find me the hotel codes that fail to appear in all three related tables" — in English this is ambiguous and open to catastrophic mis-interpretation so you might have another go and say "find me the hotel codes that appear in every one of the three related tables – those are the hotel codes I don't want". This latter expression, of course, is exactly what Oracle is doing by joining the three tables and then doing the "not exists"/anti-join against the result. Obviously you could translate the new English form into SQL by hand, with a three table join in a "not exists" subquery, as sketched below.
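A sketch of that hand translation, using the same table names (untested here against the original data):

select  a.hotel_code
from    lf_hotel_temp a
where   a.service_id = :p_service_id
and     not exists (
                select  null
                from    lf_ts_roomtype_properties  ts,
                        lf_gta_roomtype_properties gta,
                        lf_hb_roomtype_properties  hb
                where   ts.hotel_code  = a.hotel_code
                and     gta.hotel_code = a.hotel_code
                and     hb.hotel_code  = a.hotel_code
        )
;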

I actually took a different approach (which might, or might not, be efficient – depending on the actual data and indexes).  I translated the new English statement into the following:


select  /*+ dynamic_sampling(0) */
        hotel_code
from    lf_hotel_temp
where   service_id = :p_service_id
minus   (
        select  hotel_code
        from    lf_ts_roomtype_properties
        where   hotel_code is not null
        intersect
        select  hotel_code
        from    lf_gta_roomtype_properties
        where   hotel_code is not null
        intersect
        select  hotel_code
        from    lf_hb_roomtype_properties
        where   hotel_code is not null
        )
;

The three way intersection gets me the list of hotels that appear in all three tables; the minus operator takes the list of hotel with the correct service_id and eliminates from it the hotels that appear in the intersection – giving me the result I want.

For my tiny data set, this is the plan I got:

--------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                          |     1 |  2159 |     8  (50)| 00:00:01 |
|   1 |  MINUS                |                          |       |       |            |          |
|   2 |   SORT UNIQUE NOSORT  |                          |     1 |  2015 |     2  (50)| 00:00:01 |
|*  3 |    INDEX FULL SCAN    | LF_HOTEL_TEMP_PK         |     1 |  2015 |     1   (0)| 00:00:01 |
|   4 |   INTERSECTION        |                          |       |       |            |          |
|   5 |    INTERSECTION       |                          |       |       |            |          |
|   6 |     SORT UNIQUE NOSORT|                          |     4 |    48 |     2  (50)| 00:00:01 |
|*  7 |      INDEX FULL SCAN  | LF_TS_ROOMTYPE_PROP_IDX  |     4 |    48 |     1   (0)| 00:00:01 |
|   8 |     SORT UNIQUE NOSORT|                          |     4 |    48 |     2  (50)| 00:00:01 |
|*  9 |      INDEX FULL SCAN  | LF_GTA_ROOMTYPE_PROP_IDX |     4 |    48 |     1   (0)| 00:00:01 |
|  10 |    SORT UNIQUE NOSORT |                          |     4 |    48 |     2  (50)| 00:00:01 |
|* 11 |     INDEX FULL SCAN   | LF_HB_ROOMTYPE_PROP_IDX  |     4 |    48 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
       filter("SERVICE_ID"=TO_NUMBER(:P_SERVICE_ID))
   7 - filter("HOTEL_CODE" IS NOT NULL)
   9 - filter("HOTEL_CODE" IS NOT NULL)
  11 - filter("HOTEL_CODE" IS NOT NULL)

Important note: I am not claiming that this use of set operators will be more efficient than a filter subquery or anti-join/semi-join approach; performance ultimately depends on the volume and patterns in the data combined with the available indexing. In this case you can almost see the classic performance compromise that we often see in Oracle, even in something as simple as the choice between a hash join and a nested loop join: should we operate this query as a tiny number of "bulk" operations, or as a (potentially) large number of tiny, high-precision operations?

If the original query was spending all its time on CPU running lots of subqueries, or doing lots of single block random I/Os because of the random ordering of the subqueries, then perhaps a couple of brute-force "db file parallel read" index full scans would be a friendlier use of the available resources, run more quickly, and have less impact on every other user.

 


Links for 2016-03-31 [del.icio.us]

Categories: DBA Blogs

I am Oracle ACE

Oracle in Action - Thu, 2016-03-31 23:49


It gives me immense pleasure to share the news that I have been honored with the prestigious Oracle ACE award. I am grateful to the Oracle ACE Program for accepting my nomination. I would like to thank Murali Vallath Sir, who nominated me for this award. I am also thankful to my family members, without whose support and motivation this would not have been possible. Thanks a lot to all the readers of my blog, whose comments and suggestions helped me to learn and share whatever little knowledge I have.

I will do my best to participate in the Oracle ACE program.

 

 




The post I am Oracle ACE appeared first on ORACLE IN ACTION.

Categories: DBA Blogs