Feed aggregator

Links for 2016-04-03 [del.icio.us]

Categories: DBA Blogs

OEM and Passwords

Anthony Shorten - Sun, 2016-04-03 18:33

I wanted to outline an interesting experience I had recently around security. Oracle, like a lot of companies, requires its employees to regularly change their passwords, as this is considered good security practice. There are strict rules around the password formats and their history. Luckily Oracle uses its own Identity Management solutions, so the experience is simple and quick.

Recently my passwords were set to expire. I have a process I use to ensure the passwords are changed across all the technologies I use, and I usually run through it one morning a couple of days before they are due to expire. This time I did it at the end of the day, as it was a particularly busy day. It was a Friday and all was well.

Except I forgot one important change: my credentials in my demonstration instance of Oracle Enterprise Manager. I have a demonstration environment where I do research and development, record training, and give demonstrations. After that weekend I logged in to my demonstration environment to see alerts that it could not connect using some credentials.

I have three credentials to worry about in Oracle Enterprise Manager:

  • There is a credential for Oracle Enterprise Manager to connect to My Oracle Support. This is used for checking patches, looking for critical advice, and registering Service Requests directly from Oracle Enterprise Manager in online mode. Typically, you would nominate an account to link to My Oracle Support (along with a Service Identifier for your site).
  • I have two named credentials I use regularly for host interaction, such as installations and running regular jobs on the machines. These are administration accounts used for the product at the operating system level. The way the machine is set up, I use two: one is the Administration account and the other is a privileged account used for low-level administration. In some cases sites will only need one per user.

I was able to correct the passwords and all my environments reported back correctly. Credential management is one of the strengths of Oracle Enterprise Manager. Next time I will add the OEM credentials to my checklist.

Partner Webcast – Oracle Mobile Strategy and Mobility Offerings Overview

Mobile is on the mind of every business, as Mobile Is the New First Screen. Enabling existing business applications on handhelds can be very challenging, but even building new business applications...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Three more principles

Andrew Clarke - Sun, 2016-04-03 13:00
Here are some more principles which can help us design better programs. These principles aren't part of an organized theory, and they aren't particularly related to any programming paradigm. But each is part of the canon, and each is about the relationship between a program's interface and its implementation.
The Principle Of Least Astonishment
Also known as the Principle of Least Surprise, the rule is simple: programs should do what we expect them to do. This is more than simply honouring the contract of the interface. It means complying with the accepted conventions of our programming environment. In PL/SQL programming there is a convention that functions are read-only, or at least do not change database state. Another such convention is that low-level routines do not execute COMMIT statements; transaction management is the prerogative of the program at the top of the call stack, which may be interacting directly with a user or may be an autonomous batch process.

Perhaps the most common flouting of the Principle Of Least Astonishment is this:

   exception
      when others then
         null;

It is reasonable to expect that a program will hurl an exception if something has gone awry. Unfortunately, we are not as astonished as we should be when we find a procedure with an exception handler which swallows any and every exception.
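
A less astonishing handler records the error and re-raises it, so the caller still learns that something went wrong. A minimal sketch (log_error() is an assumed logging routine, not part of the original):

   exception
      when others then
         log_error(sqlcode, sqlerrm); -- assumed logging routine
         raise;                       -- re-raise so the caller is not deceived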
Information Hiding Principle
Another venerable principle, this one was expounded by David Parnas in 1972. It requires that a calling program should not need to know anything about the implementation of a called program. The definition of the interface should be sufficient. It is the cornerstone of black-box programming. The virtue of Information Hiding is that knowledge of internal details inevitably leads to coupling between the called and calling routines: when we change the called program we need to change the caller too. We honour this principle any time we call a procedure in a package owned by another schema, because the EXECUTE privilege grants visibility of the package specification (the interface) but not the body (the implementation).
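
We can see this at work with a simple cross-schema grant. A sketch, with hypothetical schema and package names:

   grant execute on hr_app.emp_api to report_user;

   -- report_user can describe the package specification, but ALL_SOURCE
   -- exposes only the spec, not the body:
   select type, count(*)
   from all_source
   where owner = 'HR_APP'
   and name = 'EMP_API'
   group by type;  -- returns PACKAGE rows only, no PACKAGE BODY rows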
The Law Of Leaky Abstractions
Joel Spolsky coined this one: "All non-trivial abstractions, to some degree, are leaky." No matter how hard we try, some details of the implementation of a called program will be exposed to the calling program, and will need to be acknowledged. Let's consider this interface again:

    function get_employee_recs
        ( p_deptno in number )
        return emp_refcursor;

We know it returns a result set of employee records. But in what order? Sorting by EMPNO would be pretty useless, given that it is a surrogate key (and hence without meaning). Other candidates - HIREDATE, SAL - will be helpful for some cases and irrelevant for others. One approach is to always return an unsorted set and leave it to the caller to sort the results; but it is usually more efficient to sort records in a query rather than a collection. Another approach would be to write several functions - get_employee_recs_sorted_hiredate(), get_employee_recs_sorted_sal() - but that leads to a bloated interface which is hard to understand. Tricky.
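
One compromise is to make the ordering an explicit part of the contract, say with a sort-mode parameter. A sketch (the constants are hypothetical and would be declared in the package specification):

   function get_employee_recs
      ( p_deptno in number
      , p_sort_mode in varchar2 default c_sort_none ) -- or c_sort_hiredate, c_sort_sal
      return emp_refcursor;

The leak is still there, but at least it is acknowledged in the interface rather than left for the caller to discover.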
Conclusion
Principles are guidelines. There are tensions between them. Good design is a matter of trade-offs. We cannot blindly follow Information Hiding and ignore the Leaky Abstractions. We need to exercise our professional judgement (which is a good thing).

Part of the Designing PL/SQL Programs series

It's all about the interface

Andrew Clarke - Sun, 2016-04-03 12:59
When we talk about program design we're mainly talking about interface design. The interface is the part of our program that the users interact with. Normally discussion of UI focuses on GUI or UX, that is, the interface with the end user of our application.

But developers are users too.

Another developer writing a program which calls a routine in my program is a user of my code (and, I must remember, six months after I last touched the program, I am that other developer). A well-designed interface is frictionless: it can be slotted into a calling program without too much effort. A poor interface breaks the flow: it takes time and thought to figure it out. In the worst case we have to scramble around in the documentation or the source code.

Formally, an interface is the mechanism which allows the environment (the user or agent) to interact with the system (the program). What the system actually does is the implementation: the interface provides access to the implementation without the environment needing to understand the details. In PL/SQL programs the implementation will usually contain a hefty chunk of SQL. The interface mediates access to data.

An interface is a contract. It specifies what the caller must do and what the called program will do in return. Take this example:

function get_employee_recs
     ( p_deptno in number )
     return emp_refcursor;

The contract says: if the calling program passes a valid DEPTNO, the function will return records for all the employees in that department, as a strongly-typed ref cursor. Unfortunately the contract doesn't say what will happen if the calling program passes an invalid DEPTNO. Does the function return an empty set or throw an exception? The short answer is we can't tell. We must rely on convention or the documentation. This is an unfortunate gap in the PL/SQL language; the Java keyword throws is quite neat in this respect.
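
The best we can do in PL/SQL is state the behaviour in the specification, perhaps backed by a named exception. A sketch (not part of the original contract):

   -- Returns employees for p_deptno as a strongly-typed ref cursor.
   -- Raises x_unknown_deptno if p_deptno does not exist in DEPT.
   x_unknown_deptno exception;

   function get_employee_recs
      ( p_deptno in number )
      return emp_refcursor;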
The interface is here to help
The interface presents an implementation of business logic. The interface is a curated interpretation, and doesn't enable unfettered access. Rather, a well-designed interface helps a developer use the business logic in a sensible fashion. Dan Lockton calls this Design With Intent: good design expresses how a product should be used. It doesn't have to be complicated. We can use simple control mechanisms to help other developers use our code properly.
Restriction of access
Simply, the interface restricts access to certain functions or denies it altogether. Only certain users are allowed to view salaries, and even fewer to modify them. The interface to Employee records should separate salary functions from more widely-available functions. Access restriction can be implemented in a hard fashion, using architectural constructs (views, packages, schemas), or in a soft fashion (using VPD or Data Vault). The hard approach benefits from clarity, the soft approach offers flexibility.
Forcing functions
If certain things must be done in a specific order then the interface should only offer a method which enforces the correct order. For instance, if we need to insert records into a parent table and a child table in the same transaction (perhaps a super-type/sub-type implementation of a foreign key arc), a helpful interface will only expose a procedure which inserts both records in the correct order, as sketched below.
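
A sketch of such a forcing function, with hypothetical parent and child tables:

   procedure create_payment -- the only published entry point
      ( p_payment in payments%rowtype
      , p_card_dtl in card_payments%rowtype )
   is
   begin
      insert into payments values p_payment;        -- parent row first...
      insert into card_payments values p_card_dtl;  -- ...then the child row
   end create_payment;

Because only create_payment() is published, callers cannot insert the child row without the parent or get the order wrong.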
Mistake-proofing
A well-designed interface prevents its users from making obvious mistakes. The signature of a procedure should be clear and unambiguous. Naming is important. If a parameter represents a table attribute the parameter name should echo the column name: p_empno is better than p_id. Default values for parameters should lead developers to sensible and safe choices. If several parameters have default values they must play nicely together: accepting all the defaults should not generate an error condition.
Abstraction
Abstraction is just another word for interface. It allows us to focus on the details of our own code without needing to understand the concrete details of the other code we depend upon. That's why good interfaces are the key to managing large codebases.

Part of the Designing PL/SQL Programs series

Working with the Interface Segregation Principle

Andrew Clarke - Sun, 2016-04-03 12:55
Obviously Interface Segregation is crucial for implementing restricted access. For any given set of data there are three broad categories of access:

  • reporting 
  • manipulation 
  • administration and governance 

So we need to define at least one interface (one or more packages) for each category, so that we can grant the appropriate access to different groups of users: read-only users, regular users, power users.
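
A sketch of what that segregation might look like for an Employees domain (the package and role names are hypothetical; emp_refcursor is the ref cursor type from the earlier posts):

   create or replace package emp_reporting as    -- read-only users
      function get_employee_recs (p_deptno in number) return emp_refcursor;
   end emp_reporting;
   /
   create or replace package emp_maintenance as  -- regular users
      procedure transfer_employee (p_empno in number, p_deptno in number);
   end emp_maintenance;
   /
   create or replace package emp_admin as        -- power users
      procedure adjust_salary (p_empno in number, p_new_sal in number);
   end emp_admin;
   /
   grant execute on emp_reporting   to read_only_role;
   grant execute on emp_maintenance to regular_user_role;
   grant execute on emp_admin       to power_user_role;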

But there's more to Interface Segregation. This example is based on a procedure posted on a programming forum. Its purpose is to maintain medical records relating to a patient's drug treatments. The procedure has some business logic (which I've redacted) but its overall structure is defined by the split between the Verification task and the De-verification task, and flow is controlled by the value of the p_verify_mode parameter.
 
procedure rx_verification
    ( p_drh_id       in number
    , p_patient_name in varchar2
    , p_verify_mode  in varchar2 )
as
    new_rxh_id number;
    rxh_count  number;
    rxl_count  number;
    drh_rec    drug_admin_history%rowtype;
begin
    select * into drh_rec ....;
    select count(*) into rxh_count ....;

    if p_verify_mode = 'VERIFY' then

        update drug_admin_history ....;
        if drh_rec.pp_id <> 0 then
            update patient_prescription ....;
        end if;
        if rxh_count = 0 then
            insert into prescription_header ....;
        else
            select rxh_id into new_rxh_id ....;
        end if;
        insert into prescription_line ....;
        if drh_rec.threshhold > 0 then
            insert into prescription_line ....;
        end if;

    elsif p_verify_mode = 'DEVERIFY' then

        update drug_admin_history ....;
        if drh_rec.pp_id <> 0 then
            update patient_prescription ....;
        end if;
        select rxl_rxh_id into new_rxh_id ....;
        delete prescription_line ....;
        delete prescription_header ....;

    end if;
end;
Does this procedure have a Single Responsibility?  Hmmm. It conforms to Common Reuse - users who can verify can also de-verify. It doesn't break Common Closure, because both tasks work with the same tables. But there is a nagging doubt. It appears to be doing two things: Verification and De-verification.

So, how does this procedure work as an interface? There is a definite problem when it comes to calling the procedure: how do I as a developer know what value to pass to p_verify_mode?

  rx_management.rx_verification
      ( p_drh_id       => 1234
      , p_patient_name => 'John Yaya'
      , p_verify_mode  => ??? );
The only way to know is to inspect the source code of the procedure. That breaks the Information Hiding principle, and it might not be viable (if the procedure is owned by a different schema). Clearly the interface could benefit from a redesign. One approach would be to declare constants for the acceptable values; while we're at it, why not define a PL/SQL subtype for verification mode and tweak the procedure's signature to make it clear that's what's expected:         

create or replace package rx_management is

    subtype verification_mode_subt is varchar2(10);

    c_verify   constant verification_mode_subt := 'VERIFY';
    c_deverify constant verification_mode_subt := 'DEVERIFY';

    procedure rx_verification
        ( p_drh_id       in number
        , p_patient_name in varchar2
        , p_verify_mode  in verification_mode_subt );

end rx_management;
Nevertheless it is still possible for a caller program to pass a wrong value: 

  rx_management.rx_verification
      ( p_drh_id       => 1234
      , p_patient_name => 'John Yaya'
      , p_verify_mode  => 'Verify' );
What happens then? Literally nothing. The value drops through the control structure without satisfying any condition. It's an unsatisfactory outcome. We could change the implementation of rx_verification() to validate the parameter value and raise an exception, or we could add an ELSE branch which raises an exception. But those are runtime exceptions. It would be better to mistake-proof the interface so that it is not possible to pass an invalid value in the first place.
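
For completeness, the runtime alternative we just set aside would look something like this (a sketch):

   elsif p_verify_mode = c_deverify then
      ....
   else
      raise_application_error(-20001,
         'p_verify_mode must be '||c_verify||' or '||c_deverify||', got: '||p_verify_mode);
   end if;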

Which leads us to a Segregated Interface:

create or replace package rx_management is

    procedure rx_verification
        ( p_drh_id       in number
        , p_patient_name in varchar2 );

    procedure rx_deverification
        ( p_drh_id in number );

end rx_management;
Suddenly it becomes clear that the original procedure was poorly named (I call rx_verification() to issue an RX de-verification?!)  We have two procedures but their usage is now straightforward and the signatures are cleaner (the p_patient_name is only used in the Verification branch so there's no need to pass it when issuing a De-verification).
Summary
Interface Segregation creates simpler and safer controls, but more of them. This is a general effect of the Information Hiding principle. It is a trade-off. We need to be sensible. Also, this is not a proscription against flags. There will always be times when we need to pass instructions to called procedures to modify their behaviour. In those cases it is important that the interface includes a definition of acceptable values.

Part of the Designing PL/SQL Programs series

Introducing the SOLID design principles

Andrew Clarke - Sun, 2016-04-03 12:55
PL/SQL programming standards tend to focus on layout (case of keywords, indentation, etc), naming conventions, and implementation details (such as use of cursors).  These are all important things, but they don't address questions of design. How easy is it to use the written code?  How easy is it to test? How easy will it be to maintain? Is it robust? Is it secure?

Simply put, there are no agreed design principles for PL/SQL. So it's hard to define what makes a well-designed PL/SQL program.
The SOLID principles
It's different for object-oriented programming. OOP has more design principles and paradigms and patterns than you can shake a stick at. Perhaps the best known are the SOLID principles, which were first mooted by Robert C. Martin, AKA Uncle Bob, back in 1995 (although it was Michael Feathers who coined the acronym).

Although Martin put these principles together for Object-Oriented code, they draw on a broader spectrum of programming practice. So they are transferable, or at least translatable, to the other forms of modular programming. For instance, PL/SQL.
Single Responsibility Principle
This is the foundation stone of modular programming: a program unit should do only one thing. Modules which do only one thing are easier to understand, easier to test and generally more versatile. Higher level procedures can be composed of lower level ones. Sometimes it can be hard to define what "one thing" means in a given context, but some of the other principles provide clarity. Martin's formulation is that there should be just one axis of change: there's just one set of requirements which, if modified or added to, would lead to a change in the package.
Open/closed Principle
The slightly obscure name conceals a straightforward proposal. It means program units are closed to modification but open to extension. If we need to add new functionality to a package, we create a new procedure rather than modifying an existing one. (Bertrand Meyer, the father of Design By Contract programming, originally proposed it; in OO programming this principle is implemented through inheritance or polymorphism.) Clearly we must fix bugs in existing code. Also it doesn't rule out refactoring: we can tune the implementation providing we don't change the behaviour. This principle mainly applies to published program units, ones referenced by other programs in Production. And the principle can be looser when the code is being used within the same project, because we can negotiate changes with our colleagues.
Liskov Substitution Principle
This is a real Computer Science-y one, good for dropping in code reviews. Named for Barbara Liskov, it defines rules for behavioural sub-typing. If a procedure has a parameter defined as a base type it must be able to take an instance of any sub-type without changing the behaviour of the program. So a procedure which uses IS OF to test the type of a passed parameter and do something different is violating Liskov Substitution. Obviously we don't make much use of Inheritance in PL/SQL programming, so this principle is less relevant than in other programming paradigms.
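
For illustration, this is the kind of code that breaks the principle (a sketch; the object types are hypothetical):

   procedure process_load (p_load in load_t)
   is
   begin
      if p_load is of (load_tgt1_t) then -- special-casing one sub-type means
         null;                           -- load_tgt1_t is no longer freely
      end if;                            -- substitutable for the base type
   end process_load;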
Interface Segregation Principle
This principle is about designing fine-grained interfaces. It is an extension of the Single Responsibility Principle. Instead of building one huge package which contains all the functions relating to a domain, build several smaller, more cohesive packages. For example, Oracle's Advanced Queuing subsystem comprises five packages, to manage different aspects of AQ. Users who write to or read from queues have DBMS_AQ; users who manage queues and subscribers have DBMS_AQADM.
Dependency Inversion Principle
Interactions between programs should be through abstract interfaces rather than concrete ones. Abstraction means the implementation of one side of the interface can change without changing the other side. PL/SQL doesn't support Abstract objects in the way that, say, Java does. To a certain extent Package Specifications provide a layer of abstraction, but there can only be one concrete implementation. Using Types to pass data between procedures is an interesting idea, which we can use to decouple data providers and data consumers in a useful fashion.
Applicability of SOLID principles in PL/SQL
So it seems we can apply SOLID practices to PL/SQL. True, some principles fit better than others. But we have something which we might use to distinguish good design from bad when it comes to PL/SQL interfaces.

The SOLID principles apply mainly to individual modules. Is there something similar we can use for designing module groups? Why, yes there is. I'm glad you asked.

Part of the Designing PL/SQL Programs series

Introducing the RCCASS design principles

Andrew Clarke - Sun, 2016-04-03 12:54
Robert C. Martin actually defined eleven principles for OOP. The first five, the SOLID principles, relate to individual classes. The other six, the RCCASS principles, deal with the design of packages (in the C++ or Java sense, i.e. libraries). They are far less well known than the first five. There are two reasons for this:

  • Unlike "SOLID", "RCCASS" is awkward to say and doesn't form a neat mnemonic. 
  • Programmers are far less interested in software architecture. 

Software architecture tends to be an alien concept in PL/SQL. Usually a codebase of packages simply accretes over the years, like a coral reef. Perhaps the RCCASS principles can help change that.
The RCCASS Principles
Reuse Release Equivalency Principle
The Reuse Release Equivalency Principle states that the unit of release matches the unit of reuse: the parts of the program unit which are consumed by other programs. Basically the unit of release defines the scope of regression testing for consuming applications. It's an ill-mannered release which forces projects to undertake unnecessary regression testing. Cohesive program units allow consumers to do regression testing only for functionality they actually use. It's less of a problem for PL/SQL because (unlike C++ libraries or Java jars) the unit of release can have a very low level of granularity: individual packages or stored procedures.
Common Reuse Principle
The Common Reuse Principle supports the definition of cohesive program units. Functions which share a dependency belong together, because they are likely to be used together. For instance, procedures which maintain the Employees table should be co-located in one package (or a group of related packages). They will share sub-routines, constants and exceptions. Packaging related procedures together makes the package easier to write and easier for calling programs to use.
Common Closure Principle
The Common Closure Principle also supports the definition of cohesive program units. Functions which share a dependency belong together, because they have a common axis of change. Common Closure helps to minimise the number of program units affected by a change. For instance, programs which use the Employees table may need to change if the structure of the table changes. All the changes must be released together: table, PL/SQL, types, etc.
Acyclic Dependencies Principle
Avoid cyclic dependencies between program units: if package A depends on package B then B must not have a dependency on A. Cyclic dependencies make applications hard to use and harder to deploy. The dependency graph shows the order in which objects must be built. Designing a dependency graph upfront is futile, but we can keep to rough guidelines. Higher-level packages implementing business rules tend to depend on generic routines, which in turn tend to depend on low-level utilities. There should be no application logic in those lower-level routines. If SALES requires a special logging implementation then that should be handled in the SALES subsystem, not in the standard logging package.
Stable Dependencies Principle
Any change to the implementation of a program unit which is widely used will generate regression testing for all the programs which call it. At the most extreme, a change to a logging routine could affect all the other programs in our application. As with the Open/Closed Principle, we need to fix bugs. But new features should be introduced by extension, not modification. And refactoring of low-level dependencies must not be done on a whim.
Stable Abstractions Principle
Abstractions are dependencies, especially when we're talking about PL/SQL. So this principle is quite similar to the Stable Dependencies Principle. The key difference is that it relates to the definition of interfaces rather than implementations. A change to the signature of a logging routine could require code changes to all the other programs in the application. Obviously this is even more inconvenient than enforced regression testing. Avoid changing the signature of a public procedure or the projection of a public view. Again, extension rather than modification is the preferred approach.
Applicability of RCCASS principles in PL/SQL
The focus of these principles is the stability of a shared codebase, and minimising the impact of change on the consumers of our code. This is vital in large projects, where communication between teams is often convoluted. It is even more important for open source or proprietary libraries.

We can apply the Common Reuse Principle and Common Closure Principle to define the scope of the Reuse Release Equivalency Principle, and hence define the boundaries of a sub-system (whisper it, schema). Likewise we can apply the Stable Dependencies Principle and Stable Abstractions Principle to enforce the Acyclic Dependencies Principle and build stable PL/SQL libraries. So the RCCASS principles offer some useful pointers towards a stable PL/SQL software architecture.

Part of the Designing PL/SQL Programs series

The Dependency Inversion Principle: a practical example

Andrew Clarke - Sun, 2016-04-03 12:54
These design principles may seem rather academic, so let's look at a real-life demonstration of how applying the Dependency Inversion Principle leads to an improved software design.

Here is a simplified version of an ETL framework which uses SQL Types in a similar fashion to the approach described in my blog post here. The loading process is defined using an abstract non-instantiable Type like this:
create or replace type load_t force as object
( txn_date date
, tgt_name varchar2(30)
, member function load return number
, final member function get_tgt return varchar2
)
not final not instantiable;
/

create or replace type body load_t as

    member function load return number
    is
    begin
        return 0;
    end load;

    final member function get_tgt return varchar2
    is
    begin
        return self.tgt_name;
    end get_tgt;

end;
/


The concrete behaviour for each target table in the ABC feed is defined by sub-types like this:
create or replace type load_tgt1_t under load_t
( overriding member function load return number
, constructor function load_tgt1_t
      ( self in out nocopy load_tgt1_t
      , txn_date date )
      return self as result
);
/

create or replace type body load_tgt1_t as

    overriding member function load return number
    is
    begin
        insert into tgt1 (col1, col2)
        select to_number(col_a), col_b
        from   stg_abc stg
        where  stg.txn_date = self.txn_date;
        return sql%rowcount;
    end load;

    constructor function load_tgt1_t
        ( self in out nocopy load_tgt1_t
        , txn_date date )
        return self as result
    is
    begin
        self.txn_date := txn_date;
        self.tgt_name := 'TGT1';
        return;
    end load_tgt1_t;

end;
/
This approach is neat because ETL is a fairly generic process: the mappings and behaviour for a particular target table are specific but the shape of the loading process is the same for any and all target tables. So we can build a generic PL/SQL procedure to handle them. This simplistic example does some logging, loops through a set of generic objects and, through the magic of polymorphism, calls a generic method which executes specific code for each target table:
    procedure load
        ( p_txn_date in date
        , p_load_set in sys_refcursor )
    is
        type loadset_r is record (
            tgtset load_t
        );
        lrecs      loadset_r;
        load_count number;
    begin
        logger.logm('LOAD START::txn_date='||to_char(p_txn_date,'YYYY-MM-DD'));
        loop
            fetch p_load_set into lrecs;
            exit when p_load_set%notfound;
            logger.logm(lrecs.tgtset.get_tgt()||' start');
            load_count := lrecs.tgtset.load();
            logger.logm(lrecs.tgtset.get_tgt()||' loaded='||to_char(load_count));
        end loop;
        logger.logm('LOAD FINISH');
    end load;

So far, so abstract. The catch is the procedure which instantiates the objects:
    procedure load_abc_from_stg
        ( p_txn_date in date )
    is
        rc sys_refcursor;
    begin
        open rc for
            select load_tgt1_t(p_txn_date) from dual union all
            select load_tgt2_t(p_txn_date) from dual;
        load(p_txn_date, rc);
    end load_abc_from_stg;

On casual inspection it doesn't seem problematic but the call to the load() procedure gives the game away. Both procedures are in the same package:
create or replace package loader as

    procedure load
        ( p_txn_date in date
        , p_load_set in sys_refcursor );

    procedure load_abc_from_stg
        ( p_txn_date in date );

end loader;
/

So the package mixes generic and concrete functionality. What makes this a problem? After all, it's all ETL so doesn't the package follow the Single Responsibility Principle? Well, up to a point. But if we want to add a new table to the ABC feed we need to update the LOADER package. Likewise if we want to add a new feed, DEF, we need to update the LOADER package. So it breaks the Stable Abstractions principle. It also creates dependency problems, because the abstract load() process has dependencies on higher level modules. We can't deploy the LOADER package without deploying objects for all the feeds.

Applying the Dependency Inversion Principle
The solution is to extract the load_abc_from_stg() procedure into a concrete package of its own. To make this work we need to improve the interface between the load() procedure and the programs which call it. Both sides of the interface should depend on a shared abstraction.

The LOADER package is now properly generic:
create or replace package loader authid current_user as  -- invoker rights belong at package level

    type loadset_r is record (
        tgtset load_t
    );

    type loadset_rc is ref cursor return loadset_r;

    procedure load
        ( p_txn_date in date
        , p_load_set in loadset_rc );

end loader;
/
The loadset_r type has moved into the package specification, and defines a strongly-typed ref cursor. The load() procedure uses the strongly-typed ref cursor.

Similarly the LOADER_ABC package is wholly concrete:
create or replace package loader_abc as

    procedure load_from_stg
        ( p_txn_date in date );

end loader_abc;
/

create or replace package body loader_abc as

    procedure load_from_stg
        ( p_txn_date in date )
    is
        rc loader.loadset_rc;
    begin
        open rc for
            select load_tgt1_t(p_txn_date) from dual union all
            select load_tgt2_t(p_txn_date) from dual;
        loader.load(p_txn_date, rc);
    end load_from_stg;

end loader_abc;
/
Both package bodies now depend on abstractions: the strongly-typed ref cursor in the LOADER specification and the LOAD_T SQL Type. These should change much less frequently than the tables in the feed or even the loading process itself. This is the Dependency Inversion Principle in action.

Separating generic and concrete functionality into separate packages produces a more stable application. Users of a feed package are shielded from changes in other feeds. The LOADER package relies on strongly-typed abstractions. Consequently we can code a new feed package which can call loader.load() without peeking into that procedure's implementation to see what it's expecting.

Part of the Designing PL/SQL Programs series

Easy Way to Create pfile from spfile

Michael Dinh - Sun, 2016-04-03 11:50

I learned something new today; so excited as this will make automation so much easier.

create pfile='/tmp/init@.ora' from spfile;

The @ is automatically substituted with the SID.

oracle@arrow:tiger:/tmp
$ sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sun Apr 3 09:45:49 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@tiger):PRIMARY> set time on
09:45:52 ARROW:(SYS@tiger):PRIMARY> !ls -l /tmp/init*
ls: cannot access /tmp/init*: No such file or directory

09:45:59 ARROW:(SYS@tiger):PRIMARY> create pfile='/tmp/init@.ora' from spfile;

File created.

09:46:04 ARROW:(SYS@tiger):PRIMARY> !ls -l /tmp/init*
-rw-r--r--. 1 oracle oinstall 1165 Apr  3 09:46 /tmp/inittiger.ora

09:46:10 ARROW:(SYS@tiger):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@arrow:tiger:/tmp
$


FBDA -- 2 : FBDA Archive Table Structure

Hemant K Chitale - Sun, 2016-04-03 11:10
Following up on my earlier post, I look at the FBDA Archive Tables.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant

SQL*Plus: Release 12.1.0.2.0 Production on Sun Apr 3 23:26:27 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:32:30 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL>
SQL> desc sys_fba_hist_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> desc sys_fba_ddl_colmap_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
COLUMN_NAME VARCHAR2(255)
TYPE VARCHAR2(255)
HISTORICAL_COLUMN_NAME VARCHAR2(255)

SQL> desc sys_fba_tcrv_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OP VARCHAR2(1)

SQL>


The HIST table is the History Table for my active table. It adds columns that track the Rowid, Start SCN and End SCN for each range of rows copied into the History Table, plus the Transaction Identifier and Operation, followed by the actual columns of the active table.
The DDL_COLMAP table seems to track Column Mappings. See the example below.
The TCRV table seems to be tracking Transactions?

Let's try some DDL to ADD and DROP columns to the active table.

SQL> alter table test_fbda add (new_col_1 varchar2(5));

Table altered.

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
NEW_COL_1 VARCHAR2(5)

SQL> desc sys_fba_93250
ERROR:
ORA-04043: object sys_fba_93250 does not exist


SQL> desc sys_fba_hist_93250
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
NEW_COL_1 VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250
2 /

STARTSCN ENDSCN XID O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
1697151
ID_COLUMN
NUMBER
ID_COLUMN

1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

1697151
DATE_INSERTED
DATE
DATE_INSERTED

1728713
NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL>


The new column added to the active table is now also reflected in the History Table. The DDL_COLMAP table shows the effective start of each column (notice the STARTSCN mapped to each COLUMN_NAME).

Let's set some values in the new column and see if they appear in the History Table.

SQL> update test_fbda set new_col_1 = 'New'
2 where id_column < 6;

5 rows updated.

SQL> commit;

Commit complete.

SQL> select id_column, new_col_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
2 from sys_fba_hist_93250
3 where id_column < 6
4 order by 1,3;

ID_COLUMN NEW_C
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM


SQL>


The rows copied into the History Table are the *prior* image rows (copied from the Undo Area). The STARTSCN and ENDSCN are from *yesterday* (02-April).

Let me DROP the new column.

SQL> alter table test_fbda drop (new_col_1);

Table altered.

SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> desc sys_fba_hist_93250;
Name Null? Type
----------------------------------------- -------- ----------------------------
RID VARCHAR2(4000)
STARTSCN NUMBER
ENDSCN NUMBER
XID RAW(8)
OPERATION VARCHAR2(1)
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE
D_1729869_NEW_COL_1 VARCHAR2(5)

SQL>
SQL> select * from sys_fba_ddl_colmap_93250;

STARTSCN ENDSCN XID O
---------- ---------- ---------------- -
COLUMN_NAME
--------------------------------------------------------------------------------
TYPE
--------------------------------------------------------------------------------
HISTORICAL_COLUMN_NAME
--------------------------------------------------------------------------------
1697151
ID_COLUMN
NUMBER
ID_COLUMN

1697151
DATA_COLUMN
VARCHAR2(15)
DATA_COLUMN

1697151
DATE_INSERTED
DATE
DATE_INSERTED

1728713 1729869
D_1729869_NEW_COL_1
VARCHAR2(5)
NEW_COL_1


SQL>


The dropped column is no longer in the active table and has been renamed in the History Table. (The data in the column has to be preserved, but the column is renamed.) Notice how the DDL_COLMAP table now shows an ENDSCN for this column, with the new (renamed) column name as in the History Table. The new column name seems to include the SCN (the ENDSCN?).

Let's confirm what data is now present in the History Table (remember: our earlier query showed the pre-update image for this column).

SQL> select id_column, D_1729869_NEW_COL_1, scn_to_timestamp(startscn), scn_to_timestamp(endscn)
2 from sys_fba_hist_93250
3 where (id_column < 6 OR D_1729869_NEW_COL_1 is not null)
4 order by 1,3;

ID_COLUMN D_172
---------- -----
SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
1
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

1
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

1 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

2
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

2
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

2 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

3
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

3
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

3 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

4
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

4
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

4 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM

5
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM

5
02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM

5 New
03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM


15 rows selected.

SQL>
SQL> select scn_to_timestamp(1729869) from dual;

SCN_TO_TIMESTAMP(1729869)
---------------------------------------------------------------------------
03-APR-16 11.45.27.000000000 PM

SQL>


Why do we now have 3 rows in the History Table for each row in the active table? Take ID_COLUMN=1. The first row -- for the time range 02-Apr 11:32pm to 02-Apr 11:46pm -- is as of yesterday, the same row we saw in the History Table after the update to the active table. The second row preserves the row for the time range 02-Apr 11:46pm to 03-Apr 11:41pm, to support AS OF queries up to the time of the UPDATE. The third row, for the time range 03-Apr 11:41pm to 03-Apr 11:45pm, presents the UPDATEd value ('New') in the column up to the last transaction updating it before the column was dropped at 03-Apr 11:45:27pm.

Thus, Oracle maintains multiple versions of the same row, including versions for DROPped columns, in the History Table.

Note: the History Table is not supposed to be queried directly in the manner I have shown here. The proper query against the active table would be an AS OF query, which is automatically rewritten / redirected to "hit" the History Table when necessary.
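
For reference, such a flashback query needs no mention of the History Table at all; Oracle redirects it there when the requested time falls outside the undo window:

select id_column, data_column
from test_fbda as of timestamp (systimestamp - interval '1' day)
where id_column < 6;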

What about the third table -- the TCRV table?

SQL> l
1 select scn_to_timestamp(startscn), op , count(*)
2 from sys_fba_tcrv_93250
3 group by scn_to_timestamp(startscn), op
4* order by 2,1
SQL> /

SCN_TO_TIMESTAMP(STARTSCN) O
--------------------------------------------------------------------------- -
COUNT(*)
----------
03-APR-16 11.45.24.000000000 PM U
1000


SQL>
SQL> select count(distinct(rid)) from sys_fba_tcrv_93250;

COUNT(DISTINCT(RID))
--------------------
1000

SQL>



It shows 1000 rows as having been UPDATEd? (Does OP='U' mean UPDATE?) We do know that ADD and DROP column are changes to the table. But are they UPDATEs?

Next post: continuing with DML operations (more rows, some updates). We'll see if we can decipher anything from the TCRV table as well. (Update: the next post changed to showing support for TRUNCATEs.)



Categories: DBA Blogs

FBDA -- 1 : Testing Flashback Data Archive in 12c (NonCDB)

Hemant K Chitale - Sat, 2016-04-02 10:53
Note : At the bottom of this post, you'll find links to more (subsequent) posts on this topic.

Some testing I'd done with Flashback Data Archive (henceforth called FBDA in this and subsequent posts, if any) in 11.2.0.4 left me with uncertainty about the automatic purging of data beyond the Retention Period.  I might return to testing 11.2.0.4, but here I shall begin testing in 12.1.0.2  (NonCDB).

Setting up FBDA :

[oracle@ora12102 ~]$ sqlplus system/oracle

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 2 23:23:53 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Sat Apr 02 2016 23:20:47 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create tablespace fbda ;

Tablespace created.

SQL> create flashback archive fbda tablespace fbda retention 3 day;

Flashback archive created.

SQL> create tablespace hemant;

Tablespace created.

SQL> create user hemant identified by hemant
2 default tablespace hemant;

User created.

SQL> grant create table to hemant;

Grant succeeded.

SQL> grant create session to hemant;

Grant succeeded.

SQL> alter user hemant quota unlimited on hemant;

User altered.

SQL> alter user hemant quota unlimited on fbda;

User altered.

SQL> grant flashback archive administer to hemant;

Grant succeeded.

SQL> grant flashback archive on fbda to hemant;

Grant succeeded.

SQL>
SQL> connect / as sysdba
Connected.
SQL> grant execute on dbms_flashback_archive to hemant;

Grant succeeded.

SQL>
SQL> connect hemant/hemant
Connected.
SQL> create table test_fbda (id_column number, data_column varchar2(15), date_inserted date) tablespace hemant;

Table created.

SQL> alter table test_fbda flashback archive fbda;

Table altered.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL>


Note that the Flashback Archive history table corresponding to TEST_FBDA doesn't get created immediately.

SQL> connect hemant/hemant
Connected.
SQL> insert into test_fbda
2 select rownum , to_char(rownum), trunc(sysdate)
3 from dual connect by level < 1001;

1000 rows created.

SQL> commit;

Commit complete.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA

SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> select flashback_archive_name, retention_in_days, status
2 from user_flashback_archive;

FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
RETENTION_IN_DAYS STATUS
----------------- -------
FBDA
3


SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables;

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA
FBDA
SYS_FBA_HIST_93250 ENABLED


SQL>
SQL> !sleep 300

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
TEST_FBDA
SYS_FBA_DDL_COLMAP_93250
SYS_FBA_HIST_93250
SYS_FBA_TCRV_93250

SQL>
SQL> select object_id
2 from user_objects
3 where object_name = 'TEST_FBDA'
4 and object_type = 'TABLE'
5
SQL> /

OBJECT_ID
----------
93250

SQL>


So, it took some time for the flashback archive history table (identified on the basis of the OBJECT_ID) to appear. The background fbda process seems to run (wake up) every 5 minutes, although it may wake up more frequently if there is more activity in the database.

SQL> select trunc(date_inserted), count(*)
2 from test_fbda
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 1000

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL> update test_fbda
2 set data_column = data_column
3 where rownum < 451;

450 rows updated.

SQL> commit;

Commit complete.

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

no rows selected

SQL>
SQL> !sleep 180

SQL> select trunc(date_inserted), count(*)
2 from sys_fba_hist_93250
3 group by trunc(date_inserted)
4 /

TRUNC(DAT COUNT(*)
--------- ----------
02-APR-16 450

SQL>
SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn), date_inserted
4 order by 1;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
DATE_INSE COUNT(*)
--------- ----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
02-APR-16 450


SQL>


Notice that not all 1000 rows got copied to the FBDA history table. Only the 450 rows that I updated were copied in. They are tracked by SCN and Timestamp. (The "DATE_INSERTED" column is my own date column; Oracle wouldn't be using that column to track DML dates for rows, as the values in that column are controlled by me -- the application or developer -- not Oracle.)

Note: the History Table is not supposed to be queried directly in the manner I have shown here.

Tomorrow: more rows, and some DDLs as well.

Post 2 here.  It covers some of the architectural components and support for ADD / DROP column DDL.

Post 3 here.  It shows support for TRUNCATEs.

Post 4 here.  On Partitions and Indexes.

Post 5 here.  On (Auto)Purging.

Post 6 here.  On Bug Notes

Post 7 here.  Maintaining a Partitioned Source Table


Categories: DBA Blogs

Welcome New Oracle ACE Program Participants!

OTN TechBlog - Fri, 2016-04-01 20:03

Congratulations to our newest Oracle ACEs!  The Oracle ACE Program recognizes individuals for their outstanding contribution to the Oracle community. With over 500 participants in more than 60 countries around the globe, the ACE program features three tiers: Associate, ACE and ACE Director, to support community advocates at all stages of their career. Read more about the program here.

Oracle ACEs initiated in March:

Oracle ACE Directors:

Gokhan Atil

John Booth

Ludovico Caldara

Celvin Kattookaran

Carlos Sierra

Oracle ACEs:

Jérôme Françoisse

Anju Garg

Laurent Leturgez

Emrah Mete

Anuj Mohan

Robert van Molken

Oracle ACE Associates:

Elise Valin-Raki


sql for first day of month and last day of month

Learn DB Concepts with me... - Fri, 2016-04-01 20:00

select SYSDATE ,
last_day(sysdate) as LAST_DATE_CURR_MNTH,
ADD_MONTHS(last_day(sysdate),-1) as PREVIOUS_MON_LAST_DATE,
last_day(sysdate)+1 as NEXT_MON_FIRST_DATE,
ADD_MONTHS(last_day(sysdate),+1) as NEXT_MON_LAST_DATE,
ADD_MONTHS(last_day(sysdate),+5) as LAST_DATE_OF_5TH_MON,
ADD_MONTHS(last_day(sysdate),+5) +1 as FIRST_DATE_IN_6TH_MON_AFTR_NOW  
from dual;

"SYSDATE"    "LAST_DATE_CURR_MNTH"    "PREVIOUS_MON_LAST_DATE"    "NEXT_MON_FIRST_DATE"    "NEXT_MON_LAST_DATE"    "LAST_DATE_OF_5TH_MON"   
-----------  ---------------------   ------------------------    ---------------------   --------------------    -----------------------
"FIRST_DATE_IN_6TH_MON_AFTR_NOW"
-----------------------

01-APR-16        30-APR-16                31-MAR-16                    01-MAY-16                31-MAY-16            30-SEP-16   
-----------------------
01-OCT-16
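
For the first day of the current month itself (not shown above), TRUNC does the job; run on the same date it returns:

select trunc(sysdate,'MM') as FIRST_DATE_CURR_MNTH from dual;

FIRST_DATE_CURR_MNTH
--------------------
01-APR-16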
Categories: DBA Blogs

compute as a service

Pat Shuff - Fri, 2016-04-01 14:42
In an ongoing learning journey of trying to understand cloud services, I got accounts on the Amazon cloud, Azure cloud, and Oracle cloud. I thought I would start with the basics and grow from there. As an exercise, let's create a Linux server with no software installed on each of the three platforms. Apart from creating an account on all three platforms (which was non-trivial), creating a compute server on each platform was relatively simple.

Amazon Web Services

The initial look and feel of the console starts the experience. It does show what the three companies are focused on. Let's start with Amazon (it is first in the alphabet and I had to pick something). The console lists a wide variety of services and things that you can purchase. Without doing research I would not have known that S3 stands for storage and EC2 stands for compute.

I get what a virtual server in the cloud is, but how does that differ from a Docker container and why should I care? Why should I care about managing Web Apps if I am just looking for raw compute? Why do I want to run code outside of a virtual machine? Which one should I choose? We are not going to go into depth on any of these subjects. If we are just looking at running a Linux instance, the simple EC2 should be adequate. We can install Docker as a package in our Linux instance to help us control how much of a processor is allocated to a service or program. We can install applications like Tomcat or WebLogic to run Web Apps. Linux gives us the foundation to do all of this with packages. Lambda is a totally different beast in that I can run code snippets to do things like voice command interpretation for an Amazon Echo or asynchronous events from devices, and launch web sites or REST APIs without having to install, manage, and configure an operating system. The rest of the world calls this a Node.js function and offers it as a separate service as well. I realize that I am oversimplifying this, but you have to know what you are trying to accomplish before you start to create your first compute instance in the cloud.

Microsoft Azure Services

The Azure services are a little different in that they focus more on the user creation of virtual machines, SQL Server, and some app services. Creation of a virtual image is relatively easy and it makes sense what you are doing. The console is relatively simple and clean, with more options on the second page instead of the first page as is done with Amazon.

As you click on the Add button for Virtual Image you get an expanded set of operation system options and configurations.

Note that you can search for Oracle Linux and get a listing of various versions of the operating system. The virtual machine is easy to configure and create using the portal. If, however, you want to configure and create this via a command line, you need to download PowerShell and run everything inside that application. The command line is Microsoft-specific and difficult to port or migrate to other services. With Amazon and Oracle you can easily use REST API calls to provision and create services. Microsoft makes it a little more difficult to script generically, though it is easy to do within their own shell and language.

Oracle Cloud Services

The Oracle cloud compute services are new to the market. In the past, compute services have been sold in bundles of 500 processors, but they have recently been made available in single-processor consumption models. The cloud console has a different look and feel because the focus of the cloud services is more on the PaaS layer and less on the compute and storage layers.

Note that the screen shot starts with the storage and compute services but scrolling down shows database, java, SOA, and more PaaS layers.

To create a virtual image, you need to click on the Compute Cloud Service - Service Console, which will allow you to create an instance. The operating system selection is not as graphical or user-friendly as the Microsoft interface, but it does list a variety of operating system options and configurations.

In conclusion, all three of these cloud consoles allow you to create a virtual image. In the next blog entry we will walk through the steps needed to create a Linux 6 instance on each of the three cloud platforms. We will not talk about how to create accounts; we will assume that you can find account setup and creation on your own. All three sites offer "try me" services that give at least 30-day evaluations. The eventual recommendation will be to use services like bitnami.com, which take public domain services like LAMP servers, Wiki engines, blog servers, and other public domain tools. The Bitnami site allows you to select a pre-configured instance and provision it into all three of these cloud services, along with a few other cloud providers.

Best practice for setting up MySQL replication filters

Pythian Group - Fri, 2016-04-01 14:23

It is not uncommon that we need to filter out some DBs or Tables while setting up replication. It is important to understand how MySQL evaluates and processes the replication filtering rules, to avoid conflicts or confusion while setting them up. The purpose of this blog is to illustrate the rules and provide some suggestions for best practice.

MySQL provides 3 levels of filters for setting up replication: binlog, DB and table. The binlog filters apply on the master and control how changes are logged. Since MySQL replication is based on the binlog, this is the first-level filter and has the highest priority. The DB-level and table-level filters apply on the slaves; since each table belongs to a schema, the DB-level filters have higher priority than the table-level ones. Within the table-level filters, MySQL evaluates the options in the order: --replicate-do-table, --replicate-ignore-table, --replicate-wild-do-table, --replicate-wild-ignore-table.

Based on that, we have the following suggestions for setting up MySQL replication filters as best practice:

I) Do not set up any binlog-level filters unless you really need to and can afford to lose the chance of having an extra full copy of data changes for the master.

II) In DB-level filters, use either one or none of the two options --replicate-do-db and --replicate-ignore-db. Never use both at the same time.

III) While using binlog_format='statement' or 'mixed' (in mixed mode, a deterministic transaction is stored in statement format) together with --replicate-do-db or --replicate-ignore-db on slaves, make sure you never change tables outside the current default database on the master; otherwise you might lose those changes on the slave because the default database does not match.

IV) In Table-level filters, use only one of the 4 options, or use the following two in combination: --replicate-ignore-table and --replicate-wild-do-table, to avoid conflicts and confusion (see the option-file sketch below).
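
Put together, a slave option file following suggestions II) and IV) might look like the sketch below; the schema and table names are made-up placeholders.

# my.cnf on a slave -- a sketch following suggestions II) and IV) above.
# "app" and "scratch" are placeholder schema names.
[mysqld]
# DB-level: at most one of the two options (here we ignore a scratch schema)
replicate-ignore-db     = scratch
# Table-level: the recommended pair -- explicit ignores plus a wildcard do
replicate-ignore-table  = app.debug_log
replicate-wild-do-table = app.%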

Replication filters should be used with caution in a MariaDB Galera cluster. As a general rule, except for InnoDB DML updates, the following filters are not honored in a Galera cluster: binlog-do-db, binlog-ignore-db, replicate-wild-do-db, replicate-wild-ignore-db. The replicate-do-db and replicate-ignore-db filters are honored for DDL and DML on both the InnoDB and MyISAM engines, but they might create discrepancies and replication may abort (see MDEV-421, MDEV-6229, and https://mariadb.com/kb/en/mariadb/mariadb-galera-cluster-known-limitations/). For slaves replicating from a cluster, the rules are similar to the normal replication settings above.

Here are the details/reasons:

1) Binlog-level filters

A) How MySQL processes the Binlog-level filters

There are 2 options for setting the binlog filter on the master: --binlog-do-db and --binlog-ignore-db. MySQL checks --binlog-do-db first: if it is set, MySQL applies it and ignores --binlog-ignore-db. If --binlog-do-db is NOT set, then MySQL checks --binlog-ignore-db. If both of them are empty, it logs changes for all DBs.

See the examples below. In scenario 1) no binlog-level filters are set, so all changes were logged. In scenario 2) --binlog-do-db and --binlog-ignore-db are both set to m_test; changes on the DB m_test were logged and changes on the DB test were NOT logged. In scenario 3) only --binlog-ignore-db is set to m_test, so changes on the DB m_test were NOT logged and changes on the DB test were logged.

scenario 1) --binlog-do-db and --binlog-ignore-db are NOT set:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000003 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> show binlog events in "vm-01-bin.000003" from 120;
Empty set (0.00 sec)

mysql> insert into t1(id,insert_time) values(10,now());
Query OK, 1 row affected (0.05 sec)

mysql> show binlog events in "vm-01-bin.000003" from 120;
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                          |
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
| vm-01-bin.000003 | 120 | Query      |         1 |         211 | BEGIN                                                         |
| vm-01-bin.000003 | 211 | Query      |         1 |         344 | use `m_test`; insert into t1(id,insert_time) values(10,now()) |
| vm-01-bin.000003 | 344 | Xid        |         1 |         375 | COMMIT /* xid=17 */                                           |
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------------+
3 rows in set (0.00 sec)

scenario 2) --binlog-do-db=m_test and --binlog-ignore-db=m_test:

-- insert into tables of DB m_test was logged

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000004 |      656 | m_test       | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> use m_test

mysql> insert into t1(insert_time) values(now());
Query OK, 1 row affected (0.02 sec)

mysql> show binlog events in "vm-01-bin.000004" from 656;
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                    |
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------+
| vm-01-bin.000004 | 656 | Query      |         1 |         747 | BEGIN                                                   |
| vm-01-bin.000004 | 747 | Intvar     |         1 |         779 | INSERT_ID=13                                            |
| vm-01-bin.000004 | 779 | Query      |         1 |         906 | use `m_test`; insert into t1(insert_time) values(now()) |
| vm-01-bin.000004 | 906 | Xid        |         1 |         937 | COMMIT /* xid=26 */                                     |
+------------------+-----+------------+-----------+-------------+---------------------------------------------------------+
4 rows in set (0.00 sec)

-- insert into tables of DB test was NOT logged

mysql> use test;

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000004 |      937 | m_test       | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+

mysql> insert into t1(`a`) values('ab');
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000004" from 937;
Empty set (0.00 sec)

 

scenario 3) --binlog-do-db=null and --binlog-ignore-db=m_test:

mysql> use m_test
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000005 |      120 |              | m_test           |                   |
+------------------+----------+--------------+------------------+-------------------+
mysql> insert into t1(insert_time) values(now());
Query OK, 1 row affected (0.01 sec)

mysql> show binlog events in "vm-01-bin.000005" from 120;
Empty set (0.00 sec)

mysql> use test
mysql> insert into t1(`a`) values('ba');
Query OK, 1 row affected (0.03 sec)

mysql> show binlog events in "vm-01-bin.000005" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                         |
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
| vm-01-bin.000005 | 120 | Query      |         1 |         199 | BEGIN                                        |
| vm-01-bin.000005 | 199 | Query      |         1 |         305 | use `test`; insert into t1(`a`) values('ba') |
| vm-01-bin.000005 | 305 | Xid        |         1 |         336 | COMMIT /* xid=22 */                          |
+------------------+-----+------------+-----------+-------------+----------------------------------------------+
3 rows in set (0.00 sec)

 

B) Best practice for setting up the Binlog-level filters

So for the binlog-level filter, use at most one of the two options: --binlog-do-db to make MySQL log changes only for the DBs in the list, OR --binlog-ignore-db to make MySQL log changes only for the DBs NOT in the list. Or leave both of them empty to log changes for all DBs.

However, we usually recommend NOT setting up any binlog-level filters. Logging changes for all DBs and setting up filters only on the slaves achieves the same purpose, and it leaves you an extra full copy of data changes on the master in case you need it for recovery. A sketch of such a master configuration follows.
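
# my.cnf on the master -- a sketch of the recommendation above:
# log everything, filter only on the slaves (the log basename is a placeholder).
[mysqld]
log-bin = master-bin
# binlog-do-db     = ...   deliberately left unset
# binlog-ignore-db = ...   deliberately left unset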

 

2) DB-level filters

A) How MySQL processes the DB-level filters

There are 2 options for setting DB-level filters: --replicate-do-db and --replicate-ignore-db. MySQL processes these two filters in a similar way to the binlog-level filters; the difference is that they apply ONLY on the slaves, and so they affect how the slaves replicate from their master. MySQL checks --replicate-do-db first: if it is set, the slave replicates the DBs in that list and ignores --replicate-ignore-db. If --replicate-do-db is NOT set, then MySQL checks --replicate-ignore-db and replicates all DBs except the ones in that list. If both of them are empty, it replicates all DBs. You can find the process in the flowchart at http://dev.mysql.com/doc/refman/5.7/en/replication-rules-db-options.html

There is a trick to the DB-level filters, though, when binlog_format is set to statement or mixed (mixed applies here as well because, in mixed mode, a deterministic transaction is resolved to statement format, which is equivalent to statement mode). With statement-based replication, "the default database is checked for a match" (http://dev.mysql.com/doc/refman/5.7/en/replication-rules-db-options.html). So if you set up --replicate-do-db and, on the master, you update a table outside the default database, the update statement will not be replicated when the default database you run the command from is not listed in --replicate-do-db. For example: there are 2 DBs on the master, you set binlog_format='statement' or 'mixed', and you set --replicate-do-db=DB1 on the slave. If you execute: use DB2; update DB1.t1 ..., this update will not be executed on the slave. To make the update replicate to the slave, you need to run: use DB1; update t1 ...
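
In SQL terms, assuming the slave is configured with --replicate-do-db=DB1 and the master logs in statement format (the column name c1 is a placeholder, not from the example above):

-- On the master, with binlog_format='statement':
USE DB2;
UPDATE DB1.t1 SET c1 = 1;   -- NOT replicated: the default database is DB2
USE DB1;
UPDATE t1 SET c1 = 1;       -- replicated: the default database matches the filter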

For example, with binlog_format=statement or binlog_format=mixed, we insert into m_test.t1 in two ways: once with the default DB set to m_test, and once with the default DB set to test. Both changes are logged on the master. But on the slave, after it caught up, only the insert issued with default DB m_test was replicated; the insert issued with default DB test was NOT. As shown below:

Scenario 1) binlog_format=statement

In master: insert into m_test.t1 both ways, once with default DB m_test and once with default DB test; both changes are logged:

mysql> use m_test

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> delete from t1;

Query OK, 16 rows affected (0.02 sec)

 

mysql> select * from m_test.t1;

Empty set (0.00 sec)

 

mysql> use m_test

Database changed

mysql> insert into m_test.t1(insert_time) values(now());

Query OK, 1 row affected (0.04 sec)

 

mysql> use test;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> insert into m_test.t1(insert_time) values(now());

Query OK, 1 row affected (0.03 sec)

 

mysql> show binlog events in "vm-01-bin.000006" from 654;
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos  | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000006 |  654 | Xid        |         1 |         685 | COMMIT /* xid=39 */                                            |
| vm-01-bin.000006 |  685 | Query      |         1 |         768 | BEGIN                                                          |
| vm-01-bin.000006 |  768 | Query      |         1 |         860 | use `m_test`; delete from t1                                   |
| vm-01-bin.000006 |  860 | Xid        |         1 |         891 | COMMIT /* xid=48 */                                            |
| vm-01-bin.000006 |  891 | Query      |         1 |         982 | BEGIN                                                          |
| vm-01-bin.000006 |  982 | Intvar     |         1 |        1014 | INSERT_ID=17                                                   |
| vm-01-bin.000006 | 1014 | Query      |         1 |        1148 | use `m_test`; insert into m_test.t1(insert_time) values(now()) |
| vm-01-bin.000006 | 1148 | Xid        |         1 |        1179 | COMMIT /* xid=52 */                                            |
| vm-01-bin.000006 | 1179 | Query      |         1 |        1268 | BEGIN                                                          |
| vm-01-bin.000006 | 1268 | Intvar     |         1 |        1300 | INSERT_ID=18                                                   |
| vm-01-bin.000006 | 1300 | Query      |         1 |        1432 | use `test`; insert into m_test.t1(insert_time) values(now())   |
| vm-01-bin.000006 | 1432 | Xid        |         1 |        1463 | COMMIT /* xid=60 */                                            |
+------------------+------+------------+-----------+-------------+----------------------------------------------------------------+
12 rows in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 18 | 2016-03-20 15:00:01 |
+----+---------------------+
2 rows in set (0.00 sec)

 

In slave: after it caught up, only the insert issued with default DB m_test was replicated; the insert issued with default DB test was NOT replicated:

mysql> show slave status\G

*************************** 1. row ***************************

              Slave_IO_State: Waiting for master to send event

                 Master_Host: 10.0.2.6

                 Master_User: repl

                 Master_Port: 3306

               Connect_Retry: 10

             Master_Log_File: vm-01-bin.000006

         Read_Master_Log_Pos: 1463

              Relay_Log_File: ewang-vm-03-relay-bin.000017

               Relay_Log_Pos: 1626

       Relay_Master_Log_File: vm-01-bin.000006

            Slave_IO_Running: Yes

           Slave_SQL_Running: Yes

             Replicate_Do_DB: m_test

         Replicate_Ignore_DB:

          Replicate_Do_Table:

      Replicate_Ignore_Table:

     Replicate_Wild_Do_Table:

 Replicate_Wild_Ignore_Table:

                  Last_Errno: 0

                  Last_Error:

                Skip_Counter: 0

         Exec_Master_Log_Pos: 1463

             Relay_Log_Space: 1805

             Until_Condition: None

              Until_Log_File:

               Until_Log_Pos: 0

          Master_SSL_Allowed: No

          Master_SSL_CA_File:

          Master_SSL_CA_Path:

             Master_SSL_Cert:

           Master_SSL_Cipher:

              Master_SSL_Key:

       Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

               Last_IO_Errno: 0

               Last_IO_Error:

              Last_SQL_Errno: 0

              Last_SQL_Error:

 Replicate_Ignore_Server_Ids:

            Master_Server_Id: 1

                 Master_UUID: a22b3fb2-5e70-11e5-b55a-0800279d00c5

            Master_Info_File: /mysql/data/master.info

                   SQL_Delay: 0

         SQL_Remaining_Delay: NULL

     Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it

          Master_Retry_Count: 86400

                 Master_Bind:

     Last_IO_Error_Timestamp:

    Last_SQL_Error_Timestamp:

              Master_SSL_Crl:

          Master_SSL_Crlpath:

          Retrieved_Gtid_Set:

           Executed_Gtid_Set:

               Auto_Position: 0

1 row in set (0.00 sec)

 

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
+----+---------------------+
1 row in set (0.00 sec)

 

Scenario 2) binlog_format=mixed

In master:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+
1 row in set (0.00 sec)

 

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| vm-01-bin.000007 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

 

mysql> use m_test

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> insert into m_test.t1(insert_time) values(now());

Query OK, 1 row affected (0.04 sec)

 

mysql> use test;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> insert into m_test.t1(insert_time) values(now());

Query OK, 1 row affected (0.04 sec)

 

mysql> show binlog events in "vm-01-bin.000007" from 120;
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| Log_name         | Pos | Event_type | Server_id | End_log_pos | Info                                                           |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
| vm-01-bin.000007 | 120 | Query      |         1 |         211 | BEGIN                                                          |
| vm-01-bin.000007 | 211 | Intvar     |         1 |         243 | INSERT_ID=19                                                   |
| vm-01-bin.000007 | 243 | Query      |         1 |         377 | use `m_test`; insert into m_test.t1(insert_time) values(now()) |
| vm-01-bin.000007 | 377 | Xid        |         1 |         408 | COMMIT /* xid=45 */                                            |
| vm-01-bin.000007 | 408 | Query      |         1 |         497 | BEGIN                                                          |
| vm-01-bin.000007 | 497 | Intvar     |         1 |         529 | INSERT_ID=20                                                   |
| vm-01-bin.000007 | 529 | Query      |         1 |         661 | use `test`; insert into m_test.t1(insert_time) values(now())   |
| vm-01-bin.000007 | 661 | Xid        |         1 |         692 | COMMIT /* xid=53 */                                            |
+------------------+-----+------------+-----------+-------------+----------------------------------------------------------------+
8 rows in set (0.00 sec)

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 18 | 2016-03-20 15:00:01 |
| 19 | 2016-03-20 15:09:14 |
| 20 | 2016-03-20 15:09:25 |
+----+---------------------+
4 rows in set (0.00 sec)

 

In slave:

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+
1 row in set (0.00 sec)

 

mysql> show slave status\G

*************************** 1. row ***************************

              Slave_IO_State: Waiting for master to send event

                 Master_Host: 10.0.2.6

                 Master_User: repl

                 Master_Port: 3306

               Connect_Retry: 10

             Master_Log_File: vm-01-bin.000007

         Read_Master_Log_Pos: 692

              Relay_Log_File: ewang-vm-03-relay-bin.000023

               Relay_Log_Pos: 855

       Relay_Master_Log_File: vm-01-bin.000007

            Slave_IO_Running: Yes

           Slave_SQL_Running: Yes

             Replicate_Do_DB: m_test

         Replicate_Ignore_DB:

          Replicate_Do_Table:

      Replicate_Ignore_Table:

     Replicate_Wild_Do_Table:

 Replicate_Wild_Ignore_Table:

                  Last_Errno: 0

                  Last_Error:

                Skip_Counter: 0

         Exec_Master_Log_Pos: 692

             Relay_Log_Space: 1034

             Until_Condition: None

              Until_Log_File:

               Until_Log_Pos: 0

          Master_SSL_Allowed: No

          Master_SSL_CA_File:

          Master_SSL_CA_Path:

             Master_SSL_Cert:

           Master_SSL_Cipher:

              Master_SSL_Key:

       Seconds_Behind_Master: 0

Master_SSL_Verify_Server_Cert: No

               Last_IO_Errno: 0

               Last_IO_Error:

              Last_SQL_Errno: 0

              Last_SQL_Error:

 Replicate_Ignore_Server_Ids:

            Master_Server_Id: 1

                 Master_UUID: a22b3fb2-5e70-11e5-b55a-0800279d00c5

            Master_Info_File: /mysql/data/master.info

                   SQL_Delay: 0

         SQL_Remaining_Delay: NULL

     Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it

          Master_Retry_Count: 86400

                 Master_Bind:

     Last_IO_Error_Timestamp:

    Last_SQL_Error_Timestamp:

              Master_SSL_Crl:

          Master_SSL_Crlpath:

          Retrieved_Gtid_Set:

           Executed_Gtid_Set:

               Auto_Position: 0

1 row in set (0.00 sec)

 

mysql> select * from m_test.t1;
+----+---------------------+
| id | insert_time         |
+----+---------------------+
| 17 | 2016-03-20 14:59:41 |
| 19 | 2016-03-20 15:09:14 |
+----+---------------------+
2 rows in set (0.00 sec)

 

B) Best practice for setting up the DB-level filters

Use either one or none of the two options: --replicate-do-db or --replicate-ignore-db. Never use both at the same time.

If you use binlog_format='statement' or 'mixed' and set up --replicate-do-db or --replicate-ignore-db on slaves, make sure you never change tables outside the current default database; otherwise data discrepancies are to be expected on the slaves. Row-based logging avoids this default-database matching, as sketched below.
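
With row-based replication the DB-level filters are checked against the database actually being changed rather than the default database; a minimal sketch:

-- Sketch: switch the master to row-based logging.
-- Requires SUPER and only affects sessions started after the change.
SET GLOBAL binlog_format = 'ROW';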

 

3) Table-level filters

A) How MySQL processes the Table-level filters

There are 4 options for setting Table-level filters: --replicate-do-table, --replicate-ignore-table, --replicate-wild-do-table, and --replicate-wild-ignore-table. MySQL evaluates the options in that order. You can find the process in the flowchart at http://dev.mysql.com/doc/refman/5.6/en/replication-rules-table-options.html

 

The above chart shows that MySQL first checks --replicate-do-table: tables listed there are replicated and so won't be ignored by the later options such as --replicate-ignore-table or --replicate-wild-ignore-table. Then MySQL checks --replicate-ignore-table: tables listed there are ignored even if they show up in the later option --replicate-wild-do-table. The lowest priority is --replicate-wild-ignore-table. A sketch of this ordering follows.
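
In the sketch below (schema and table names are placeholders), app.orders is replicated even though app.% is wild-ignored, because --replicate-do-table is evaluated first:

# my.cnf on a slave -- illustration of the evaluation order above.
[mysqld]
replicate-do-table          = app.orders
replicate-wild-ignore-table = app.%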

B) Best practice for setting up the Table-level filters

Due to the priorities of the 4 Table-level options, to avoid confusion and conflicts we suggest using only one of the 4 options, or using the following two together: --replicate-ignore-table and --replicate-wild-do-table. That way it is clear that the tables listed in --replicate-ignore-table will be ignored and the tables matching --replicate-wild-do-table will be replicated; the slave option-file sketch after the suggestions list above uses exactly this pair.

 

Categories: DBA Blogs

Deploying your Oracle MAF Apps on Windows Platform

As you may already know, Oracle Mobile Application Framework (MAF) 2.3 has been released, and one of its headline features is support for the Universal Windows Platform (UWP). This means that starting...

We share our skills to maximize your revenue!
Categories: DBA Blogs

What Are Your Options For Migrating Enterprise Applications to the Cloud?

Pythian Group - Fri, 2016-04-01 09:16

Migrating your enterprise applications from on-premises infrastructure to the public cloud is attractive for a number of reasons. It eliminates the costs and complexities of provisioning hardware and managing servers, storage devices, and network infrastructure; it gives you more compute capacity per dollar without upfront capital investment; and you gain opportunities for innovation through easier access to new technologies, such as advanced analytical capabilities.

So how do you get there?

You have a few options. At one end of the spectrum, you could simply wait and rationalize, making continuous incremental changes to gain efficiencies. This is obviously a “slow burn” approach. In the middle is a “lift-and-shift” from your current environment into the public cloud. And at the far extreme, you could plunge right in and re-architect your applications—a costly and probably highly complex task.

 

In fact, a true migration “strategy” will involve elements of each of these. For example, you could perform short-term optimizations and migrations on a subset of applications that are ready for the cloud, while transforming the rest of your application stack over the longer term.

 

What to expect from the major public cloud platforms

There are three leading public cloud platforms: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). As Google doesn’t seem to be driving customers to lift-and-shift their applications to GCP, I’m going to focus on AWS and Azure as potential cloud destinations and, for specificity, take Oracle enterprise databases as the use case.

 

Amazon Web Services

You have two options for migrating Oracle databases to the AWS cloud: infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

 

Deploying Oracle applications in AWS IaaS is much like deploying them on your in-house infrastructure. You don’t get flexible licensing options, but you do have the ability to easily allocate more or less capacity as needed for CPU, memory, and storage. However, because AWS IaaS is virtualized infrastructure, you may experience slower performance due to suboptimal CPU core allocation or processor caches. You’ll also have less flexibility with instance sizes, network topology, storage performance tiers, and the like.

 

AWS Relational Database Service (RDS) for Oracle is a managed PaaS offering where, in addition to giving you the benefits of IaaS, Amazon takes on major DBA and system administrator tasks including provisioning, upgrades, backups, and multi-availability zone replication. This significantly simplifies your operations—but also results in less control over areas such as configuration, patching, and maintenance windows. AWS RDS for Oracle can also be used with a pay-as-you-go licensing model included in the hourly rate.
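
For illustration, provisioning such an instance from the AWS CLI looks roughly like the sketch below; all identifiers and the password are placeholders, and license-included is the pay-as-you-go model mentioned above.

# Sketch: create an Oracle RDS instance with the license-included model.
# Every identifier and credential here is a placeholder.
aws rds create-db-instance \
    --db-instance-identifier my-oracle-db \
    --db-instance-class db.m4.large \
    --engine oracle-se1 \
    --license-model license-included \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password 'choose-a-strong-password'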

 

Microsoft Azure

Azure does not have a managed offering for Oracle databases, so the only way to run Oracle Database on Azure is through its IaaS platform. The benefits are very similar to AWS IaaS, but Azure offers additional licensing options (with Windows-based license-included images) and its instances are billed by the minute rather than by the hour. What’s important to keep in mind is that Azure is not as broadly adopted as AWS and offers less flexibility for storage performance tiers and instance sizes. Oracle Database software running on Windows is also not as common as running on Linux.

 

For more in-depth technical details on these options, I encourage you to read our white paper, Migrating Oracle Databases to Cloud. My next blog in this series will look at one other option not discussed here: migrating to Oracle Cloud.


Categories: DBA Blogs

CHANGE STANDBY DATABASE PROTECTION MODE

Learn DB Concepts with me... - Fri, 2016-04-01 09:00
SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

SQL> show parameter log_archive_dest_2

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2             string     SERVICE=ORCLSTB1 NOAFFIRM ASYN
                         C VALID_FOR=(ONLINE_LOGFILES,P
                         RIMARY_ROLE) DB_UNIQUE_NAME=OR
                         CLSTB1
log_archive_dest_20             string
log_archive_dest_21             string
log_archive_dest_22             string
log_archive_dest_23             string
log_archive_dest_24             string
log_archive_dest_25             string
log_archive_dest_26             string

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27             string
log_archive_dest_28             string
log_archive_dest_29             string
SQL> show parameter db_unique_name

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                 string     ORCL
SQL> show parameter log_archive_config

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_config             string     dg_config=(ORCL,ORCLSTB1,ORCLS
                         TB2)
SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

System altered.

SQL> show parameter log_archive_dest_2

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2             string     SERVICE=ORCLSTB1 NOAFFIRM ASYN
                         C VALID_FOR=(ONLINE_LOGFILES,P
                         RIMARY_ROLE) DB_UNIQUE_NAME=OR
                         CLSTB1
log_archive_dest_20             string
log_archive_dest_21             string
log_archive_dest_22             string
log_archive_dest_23             string
log_archive_dest_24             string
log_archive_dest_25             string
log_archive_dest_26             string

NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_27             string
log_archive_dest_28             string
log_archive_dest_29             string

SQL> alter system set log_archive_dest_2='SERVICE=ORCLSTB1 NOAFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLSTB1';

System altered.


SQL> alter database set standby database to maximize availability;

Database altered.

NOTE: You don't need to shut down your instance when you are changing the protection mode from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY. But you do need to if you are going to MAXIMUM PROTECTION.
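
For reference, the MAXIMUM PROTECTION change would look like this sketch, bouncing the primary first (it also requires a SYNC/AFFIRM redo transport destination, not shown here):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database set standby database to maximize protection;
SQL> alter database open;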

SQL> alter system switch logfile;

System altered.

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY

SQL> archive log list;
Database log mode           Archive Mode
Automatic archival           Enabled
Archive destination           /u01/app/oracle/oraarch/
Oldest online log sequence     239
Next log sequence to archive   241
Current log sequence           241
SQL> select group#,bytes/1024/1024 from v$standby_log;

    GROUP# BYTES/1024/1024
---------- ---------------
     4        52
     5        52
     6        52
     7        52


SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
     1          240

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM AVAILABILITY



SQL> alter system switch logfile;

System altered.

Categories: DBA Blogs
