Yann Neuhaus

dbi services technical blog

Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG

Tue, 2016-12-20 02:00

Ref: EMC article number 000335987. The last publication date is Sat Feb 20 21:39:14 GMT 2016. Here is the link: https://support.emc.com/kb/335987

After upgrading from 6.7.x to 7.2, the following warning message is logged in the JMS log files: com.documentum.fc.common.DfNewInterprocessLockImpl – [DFC_FILE_LOCK_ACQUIRE_WARN] Failed to acquire lock proceeding ahead with no lock java.nio.channels.OverlappingFileLockException at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

In order to avoid this warning, EMC has provided a solution (SR #69856498) that will be described below:

By default, the ACS and ServerApps dfc.properties point to $DOCUMENTUM_SHARED/config/dfc.properties.

The solution is to add a separate ‘dfc.data.dir’ cache folder location in the ACS and ServerApps dfc.properties. After a Java Method Server restart, two separate cache folders are created inside $DOCUMENTUM_SHARED/jboss7.1.1/server, and the WARNING messages disappear from acs.log.

In fact, this is just a warning that someone else has acquired a lock on the physical file (in this case dfc.keystore). Since ServerApps (Method Server) and ACS invoke DFC simultaneously, both try to acquire a lock on the dfc.keystore file and Java throws an OverlappingFileLockException. DFC then warns that it could not lock the file and proceeds without the lock. Ideally this should be just an info message in this case, where the file lock is acquired for read-only access. But the same logic is used by other functionality like registry updates and BOF cache updates, where this failure should be treated as a genuine warning or error. Going forward, engineering will have to correct this code by taking appropriate actions for each functionality. There is no functional impact in using different data directory folders.

Please proceed as follows to solve it:

  • Login to the Content Server
  • Change the current user to dmadmin (the administrator account)
  • Create some folders using:
 mkdir $DOCUMENTUM_SHARED/acs
 mkdir $DOCUMENTUM_SHARED/ServerApps
 mkdir $DOCUMENTUM_SHARED/bpm

 

  • Update all necessary dfc.properties files (with the vi editor; a scripted alternative is sketched after this list):

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/acs

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/bpm

===============================================================================================================================
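For reference, the same changes can also be scripted from the shell instead of editing each file with vi. This is only a minimal sketch assuming the default paths shown above; note that on some installations the configs.jar inside acs.ear is a real archive rather than an exploded folder, in which case dfc.properties must be extracted, edited and repacked (e.g. with the jar tool):

 cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
 # The shell expands $DOCUMENTUM_SHARED here, so the absolute path is written into each file
 echo "dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps" >> ServerApps.ear/APP-INF/classes/dfc.properties
 echo "dfc.data.dir=$DOCUMENTUM_SHARED/bpm" >> bpm.ear/APP-INF/classes/dfc.properties
 # Works only if configs.jar is deployed as an exploded directory
 echo "dfc.data.dir=$DOCUMENTUM_SHARED/acs" >> acs.ear/lib/configs.jar/dfc.properties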

  • Verify that the recently created folders are empty using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

 

  • Restart the JMS using:
sh -c "cd $DOCUMENTUM_SHARED/jboss7.1.1/server;./stopMethodServer.sh"
sh -c "$DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh"

 

Verification
  • Verify that the recently created folders are now populated with default files and folders using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

The folders should no longer be empty.

  • Disconnect from the Content Server.

 

Using this procedure, you won’t see this WARNING message anymore.
Regards,

Source: EMC article number 000335987

 


Oracle 12cR2: AWR views in multitenant

Mon, 2016-12-19 13:42

In a previous post I explained how the AWR views have changed between 12.1.0.1 and 12.1.0.2 and now in 12.2.0.1 they have changed again. This is a good illustration of multitenant object link usage.

What’s new in 12cR2 is the ability to run AWR snapshots at CDB or PDB level. I really think that it makes more sense to read an AWR report at CDB level because it’s about analysing the system (=instance) activity. But with PDBaaS I can understand the need to give a report to analyse PDB sessions, resource and statements.

I’ll start with the conclusion – a map of AWR views showing which ones read from CDB-level snapshots, PDB snapshots, or both:

[Figure: a map of the AWR views showing which ones read from CDB-level snapshots, PDB snapshots, or both]
I’ll explain AWR reports in a future post. Basically, when you run awrrpt.sql from CDB$ROOT you get CDB snapshots, and when you run it from a PDB you have the choice.

In the diagram above, just follow the arrows to know which view reads from the PDB, the CDB, or both. You see two switches between the root and the PDB: a data link for one way and a common view for the other way. Note that all are metadata links, so the switches also occur at parse time.

WRM$_

Let’s start from the table where AWR snapshots are stored:


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name='WRM$_SNAPSHOT';
 
OWNER OBJECT_NAME                    OBJECT_TYPE             SHARING
----- ------------------------------ ----------------------- ------------------
SYS   WRM$_SNAPSHOT                  TABLE                   METADATA LINK

This is a table. METADATA LINK means that the structure is the same in all containers, but data is different.

I have the following containers:

SQL> select con_id,dbid,name from v$containers;
 
CON_ID DBID NAME
---------- ---------- ------------------------------
1 904475458 CDB$ROOT
2 2066620152 PDB$SEED
3 2623271973 PDB1

From CDB$ROOT I see data for the CDB:

SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

and from PDB I see snapshots taken from PDB:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So remember: CDB$ROOT has 91 snapshots with DBID=904475458 and PDB1 has 79 snapshots with DBID=2623271973.

AWR_ROOT_ and AWR_PDB_

Views on WRM$_SNAPSHOT are referenced in DBA_DEPENDENCIES:


SQL> select owner,name,type from dba_dependencies where referenced_name='WRM$_SNAPSHOT' and type like 'VIEW';
 
OWNER NAME TYPE
----- ------------------------------ -------------------
SYS AWR_ROOT_SNAPSHOT VIEW
SYS AWR_ROOT_SYSSTAT VIEW
SYS AWR_ROOT_ACTIVE_SESS_HISTORY VIEW
SYS AWR_ROOT_ASH_SNAPSHOT VIEW
SYS AWR_PDB_SNAPSHOT VIEW
SYS AWR_PDB_ACTIVE_SESS_HISTORY VIEW
SYS AWR_PDB_ASH_SNAPSHOT VIEW

I’m interested in views that show snapshot information: AWR_ROOT_SNAPSHOT and AWR_PDB_SNAPSHOT


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') order by 3;
 
OWNER  OBJECT_NAME                    OBJECT_TYPE             SHARING
------ ------------------------------ ----------------------- ------------------
PUBLIC AWR_ROOT_SNAPSHOT              SYNONYM                 METADATA LINK
PUBLIC AWR_PDB_SNAPSHOT               SYNONYM                 METADATA LINK
SYS    AWR_ROOT_SNAPSHOT              VIEW                    DATA LINK
SYS    AWR_PDB_SNAPSHOT               VIEW                    METADATA LINK

Besides the synonyms, we have a metadata link view AWR_PDB_SNAPSHOT and a data link view AWR_ROOT_SNAPSHOT. The data link one means that it switches to CDB$ROOT when queried from a PDB. Here is the definition:


SQL> select owner,view_name,container_data,text from dba_views where view_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS AWR_ROOT_SNAPSHOT Y select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0
 
SYS AWR_PDB_SNAPSHOT N select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0

Same definition. The difference is that AWR_PDB_SNAPSHOT reads from the current container, but AWR_ROOT_SNAPSHOT, being a DATA LINK, always reads from CDB$ROOT.

This is what we can see:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

This query when run in PDB1 displays the 91 snapshots from the CDB.

SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

This one shows what is in the current container.

Those are the views used by the AWR report, depending on the AWR location choice. But what about the DBA_HIST_ views that we know and use from previous versions?

DBA_HIST_ and CDB_HIST_

I continue to follow the dependencies:

SQL> select owner,name,type from dba_dependencies where referenced_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') and name like '%SNAPSHOT' order by 3;
 
OWNER NAME TYPE
------ ------------------------------ -------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM
PUBLIC AWR_PDB_SNAPSHOT SYNONYM
SYS DBA_HIST_SNAPSHOT VIEW
SYS CDB_HIST_SNAPSHOT VIEW
 
SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER  OBJECT_NAME                    OBJECT_TYPE             SHARING
------ ------------------------------ ----------------------- ------------------
SYS    DBA_HIST_SNAPSHOT              VIEW                    METADATA LINK
SYS    CDB_HIST_SNAPSHOT              VIEW                    METADATA LINK
PUBLIC DBA_HIST_SNAPSHOT              SYNONYM                 METADATA LINK
PUBLIC CDB_HIST_SNAPSHOT              SYNONYM                 METADATA LINK

Here are the views I’m looking for. They are metadata link only. Not data link. This means that they do not switch to CDB$ROOT.

But there’s more in the view definition:

SQL> select owner,view_name,container_data,text from dba_views where view_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS DBA_HIST_SNAPSHOT N select "SNAP_ID","DBID","INSTANCE_NUMBER","STARTUP_TIME","BEGIN_INTERVAL_TIME","
END_INTERVAL_TIME","FLUSH_ELAPSED","SNAP_LEVEL","ERROR_COUNT","SNAP_FLAG","SNAP_
TIMEZONE","CON_ID" from AWR_ROOT_SNAPSHOT
 
SYS CDB_HIST_SNAPSHOT Y SELECT k."SNAP_ID",k."DBID",k."INSTANCE_NUMBER",k."STARTUP_TIME",k."BEGIN_INTERV
AL_TIME",k."END_INTERVAL_TIME",k."FLUSH_ELAPSED",k."SNAP_LEVEL",k."ERROR_COUNT",
k."SNAP_FLAG",k."SNAP_TIMEZONE",k."CON_ID", k.CON$NAME, k.CDB$NAME FROM CONTAINE
RS("SYS"."AWR_PDB_SNAPSHOT") k

DBA_HIST_SNAPSHOT is a simple view on AWR_ROOT_SNAPSHOT which, as we have seen above, always shows snapshots from the CDB:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

Then CDB_HIST_SNAPSHOT reads AWR_PDB_SNAPSHOT, which shows the current container’s snapshots. But this view is a common-data one, using the CONTAINERS() function. This means that when it is queried from CDB$ROOT by a common user, data from all open containers is retrieved:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79
904475458 91

However, from a PDB you cannot see anything else:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So what?

Multitenant adds a new dimension to the dictionary views and we must be aware of that. However, compatibility is still there: the scripts we used to run against the DBA_HIST views should still work. Don’t forget to always join on DBID and INSTANCE_NUMBER in addition to SNAP_ID, so that your scripts keep working in RAC and across containers.
In 12.2 you can do the same for your application: use metadata links, data links, and common views for your tables. But remember to keep it simple…
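To illustrate the join advice, here is a minimal sketch (the statistic name is just an example); joining on SNAP_ID alone could mix rows coming from different instances or containers:

SQL> select s.snap_id, s.instance_number, s.begin_interval_time, t.value
     from dba_hist_snapshot s
     join dba_hist_sysstat t
       on t.dbid=s.dbid and t.instance_number=s.instance_number and t.snap_id=s.snap_id
     where t.stat_name='user commits'
     order by s.snap_id;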

 


RMAN> TRANSPORT TABLESPACE

Sun, 2016-12-18 03:12

In a previous post I explained how to use transportable tablespaces from a standby database. Here I’m showing an alternative where you can transport from a backup instead of a standby database. RMAN has been able to do that since 10gR2.

Transportable Tablespace is a beautiful feature: the performance of a physical copy and the flexibility of a logical export/import. But it has one drawback: the source tablespace must be opened read only while you copy it and export the metadata. This means that you cannot use it from production, for example when moving data to a datawarehouse ODS. There’s an alternative: restore the tablespace with TSPITR (tablespace point-in-time recovery) into a temporary instance and transport from there.
This is what is automated by RMAN with a simple command: RMAN> TRANSPORT TABLESPACE.

Multitenant

This blog post shows how to do that when you are in 12c multitenant architecture. Even if 12.2 comes with online PDB clone, you may want to transport a single tablespace.

You cannot run TRANSPORT TABLESPACE when connected to a PDB. Let’s test it:

RMAN> connect target sys/oracle@//localhost/PDB1
connected to target database: CDB1:PDB1 (DBID=1975603085)

Here are the datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
9 250 SYSTEM NO /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 UNDOTBS1 NO /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3 20 TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

Let’s run the TRANSPORT TABLESPACE command:

RMAN> transport tablespace USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
 
Creating automatic instance, with SID='jlDa'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=jlDa_pitr_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created
 
Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/17/2016 21:33:14
RMAN-07538: Pluggable Database qualifier not allowed when connected to a Pluggable Database

You got the idea: an auxiliary instance is automatically created, but then the command fails because an internal operation cannot be run from a PDB.

Run from CDB

So let’s run it when connected to CDB$ROOT:

echo set on
 
RMAN> connect target sys/oracle
connected to target database: CDB1 (DBID=894360530)

We can see all pluggable databases and all datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB1A/system01.dbf
3 480 SYSAUX NO /u02/oradata/CDB1A/sysaux01.dbf
4 65 UNDOTBS1 YES /u02/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB1A/pdbseed/system01.dbf
6 350 PDB$SEED:SYSAUX NO /u02/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/oradata/CDB1A/users01.dbf
8 520 PDB$SEED:UNDOTBS1 NO /u02/oradata/CDB1A/pdbseed/undotbs01.dbf
9 250 PDB1:SYSTEM YES /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 PDB1:SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 PDB1:UNDOTBS1 YES /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 240 TEMP 32767 /u02/oradata/CDB1A/temp01.dbf
2 32 PDB$SEED:TEMP 32767 /u02/oradata/CDB1A/pdbseed/temp012016-08-23_14-12-45-799-PM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

We can run the TRANSPORT TABLESPACE command from here, naming the tablespace prefixed with the PDB name: PDB1:USERS.

transport tablespace … auxiliary destination … tablespace destination

The TRANSPORT TABLESPACE command needs a destination where to put the datafiles and the dump file to transport (TABLESPACE DESTINATION), and it also needs an auxiliary destination (AUXILIARY DESTINATION). The latter seems to be mandatory, which is different from PDB PITR, where the FRA is used by default.


RMAN> transport tablespace PDB1:USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';

And then you will see all what RMAN does for you. I’ll show most of the output.

UNDO

Restoring a tablespace to a previous point in time requires applying redo and then rolling back the transactions that were open at that PIT. This is why RMAN has to restore all tablespaces that may contain UNDO:

RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace PDB1:SYSTEM
Tablespace UNDOTBS1
Tablespace PDB1:UNDOTBS1

I suppose that when the UNDO_TABLESPACE has changed in the meantime, RMAN cannot guess which tablespace covered the transactions at the requested PIT, but I have seen nothing in the TRANSPORT TABLESPACE syntax to specify it. That’s probably for a future post and/or SR.

Auxiliary instance

So RMAN creates an auxiliary instance with some specific parameters to be sure there’s no side effect on the source database (the RMAN target one).

Creating automatic instance, with SID='qnDA'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=qnDA_pitr_PDB1_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created

Restore

The goal is to transport the tablespace, so RMAN checks that the tablespace set is self-contained:

Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

and then starts the restore of the controlfile and the datafiles (the CDB SYSTEM, SYSAUX and UNDO, the PDB SYSTEM, SYSAUX and UNDO, and the tablespace to transport):

contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# restore the controlfile
restore clone controlfile;
 
# mount the controlfile
sql clone 'alter database mount clone database';
 
# archive current online log
sql 'alter system archive log current';
}
executing Memory Script
 
executing command: SET until clause
 
Starting restore at 17-DEC-16
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=253 device type=DISK
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp tag=TAG20161217T213038
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
output file name=/var/tmp/AUX/CDB1A/controlfile/o1_mf_d5c88zp3_.ctl
Finished restore at 17-DEC-16
 
sql statement: alter database mount clone database
 
sql statement: alter system archive log current
 
contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 9 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 11 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 10 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 3 to new;
set newname for datafile 12 to
"/var/tmp/TTS/users01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 9, 4, 11, 3, 10, 12;
 
switch clone datafile all;
}
executing Memory Script
 
executing command: SET until clause
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
renamed tempfile 1 to /var/tmp/AUX/CDB1A/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_temp_%u_.tmp in control file
 
Starting restore at 17-DEC-16
using channel ORA_AUX_DISK_1
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /var/tmp/AUX/CDB1A/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:35
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00010 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /var/tmp/TTS/users01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:25
Finished restore at 17-DEC-16
 
datafile 1 switched to datafile copy
input datafile copy RECID=11 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_system_d5c8993k_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=12 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_d5c8d8ow_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=13 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_d5c8998b_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=14 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_d5c8d8g6_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=15 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_d5c8996o_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=16 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_d5c8d8o7_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=17 STAMP=930865006 file name=/var/tmp/TTS/users01.dbf

You noticed that SYSTEM, SYSAUX and UNDO are restored in the auxiliary location, but the tablespace to transport (USERS here) goes directly to its destination. If you transport it on the same server, this avoids any further copying of it.

Recover

The recovery then continues automatically to the PIT (which you can also specify with an UNTIL clause or a restore point).


contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone 'PDB1' "alter database datafile
9 online";
sql clone "alter database datafile 4 online";
sql clone 'PDB1' "alter database datafile
11 online";
sql clone "alter database datafile 3 online";
sql clone 'PDB1' "alter database datafile
10 online";
sql clone 'PDB1' "alter database datafile
12 online";
# recover and open resetlogs
recover clone database tablespace "PDB1":"USERS", "SYSTEM", "PDB1":"SYSTEM", "UNDOTBS1", "PDB1":"UNDOTBS1", "SYSAUX", "PDB1":"SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
 
executing command: SET until clause
 
sql statement: alter database datafile 1 online
 
sql statement: alter database datafile 9 online
 
sql statement: alter database datafile 4 online
 
sql statement: alter database datafile 11 online
 
sql statement: alter database datafile 3 online
 
sql statement: alter database datafile 10 online
 
sql statement: alter database datafile 12 online
 
Starting recover at 17-DEC-16
using channel ORA_AUX_DISK_1
 
starting media recovery
 
archived log for thread 1 with sequence 30 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc thread=1 sequence=30
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc thread=1 sequence=31
media recovery complete, elapsed time: 00:00:02
Finished recover at 17-DEC-16
 
database opened
 
contents of Memory Script:
{
sql clone 'alter pluggable database PDB1 open';
}
executing Memory Script
 
sql statement: alter pluggable database PDB1 open

Export TTS

The restored tablespace can now be set read only, which was the goal.

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'PDB1' 'alter tablespace
USERS read only';
# create directory for datapump export
sql clone 'PDB1' "create or replace directory
STREAMS_DIROBJ_DPDIR as ''
/var/tmp/TTS''";
}
executing Memory Script
 
sql statement: alter tablespace USERS read only

Now the metadata export runs (the equivalent of an expdp with TRANSPORT_TABLESPACES):


sql statement: create or replace directory STREAMS_DIROBJ_DPDIR as ''/var/tmp/TTS''
 
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_qnDA_urDb":
 
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_qnDA_urDb" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_qnDA_urDb is:
EXPDP> /var/tmp/TTS/dmpfile.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace USERS:
EXPDP> /var/tmp/TTS/users01.dbf
EXPDP> Job "SYS"."TSPITR_EXP_qnDA_urDb" successfully completed at Sat Dec 17 21:41:06 2016 elapsed 0 00:00:47
Export completed
 
Not performing table import after point-in-time recovery

The last message makes me think that the RMAN code is shared with the one that manages RECOVER TABLE.
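For reference, the manual equivalent of this metadata export would look something like the following sketch (run against the auxiliary instance; the connect string and directory object name are assumptions):

expdp system@PDB1 directory=TTS_DIR dumpfile=dmpfile.dmp transport_tablespaces=USERS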

Then RMAN lists the commands to run the import (also available in a generated script) and removes the auxiliary instance.

Cleanup

Not everything has been removed:
[oracle@VM117 blogs]$ du -ha /var/tmp/AUX
0 /var/tmp/AUX/CDB1A/controlfile
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_51_d5c8k0oo_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_52_d5c8kcjp_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_53_d5c8kskz_.log
601M /var/tmp/AUX/CDB1A/onlinelog
0 /var/tmp/AUX/CDB1A/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undo_1_d5c8m1nx_.dbf
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9
1.1G /var/tmp/AUX/CDB1A
1.1G /var/tmp/AUX

Import TTS

In the destination you find the tablespace datafiles, the dump of metadata and a script that can be run to import it to the destination:

[oracle@VM117 blogs]$ du -ha /var/tmp/TTS
5.1M /var/tmp/TTS/users01.dbf
132K /var/tmp/TTS/dmpfile.dmp
4.0K /var/tmp/TTS/impscrpt.sql
5.2M /var/tmp/TTS

For this example, I import it on the same server, in a different pluggable database:


SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB2;
Session altered.

and simply run the script provided:

SQL> set echo on
 
SQL> @/var/tmp/TTS/impscrpt.sql
 
SQL> /*
SQL> The following command may be used to import the tablespaces.
SQL> Substitute values for <logon> and <directory>.
SQL>
SQL> impdp <logon> directory=<directory> dumpfile='dmpfile.dmp' transport_datafiles=/var/tmp/TTS/users01.dbf
SQL> */
SQL>
SQL> --
SQL> --
SQL> --
SQL> --
SQL> CREATE DIRECTORY STREAMS$DIROBJ$1 AS '/var/tmp/TTS/';
Directory created.
 
SQL> CREATE DIRECTORY STREAMS$DIROBJ$DPDIR AS '/var/tmp/TTS';
Directory created.
 
SQL> /* PL/SQL Script to import the exported tablespaces */
SQL> DECLARE
2 --
3 tbs_files dbms_streams_tablespace_adm.file_set;
4 cvt_files dbms_streams_tablespace_adm.file_set;
5
6 --
7 dump_file dbms_streams_tablespace_adm.file;
8 dp_job_name VARCHAR2(30) := NULL;
9
10 --
11 ts_names dbms_streams_tablespace_adm.tablespace_set;
12 BEGIN
13 --
14 dump_file.file_name := 'dmpfile.dmp';
15 dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
16
17 --
18 tbs_files( 1).file_name := 'users01.dbf';
19 tbs_files( 1).directory_object := 'STREAMS$DIROBJ$1';
20
21 --
22 dbms_streams_tablespace_adm.attach_tablespaces(
23 datapump_job_name => dp_job_name,
24 dump_file => dump_file,
25 tablespace_files => tbs_files,
26 converted_files => cvt_files,
27 tablespace_names => ts_names);
28
29 --
30 IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
31 FOR i IN ts_names.first .. ts_names.last LOOP
32 dbms_output.put_line('imported tablespace '|| ts_names(i));
33 END LOOP;
34 END IF;
35 END;
36 /
PL/SQL procedure successfully completed.
 
SQL>
SQL> --
SQL> DROP DIRECTORY STREAMS$DIROBJ$1;
Directory dropped.
 
SQL> DROP DIRECTORY STREAMS$DIROBJ$DPDIR;
Directory dropped.
 
SQL> --------------------------------------------------------------
SQL> -- End of sample PL/SQL script
SQL> --------------------------------------------------------------

Of course, you don’t need to use the script; you can run the import directly with IMPDP:

SQL> alter session set container=pdb2;
Session altered.
SQL> create directory tts as '/var/tmp/TTS';
Directory created.
SQL> host impdp pdbadmin/oracle@//localhost/PDB2 directory=TTS dumpfile='dmpfile.dmp' transport_datafiles=/var/tmp/TTS/users01.dbf

You may also use RMAN CONVERT to transport to a platform with a different endianness.
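For example, here is a minimal sketch run on the destination host (the source platform name is only an assumption; check V$TRANSPORTABLE_PLATFORM for the exact string):

RMAN> convert datafile '/var/tmp/TTS/users01.dbf'
      from platform 'Linux x86 64-bit'
      format '/var/tmp/TTS/converted/users01.dbf';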

Local Undo

Note that if you run this on the current 12.2.0.1.0 Cloud First DBaaS, you will get an error when RMAN opens the PDB in the auxiliary instance, because there’s a bug with local undo. Here is the relevant alert.log part:

PDB1(3):Opening pdb with no Resource Manager plan active
PDB1(3):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE
PDB1(3):Force tablespace UNDO_1 to be encrypted with AES128
2016-12-17T18:05:14.759732+00:00
PDB1(3):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/fqkn_pitr_pdb1_cdb1/fqkn/trace/fqkn_ora_26146.trc
PDB1(3):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE...
PDB1(3):Automatic creation of undo tablespace failed with error 604 60
ORA-604 signalled during: alter pluggable database PDB1 open...

I did this demo with LOCAL UNDO OFF.

So what?

You can use Transportable Tablespaces from a database where you cannot set the tablespace read-only. The additional cost is to recover it from a backup, along with SYSTEM, SYSAUX and UNDO. But it is fully automated with only one RMAN command.

 


Oracle 12cR2 : Partitioning improvements – multi-column list partitioning & read-only partitions

Fri, 2016-12-16 10:10

In my last blog post I presented auto-list partitioning, a new partitioning functionality coming with 12cR2.
In this one I will introduce two others: multi-column list partitioning and read-only partitions.

Multi-column list partitioning

[Image: “Oracle Partitioning in Oracle Database 12c Release 2” – Hermann Bär]

With the first release of 12c it wasn’t possible to create list-partitioned tables based on a multi-column partition key:
ORA-14304: List partitioning method expects a single partitioning column
But now you can easily implement this functionality:
SQL> CREATE TABLE cars(
car_make VARCHAR2(30),
car_model VARCHAR2(30)
)
PARTITION BY LIST (car_make, car_model)
(PARTITION P1 VALUES ('Ford','Focus'),
PARTITION P_DEFAULT VALUES (DEFAULT));

Table created.

Check the partitions:
SQL> SELECT partition_name, high_value FROM dba_tab_partitions WHERE table_name = 'CARS';


PARTITION_NAME HIGH_VALUE
-------------------- ------------------------------
P1 ( 'Ford', 'Focus' )
P_DEFAULT DEFAULT


SQL> SELECT partitioning_type, partition_count, partitioning_key_count FROM dba_part_tables WHERE table_name = 'CARS';


PARTITION PARTITION_COUNT PARTITIONING_KEY_COUNT
--------- --------------- ----------------------
LIST 2 2

This functionality allows you to use up to 16 columns for the partition key and, as you can see in this example, it’s also possible to define a DEFAULT partition.
Multi-column list partitioning is also permitted at the subpartition level.
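Here is a minimal sketch of that (the table and column names are made up for the example):
SQL> CREATE TABLE orders(
order_date DATE,
car_make VARCHAR2(30),
car_model VARCHAR2(30)
)
PARTITION BY RANGE (order_date)
SUBPARTITION BY LIST (car_make, car_model)
(PARTITION P_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','dd-MON-yyyy'))
(SUBPARTITION SP_FORD_FOCUS VALUES ('Ford','Focus'),
SUBPARTITION SP_OTHERS VALUES (DEFAULT)));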

Read-only partitions

auto-list-partImage : “Oracle Partitioning in Oracle Database 12c Release 2″ – Hermann Bär

We all know that Oracle is able to define a tablespace or a table as read-only. But did you know that with 12cR2 it’s now possible to define this attribute at the partition level?
CREATE TABLE sales(
sales_product VARCHAR2(30),
sales_date DATE
) READ WRITE
PARTITION BY RANGE (sales_date)
(
PARTITION P_2013 VALUES LESS THAN (TO_DATE('01-JAN-2014','dd-MON-yyyy')) READ ONLY,
PARTITION P_2014 VALUES LESS THAN (TO_DATE('01-JAN-2015','dd-MON-yyyy')) READ ONLY,
PARTITION P_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','dd-MON-yyyy')) READ ONLY,
PARTITION P_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','dd-MON-yyyy')));

Data insertion:
SQL> INSERT INTO sales VALUES ('SHIRT', '02-MAY-2013');
INSERT INTO sales VALUES ('SHIRT', '02-MAY-2013')
*
ERROR at line 1:
ORA-14466: Data in a read-only partition or subpartition cannot be modified.

SQL> INSERT INTO sales VALUES ('SHOES', '29-MAR-2016');


1 row created.

An easy-to-implement and reliable way to protect data against changes inside a table.
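And if a partition needs to accept changes again, the attribute can simply be switched back; a minimal sketch using the table created above:
SQL> ALTER TABLE sales MODIFY PARTITION P_2013 READ WRITE;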

 


Dataguard Oracle 12.2 : Support for Multiple Automatic Failover Targets

Thu, 2016-12-15 15:41

In a previous blog, we talked about Fast-Start Failover with the MaxProtection mode.
With Oracle 12.2, Oracle Data Guard now supports multiple failover targets in a fast-start failover configuration.
Previous functionality allowed for only a single fast-start failover target; if that target was unable to meet the requirements for fast-start failover at the time of the primary failure, an automatic failover would not occur.
Designating multiple failover targets significantly improves the likelihood that there will always be a standby suitable for automatic failover when needed, which increases high availability by making an automatic failover more likely in case of a primary outage.
Indeed, FastStartFailoverTarget can now have a single site as its value:

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE1';
Property "faststartfailovertarget" updated
DGMGRL>


DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

It can also have two sites as values (the priority follows the order):

DGMGRL> disable Fast_start Failover;
Disabled.
DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE1,ORCL_SITE2';
Property "faststartfailovertarget" updated
DGMGRL>


DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

It can also have the keyword ANY as its value (the priority is then defined by Oracle):

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'ORCL_SITE1' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'ORCL_SITE2' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated

With the value set to ANY, we can see that there is a failover target but also candidate targets. If a failover occurs, ORCL_SITE1 will be chosen first; if it is not available, ORCL_SITE2 will be the target.

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

From the observer, we can confirm that the target is ORCL_SITE1:

[W000 12/15 14:40:25.12] This observer is no longer registered with the configuration.
[W000 12/15 14:40:25.20] FSFO target standby is ORCL_SITE1
[W000 12/15 14:40:27.66] Observer trace level is set to USER
[W000 12/15 14:40:27.66] Try to connect to the primary.
[W000 12/15 14:40:27.66] Try to connect to the primary ORCL_SITE.
[W000 12/15 14:40:27.67] The standby ORCL_SITE1 is ready to be a FSFO target
[W000 12/15 14:40:29.67] Connection to the primary restored!

Now let’s simulate the case where ORCL_SITE1 is no longer available. For this, let’s simply shut down the ORCL_SITE1 database; the failover target should change to ORCL_SITE2.

SQL> shutdown abort
ORACLE instance shut down.
SQL>


DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE2
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

From the observer, we can see the following messages, which confirm that the failover target was switched to ORCL_SITE2:

[W000 12/15 14:54:52.00] Permission granted to the primary database for target switch.
[W000 12/15 14:54:54.03] Standby database has changed to ORCL_SITE2.
[W000 12/15 14:54:54.04] Try to connect to the primary.
[W000 12/15 14:54:54.04] Try to connect to the primary ORCL_SITE.
[W000 12/15 14:54:55.48] The primary database returned to SYNC/NOT LAGGING state with the standby database ORCL_SITE2.
[W000 12/15 14:54:55.48] Connection to the primary restored!
[W000 12/15 14:54:58.48] Disconnecting from database ORCL_SITE.
[W000 12/15 14:55:04.52] The standby ORCL_SITE2 is ready to be a FSFO target

Conclusion
We have seen that we can now have multiple targets for Fast-Start Failover, and how to implement them.

 


Oracle 12cR2 – Howto setup Active DataGuard with Temporary Tablespace Groups

Thu, 2016-12-15 06:01

Temporary Tablespace Groups have existed for quite a while now (since 10gR2), but for whatever reason they are not used very often. Personally, I think they are quite cool: very easy to set up and, especially in big environments with a lot of parallel processing, very useful. But this blog will not be about Temporary Tablespace Groups; they are already explained in the 12.2 Admin Guide.

https://docs.oracle.com/database/122/ADMIN/managing-tablespaces.htm#ADMIN-GUID-55A6AE10-6875-4B73-9A5C-CB4965AD5AFE

For my Active DataGuard environment, I would like to use Temporary Tablespace Groups in combination with Temporary Undo, which is enabled by default as soon as you activate the ADG feature. And of course, I would like to do it online, without shutting down either the primary or the ADG standby.

Is this possible? We will see. I start by creating a Temporary Tablespace Group called DBITEMP
with 3 bigfile temp tablespaces: DBITEMP1, DBITEMP2 and DBITEMP3.

-- On Primary

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP1 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP2 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP3 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE DBITEMP;

Database altered.

SQL> alter system archive log current;

System altered.

SQL> select TABLESPACE_NAME, CONTENTS, BIGFILE from dba_tablespaces where CONTENTS = 'TEMPORARY' and TABLESPACE_NAME like '%DBI%';

TABLESPACE_NAME                CONTENTS              BIG
------------------------------ --------------------- ---
DBITEMP1                       TEMPORARY             YES
DBITEMP2                       TEMPORARY             YES
DBITEMP3                       TEMPORARY             YES

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2
DBITEMP                        DBITEMP3

SQL> select NAME, BYTES from v$tempfile where name like '%dbitemp%';

NAME                                                                         BYTES
-------------------------------------------------------------------- -------------
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp          33554432
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp          33554432
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp          33554432

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:28 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:29 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:28 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM database_properties
  2  WHERE property_name='DEFAULT_TEMP_TABLESPACE';

PROPERTY_NAME            PROPERTY_VALUE
------------------------ ----------------------
DEFAULT_TEMP_TABLESPACE  DBITEMP


DGMGRL> show database 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    DBIT122

Database Status:
SUCCESS

DGMGRL> show database 'DBIT122_SITE2';

Database - DBIT122_SITE2

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 0 Byte/s
  Real Time Query:    ON
  Instance(s):
    DBIT122

Database Status:
SUCCESS

On the primary database it looks good. Everything was set up correctly, and the Temporary Tablespace Group is ready to be used. Now let’s check what we have on the standby … almost nothing.

My temporary tablespaces and the group are there; however, the tempfiles are missing. So … not really useful at that stage.

SQL> select TABLESPACE_NAME, CONTENTS, BIGFILE from dba_tablespaces where CONTENTS = 'TEMPORARY' and TABLESPACE_NAME like '%DBI%';

TABLESPACE_NAME                CONTENTS              BIG
------------------------------ --------------------- ---
DBITEMP1                       TEMPORARY             YES
DBITEMP2                       TEMPORARY             YES
DBITEMP3                       TEMPORARY             YES

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2
DBITEMP                        DBITEMP3


SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM database_properties
  2  WHERE property_name='DEFAULT_TEMP_TABLESPACE';

PROPERTY_NAME            PROPERTY_VALUE
------------------------ ----------------------
DEFAULT_TEMP_TABLESPACE  DBITEMP

SQL> select NAME, BYTES from v$tempfile where name like '%dbitemp%';

no rows selected

OK, let’s take a look at the alert.log to see if Oracle reports something. And yes, indeed, a big WARNING message is posted in the alert.log saying that the temporary tablespaces DBITEMP1/2/3 are empty.

*********************************************************************
WARNING: The following temporary tablespaces contain no files.
         This condition can occur when a backup controlfile has
         been restored.  It may be necessary to add files to these
         tablespaces.  That can be done using the SQL statement:

         ALTER TABLESPACE <tablespace_name> ADD TEMPFILE

         Alternatively, if these temporary tablespaces are no longer
         needed, then they can be dropped.
           Empty temporary tablespace: DBITEMP1
           Empty temporary tablespace: DBITEMP2
           Empty temporary tablespace: DBITEMP3
*********************************************************************

OK. With 12.2 it is like it always was: you have to do it manually.

SQL> ALTER TABLESPACE DBITEMP1 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

SQL> ALTER TABLESPACE DBITEMP2 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

SQL> ALTER TABLESPACE DBITEMP3 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

Now I have everything in place. What about dropping a temporary tablespace out of the tablespace group? First, I take it out of the Temporary Tablespace Group. This change is immediately reflected on the standby.

-- Primary

SQL> alter tablespace DBITEMP3 TABLESPACE GROUP '';

Tablespace altered.

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2

-- Standby 

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2

And now you can drop it.

-- Primary

SQL> drop tablespace DBITEMP3 including contents and datafiles;

Tablespace dropped.

alert.log
...
drop tablespace DBITEMP3 including contents and datafiles
2016-12-14T16:35:25.567597+01:00
Deleted Oracle managed file /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp
Completed: drop tablespace DBITEMP3 including contents and datafiles
...

-- Standby

alert.log
...
Deleted Oracle managed file /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp3_d52dv3dv_.tmp
Recovery deleting tempfile #4:'/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp3_d52dv3dv_.tmp'
Recovery dropped temporary tablespace 'DBITEMP3'
...

OK, dropping works fine, and the change is immediately reflected. What about resizing a tempfile? Is this change also immediately reflected on the standby? Due to OMF we have different names for the tempfiles, so should we use “alter database tempfile <NAME> resize” or “alter database tempfile <NUMBER> resize” to make it work? For regular datafiles it doesn’t matter: you can use both ways and Oracle does it correctly.

- Primary

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp       33554432

		 
SQL> alter database tempfile '/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp' resize 512M;

Database altered.

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp      536870912
         3 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp       33554432


-- Standby

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp2_d52dtx7f_.tmp       33554432

Ok. Resizing by name is not reflected on the standby. But what about resizing the tempfile by using the file number instead of the name?

-- Primary
 
SQL> alter database tempfile 3 resize 768M;

Database altered.

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp      536870912
         3 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp      805306368

		 
-- Standby

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp2_d52dtx7f_.tmp       33554432

As you can see, it doesn’t matter: resize operations on tempfiles are not replicated to the standby, whichever way you address the file.
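So after resizing on the primary, replay the operation manually on the standby. A minimal sketch, assuming the file numbers match the primary (with OMF the names differ, so addressing the tempfiles by FILE# is the easiest):

SQL> alter database tempfile 2 resize 512M;
SQL> alter database tempfile 3 resize 768M;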

Conclusion

Temporary Tablespace Groups can be used with Active DataGuard, and of course it can be done online. However, when you initially create them, or when you resize them, some manual work has to be done on the standby. And yes, Temporary Tablespace Groups can be used in combination with the Temporary UNDO feature on the Active DataGuard standby.

 

 


Dataguard Oracle 12.2 : Fast-Start Failover with Maximum Protection

Wed, 2016-12-14 17:04

With Oracle 12.1, the one requirement for configuring Fast-Start Failover was to ensure the broker configuration operated in either Maximum Availability mode or Maximum Performance mode.
With 12.2, Fast-Start Failover can now be configured with Maximum Protection.
Below is our broker configuration:

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxPerformance
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 48 seconds ago)

Let’s configure the Maximum Protection mode.
We first have to update some database properties.

DGMGRL> edit database 'ORCL_SITE1' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'ORCL_SITE' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'ORCL_SITE2' set property LogXptMode='SYNC';
Property "logxptmode" updated

Then we have to enable Maximum Availability before enabling Maximum Protection:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.
DGMGRL>

And now let’s enable Fast-Start failover

DGMGRL> enable fast_start failover;
Error: ORA-16693: requirements not met for enabling fast-start failover
Failed.
DGMGRL>

Oh, what happened?
Remember: before enabling Fast-Start Failover, we have to enable flashback on the databases and also set the database property FastStartFailoverTarget.
Let’s enable flashback for the databases.
For the primary:

SQL> alter database flashback on;
Database altered.

For the standby databases:

DGMGRL> edit database 'ORCL_SITE1' set state='APPLY-OFF';
Succeeded.
SQL> alter database flashback on;
Database altered.
SQL>
DGMGRL> edit database 'ORCL_SITE1' set state='APPLY-ON';
Succeeded.
DGMGRL>


DGMGRL> edit database 'ORCL_SITE2' set state='APPLY-OFF';
Succeeded.
DGMGRL>
SQL> alter database flashback on;
Database altered.
DGMGRL> edit database 'ORCL_SITE2' set state='APPLY-ON';
Succeeded.
DGMGRL>

Let’s set FastStartFailoverTarget property for the primary database

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE2';
Property "faststartfailovertarget" updated

And now we can enable Fast-Start Failover with Maximum Protection

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL>

Checking the configuration afterwards. The observer must be started, otherwise you will get a warning about the observer (a sketch for starting it follows the output below).

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE - Primary database
ORCL_SITE2 - (*) Physical standby database
ORCL_SITE1 - Physical standby database
Fast-Start Failover: ENABLED
Configuration Status:
SUCCESS (status updated 1 second ago)
DGMGRL>
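
For completeness, here is a minimal sketch for starting the observer (it should run on a third host, neither primary nor standby; the connect identifier is an assumption, adapt it to your environment). Note that START OBSERVER keeps the DGMGRL session busy, so it is usually run in a dedicated session:

$ dgmgrl
DGMGRL> connect sys@ORCL_SITE
DGMGRL> start observer;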

Just note that at least two standby databases must be available; otherwise the protection mode will be downgraded to MaxPerformance after a failover.

 

Cet article Dataguard Oracle 12.2 : Fast-Start Failover with Maximum Protection est apparu en premier sur Blog dbi services.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Wed, 2016-12-14 16:00

Here it is. The multitenant book is out for sale…

One year ago, at DOAG2015, Vit Spinka came to me with this idea: with Anton Els they planned to write a book on multitenant and proposed that I join as co-author. I was already quite busy at that time and my short-term plan was to prepare and pass the OCM 12c exam. But this book idea was something great that had to be started quickly. At that time, we expected the 12cR2 to be out in June 2016 and the book should then be ready for Oracle Open World. So no time to waste: propose the idea to Oracle Press, find a reviewer and start as soon as possible.

For reviewers, I was very happy that Deiby Gomez accepted to do the technical review. And Mike Donovan volunteered to do the English review. I think he didn’t imagine how hard it can be to take non-native English speakers’ writing, with very limited vocabulary, and turn it into something that makes sense to read. It’s an amazing chance to have the language review done by someone with deep technical knowledge: this ensures that the improved style does not change the meaning. Having that language review is also a good way to unify the style of what is written by three different authors. I bet you cannot guess who has written what. In addition to that, Oracle Press asked Arup Nanda to do an additional review, which was great because Arup has experience with book writing.

So we worked on the 12.2 beta, tested everything (there are lots of code listings in the book), filed bugs, clarified everything. We had good interaction with support engineers and product managers. The result is a book on multitenant which covers all administration tasks you can do on a 12c database.

It was an amazing adventure from the get-go. You know people for their skills, blogs, presentations and discussions at events. And then you start to work with them on a common thing – the book – and remotely – we’re all in different timezones. How can you be sure that you can work together? Actually, it was easy and went smoothly. We listed the chapters and each of us marked which chapters he preferred. And that was it: in one or two e-mail exchanges the distribution of tasks was done, with everybody happy. We had a very short schedule: we needed to deliver one chapter every 2 or 3 weeks. I was happy with what I wrote and equally happy with what I read from Vit and Anton. The reviews from Deiby, Mike and Arup all added precision and clarity. Incredible team work without the need for long discussions. Besides the hard work and the delightful result, working with this team was an amazing human adventure.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Master the Powerful Multitenant Features of Oracle Database 12c
• Build high-performance multitenant Oracle databases
• Create single-tenant, multitenant, and application containers
• Establish network connections and manage services
• Handle security using authentication, authorization, and encryption
• Back up and restore your mission-critical data
• Work with point-in-time recovery and Oracle Flashback
• Move data and replicate and clone databases
• Work with Oracle’s Resource Manager and Data Guard

 

Cet article Oracle Database 12c Release 2 Multitenant (Oracle Press) est apparu en premier sur Blog dbi services.

Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations?

Wed, 2016-12-14 01:33

For security reasons, you may want your DataGuard operations to be done with a different UNIX user and with a different Oracle user which is not as highly privileged as SYSDBA. This is exactly where the SYSDG Administrative Privilege for Oracle Data Guard Operations comes into play.

The SYSDG privilege is quite powerful and allows you to work with the Broker (DGMGRL) command line interface and besides that, it enables the following operations:

  • STARTUP
  • SHUTDOWN
  • ALTER DATABASE
  • ALTER SESSION
  • ALTER SYSTEM
  • CREATE RESTORE POINT (including GUARANTEED restore points)
  • CREATE SESSION
  • DROP RESTORE POINT (including GUARANTEED restore points)
  • FLASHBACK DATABASE
  • SELECT ANY DICTIONARY
  • SELECT
    • X$ tables (that is, the fixed tables)
    • V$ and GV$ views (that is, the dynamic performance views)
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • DELETE
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • EXECUTE
    • SYS.DBMS_DRS

In addition, the SYSDG privilege enables you to connect to the database even if it is not open.

Ok. Let’s give it a try. I want to give the user scott all the privileges he needs to do the DataGuard operational tasks. So … I create a UNIX user scott and a database user scott with the SYSDG privilege.

[root@dbidg02 ~]# useradd scott
[root@dbidg02 ~]# usermod -a -G sysdg scott
[root@dbidg02 ~]# cat /etc/group |grep sysdg
sysdg:x:54324:oracle,scott

SQL> create user scott identified by tiger;

User created.

SQL> grant sysdg to scott;

Grant succeeded.

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS where USERNAME = 'SCOTT';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
SCOTT                  FALSE FALSE FALSE TRUE  FALSE
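
As a quick check (a sketch; the password and connect identifier are the ones assumed in this post), scott can now connect with the SYSDG privilege instead of SYSDBA:

$ sqlplus scott/tiger@DBIT122_SITE1 as sysdg
SQL> select sys_context('USERENV','SESSION_USER') from dual;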

So far so good. Everything works. Scott can do switchovers, convert the physical standby to a snapshot database, create restore points and much more. But what happens when an error pops up? Then you need to take a look at the most important log files, which in a DataGuard environment are the alert log and the broker log file.

If you do a “show database verbose”, you will find at the end of the output the locations of the log files, which is quite useful from my point of view. This is new with Oracle 12cR2.

DGMGRL> show database verbose 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Active Apply Rate:  152.00 KByte/s
  Maximum Apply Rate: 152.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    DBIT122
  ...
  ...
The broker shows you the log file locations:

    Alert log               : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log

But unfortunately, the scott user can’t read those files, because there are no read permissions for others and
scott is not part of the oinstall group.

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 trace]$ ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 37787 Dec 13 10:36 drcDBIT122.log
[scott@dbidg01 trace]$ ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 221096 Dec 13 12:04 alert_DBIT122.log

So what possibilities do we have to overcome this issue?

1. We can add user scott to the oinstall group, but then we haven’t gained much security
2. We can set the parameter “_trace_files_public”=true, but when this one is enabled, all Oracle
trace files are world readable, not just the alert and broker log
3. We can configure XFS access control lists, so that user scott gets only the permissions he needs

For security reasons, I decided to go for the last one.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(sysbkp),54324(sysdg),54325(syskm),54326(oper)
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r drcDBIT122.log


oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r-----+ 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl alert_DBIT122.log
# file: alert_DBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl drcDBIT122.log
# file: drcDBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

Cool. Now the scott user is really able to do a lot of DataGuard operation tasks, including some debugging.

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log | grep 'MAXIMUM AVAILABILITY mode' | tail -1
Primary database is in MAXIMUM AVAILABILITY mode

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log |grep "Protection Mode" | tail -1
      Protection Mode:            Maximum Availability

Conclusion

Using XFS ACLs is quite cool if you want to give a specific user permissions on a file, but you don’t want to add him to a group or make all files world readable. But be careful to configure the same ACLs on all other standby nodes as well, and make sure that you use a backup solution which supports ACLs.

For example, using ‘cp’ or ‘cp -p’ makes a huge difference. In one case you lose the ACLs on the copy, in the other case you preserve them. The (+) sign at the end of the file permissions shows the difference.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp alert_DBIT122.log alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp -p alert_DBIT122.log alert_DBIT122.log.b
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.a
-rw-r----- 1 oracle oinstall 312894 Dec 13 14:25 alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.b
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log.b
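
A possible refinement (a sketch, not part of the original setup): since trace files can get recreated, you could also put a default ACL on the trace directory, so that newly created files inherit scott’s read permission automatically:

oracle@dbidg01:/u01/app/oracle/ [DBIT122] setfacl -d -m u:scott:r /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace
oracle@dbidg01:/u01/app/oracle/ [DBIT122] getfacl /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace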
 

Cet article Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations? est apparu en premier sur Blog dbi services.

Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER

Wed, 2016-12-14 01:18

In a DataGuard environment, by default, the password of the SYS user is used to authenticate redo transport sessions when a password file is used. But for security reasons you might not want to use such a highly privileged user only for the redo transmission. To overcome this issue, Oracle has implemented the REDO_TRANSPORT_USER initialization parameter.

The REDO_TRANSPORT_USER specifies the name of the user whose password verifier is used when a remote login password file is used for redo transport authentication.

But take care, the password must be the same at both databases to create a redo transport session, and the value of this parameter is case sensitive and must exactly match the value of the USERNAME column in the V$PWFILE_USERS view.

Besides that, this user must have the SYSDBA or SYSOPER privilege, and we obviously don’t want to grant SYSDBA just for redo transport. For administrative ease, Oracle recommends that the REDO_TRANSPORT_USER parameter be set to the same value on the redo source database and at each redo transport destination. A quick sanity check for these requirements is sketched below.
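
The sketch is nothing more than the two relevant lookups (run them on the primary and on the standby; remember the value is case sensitive):

SQL> show parameter redo_transport_user
SQL> select username from v$pwfile_users;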

Ok. Let’s give it a try. I am creating an user called ‘DBIDG’ which will be used for redo transmission between my primary and standby.

SQL> create user DBIDG identified by manager;

User created.

SQL> grant connect to DBIDG;

Grant succeeded.

SQL> grant sysoper to DBIDG;

Grant succeeded.

Once done, I check the v$pwfile_users to see if my new user ‘DBIDG’ exist.

-- On Primary

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE


-- On Standby
SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

no rows selected

Ok. Like in previous versions of Oracle, I have to copy the password file myself to the destination host to make it work.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] scp -p orapwDBIT122 oracle@dbidg02:$PWD

SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE

 

When connecting with the ‘DBIDG’ user, you almost can’t do anything, not even select from the DBA_TABLESPACES view, for example. From a security perspective, this user is much less of a concern.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] sqlplus dbidg/Manager1@DBIT122_SITE1 as sysoper

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 13 11:08:00 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> desc dba_tablespaces
ERROR:
ORA-04043: object "SYS"."DBA_TABLESPACES" does not exist

Nevertheless, the ‘DBIDG’ user is completely sufficient for my use case. Now that I have my ‘DBIDG’ redo transport user in both password files (primary and standby), I can activate the redo_transport_user feature on primary and standby and check if everything works by doing a switchover and a switchback.

-- On Primary and Standby

SQL> alter system set redo_transport_user='DBIDG';

System altered.


DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 33 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE1"
Switchover succeeded, new primary is "DBIT122_SITE2"

DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE2 - Primary database
    DBIT122_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 71 seconds ago)


DGMGRL> SWITCHOVER TO 'DBIT122_SITE1' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE1"
Connecting ...
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE2"
Switchover succeeded, new primary is "DBIT122_SITE1"

Looks very good so far. But what happens if I have to change the password of the ‘DBIDG’ user?

-- On Primary

SQL> alter user dbidg identified by Manager1;

User altered.

-- On Primary
oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122


-- On Standby
oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122

That’s cool. Passwords on both sites have been updated successfully. They have the same time stamps and even the MD5 checksums are exactly the same. This is because of the new “Automatic Password Propagation to Standby” feature of 12cR2.

Conclusion

REDO_TRANSPORT_USER and “Automatic Password Propagation to Standby” are nice little features from Oracle. The REDO_TRANSPORT_USER parameter has existed for quite a while now, at least since 11gR2; however, “Automatic Password Propagation to Standby” is new with 12cR2.

 

Cet article Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER est apparu en premier sur Blog dbi services.

Upgrade to Enterprise Manager 13.2

Tue, 2016-12-13 08:11

I will describe how to upgrade Enterprise Manager 13.1.0.0 to the new 13.2.0.0 version.

First of all, we have to ensure that the latest PSU is applied on the repository database.

It is mandatory to apply the following patch before upgrading to Enterprise Manager Cloud Control 13.2.0.0:

DATABASE PATCH SET UPDATE 12.1.0.2.160419

To install the patch you have to check the following point:

- you have to use Oracle Interim Patch Installer version 12.2.0.1.8

Once you have downloaded the patch and unzipped it, you can check for potential conflicts:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oracle/u01/app/oracle/software/22899531/22806133
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed. 
OPatch succeeded.

 

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oracle/u01/app/oracle/software/22899531/23006522
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

And finally, you check for the system space available:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /oracle/u01/app/oracle/software/patch_list_dbhome.txt
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.
OPatch succeeded.

Then, once the prerequisites are ok, you stop the repository database and run the classical opatch apply command from the directory where you have unzipped the PSU (a sketch of this sequence follows below).
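
A minimal sketch of that sequence (the patch location is the one used for the conflict checks above; on 12c, don’t forget datapatch afterwards to apply the SQL part of the PSU once the database is up again):

SQL> shutdown immediate
oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] cd /oracle/u01/app/oracle/software/22899531/22806133
oracle@vmCC13c:/oracle/u01/app/oracle/software/22899531/22806133/ [EMREP13C] opatch apply
SQL> startup
oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] $ORACLE_HOME/OPatch/datapatch -verbose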

You finally check the Oracle inventory:

oracle@vmtestoraCC13c:/home/oracle/ [EMREP13C] opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
--------------------------------------------------------------------------------
Local Machine Information:
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Database 12c                                      12.1.0.2.0
There are 1 products installed in this Oracle Home. 
Interim patches (1) : 
Patch  22806133     : applied on Tue Nov 22 11:19:55 CET 2016
Unique Patch ID:  19983161
Patch description:  "DATABASE BUNDLE PATCH: 12.1.0.2.160419 (22806133)"

Secondly you have to disable the optimizer_adaptive_features parameter in the repository database:

SQL> alter system set optimizer_adaptive_features=false scope=both;

System altered.

Then we have to ensure that the tables in the Management Repository do not have any snapshots created:

SQL> select master , log_table from all_mview_logs where log_owner='SYSMAN';
no rows selected

We verify if any login triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';

We verify if any logoff triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';

If we find a trigger enabled, we disable it:

SQL> SELECT trigger_name, owner from sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
 
TRIGGER_NAME         OWNER
-------------------- -----------------
GSMLOGOFF            GSMADMIN_INTERNAL
 
SQL> alter trigger gsmadmin_internal.gsmlogoff disable;
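
Once the upgrade is finished, don’t forget to re-enable the trigger (a sketch, assuming you want to restore the original state):

SQL> alter trigger gsmadmin_internal.gsmlogoff enable;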

Then you have to copy the EMKEY to the repository database:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl config emkey 
-copy_to_repos -sysman_pwd dbi05manager
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
The EMKey has been copied to the Management Repository. 
This operation will cause the EMKey to become unsecure.
After the required operation has been completed, 
secure the EMKey by running "emctl config emkey -remove_from_repos".
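
As the output above suggests, secure the EMKey again once the upgrade has completed (a sketch; the OMS home path and the password placeholder are assumptions):

oracle@vmCC13c:/u03/app/oracle/oms13cr2/bin/ [oms13c] emctl config emkey -remove_from_repos -sysman_pwd <sysman_password>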

Concerning the OMS shutdown, we follow this procedure:

We stop the JVMD and ADP engines explicitly:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms jvmd stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework
 
oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms adp stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework

Then we shut down completely the OMS:

oracle@v333:/home/oracle/ [oms13c] emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping Oracle Management Server...
WebTier Successfully Stopped
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down
Stopping BI Publisher Server...
BI Publisher Server Successfully Stopped
AdminServer Successfully Stopped
BI Publisher Server is Down

It is mandatory to stop the management agent in order to avoid errors during the migration phase:

oracle@vmtestoraCC13c:/home/oracle/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ... stopped.

WATCH OUT: before running the installer, unset the PERLLIB and PERL5LIB variables (if, for example, you have these environment variables defined for your oracle user), otherwise you will get the following error (a workaround sketch follows the error output):

/opt/oracle/Middleware13c/perl/lib/5.10.0/x86_64-linux-thread-multi/auto/Cwd/Cwd.so: 
undefined symbol: Perl_Gthr_key_ptr
ERROR: Unable to continue with the installation 
because some one-off patches could not be applied successfully.
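
The workaround itself is a one-liner (a sketch, run in the same shell that launches the installer):

oracle@v333:/opt/software/ [oms13c] unset PERLLIB PERL5LIB
oracle@v333:/opt/software/ [oms13c] ./em13200_linux64.bin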

Then, once the OMS and the management agent are stopped, you can run:

oracle@v333:/opt/software/ [oms13c] ./em13200_linux64.bin
Checking monitor: must be configured to display at least 256 colors.   
Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 7999 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    
Passed (64-bit not required)
Preparing to launch the Oracle Universal Installer 
from /tmp/OraInstall2016-11-18_10-07-45AM
====Prereq Config Location main===
/tmp/OraInstall2016-11-18_10-07-45AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2016-11-18_10-07-45AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2016-11-18_10-07-45AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_f8wrWz/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/opt/software
EMFileLoc:/tmp/OraInstall2016-11-18_10-07-45AM/oui/em/
ScratchPathValue :/tmp/OraInstall2016-11-18_10-07-45AM

The first installer screen appears:

I chose not to receive security alerts, then I select Next:

I select Next and skip the software updates

We check that the prerequisites are ok

We choose to upgrade an existing Enterprise Manager system, we enter the old Middleware home, and we select Next

We enter the new Middleware home, and we select Next

We enter the repository connection details, the sys and sysman passwords, we confirm that we have a correct repository backup, we disable the DDMP jobs and we select Next

We select Yes to fix the issues

We review the plugins and we select Next

We have the possibility to add plugins we want to deploy while upgrading to EM 13.2

We enter the weblogic username and password and we select Next

I chose not to configure a shared location for Oracle BI Publisher

We choose the default ports and we select Next

We select Upgrade:

The upgrade is running fine:=)

At the end of the upgrade, you have to run the root.sh shell script, connected as root, from the new OMS home:

[root@vmCC13c oms13cr2]# . root.sh
/etc exist
/u03/app/oracle/oms13cr2

Finally you have successfully upgraded to EM 13.2:

Once the OMS is upgraded, you have to upgrade the management agents with the classical procedure. From the EM console you select upgrade agent as follows:

You add the management agent, and you select Submit

You can follow the upgrade phase; finally, you have to run the root.sh script on the management agent and clean up the old agent environment with the EM console as follows:

You select the Post Agent Upgrade Tasks, you select the agent previously migrated:

And you select Submit

The old agent home will be removed; you will just have to adapt your /etc/oratab file to point to the new ORACLE_HOME of your 13.2 agent.

The last phase consists in deleting the old OMS home. As we were on version 13.1, we only have to check that nothing is running with the old environment anymore, and then delete the old home.

oracle@vmCC13c:/u03/app/oracle/oms13cr2/ [oms13c] ps -ef | grep oms | grep -v 13cr2
oracle   28463 27736  0 15:52 pts/5    00:00:00 grep --color=auto oms
 
oracle@vmCC13c:/u03/app/oracle/ [oms13c] ls
agent13c  CC13c_setupinfo.txt  gc_inst  gc_inst1  oms13c  oms13cr2  swlib
oracle@vmCC13c:/u03/app/oracle/ [oms13c] rm -rf oms13c

The new features are listed at the following URL and mainly concern virtualization, Middleware and Cloud Management, plus some new features for incident management (for example, always-on monitoring installed on a different host).

http://docs.oracle.com/cd/E73210_01/EMCON/GUID-503991BC-D1CD-46EC-8373-8423B2D43437.htm#EMCON-GUID-503991BC-D1CD-46EC-8373-8423B2D43437

Finally, the upgrade to EM 13.2 finished in a couple of hours and did not present any errors.

Cet article Upgrade to Enterprise Manager 13.2 est apparu en premier sur Blog dbi services.

Getting started with Docker – 2 – building your own base image

Mon, 2016-12-12 13:30

In the last post we quickly looked at how you can install everything you need to run docker containers on a CentOS 7 host and brought up the official PostgreSQL image. However, there are good reasons not to rely on an existing image: You want to deliver a pre-configured base image which includes everything your application requires, and you want to have control over that image. When this image needs to get updated you can update the base image and then re-deploy it to wherever it is used.

For CentOS there is a script on GitHub which you can use to build your base image. For other distributions you can check the docker documentation. This script basically makes use of the "--installroot" switch of yum, which allows installing packages to another location than the default one.

Using this script we can build a CentOS 7 base image. There are three parameters you can use:

  -p "<packages>"  The list of packages to install in the container.
                   The default is blank.
  -g "<groups>"    The groups of packages to install in the container.
                   The default is "Core".
  -y <yumconf>     The path to the yum config to install packages from. The
                   default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora

We’ll use the “-p” switch to install all the packages which are required to build PostgreSQL from source (this depends on the compile options, of course) and some additional packages which are useful for daily work (such as screen):

[docker@centos7 ~]$ sudo ./mkimage-yum.sh -p "gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel tcl tcl-devel openssh-clients bzip2 net-tools wget screen ksh unzip" centospg

Once done the new image is available locally:

[docker@centos7 ~]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centospg            7.2.1511            184509483d52        40 seconds ago      510.6 MB
postgres            latest              78e3985acac0        2 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB

If you wonder how that image made it into docker, take a look at the last lines of the script:

tar --numeric-owner -c -C "$target" . | docker import - $name:$version

What happened is that a tar file of the temporary chroot directory was created:

[docker@centos7 ~]$ ls -latr /tmp/mkimage-yum.sh.dKHtsq/
total 24
drwxrwxrwt.  2 root root    6 Aug 12  2015 tmp
dr-xr-xr-x.  2 root root    6 Aug 12  2015 sys
drwxr-xr-x.  2 root root    6 Aug 12  2015 srv
drwxr-xr-x.  2 root root    6 Aug 12  2015 run
dr-xr-x---.  2 root root    6 Aug 12  2015 root
dr-xr-xr-x.  2 root root    6 Aug 12  2015 proc
drwxr-xr-x.  2 root root    6 Aug 12  2015 opt
drwxr-xr-x.  2 root root    6 Aug 12  2015 mnt
drwxr-xr-x.  2 root root    6 Aug 12  2015 media
drwxr-xr-x.  2 root root    6 Aug 12  2015 home
drwxr-xr-x.  2 root root 4096 Aug 12  2015 dev
dr-xr-xr-x.  2 root root    6 Aug 12  2015 boot
lrwxrwxrwx.  1 root root    7 Dec 10 15:02 bin -> usr/bin
lrwxrwxrwx.  1 root root    8 Dec 10 15:02 sbin -> usr/sbin
lrwxrwxrwx.  1 root root    9 Dec 10 15:02 lib64 -> usr/lib64
lrwxrwxrwx.  1 root root    7 Dec 10 15:02 lib -> usr/lib
dr-xr-xr-x. 17 root root 4096 Dec 10 15:02 .
drwxr-xr-x. 13 root root 4096 Dec 10 15:02 usr
drwxr-xr-x. 17 root root 4096 Dec 10 15:02 var
drwxr-xr-x. 19 root root 4096 Dec 10 15:02 etc
drwxrwxrwt. 11 root root 4096 Dec 12 11:43 ..

… which then was imported into docker with the “docker import” command.

To test if it really works we can start it and execute bash:

[docker@centos7 ~]$ docker run -it --rm  centospg:7.2.1511 bash
[root@cf690e9d9476 /]$ cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Fine, so far so good. Now we can really begin: We have our own CentOS base image which we want our PostgreSQL image to be based on. How can we create a PostgreSQL image on top of our base image?

We’ll execute the commands necessary first and provide explanations afterwards. So, the first step:

[docker@centos7 ~]$ sudo yum install -y git
[docker@centos7 ~]$ mkdir centospg
[docker@centos7 ~]$ cd centospg
[docker@centos7 centospg]$ git init
Initialized empty Git repository in /home/docker/centospglatest/.git/
[docker@centos7 centospg]$ git config --global user.email "daniel@abc.def"
[docker@centos7 centospg]$ git config --global user.name "Daniel"

Why do we need git? It is not required to use git at all, but you probably want the files which are used to build your containers to be managed by git, so that you can combine the advantages of git with the advantages of docker. It will make more and more sense as we step through all the commands.

What we need to create now is a so-called Dockerfile. This file lists the instructions that Docker will execute to build your image. Let's go:

[docker@centos7 centospg]$ touch Dockerfile
[docker@centos7 centospg]$ git add Dockerfile 
[docker@centos7 centospg]$ git commit -m "initial" Dockerfile 
[master (root-commit) ce3727a] initial
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 Dockerfile

Our very basic Dockerfile will look like this:

# use our CentOS base images as source
FROM centospg:7.2.1511
# set the PostgreSQL we will download
ENV PG_VERSION 9.6.1
# include the PostgreSQL binaries in the PATH
ENV PATH /u01/app/postgres/product/96/db_01/bin:$PATH
# add a postgres group and postgres user
RUN groupadd postgres
RUN useradd -g postgres -m postgres
# prepare the directories
RUN mkdir -p /u01/app/postgres
RUN chown postgres:postgres /u01/app/postgres
# allow sudo for the postgres user
RUN echo 'postgres ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# download, configure, compile and install PostgreSQL from source
USER postgres
RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN cd /var/tmp; tar -axf /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN rm -f /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN cd /var/tmp/postgres*; ./configure --prefix=/u01/app/postgres/product/96/db_01
RUN cd /var/tmp/postgres*; make
RUN cd /var/tmp/postgres*; make install
# cleanup
RUN rm -rf /var/tmp/postgres*

Using this Dockerfile we can build our PostgreSQL image:

[docker@centos7 centospg]$ docker build -t centospg:PG9.6.1 .

The output of this is quite long, here is a snippet:

Sending build context to Docker daemon 45.06 kB
Step 1 : FROM centospg:7.2.1511
 ---> 184509483d52
Step 2 : ENV PG_VERSION 9.6.1
 ---> Running in 054900c7ebe1
 ---> 866815b9f092
Removing intermediate container 054900c7ebe1
Step 3 : ENV PATH /u01/app/postgres/product/96/db_01/bin:$PATH
 ---> Running in 46bcf7667a06
 ---> 94c9adb0402b
Removing intermediate container 46bcf7667a06
Step 4 : RUN groupadd postgres
 ---> Running in 24a7d9b7a1ea
 ---> eb4ff8268e2e
Removing intermediate container 24a7d9b7a1ea
Step 5 : RUN useradd -g postgres -m postgres
 ---> Running in 3e09b556fed8
 ---> acff1dcf2d4c
Removing intermediate container 3e09b556fed8
Step 6 : RUN mkdir -p /u01/app/postgres
 ---> Running in 999a62d075c0
 ---> fa4bdfa74d31
Removing intermediate container 999a62d075c0
Step 7 : RUN chown postgres:postgres /u01/app/postgres
 ---> Running in 37773e484260
 ---> 668c491b534b
Removing intermediate container 37773e484260
Step 8 : RUN echo 'postgres ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
 ---> Running in bb9cbfd20623
 ---> 589959efbda5
Removing intermediate container bb9cbfd20623
Step 9 : USER postgres
 ---> Running in f70b8c70c3fc
 ---> 32d3d3d603d2
Removing intermediate container f70b8c70c3fc
Step 10 : RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2
 ---> Running in c5cc11840a15
--2016-12-12 12:43:05--  https://ftp.postgresql.org/pub/source/v9.6.1/postgresql-9.6.1.tar.bz2
Resolving ftp.postgresql.org (ftp.postgresql.org)... 174.143.35.246, 217.196.149.55, 87.238.57.227, ...
Connecting to ftp.postgresql.org (ftp.postgresql.org)|174.143.35.246|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19260568 (18M) [application/x-bzip]
Saving to: '/var/tmp/postgresql-9.6.1.tar.bz2'

     0K .......... .......... .......... .......... ..........  0%  180K 1m44s
    50K .......... .......... .......... .......... ..........  0%  368K 77s
...
 ---> 645cf59717f4
Removing intermediate container c5cc11840a15
Step 11 : RUN cd /var/tmp; tar -axf /var/tmp/postgresql-${PG_VERSION}.tar.bz2
 ---> Running in 6a47968ddeb5
...
# PostgreSQL configure, make, make install
...
PostgreSQL installation complete.
 ---> 7f6b11b357d7
Removing intermediate container 041441816c4d
Step 16 : RUN rm -rf /var/tmp/postgres*
 ---> Running in 480cc2157b9a
 ---> a7b0bf1d1c35
Removing intermediate container 480cc2157b9a
Successfully built a7b0bf1d1c35

Once all of that completed we have a new Docker image:

[docker@centos7 centospg]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centospg            PG9.6.1             a7b0bf1d1c35        45 seconds ago      706.7 MB
centospg            7.2.1511            184509483d52        47 hours ago        510.6 MB
postgres            latest              78e3985acac0        4 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB

Using this image we can now bring up a container which complies with all the standards we built into the base image:

[docker@centos7 centospg]$ docker run -it centospg:PG9.6.1 bash
[postgres@7ac7780b0b1b /]$ which initdb
/u01/app/postgres/product/96/db_01/bin/initdb
[postgres@7ac7780b0b1b /]$ initdb --version
initdb (PostgreSQL) 9.6.1

Finally we commit our Dockerfile:

[docker@centos7 centospg]$ git commit -m "First working version" Dockerfile 
[master f0ba897] First working version
 1 file changed, 25 insertions(+)

Something to start with, isn’t it?

Note: The Docker best practices tell you to add only a few instructions to a Dockerfile because every instruction creates a new layer (a sketch of what this means for the Dockerfile above follows below). In general you should run only one service per image. This makes it easier to scale your containers.
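
As a sketch (same PG_VERSION and paths as above; the behavior is unchanged, you just end up with one layer instead of seven), the download/build/cleanup steps could be collapsed into a single RUN instruction:

# one layer instead of seven
RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2 \
 && cd /var/tmp && tar -axf postgresql-${PG_VERSION}.tar.bz2 \
 && cd postgresql-${PG_VERSION} \
 && ./configure --prefix=/u01/app/postgres/product/96/db_01 \
 && make && make install \
 && cd / && rm -rf /var/tmp/postgres*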

 

Cet article Getting started with Docker – 2 – building your own base image est apparu en premier sur Blog dbi services.

Getting started with Docker – 1 – overview and installation

Sun, 2016-12-11 04:30

Everybody is talking about Docker nowadays. What is it about? Do you remember Solaris Zones or Containers? It is more or less the same, although the development of Docker during the last years made Linux containers the de-facto standard for deploying applications in a standardized and isolated way. Docker is built in a classical client/server model: There is the docker server (or daemon) which serves the requests of docker clients. The client is the one you'll use to tell the server what you want to do. The main difference from the classical client/server model is that docker uses the same binary for the server as well as for the client. It is just a matter of how you invoke the docker binary that makes it a server or client application. In contrast to Solaris Zones, Docker containers are stateless by default, that means: when a docker container is destroyed, you'll lose everything that happened inside it since it was started (although there are ways to avoid that; one is sketched below). This is important to remember.
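
One of those ways, as a minimal sketch (the container name and volume name are made up; the data path is the one the official postgres image uses, as we will see later in this post):

docker run --name my-postgres -e POSTGRES_PASSWORD=postgres -v pgdata:/var/lib/postgresql/data -d postgres

The database files then live in the named volume "pgdata" and survive the removal of the container.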

When you start a docker container on a host, the host's resources are shared with the container (although you can limit that). It is not like firing up a virtual machine (which brings up an instance of a whole operating system) but more like a process that shares resources with the host it is running on. This might be as simple as running a "wget" command, but it might be as complicated as bringing up a whole infrastructure that serves your service desk. Docker containers should be lightweight.

So what makes docker unique then? It is the concept of a layered filesystem. We'll come to that soon. Let's start by installing everything we need to run a docker daemon. As always, we'll start with a CentOS 7 minimal installation:

[root@centos7 ~]$ cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core) 
[root@centos7 ~]$ 

The easiest way to get docker installed is to add the official docker yum repository (for CentOS in this case):

[root@centos7 ~]$ echo "[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg" > /etc/yum.repos.d/docker.repo

Working directly as root is never a good idea, so let's create a user for that and let this user do everything via sudo (not a good practice, I know :) ):

[root@centos7 ~]$ groupadd docker
[root@centos7 ~]$ useradd -g docker docker
[root@centos7 ~]$ passwd docker
[root@centos7 ~]$ echo "docker ALL=(ALL)   NOPASSWD: ALL" >> /etc/sudoers
[root@centos7 ~]$ su - docker
[docker@centos7 ~]$ sudo ls

Ready to install:

[docker@centos7 ~]$ sudo yum install docker-engine

This will install the docker engine and these additional packages:

======================================================================================================================================
 Package                                Arch                   Version                               Repository                  Size
======================================================================================================================================
Installing:
 docker-engine                          x86_64                 1.12.3-1.el7.centos                   dockerrepo                  19 M
Installing for dependencies:
 audit-libs-python                      x86_64                 2.4.1-5.el7                           base                        69 k
 checkpolicy                            x86_64                 2.1.12-6.el7                          base                       247 k
 docker-engine-selinux                  noarch                 1.12.3-1.el7.centos                   dockerrepo                  28 k
 libcgroup                              x86_64                 0.41-8.el7                            base                        64 k
 libseccomp                             x86_64                 2.2.1-1.el7                           base                        49 k
 libsemanage-python                     x86_64                 2.1.10-18.el7                         base                        94 k
 libtool-ltdl                           x86_64                 2.4.2-21.el7_2                        updates                     49 k
 policycoreutils-python                 x86_64                 2.2.5-20.el7                          base                       435 k
 python-IPy                             noarch                 0.75-6.el7                            base                        32 k
 setools-libs                           x86_64                 3.3.7-46.el7                          base                       485 k

Transaction Summary
======================================================================================================================================

Enable the service:

[docker@centos7 ~]$ sudo systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Start the service:

[docker@centos7 ~]$ sudo systemctl start docker
[docker@centos7 ~]$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2016-12-10 12:26:46 CET; 6s ago
     Docs: https://docs.docker.com
 Main PID: 2957 (dockerd)
   Memory: 12.9M
   CGroup: /system.slice/docker.service
           ├─2957 /usr/bin/dockerd
           └─2960 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --...

Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481380483+01:00" level=info msg="Graph migration to co...conds"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481751429+01:00" level=warning msg="mountpoint for pid...found"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481751451+01:00" level=info msg="Loading containers: start."
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.574330143+01:00" level=info msg="Firewalld running: false"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.822997195+01:00" level=info msg="Default bridge (docke...dress"
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.201798804+01:00" level=info msg="Loading containers: done."
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.201984648+01:00" level=info msg="Daemon has completed ...ation"
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.202003760+01:00" level=info msg="Docker daemon" commit...1.12.3
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.207416263+01:00" level=info msg="API listen on /var/ru....sock"
Dec 10 12:26:46 centos7.local systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

And we're done. Let's check if docker is working as expected:

[docker@centos7 ~]$ sudo docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:

https://hub.docker.com

For more examples and ideas, visit:

https://docs.docker.com/engine/userguide/

What happened here is that we already executed our first docker image: "hello-world". The "--rm" flag tells docker to automatically remove the container once it exits. As the image was not available on our host it was automatically downloaded from the docker hub:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

You can browse the docker hub for many, many other images using your favorite browser or you can use the command line:

[docker@centos7 ~]$ docker search postgres
NAME                      DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
postgres                  The PostgreSQL object-relational database ...   2939                 [OK]       
kiasaki/alpine-postgres   PostgreSQL docker image based on Alpine Linux   28                   [OK]
abevoelker/postgres       Postgres 9.3 + WAL-E + PL/V8 and PL/Python...   10                   [OK]
onjin/alpine-postgres     PostgreSQL / v9.1 - v9.6 / size <  50MB      9                    [OK]
macadmins/postgres        Postgres that accepts remote connections b...   8                    [OK]
jamesbrink/postgres       Highly configurable PostgreSQL container.       5                    [OK]
eeacms/postgres           Docker image for PostgreSQL (RelStorage re...   4                    [OK]
cptactionhank/postgres                                                    4                    [OK]
azukiapp/postgres         Docker image to run PostgreSQL by Azuki - ...   2                    [OK]
kampka/postgres           A postgresql image build on top of an arch...   2                    [OK]
clkao/postgres-plv8       Docker image for running PLV8 1.4 on Postg...   2                    [OK]
2020ip/postgres           Docker image for PostgreSQL with PLV8           1                    [OK]
steenzout/postgres        Steenzout's docker image packaging for Pos...   1                    [OK]
blacklabelops/postgres    Postgres Image for Atlassian Applications       1                    [OK]
buker/postgres            postgres                                        0                    [OK]
kobotoolbox/postgres      Postgres image for KoBo Toolbox.                0                    [OK]
vrtsystems/postgres       PostgreSQL image with added init hooks, bu...   0                    [OK]
timbira/postgres          Postgres  containers                            0                    [OK]
coreroller/postgres       official postgres:9.4 image but it adds 2 ...   0                    [OK]
livingdocs/postgres       Postgres v9.3 with the plv8 extension inst...   0                    [OK]
1maa/postgres             PostgreSQL base image                           0                    [OK]
opencog/postgres          This is a configured postgres database for...   0                    [OK]
khipu/postgres            postgres with custom uids                       0                    [OK]
travix/postgres           A container to run the PostgreSQL database.     0                    [OK]
beorc/postgres            Ubuntu-based PostgreSQL server                  0                    [OK]

The first one is the official PostgreSQL image. How do I run it?

[docker@centos7 ~]$ docker run -it postgres
Unable to find image 'postgres:latest' locally
latest: Pulling from library/postgres
386a066cd84a: Pull complete 
e6dd80b38d38: Pull complete 
9cd706823821: Pull complete 
40c17ac202a9: Pull complete 
7380b383ba3d: Pull complete 
538e418b46ce: Pull complete 
c3b9d41b7758: Pull complete 
dd4f9522dd30: Pull complete 
920e548f9635: Pull complete 
628af7ef2ee5: Pull complete 
004275e6f5b5: Pull complete 
Digest: sha256:e761829c4b5ec27a0798a867e5929049f4cbf243a364c81cad07e4b7ac2df3f1
Status: Downloaded newer image for postgres:latest
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

****************************************************
WARNING: No password has been set for the database.
         This will allow anyone with access to the
         Postgres port to access your database. In
         Docker's default configuration, this is
         effectively any other container on the same
         system.

         Use "-e POSTGRES_PASSWORD=password" to set
         it in "docker run".
****************************************************
waiting for server to start....LOG:  database system was shut down at 2016-12-10 11:42:01 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
 done
server started
ALTER ROLE


/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

LOG:  database system was shut down at 2016-12-10 11:42:04 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

And ready. As with the "hello-world" image, docker had to download the image as it was not available locally. Once that was done the image was started and a new PostgreSQL instance was created automatically. Here you can see what the layered filesystem is about:

386a066cd84a: Pull complete 
e6dd80b38d38: Pull complete 
9cd706823821: Pull complete 
40c17ac202a9: Pull complete 
7380b383ba3d: Pull complete 
538e418b46ce: Pull complete 
c3b9d41b7758: Pull complete 
dd4f9522dd30: Pull complete 
920e548f9635: Pull complete 
628af7ef2ee5: Pull complete 
004275e6f5b5: Pull complete 

Each of these lines represents a layered/stacked filesystem on top of the previous one. This is an important concept because when you change things, only the layer that contains the change needs to be rebuilt, not the layers below. In other words you could build an image based on a CentOS 7 image and then deploy your changes on top of that. You deliver that image and some time later you need to make some modifications: the only thing you need to deliver are the modifications you made, because the layers below did not change.

You will notice that you cannot type any command once the container has started. As soon as you enter “CTRL-C” the container will shut down (this is because of the “-it” switch, which stands for “interactive” and “pseudo terminal”):

^CLOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down

Everything that happened inside the container is now gone. The correct way to launch it is:

[docker@centos7 ~]$ docker run --name my-first-postgres -e POSTGRES_PASSWORD=postgres -d postgres
d51abc52108d3040817474fa8c85ab15020c12cb753515543c2d064143277155

The “-d” switch tells docker to detach, so we get our shell back. The magic string docker returns is the container id:

[docker@centos7 ~]$ docker ps --no-trunc
CONTAINER ID                                                       IMAGE               COMMAND                            CREATED             STATUS              PORTS               NAMES
d51abc52108d3040817474fa8c85ab15020c12cb753515543c2d064143277155   postgres            "/docker-entrypoint.sh postgres"   3 minutes ago       Up 3 minutes        5432/tcp            my-first-postgres
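As the container now runs in the background, the PostgreSQL startup messages we saw earlier are no longer printed to our terminal. But docker keeps them, and we can display or follow them at any time:

[docker@centos7 ~]$ docker logs my-first-postgres      # print everything logged so far
[docker@centos7 ~]$ docker logs -f my-first-postgres   # follow the log, like tail -f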

When you want to know what images you have available locally you can ask docker for that:

[docker@centos7 ~]$ docker images 
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
postgres            latest              78e3985acac0        2 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB

How do you now connect to this running PostgreSQL container?

[docker@centos7 ~]$ docker run -it --rm --link my-first-postgres:postgres postgres psql -h postgres -U postgres
Password for user postgres: 
psql (9.6.1)
Type "help" for help.

postgres=# \l+
                                                                   List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   |  Size   | Tablespace |                Description                 
-----------+----------+----------+------------+------------+-----------------------+---------+------------+--------------------------------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |                       | 7063 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +| 6953 kB | pg_default | unmodifiable empty database
           |          |          |            |            | postgres=CTc/postgres |         |            | 
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +| 6953 kB | pg_default | default template for new databases
           |          |          |            |            | postgres=CTc/postgres |         |            | 
(3 rows)

Or to get bash:

[docker@centos7 ~]$ docker run -it --rm --link my-first-postgres:postgres postgres bash
root@f8c3b3738336:/$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Ok, this PostgreSQL image is based on Debian 8. Let's say this is not what I like because I want my PostgreSQL image based on CentOS. This is the topic for the next post: we'll build our own CentOS image and dig deeper into what the stacked filesystem is about. Once we have that available we'll use that image to build a PostgreSQL image on top of it.

 

Cet article Getting started with Docker – 1 – overview and installation est apparu en premier sur Blog dbi services.

Live from SQL Saturday Slovenia 2016!

Sat, 2016-12-10 04:18

SQL Saturday

After a little trip, just a 1-hour flight from Zürich to Ljubljana yesterday, SQL Saturday Slovenia 2016 began this morning at the Faculty of Computer and Information Science of the University of Ljubljana.

IMG_3774

I needed to wake up very fast because my session was the first of the day at 9:00 AM.

IMG_3775

I was also very happy to meet and share my expertise with Slovenian and other SQL Server experts.

My session was about the famous ErrorLog.

As DBAs, the ErrorLog is an essential tool in our daily life.

Learning and understanding its content is only part of the job; we also have to manage it to get a better interpretation.

This session answers questions that you perhaps never asked yourself:

  • What are ErrorLog files?
  • Where are ErrorLog files?
  • Need I manage the ErrorLog?
  • How to read and understand?
  • How to write in the ErrorLog?

IMG_3776

I hope that people learned something in this session!

You can download the presentation here.

I want to thank all the organizers of this very nice event, and a special thanks to Vedran for these photos!

Now, I will also go and see the other sessions. ;-)

I give you « rendez-vous » at the IT-Tage on Tuesday for this session in German!

 

Cet article Live from SQL Saturday Slovenia 2016! est apparu en premier sur Blog dbi services.

Oracle 12c – RMAN and Unified Auditing – Does it really work?

Fri, 2016-12-09 09:44

The new Oracle Unified Auditing feature audits RMAN operations by default as soon as you relink your Oracle binary and start your instance. A quite cool new feature, because it allows me to audit RMAN operations out of the box. For example, someone could create an RMAN backup to ‘/tmp’ and then copy it somewhere else. And I would like to know that. ;-)

Oracle added 5 columns to the unified_audit_trail view just for RMAN, to find out which RMAN operations were performed on the database. The new columns are the following:

  • RMAN_SESSION_RECID
  • RMAN_SESSION_STAMP
  • RMAN_OPERATION
  • RMAN_OBJECT_TYPE
  • RMAN_DEVICE_TYPE

According to the Oracle documentation, the column descriptions are the following.

RMAN_SESSION_RECID

RMAN session identifier. Together with RMAN_SESSION_STAMP uniquely identifies an RMAN job
(note that this is not the same as the user session ID; the value is a recid in the controlfile that identifies the RMAN job)

RMAN_SESSION_STAMP

Timestamp for the session

RMAN_OPERATION

The RMAN operation executed by the job. One row will be added for each distinct operation within an RMAN session. For example, a backup job would contain BACKUP in the RMAN_OPERATION column.

RMAN_OBJECT_TYPE

Type of objects involved in backup, restore/recover, or change/delete/crosscheck commands. It contains one of the following values. If the RMAN command does not match exactly one of them, then preference is given in order, from top to bottom of the list:

  • DB FULL
  • RECVR AREA
  • DB INCR
  • DATAFILE FULL
  • DATAFILE INCR
  • ARCHIVELOG
  • CONTROLFILE
  • SPFILE

RMAN_DEVICE_TYPE

Device involved in the RMAN job. It may be DISK, SBT_TAPE, or * (an * indicates that more than one location is involved). For a backup job, it will be the output device type. For other commands (such as restore or crosscheck), it will be the input device type.

Ok. Let's start with a first test. Just for the record, I am using 12cR1 with the October 2016 PSU here.

First of all, I am activating the “immediate-write” mode, meaning that audit records are written immediately and not to the audit buffer first.

SQL> select parameter, value from v$option where parameter like '%Unified%';

PARAMETER              VALUE
---------------------- --------
Unified Auditing       TRUE

-- Modify OUA to use the immediate-write mode

SQL> BEGIN
  2  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
  3  DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
  4  DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE,
  5  DBMS_AUDIT_MGMT.AUDIT_TRAIL_IMMEDIATE_WRITE);
  6  END;
  7  /

PL/SQL procedure successfully completed.

SQL> select * from DBA_AUDIT_MGMT_CONFIG_PARAMS where PARAMETER_NAME = 'AUDIT WRITE MODE';

PARAMETER_NAME                   PARAMETER_VALUE        AUDIT_TRAIL
-------------------------------- ---------------------- ----------------------------
AUDIT WRITE MODE                 IMMEDIATE WRITE MODE   UNIFIED AUDIT TRAIL

 

Ok. Cool. So far so good. Let's start an RMAN backup job now.

 

oracle@dbidg01:/home/oracle/ [DBIT121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Dec 9 15:59:44 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)

RMAN> backup database plus archivelog delete input;
Starting backup at 09-DEC-2016 16:03:41
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=22 RECID=6 STAMP=930153822
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:03:43
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:03:44
piece handle=/u03/fast_recovery_area/DBIT121_SITE1/backupset/2016_12_09/o1_mf_annnn_TAG20161209T160342_d4okyh1t_.bkp tag=TAG20161209T160342 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2016_12_09/o1_mf_1_22_d4okyfo5_.arc RECID=6 STAMP=930153822
Finished backup at 09-DEC-2016 16:03:44

Starting backup at 09-DEC-2016 16:03:44
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_example_d4fjz1fz_.dbf
input datafile file number=00001 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_system_d4fjt03j_.dbf
input datafile file number=00003 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_sysaux_d4fjrlvs_.dbf
input datafile file number=00004 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_undotbs1_d4fjvtd1_.dbf
input datafile file number=00006 name=/u02/oradata/DBIT121_SITE1/datafile/o1_mf_users_d4fjvqb1_.dbf
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:03:44
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:05:19
piece handle=/u03/fast_recovery_area/DBIT121_SITE1/backupset/2016_12_09/o1_mf_nnndf_TAG20161209T160344_d4okyjny_.bkp tag=TAG20161209T160344 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:35
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:05:20
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:05:21
piece handle=/u03/fast_recovery_area/DBIT121_SITE1/backupset/2016_12_09/o1_mf_ncsnf_TAG20161209T160344_d4ol1jnj_.bkp tag=TAG20161209T160344 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-DEC-2016 16:05:21

Starting backup at 09-DEC-2016 16:05:21
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=23 RECID=7 STAMP=930153921
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:05:21
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:05:23
piece handle=/u03/fast_recovery_area/DBIT121_SITE1/backupset/2016_12_09/o1_mf_annnn_TAG20161209T160521_d4ol1ktz_.bkp tag=TAG20161209T160521 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2016_12_09/o1_mf_1_23_d4ol1kpj_.arc RECID=7 STAMP=930153921
Finished backup at 09-DEC-2016 16:05:23

RMAN>

 

After my RMAN backup had finished, I opened another session and checked the unified_audit_trail, but nothing was there.

SQL> select EVENT_TIMESTAMP, ACTION_NAME, RMAN_SESSION_RECID,
  2  RMAN_SESSION_STAMP, RMAN_OPERATION, RMAN_OBJECT_TYPE, RMAN_DEVICE_TYPE
  3  from unified_audit_trail where ACTION_NAME like '%RMAN%' order by 1;

no rows selected

Now I do a clean exit of my RMAN session, and here we go: now I have an audit entry saying that an RMAN backup to disk took place. Perfect, this is exactly what I wanted to see.

...
RMAN> exit

Recovery Manager complete.


SQL> select EVENT_TIMESTAMP, ACTION_NAME, RMAN_SESSION_RECID,
  2  RMAN_SESSION_STAMP, RMAN_OPERATION, RMAN_OBJECT_TYPE, RMAN_DEVICE_TYPE
  3  from unified_audit_trail where ACTION_NAME like '%RMAN%' order by 1;

EVENT_TIMESTAMP              ACTION_NAME    RMAN_SESSION_RECID RMAN_SESSION_STAMP RMAN_OPERATION       RMAN_OBJECT_TYPE     RMAN_
---------------------------- -------------- ------------------ ------------------ -------------------- -------------------- -----
09-DEC-16 04.08.10.532931 PM RMAN ACTION                    22          930153584 Backup               DB Full              Disk

 

This brings me to an idea. What happens if a hacker logs into my system, starts an RMAN backup, and kills his own RMAN session after the backup has finished? Sounds crazy, but hackers are usually very creative.

Ok. The hacker logs in now, and because he is smart, he gives his RMAN backup a TAG, so it is easier to delete it afterwards.

oracle@dbidg01:/home/oracle/ [DBIT121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Dec 9 16:09:58 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)

RMAN> alter system archive log current;

using target database control file instead of recovery catalog
Statement processed

RMAN> backup archivelog all format '/tmp/%U' TAG 'HACKER';

Starting backup at 09-DEC-2016 16:11:58
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=24 RECID=8 STAMP=930154279
input archived log thread=1 sequence=25 RECID=9 STAMP=930154318
channel ORA_DISK_1: starting piece 1 at 09-DEC-2016 16:11:59
channel ORA_DISK_1: finished piece 1 at 09-DEC-2016 16:12:00
piece handle=/tmp/0ern21qf_1_1 tag=HACKER comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-DEC-2016 16:12:00

 

At this point, still no further entry in the unified_audit_trail.

SQL> r
  1  select EVENT_TIMESTAMP, ACTION_NAME, RMAN_SESSION_RECID,
  2  RMAN_SESSION_STAMP, RMAN_OPERATION, RMAN_OBJECT_TYPE, RMAN_DEVICE_TYPE
  3* from unified_audit_trail where ACTION_NAME like '%RMAN%' order by 1

EVENT_TIMESTAMP              ACTION_NAME    RMAN_SESSION_RECID RMAN_SESSION_STAMP RMAN_OPERATION       RMAN_OBJECT_TYPE     RMAN_
---------------------------- -------------- ------------------ ------------------ -------------------- -------------------- -----
09-DEC-16 04.08.10.532931 PM RMAN ACTION                    22          930153584 Backup               DB Full              Disk

Meanwhile, the hacker copies the data away, and because he is a good boy, he cleans up everything afterwards. :-)

RMAN> delete noprompt backuppiece tag=HACKER;

using channel ORA_DISK_1

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
14      14      1   1   AVAILABLE   DISK        /tmp/0ern21qf_1_1
deleted backup piece
backup piece handle=/tmp/0ern21qf_1_1 RECID=14 STAMP=930154319
Deleted 1 objects

At the moment, there is still nothing new in the unified_audit_trail. Now, to avoid any entries in the unified_audit_trail view, the hacker kills his own session.

oracle@dbidg01:/tmp/ [DBIT121] ps -ef | grep rman | grep -v grep
oracle    8829  2839  0 16:09 pts/1    00:00:00 rman target /
oracle@dbidg01:/tmp/ [DBIT121] kill -9 8829


...
RMAN> Killed

And now the one-million-dollar question… do we have a new entry or not?

SQL> r
  1  select EVENT_TIMESTAMP, ACTION_NAME, RMAN_SESSION_RECID,
  2  RMAN_SESSION_STAMP, RMAN_OPERATION, RMAN_OBJECT_TYPE, RMAN_DEVICE_TYPE
  3* from unified_audit_trail where ACTION_NAME like '%RMAN%' order by 1

EVENT_TIMESTAMP              ACTION_NAME    RMAN_SESSION_RECID RMAN_SESSION_STAMP RMAN_OPERATION       RMAN_OBJECT_TYPE     RMAN_
---------------------------- -------------- ------------------ ------------------ -------------------- -------------------- -----
09-DEC-16 04.08.10.532931 PM RMAN ACTION                    22          930153584 Backup               DB Full              Disk

No, no new entry. This entry is still the one from my regular RMAN backup with the clean exit.

Conclusion

Don't rely too much on the unified_audit_trail records if you want to audit RMAN backups: as we just saw, the audit entry is only written on a clean exit of the RMAN session, so a killed session leaves no trace.
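One possible mitigation, as a sketch: the controlfile still knows about every RMAN job through v$rman_backup_job_details, so you could periodically cross-check it against the audit trail and alert on jobs that left no audit record. Note that matching on recid/stamp is my assumption, based on the column descriptions above:

oracle@dbidg01:/home/oracle/ [DBIT121] sqlplus -s / as sysdba <<'EOF'
-- RMAN jobs recorded in the controlfile with no matching unified audit entry
select j.session_recid, j.session_stamp, j.start_time, j.input_type, j.status
  from v$rman_backup_job_details j
 where not exists (select 1
                     from unified_audit_trail u
                    where u.rman_session_recid = j.session_recid
                      and u.rman_session_stamp = j.session_stamp)
 order by j.start_time;
EOF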

Cheers,
William

 

 

Cet article Oracle 12c – RMAN and Unified Auditing – Does it really work? est apparu en premier sur Blog dbi services.

EnterpriseDB Backup and Recovery Tool (BART)

Thu, 2016-12-08 14:21

Many tools are available for backup and recovery of PostgreSQL databases. EnterpriseDB has developed a product named BART (Backup and Recovery Tool). Using this product requires a subscription, but you can download a trial version for testing purposes.
In this blog we are using BART with PostgreSQL 9.6.1. BART allows backing up over the network and can centralize the backups of many PostgreSQL servers.
We are using the following configuration:
Bart Host: pgserver.localdomain
PostgreSQL server 1: pgserve1.localdomain
PostgreSQL server 2: pgserve2.localdomain
We suppose that there are clusters running on pgserver1 and pgserver2 and that archiving is enabled for both clusters.
What is needed on the server hosting BART (i.e. pgserver in our case)?
1. We need to install the rpm package using the yum or rpm command.
2. We need the PostgreSQL binaries installed. Note that the version of the binaries should be at least the same as the one installed on the servers to be backed up. In our case, as we are using 9.6.1 for the databases, the binaries should be 9.6.1 or higher. Note that we don't need any cluster running on pgserver.
For the package installation, we are using yum.

[root@pgserver 96]# yum localinstall edb-bart-1.1.0-1.rhel7.x86_64.rpm

By default BART is installed in /usr/edb-bart-1.1.

[root@pgserver 96]# cd /usr/edb-bart-1.1/
[root@pgserver edb-bart-1.1]# ls
bart_license.txt bin etc
[root@pgserver edb-bart-1.1]# ls -l
total 16
-rw-r--r--. 1 root root 15272 Jul 3 2015 bart_license.txt
drwxr-xr-x. 2 root root 17 Dec 6 14:15 bin
drwxr-xr-x. 2 root root 21 Dec 6 14:15 etc
[root@pgserver edb-bart-1.1]#

Before editing the BART configuration file, SSH connectivity must be configured between the servers for the Linux user used for backups; here we are using postgres.

[postgres@pgserver ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/postgres/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/postgres/.ssh/id_rsa.
Your public key has been saved in /home/postgres/.ssh/id_rsa.pub.
The key fingerprint is:
b9:fb:1a:90:c2:76:73:6e:fe:28:38:cf:42:52:7a:81 postgres@pgserver.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
| |
| |
| . |
| E.o . . |
| o+.= S |
| o.oo = . |
| + . = |
| +..o + |
| +o.=+o |
+-----------------+
[postgres@pgserver ~]$ ssh-copy-id -i .ssh/id_rsa.pub pgserver1
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
postgres@pgserver1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'pgserver1'"
and check to make sure that only the key(s) you wanted were added.
[postgres@pgserver ~]$ ssh-copy-id -i .ssh/id_rsa.pub pgserver2
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
postgres@pgserver2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'pgserver2'"
and check to make sure that only the key(s) you wanted were added.
[postgres@pgserver ~]$


[postgres@pgserver ~]$ ssh pgserver1 date
Tue Dec 6 14:24:35 CET 2016
[postgres@pgserver ~]$ ssh pgserver2 date
Tue Dec 6 14:24:40 CET 2016
[postgres@pgserver ~]$

We also need a superuser created on each cluster to be backed up:

postgres=# CREATE ROLE backupuser WITH LOGIN SUPERUSER PASSWORD 'root';
CREATE ROLE
postgres=#

And we create a .pgpass file for this user so that it can connect without being asked for a password:

[postgres@pgserver1 ~]$ echo "*:5432:*:backupuser:root" >> ~/.pgpass
[postgres@pgserver1 ~]$ chmod 600 .pgpass

[postgres@pgserver2 ~]$ echo "*:5432:*:backupuser:root" >> ~/.pgpass
[postgres@pgserver2 ~]$ chmod 600 .pgpass

The configuration file is located under $BART_INSTALL_DIRECTORY/etc. It must contain a section for BART itself and a section for each server to be backed up. Below is an example of our bart.cfg:

[root@pgserver etc]# pwd
/usr/edb-bart-1.1/etc


[root@pgserver etc]# cat bart.cfg | grep -v ^#
[BART]
bart-host = postgres@192.168.56.35
backup_path = /u01/app/backup
pg_basebackup_path = /u01/app/PostgreSQL/9.6/bin/pg_basebackup
logfile = /tmp/bart.log
xlog-method = fetch
retention_policy = 1 DAYS
wal_compression = enabled

[PGSERVER1]
host = 192.168.56.36
port = 5432
user = backupuser
backup-name = PGSERVER1_%year-%month-%dayT%hour:%minute
remote-host = postgres@pgserver1
description = "Postgres server1"

[PGSERVER2]
host = 192.168.56.37
port = 5432
user = backupuser
backup-name = PGSERVER2_%year-%month-%dayT%hour:%minute
remote-host = postgres@pgserver2
description = "Postgres server2"
[root@pgserver etc]#

The backup_path /u01/app/backup is only needed on the BART host (pgserver) as the backup will be done over the network. But on both servers pgserver1 and pgserver2 the pg_hba.conf needs to be updated to allow connections from the BART host. Below are the two lines we add:

[postgres@pgserver1 data]$ grep 192.168.56.0 pg_hba.conf
host all all 192.168.56.0/24 md5
host replication backupuser 192.168.56.0/24 md5
[postgres@pgserver1 data]$


[postgres@pgserver2 data]$ grep 192.168.56.0 pg_hba.conf
host all all 192.168.56.0/24 md5
host replication backupuser 192.168.56.0/24 md5
[postgres@pgserver2 data]$

Now we can initialize the catalog, first for pgserver1 and then for pgserver2 (note that this can be done for both servers in a single command):

[postgres@pgserver bin]$ ./bart -d -c /usr/edb-bart-1.1/etc/bart.cfg init -s PGSERVER1 -o
DEBUG: Server: Global, Now: 2016-12-06 15:51:04 CET, RetentionWindow: 86400 (secs) ==> 24 hour(s)
INFO: setting archive_command for server 'pgserver1'
DEBUG: SHOW archive_command: test ! -f /u01/app/archive/%f && cp %p /u01/app/archive/%f
DEBUG: archive command set to: ALTER SYSTEM SET archive_command TO 'scp %p postgres@192.168.56.35:/u01/app/backup/pgserver1/archived_wals/%f'
WARNING: archive_command is set. server restart is required
[postgres@pgserver bin]$


[postgres@pgserver bin]$ ./bart -d -c /usr/edb-bart-1.1/etc/bart.cfg init -s PGSERVER2 -o
DEBUG: Server: Global, Now: 2016-12-06 19:01:08 CET, RetentionWindow: 86400 (secs) ==> 24 hour(s)
INFO: setting archive_command for server 'pgserver2'
DEBUG: SHOW archive_command: test ! -f /u01/app/archive/%f && cp %p /u01/app/archive/%f
DEBUG: archive command set to: ALTER SYSTEM SET archive_command TO 'scp %p postgres@192.168.56.35:/u01/app/backup/pgserver2/archived_wals/%f'
WARNING: archive_command is set. server restart is required
[postgres@pgserver bin]$
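Note the WARNING in the output above: the init command sets archive_command via ALTER SYSTEM, so both clusters have to be restarted before the new setting becomes active. Afterwards it can be verified, for example on pgserver1:

[postgres@pgserver1 ~]$ psql -c "show archive_command;"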

Now it’s time to launch a backup

[postgres@pgserver bin]$ ./bart -c /usr/edb-bart-1.1/etc/bart.cfg BACKUP -s PGSERVER1

INFO: creating backup for server 'pgserver1'
INFO: backup identifier: '1481037296469'
39433/39433 kB (100%), 1/1 tablespace

INFO: backup completed successfully
INFO: backup checksum: fb105a542d8856e9d5bd95410337463f of base.tar
INFO:
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1481037296469
BACKUP NAME: PGSERVER1_2016-12-06T16:14
BACKUP LOCATION: /u01/app/backup/pgserver1/1481037296469
BACKUP SIZE: 38.51 MB
BACKUP FORMAT: tar
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 1
ChkSum File
fb105a542d8856e9d5bd95410337463f base.tar

TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000008
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2016-12-06 16:14:56 CET
STOP TIME: 2016-12-06 16:14:58 CET
TOTAL DURATION: 2 sec(s)

[postgres@pgserver bin]$


postgres@pgserver bin]$ ./bart -c /usr/edb-bart-1.1/etc/bart.cfg BACKUP -s PGSERVER2

INFO: creating backup for server 'pgserver2'
INFO: backup identifier: '1481047475191'
39216/39216 kB (100%), 1/1 tablespace

INFO: backup completed successfully
INFO: backup checksum: 052d82f3df1def72e9962cdef965fe8f of base.tar
INFO:
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1481047475191
BACKUP NAME: PGSERVER2_2016-12-06T19:04
BACKUP LOCATION: /u01/app/backup/pgserver2/1481047475191
BACKUP SIZE: 38.30 MB
BACKUP FORMAT: tar
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 1
ChkSum File
052d82f3df1def72e9962cdef965fe8f base.tar

TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000003
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2016-12-06 19:04:36 CET
STOP TIME: 2016-12-06 19:04:38 CET
TOTAL DURATION: 2 sec(s)

[postgres@pgserver bin]$

We can list the backups for pgserver2, for example:

[postgres@pgserver bin]$ ./bart SHOW-BACKUPS -s pgserver2
SERVER NAME BACKUP ID BACKUP TIME BACKUP SIZE WAL(s) SIZE WAL FILES STATUS

pgserver2 1481047475191 2016-12-06 19:04:38 CET 38.30 MB 26.54 kB 1 active

[postgres@pgserver bin]$

It's good to have a backup, but it is better to know how to restore. Now we are going to simulate a crash of pgserver2 and then do a restore. First, let's create a database and a table.

[postgres@pgserver2 data]$ psql
Password:
psql.bin (9.6.1)
Type "help" for help.
postgres=# create database sales;
CREATE DATABASE
postgres=# \c sales
You are now connected to database "sales" as user "postgres".
sales=# create table article(id int);
CREATE TABLE
sales=# insert into article values (1);
INSERT 0 1
sales=# table article;
id
----
1
(1 row)
sales=#

And then let’s do a backup

[postgres@pgserver bin]$ ./bart -c /usr/edb-bart-1.1/etc/bart.cfg BACKUP -s PGSERVER2
INFO: creating backup for server 'pgserver2'
INFO: backup identifier: '1481051725125'
46814/46814 kB (100%), 1/1 tablespace
INFO: backup completed successfully
INFO: backup checksum: 19069965fa53444b31459f6d984682d0 of base.tar
INFO:
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1481051725125
BACKUP NAME: PGSERVER2_2016-12-06T20:15
BACKUP LOCATION: /u01/app/backup/pgserver2/1481051725125
BACKUP SIZE: 45.72 MB
BACKUP FORMAT: tar
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 1
ChkSum File
19069965fa53444b31459f6d984682d0 base.tar
TABLESPACE(s): 0
START WAL LOCATION: 000000010000000000000005
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2016-12-06 20:15:28 CET
STOP TIME: 2016-12-06 20:15:28 CET
TOTAL DURATION: 0 sec(s)


[postgres@pgserver bin]$ ./bart SHOW-BACKUPS -s PGSERVER2
SERVER NAME BACKUP ID BACKUP TIME BACKUP SIZE WAL(s) SIZE WAL FILES STATUS
pgserver2 1481051725125 2016-12-06 20:15:28 CET 45.72 MB 16.00 MB 1 active
pgserver2 1481047475191 2016-12-06 19:04:38 CET 38.30 MB 16.03 MB 2 active
[postgres@pgserver bin]$

Now let’s crash our cluster

[root@pgserver2 9.6]# pwd
/u01/app/PostgreSQL/9.6
[root@pgserver2 9.6]# mv data/ data_crash
[root@pgserver2 9.6]#


[postgres@pgserver2 9.6]$ pg_ctl start
pg_ctl: directory "/u01/app/PostgreSQL/9.6/data" does not exist
[postgres@pgserver2 9.6]$

For the restore, we have to provide the backup id and the path to restore to. This path must exist, so it has to be recreated first.
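A sketch of recreating it on pgserver2 (this step was not shown above; the directory must be empty and owned by postgres):

[postgres@pgserver2 ~]$ mkdir -p /u01/app/PostgreSQL/9.6/data
[postgres@pgserver2 ~]$ chmod 700 /u01/app/PostgreSQL/9.6/data   # initdb-style permissions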

[postgres@pgserver bin]$ ./bart RESTORE -s pgserver2 -i 1481047475191 -p /u01/app/PostgreSQL/9.6/data
INFO: restoring backup '1481047475191' of server 'pgserver2'
INFO: restoring backup to postgres@pgserver2:/u01/app/PostgreSQL/9.6/data
INFO: base backup restored
INFO: copying WAL file(s) to postgres@pgserver2:/u01/app/PostgreSQL/9.6/data/archived_wals
INFO: archiving is disabled
[postgres@pgserver bin]$

Before starting the cluster, let's create a recovery.conf file:

[postgres@pgserver2 data]$ cat recovery.conf
restore_command='cp /u01/app/PostgreSQL/9.6/data/archived_wals/%f %p'
[postgres@pgserver2 data]$

And then let’s start the cluster

[postgres@pgserver2 data]$ pg_ctl start
server starting
[postgres@pgserver2 data]$ 2016-12-06 20:33:01 CET LOG: redirecting log output to logging collector process
2016-12-06 20:33:01 CET HINT: Future log output will appear in directory "pg_log".

Once the recovery has finished, recovery.conf is automatically renamed to recovery.done, because otherwise PostgreSQL would go into recovery mode again after a restart.

[postgres@pgserver2 data]$ cat recovery.done
restore_command='cp /u01/app/PostgreSQL/9.6/data/archived_wals/%f %p'
[postgres@pgserver2 data]$
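As a side note, the same mechanism allows a point-in-time recovery: instead of replaying all archived WALs, you can tell PostgreSQL where to stop. A sketch of such a recovery.conf (the timestamp is just an example):

restore_command='cp /u01/app/PostgreSQL/9.6/data/archived_wals/%f %p'
# stop replaying WAL at this point in time instead of recovering to the end
recovery_target_time='2016-12-06 20:10:00 CET'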

After connecting, we can verify our data:

[postgres@pgserver2 data]$ psql
Password:
psql.bin (9.6.1)
Type "help" for help.
postgres=# \c sales
You are now connected to database "sales" as user "postgres".
sales=# table article
sales-# ;
id
----
1
(1 row)
sales=#

In this blog we have seen how to configure EDB BART and how to use it for backup and recovery.

 

Cet article EnterpriseDB Backup and Recovery Tool (BART) est apparu en premier sur Blog dbi services.

Installing PostgreSQL 9.6.1 with EnterpriseDB Installer

Thu, 2016-12-08 14:20

As you may know, PostgreSQL 9.6 was released a few months ago, and EnterpriseDB (EDB) offers a package installer for installing the community version. This EDB package can be downloaded here.
In this blog we are going to install PostgreSQL using the EDB installer. The installation is done using a graphical tool. A user with root privileges must be used, otherwise we will get an error. Indeed, the default installation creates the systemd service, which needs root privileges. For example, launching the installer as the user postgres fails:

[postgres@pgserver1 96]$ chmod +x postgresql-9.6.1-1-linux-x64.run
[postgres@pgserver1 96]$

[postgres@pgserver1 96]$ ./postgresql-9.6.1-1-linux-x64.run

install1
An alternative is to launch the installer with the option “--extract-only”, which does not require root (but then we do not get the service).
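For scripted deployments, the installer should also offer an unattended mode, as it is built with a standard installer framework. A hedged sketch; the option names below are assumptions and should be verified with --help:

[root@pgserver1 96]# ./postgresql-9.6.1-1-linux-x64.run --help   # list the supported options
[root@pgserver1 96]# ./postgresql-9.6.1-1-linux-x64.run --mode unattended --prefix /u01/app/PostgreSQL/9.6 --datadir /u01/app/PostgreSQL/9.6/data --superpassword secret --serverport 5432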

Now, logged in as root, let's start:

[root@pgserver1 96]# ./postgresql-9.6.1-1-linux-x64.run

install2
Click on Next
install3
Specify PGHOME
install4
Specify the PGDATA
install5
Specify the password for user postgres
install6
The default port is 5432
install7
Choose your Locale
install8
And launch installation process
install9
install10
Once the installation is done, we can verify that PostgreSQL is running:

[postgres@pgserver1 ~]$ ps -ef | grep postgres
root 3135 976 0 09:53 ? 00:00:00 sshd: postgres [priv]
postgres 3140 3135 0 09:54 ? 00:00:00 sshd: postgres@pts/0
postgres 3142 3140 0 09:54 pts/0 00:00:00 -bash
root 3657 976 0 09:55 ? 00:00:00 sshd: postgres [priv]
postgres 3661 3657 0 09:55 ? 00:00:00 sshd: postgres@pts/2
postgres 3664 3661 0 09:55 pts/2 00:00:00 -bash
postgres 6255 1 0 10:23 ? 00:00:00 /u01/app/PostgreSQL/9.6/bin/postgres -D /u01/app/PostgreSQL/9.6/data
postgres 6256 6255 0 10:23 ? 00:00:00 postgres: logger process
postgres 6258 6255 0 10:23 ? 00:00:00 postgres: checkpointer process
postgres 6259 6255 0 10:23 ? 00:00:00 postgres: writer process
postgres 6260 6255 0 10:23 ? 00:00:00 postgres: wal writer process
postgres 6261 6255 0 10:23 ? 00:00:00 postgres: autovacuum launcher process
postgres 6262 6255 0 10:23 ? 00:00:00 postgres: stats collector process
postgres 6332 3664 0 10:26 pts/2 00:00:00 ps -ef
postgres 6333 3664 0 10:26 pts/2 00:00:00 grep --color=auto postgres
[postgres@pgserver1 ~]$

A file pg_env.sh is created during the installation and can be used to set up all the environment variables we need for the connection:

[postgres@pgserver1 9.6]$ pwd
/u01/app/PostgreSQL/9.6
[postgres@pgserver1 9.6]$ cp pg_env.sh /home/postgres/
[postgres@pgserver1 9.6]$ cd
[postgres@pgserver1 ~]$


[postgres@pgserver1 ~]$ chmod +x pg_env.sh
[postgres@pgserver1 ~]$ . ./pg_env.sh
[postgres@pgserver1 ~]$ env | grep PG
PGPORT=5432
PGUSER=postgres
PGDATABASE=postgres
PGLOCALEDIR=/u01/app/PostgreSQL/9.6/share/locale
PGDATA=/u01/app/PostgreSQL/9.6/data
[postgres@pgserver1 ~]$

And now we can connect

[postgres@pgserver1 ~]$ psql
Password:
psql.bin (9.6.1)
Type "help" for help.

postgres=#

We can see that the installation of PostgreSQL 9.6.1 with the EDB installer is very easy.

 

Cet article Installing PostgreSQL 9.6.1 with EnterpriseDB Installer est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 7 – Partitioning

Thu, 2016-12-08 04:19

PostgreSQL supports tables up to 32TB. Do you want to be the one responsible for managing such a table? I guess not. Usually you start to partition your tables when they grow very fast and consume hundreds of gigabytes. Can PostgreSQL do this? Do you know what table inheritance is? No? PostgreSQL implements partitioning by using table inheritance and constraint exclusion. Sounds strange? Let's have a look…

As usual I am running the currently latest version of PostgreSQL:

postgres@pgbox:/home/postgres/ [PG961] psql postgres
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
(1 row)

Time: 0.513 ms
(postgres@[local]:5439) [postgres] > 

So, what is table inheritance? In PostgreSQL you can do things like this:

(postgres@[local]:5439) [postgres] > create table databases ( name varchar(10), vendor varchar(10) );
CREATE TABLE
Time: 20.477 ms
(postgres@[local]:5439) [postgres] > create table databases_rdbms ( rdbms boolean ) inherits (databases);
CREATE TABLE
Time: 20.080 ms
(postgres@[local]:5439) [postgres] > create table databases_nosql ( nosql boolean ) inherits (databases);
CREATE TABLE
Time: 22.048 ms

What we’ve done here is: We created three tables in total. The “databases_rdbms” and “databases_nosql” tables inherit from the “databases” table. What does that mean? Lets insert some data into the tables that inherit from the “databases” table:

(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('PostgreSQL','Community',true);
INSERT 0 1
Time: 20.215 ms
(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('MariaDB','MariaDB',true);
INSERT 0 1
Time: 1.666 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('MongoDB','MongoDB',true);
INSERT 0 1
Time: 1.619 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('Cassandra','Apache',true);
INSERT 0 1
Time: 0.833 ms

Note that we did not insert any data into the “databases” table, but when we query the “databases” table we get this result:

(postgres@[local]:5439) [postgres] > select * from databases;
    name    |  vendor   
------------+-----------
 PostgreSQL | Community
 MariaDB    | MariaDB
 MongoDB    | MongoDB
 Cassandra  | Apache
(4 rows)

All the data from all child tables has been retrieved (of course without the additional columns of the child tables). We can still query the child tables individually:

(postgres@[local]:5439) [postgres] > select * from databases_rdbms;
    name    |  vendor   | rdbms 
------------+-----------+-------
 PostgreSQL | Community | t
 MariaDB    | MariaDB   | t
(2 rows)

Time: 0.224 ms
(postgres@[local]:5439) [postgres] > select * from databases_nosql;
   name    | vendor  | nosql 
-----------+---------+-------
 MongoDB   | MongoDB | t
 Cassandra | Apache  | t
(2 rows)

But when we query “only” on the master table there is no result:

(postgres@[local]:5439) [postgres] > select * from only databases;
 name | vendor 
------+--------
(0 rows)

Of course, for this specific example it would be better to add an additional column to the master table which specifies whether a database is a NoSQL database or not. This is just to show how it works. There is a good example of another use case in the documentation.

What does all this have to do with partitioning? When you want to partition your tables in PostgreSQL you’ll do exactly the same thing:

(postgres@[local]:5439) [postgres] > create table log_data ( id int, some_data varchar(10), ts date );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2016() inherits ( log_data );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2015() inherits ( log_data );
CREATE TABLE

We want to partition our log data by year, so we create a child table for each year we know we have data for. What we additionally need is a check constraint on each of the child tables, created as shown below:
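The ALTER TABLE commands did not survive in this listing, so here is a reconstruction matching the \d+ output below (the constraint name cs1 is taken from that output):

(postgres@[local]:5439) [postgres] > alter table log_data_2016 add constraint cs1 check ( ts >= date '2016-01-01' and ts < date '2017-01-01' );
ALTER TABLE
(postgres@[local]:5439) [postgres] > alter table log_data_2015 add constraint cs1 check ( ts >= date '2015-01-01' and ts < date '2016-01-01' );
ALTER TABLE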

(postgres@[local]:5439) [postgres] > \d+ log_data_2016
                             Table "public.log_data_2016"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2016-01-01'::date AND ts  \d+ log_data_2015
                             Table "public.log_data_2015"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2015-01-01'::date AND ts < '2016-01-01'::date)
Inherits: log_data

This guarantees that the child tables only get data for a specific year. So far so good. But how does PostgreSQL know that inserts into the master table should get routed to the corresponding child table? This is done by using triggers:

(postgres@[local]:5439) [postgres] > CREATE OR REPLACE FUNCTION log_data_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
  IF ( NEW.ts >= DATE '2015-01-01' AND
       NEW.ts <  DATE '2016-01-01' ) THEN
    INSERT INTO log_data_2015 VALUES (NEW.*);
  ELSIF ( NEW.ts >= DATE '2016-01-01' AND
          NEW.ts <  DATE '2017-01-01' ) THEN
    INSERT INTO log_data_2016 VALUES (NEW.*);
  ELSE
    RAISE EXCEPTION 'Date out of range!';
  END IF;
  RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_log_data_trigger
BEFORE INSERT ON log_data
FOR EACH ROW EXECUTE PROCEDURE log_data_insert_trigger();

From now on, inserts against the master table go to the corresponding child table:

(postgres@[local]:5439) [postgres] > insert into log_data values ( 1, 'aaaa', date('2016.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > insert into log_data values ( 2, 'aaaa', date('2015.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > select * from log_data;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
  2 | aaaa      | 2015-03-03
(2 rows)
(postgres@[local]:5439) [postgres] > select * from log_data_2015;
 id | some_data |     ts     
----+-----------+------------
  2 | aaaa      | 2015-03-03
(1 row)

(postgres@[local]:5439) [postgres] > select * from log_data_2016;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
(1 row)

Selects against the master table that use the ts column in the WHERE condition now scan only the matching child table (plus the empty master), thanks to constraint exclusion:

(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2016.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.006..0.006 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2016-03-03'::date)
   ->  Seq Scan on log_data_2016  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2016-03-03'::date)
 Planning time: 0.131 ms
 Execution time: 0.019 ms
(7 rows)
(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2015.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.007..0.007 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2015-03-03'::date)
   ->  Seq Scan on log_data_2015  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2015-03-03'::date)
 Planning time: 0.102 ms
 Execution time: 0.019 ms
(7 rows)
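This pruning depends on the constraint_exclusion parameter, which defaults to “partition” in 9.6 and should therefore work out of the box. If the child tables are not pruned in your environment, check it:

(postgres@[local]:5439) [postgres] > show constraint_exclusion;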

Of course you can create indexes on the child tables as well. This is how partitioning basically works in PostgreSQL. To be honest, this is not the most beautiful way to do partitioning and this can become tricky to manage. But as always there are projects that assist you, e.g. pg_partman or pg_pathman.

Wouldn’t it be nice to have a SQL syntax to do table partitioning? Exactly this was committed yesterday and will probably be there in PostgreSQL 10 next year. The development documentation already describes the syntax:

[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]
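Based on that syntax, the trigger-based example above could then be written declaratively. A sketch of how this might look in PostgreSQL 10 (hedged, since the feature was only just committed and details may still change):

CREATE TABLE log_data ( id int, some_data varchar(10), ts date )
PARTITION BY RANGE (ts);

CREATE TABLE log_data_2015 PARTITION OF log_data
FOR VALUES FROM ('2015-01-01') TO ('2016-01-01');

CREATE TABLE log_data_2016 PARTITION OF log_data
FOR VALUES FROM ('2016-01-01') TO ('2017-01-01');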
 

Cet article Can I do it with PostgreSQL? – 7 – Partitioning est apparu en premier sur Blog dbi services.

OEL 7 – Project Quotas on Oracle Homes with XFS on Oracle Linux 7

Thu, 2016-12-08 02:43

User and group quotas have existed for quite a while for all kinds of file systems, like ext4 or vxfs and many others.

However, for my use case, I need quotas for different directories on the same file system. My mount point is /u01 and in that file system I have different Oracle Homes which belong to the same user, oracle. I do not want my 11.2.0.4 Oracle Home to influence my 12.1.0.2 Oracle Home in respect of file system usage, e.g. if the 11.2.0.4 home is core dumping, it should not fill up the space of the 12.1.0.2 home, and vice versa. But how can I do that?

The idea is to create XFS project quotas, and this is how it works.

First of all, we need to enable project quotas on the XFS file system.  /u01 is currently mounted with XFS default options, which are (rw,relatime,attr2,inode64,noquota). As you can see, the default is “noquota”.

[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,noquota)

We can enable project quotas by adding the “prjquota” mount option to the /etc/fstab and afterwards remounting the file system.
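The corresponding /etc/fstab entry would then look something like this (a sketch, using the device name from the mount output above):

/dev/mapper/vg_ora_u01-lv_ora_u01  /u01  xfs  rw,relatime,attr2,inode64,prjquota  0 0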

Unfortunately, the remount option does not work for this with the XFS filesystem. Meaning that I can remount the file system, but my new “prjquota” option is not reflected.

[root@dbidg01 ~]# mount -o remount,rw,relatime,attr2,inode64,prjquota /u01
[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,noquota)

So I have to do an umount and mount. Not a good thing from my point of view, because it means that I cannot enable project quotas online for my /u01 directory, where my different Oracle Homes are located. In other words, I need to shut down all Oracle databases.

[root@dbidg01 ~]# umount /u01
[root@dbidg01 ~]# mount /u01
[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,prjquota)

Ok. Now it looks better. The next step is to define unique project IDs for the different directory hierarchies in the /etc/projects file.

For example, to set a project ID of 11 for the directory hierarchy /u01/app/oracle/product/11.2.0, and a project ID of 12 for the directory hierarchy /u01/app/oracle/product/12.1.0, we can do the following.

# echo "11:/u01/app/oracle/product/11.2.0" >> /etc/projects
# echo "12:/u01/app/oracle/product/12.1.0" >> /etc/projects

[root@dbidg01 ~]# cat /etc/projects
11:/u01/app/oracle/product/11.2.0
12:/u01/app/oracle/product/12.1.0

If you don't want to work with IDs, you can map project names to the project IDs in the /etc/projid file. This makes reporting quota usage much easier, as we will see later.

For example, to map the project name oracle11gR2 to the project with ID 11 and the project name oracle12cR1 to ID 12, do the following.

# echo "oracle11gR2:11" >> /etc/projid
# echo "oracle12cR1:12" >> /etc/projid

Now use the project subcommand of xfs_quota to define a managed tree in the XFS file system for the different projects.

For example, to define a managed tree in the /u01 file system for the project oracle11gR2, which corresponds to the directory hierarchy /u01/app/oracle/product/11.2.0, do the following.

# xfs_quota -x -c 'project -s oracle11gR2' /u01

[root@dbidg01 etc]# xfs_quota -x -c 'project -s oracle11gR2' /u01
Setting up project oracle11gR2 (path /u01/app/oracle/product/11.2.0)...
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/bin/lbuilder
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libagtsh.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libodm11.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.10.1
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libocci.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/liborasdk.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/liborasdkbase.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/ldap.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLCA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/ORACA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLDA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLCA.COB
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/ORACA.COB
Processed 1 (/etc/projects and cmdline) paths for project oracle11gR2 with recursion depth infinite (-1).

The same applies to project oracle12cR1.

# xfs_quota -x -c 'project -s oracle12cR1' /u01

[root@dbidg01 etc]# xfs_quota -x -c 'project -s oracle12cR1' /u01
Setting up project oracle12cR1 (path /u01/app/oracle/product/12.1.0)...
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/bin/lbuilder
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/classes.bin
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/cbp.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/libjtcjt.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/US_export_policy.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/cacerts
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/java.security
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/local_policy.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/jce.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/sunjce_provider.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/bin/ControlPanel
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/bin/ControlPanel
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/javaws/javaws
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libagtsh.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libjavavm12.a
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libodm12.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libocci.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntshcore.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so.10.1
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so.11.1
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/ldap.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/tnsnames.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLCA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/ORACA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLDA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLCA.COB
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/ORACA.COB
Processed 1 (/etc/projects and cmdline) paths for project oracle12cR1 with recursion depth infinite (-1).

Now the fun part begins, and we can use the limit subcommand to set limits on the disk usage of the different projects. My 11.2.0.4 Oracle Home is currently 5.7G in size, and the 12.1.0.2 Oracle Home is 7.1G. I want to configure a soft limit of 8G and a hard limit of 12G for the 11g home, and a soft limit of 10G and a hard limit of 16G for the 12c home.

oracle@dbidg01:/u01/app/oracle/product/ [rdbms112] du -hs 11.2.0
5.7G    11.2.0

oracle@dbidg01:/u01/app/oracle/product/ [rdbms112] du -hs 12.1.0
7.1G    12.1.0

# xfs_quota -x -c 'limit -p bsoft=8g bhard=12g oracle11gR2' /u01
# xfs_quota -x -c 'limit -p bsoft=10g bhard=16g oracle12cR1' /u01

[root@dbidg01 ~]# xfs_quota -x -c 'report -p' /u01
Project quota on /u01 (/dev/mapper/vg_ora_u01-lv_ora_u01)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0            8776636          0          0     00 [--------]
oracle11gR2    5883604    8388608   12582912     00 [--------]
oracle12cR1    7415292   10485760   16777216     00 [--------]

[root@dbidg01 ~]# xfs_quota -x -c 'report -h -p' /u01
Project quota on /u01 (/dev/mapper/vg_ora_u01-lv_ora_u01)
                        Blocks
Project ID   Used   Soft   Hard Warn/Grace
---------- ---------------------------------
#0           8.4G      0      0  00 [------]
oracle11gR2   5.6G     8G    12G  00 [------]
oracle12cR1   7.1G    10G    16G  00 [------]

The quotas are immediately visible to the df command. A df on /u01 itself shows the full filesystem size, which is 50G; however, if I navigate into my 11g or 12c home, df reports the soft limit I configured beforehand.

[root@dbidg01 product]# df -h /u01
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01   50G   22G   29G  43% /u01

[root@dbidg01 ~]# cd /u01/app/oracle/product/11.2.0
[root@dbidg01 11.2.0]# df -h .
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01  8.0G  5.7G  2.4G  71% /u01

[root@dbidg01 ~]# cd /u01/app/oracle/product/12.1.0
[root@dbidg01 12.1.0]# df -h .
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01   10G  7.1G  3.0G  71% /u01

Conclusion

Project quotas with XFS are a quite cool feature. Limiting file system usage per project gives you huge flexibility. Setting quotas on individual Oracle Homes is one thing, but you could also limit the filesystem usage of your Oracle databases in the DIAG directory based on the ORACLE_SID, so that /u01/app/oracle/diag/rdbms/<SID1> can never fill up /u01/app/oracle/diag/rdbms/<SID2>; a sketch of such a setup follows below. The only drawback I see is that enabling project quotas is not an online operation. You need to umount and mount the file system to activate this feature.
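Such a per-SID setup would follow exactly the same pattern as shown above. A sketch with two hypothetical SIDs db1 and db2 (the project IDs 21 and 22 and the limits are arbitrary):

echo "21:/u01/app/oracle/diag/rdbms/db1" >> /etc/projects   # hypothetical SID db1
echo "22:/u01/app/oracle/diag/rdbms/db2" >> /etc/projects   # hypothetical SID db2
echo "diag_db1:21" >> /etc/projid
echo "diag_db2:22" >> /etc/projid
xfs_quota -x -c 'project -s diag_db1' /u01
xfs_quota -x -c 'project -s diag_db2' /u01
xfs_quota -x -c 'limit -p bsoft=2g bhard=4g diag_db1' /u01
xfs_quota -x -c 'limit -p bsoft=2g bhard=4g diag_db2' /u01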

Have fun with XFS quotas.

Cheers,
William

 

Cet article OEL 7 – Project Quotas on Oracle Homes with XFS on Oracle Linux 7 est apparu en premier sur Blog dbi services.

UKOUG 2016 DAY 4

Wed, 2016-12-07 09:29

uk4

Today is the last day at UKOUG in Birmingham; the first session I attended this morning was presented by Julian Dyke and was about installing and upgrading Oracle 12c Release 2 Grid Infrastructure and RAC.

He had the opportunity to test the installation and upgrade phases during 5 days at Oracle last spring. The following tests were done:

Single instance: install 12.2.0.1, create a database with DBCA, upgrade 12.1.0.2 to 12.2.0.1 with DBUA.

RAC: install 12.2.0.1 Grid Infrastructure, install 12.2.0.1 RDBMS software, create ASM disk groups (ASMCA), create a 12.2.0.1 RAC database (DBCA), upgrade 12.1.0.2 Grid Infrastructure to 12.2.0.1 (gridSetup.sh), upgrade the 12.1.0.2 RAC database to 12.2.0.1.

He showed us the main screenshots describing the installation phases and told us that they did not encounter many problems during the installation or upgrade phases. To upgrade the Grid Infrastructure, it is important to run the CVU connected as the grid user, for example:

runcluvfy.sh -src_crshome=/u00/app/12.1.0.2 -dest_crshome=/u00/app/12.2.0.1
 -dest_version=12.2.0.1 -fixupnoexec

Afterwards you have the possibility to resolve any detected issues using the generated fixup script.

In his opinion, DBUA is sufficiently robust to use for most upgrades, especially when the upgrade concerns non-critical databases, databases with fast recovery times, or databases on virtual machines. By the way, he also mentioned that Oracle still recommends using scripts for upgrades of large or business-critical databases.

He encountered some issues concerning the upgrade phase for Grid Infrastructure, in particular with the memory_target parameter setting: because the ASM and GIMR instances use more memory than in 12.1.0.2, he received the classic ORA-00845 error message. He also encountered problems with invalid objects and had to extend the root file system of his virtual machine.

Then I attended Franck Pachot's session about Statistics Gathering, Best Practices and the Statistics Advisor:

uk5

His session described his findings and recommendations about how to gather statistics, with a lot of technical demonstrations done in the Cloud. A lot of cases were shown, for example volatile tables, preferences for partitioned tables, and index statistics gathering.

He also showed us the Oracle 12c Release 2 Statistics Advisor, which might be a useful tool; I will check whether it is available in Enterprise Manager 13.2.

He finished his session by giving us his own recommendations: use the automatic job for most of the tables, customize statistics gathering for volatile tables, gather statistics for the tables that you load and, importantly, customize the maintenance window for the statistics gathering job.

Finally, I wanted to attend the OEM round table, but unfortunately the session was canceled. :-((

UK6

Well, this was a very interesting week with a lot of exchanges and shared experiences with other Oracle DBAs. I hope to come back to UKOUG next year!

 

 

 

Cet article UKOUG 2016 DAY 4 est apparu en premier sur Blog dbi services.
