Yann Neuhaus

dbi services technical blog

EDB Failover Manager 2.1, upgrading

Thu, 2016-09-08 05:53

A few days ago EnterpriseDB released a new version of its EDB Failover Manager which brings one feature that sounds really great: “Controlled switchover and switchback for easier maintenance and disaster recovery tests”. This is exactly what you want when you are used to operating Oracle Data Guard: switching back and forth as you like without caring much about the old master. The old master is simply converted to a standby which follows the new master automatically. This post is about upgrading EFM from version 2.0 to 2.1.

As I still have the environment available that was used to describe the maintenance scenarios with EDB Failover Manager (Maintenance scenarios with EDB Failover Manager (1) – Standby node, Maintenance scenarios with EDB Failover Manager (2) – Primary node and Maintenance scenarios with EDB Failover Manager (3) – Witness node) I will use the same environment to upgrade to the new release. Let's start …

This is the current status of my failover cluster:

[root@edbbart ~]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3B01C5E0       
	Standby     192.168.22.243       0/3B01C5E0       

	Standby database(s) in sync with master. It is safe to promote.
[root@edbbart ~]$ 

Obviously you have to download the new version to begin the upgrade. Once the rpm is available, simply install it on all the nodes:

[root@edbppas tmp]$ yum localinstall efm21-2.1.0-1.rhel7.x86_64.rpm

EFM 2.1 comes with a utility command that helps in upgrading a cluster. You should invoke it on each node:

[root@edbbart tmp]$ /usr/efm-2.1/bin/efm upgrade-conf efm
Processing efm.properties file.
Setting new property node.timeout to 40 (sec) based on existing timeout 5000 (ms) and max tries 8.

Processing efm.nodes file.

Upgrade of files is finished. Please ensure that the new file permissions match those of the template files before starting EFM.
The db.service.name property should be set before starting a non-witness agent.

This created new configuration files in the new directory under /etc which was created when the new version was installed:

[root@edbbart tmp]$ ls /etc/efm-2.1
efm.nodes  efm.nodes.in  efm.properties  efm.properties.in

All the values from the old EFM cluster should be there in the new configuration files:

[root@edbbart efm-2.1]$ pwd
/etc/efm-2.1
[root@edbbart efm-2.1]$ cat efm.properties | grep daniel
user.email=daniel.westermann...

Before going further, check the new configuration parameters that come with EFM 2.1 (a quick way to list their values is sketched after the list):

auto.allow.hosts
auto.resume.period
db.service.name
jvm.options
minimum.standbys
node.timeout
promotable
recovery.check.period
script.notification
script.resumed
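
A quick way to see how these new parameters were initialized by the upgrade-conf run is to grep them out of the new properties file. This is just a sketch; the pattern simply matches the parameter names listed above:

# show how the new 2.1 parameters were initialized by upgrade-conf
grep -E '^(auto\.|db\.service\.name|jvm\.options|minimum\.standbys|node\.timeout|promotable|recovery\.check\.period|script\.)' /etc/efm-2.1/efm.properties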

I’ll leave everything as it was before for now. Notice that a new service got created:

[root@edbppas efm-2.1]$ systemctl list-unit-files | grep efm
efm-2.0.service                             enabled 
efm-2.1.service                             disabled

Let's shut down the old service on all nodes and then start the new one. Step 1 (on all nodes):

[root@edbppas efm-2.1]$ systemctl stop efm-2.0.service
[root@edbppas efm-2.1]$ systemctl disable efm-2.0.service
rm '/etc/systemd/system/multi-user.target.wants/efm-2.0.service'

Then enable the new service:

[root@edbppas efm-2.1]$ systemctl enable efm-2.1.service
ln -s '/usr/lib/systemd/system/efm-2.1.service' '/etc/systemd/system/multi-user.target.wants/efm-2.1.service'
[root@edbppas efm-2.1]$ systemctl list-unit-files | grep efm
efm-2.0.service                             disabled
efm-2.1.service                             enabled 

Make sure your efm.nodes file contains all the nodes which make up the cluster, in my case:

[root@edbppas efm-2.1]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.243:9998 192.168.22.244:9998 192.168.22.245:9998

Let's start the new service on the witness node first:

[root@edbbart efm-2.1]$ systemctl start efm-2.1.service
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

Looks good. Are we really running the new version?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm -v
Failover Manager, version 2.1.0

Looks fine as well. Time to add the other nodes:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm add-node efm 192.168.22.243
add-node signal sent to local agent.
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm add-node efm 192.168.22.245
add-node signal sent to local agent.
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

Proceed on the master:

[root@ppasstandby efm-2.1]$ systemctl start efm-2.1.service
[root@ppasstandby efm-2.1]$ systemctl status efm-2.1.service
efm-2.1.service - EnterpriseDB Failover Manager 2.1
   Loaded: loaded (/usr/lib/systemd/system/efm-2.1.service; enabled)
   Active: active (running) since Thu 2016-09-08 12:04:11 CEST; 25s ago
  Process: 4020 ExecStart=/bin/bash -c /usr/efm-2.1/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 4075 (java)
   CGroup: /system.slice/efm-2.1.service
           └─4075 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/bin/java -cp /usr/e...

Sep 08 12:04:07 ppasstandby systemd[1]: Starting EnterpriseDB Failover Manager 2.1...
Sep 08 12:04:08 ppasstandby sudo[4087]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-... efm
Sep 08 12:04:08 ppasstandby sudo[4098]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-... efm
Sep 08 12:04:08 ppasstandby sudo[4114]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAND=/usr/... efm
Sep 08 12:04:08 ppasstandby sudo[4125]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAND=/usr/... efm
Sep 08 12:04:10 ppasstandby sudo[4165]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-...9998
Sep 08 12:04:10 ppasstandby sudo[4176]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-...4075
Sep 08 12:04:11 ppasstandby systemd[1]: Started EnterpriseDB Failover Manager 2.1.
Hint: Some lines were ellipsized, use -l to show in full.

And then continue on the standby:

[root@edbppas efm-2.1]$ systemctl start efm-2.1.service
[root@edbppas efm-2.1]$ systemctl status efm-2.1.service
efm-2.1.service - EnterpriseDB Failover Manager 2.1
   Loaded: loaded (/usr/lib/systemd/system/efm-2.1.service; enabled)
   Active: active (running) since Thu 2016-09-08 12:05:28 CEST; 3s ago
  Process: 3820 ExecStart=/bin/bash -c /usr/efm-2.1/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 3875 (java)
   CGroup: /system.slice/efm-2.1.service
           └─3875 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/bin/jav...

Sep 08 12:05:24 edbppas systemd[1]: Starting EnterpriseDB Failover Manager 2.1...
Sep 08 12:05:25 edbppas sudo[3887]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...efm
Sep 08 12:05:25 edbppas sudo[3898]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...efm
Sep 08 12:05:25 edbppas sudo[3914]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:25 edbppas sudo[3925]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:25 edbppas sudo[3945]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:28 edbppas sudo[3981]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...998
Sep 08 12:05:28 edbppas sudo[3994]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...875
Sep 08 12:05:28 edbppas systemd[1]: Started EnterpriseDB Failover Manager 2.1.
Hint: Some lines were ellipsized, use -l to show in full.

What is the cluster status now?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3B01C7A0       
	Standby     192.168.22.243       0/3B01C7A0       

	Standby database(s) in sync with master. It is safe to promote.

Cool. Back in operation on the new release. Quite easy.

PS: Remember to re-point your symlinks in /etc and /usr if you created symlinks for ease of use.
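
If you did create such convenience links, re-creating them against the 2.1 directories could look like the following sketch (the link names here are assumptions, adapt them to your setup):

# re-point convenience symlinks to the new release (link names are assumptions)
ln -sfn /usr/efm-2.1 /usr/edb-efm
ln -sfn /etc/efm-2.1 /etc/edb-efm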

 

The article EDB Failover Manager 2.1, upgrading appeared first on Blog dbi services.

Multitenant thin provisioning: PDB snapshots on ACFS

Tue, 2016-09-06 16:07

Database on ACFS is a long story. At first, the main reason for ASM was to bypass a filesystem layer that is not required by the database. ACFS was for the non-database files that had to be accessed by all cluster nodes. But then, storage vendors and other providers came with snapshots, compression and thin provisioning and Oracle had to answer: they implemented those storage features in ACFS and allowed database files on it.

When you create a database on an ODA X5, the datafiles are created on an ACFS mount. There is only one ACFS mount for many databases. You probably want to snapshot at the database level, but ACFS snapshots are only at the filesystem level. To avoid having every write on the filesystem copied when you need a snapshot for a single database only, they implemented it this way: at installation, the ACFS mount is created and a snapshot is taken while it is still empty. Each database that is created afterwards is created in its own snapshot. This means that in each snapshot you access only one database. There is no overhead and no additional copies because the master is empty.
Then came multitenant, where you can snapshot at the PDB level for thin cloning (create pluggable database … snapshot copy). But a CDB cannot be created on a snapshot: it must be at the root level on ACFS. On ODA X6, an ACFS filesystem is created for each database. But then, when you thin clone a PDB, a snapshot is taken for the whole database, and this one is not empty: any write will cause an additional copy and overhead.

There’s more info about ACFS, copy-on-write which is not copy-on-write but redirect-on-write, and performance overhead in the excellent presentation and demo from Ludovico Caldara. Here I’ll show the snapshot overhead in multitenant when writing to the master, the clone, and the others.

PDB snapshots on ACFS

I start with a brand new CDB on ACFS with no snapshots:

[oracle@rac1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb3 16G 5.0G 9.9G 34% /
tmpfs 7.3G 631M 6.7G 9% /dev/shm
/dev/xvdb1 477M 118M 330M 27% /boot
/dev/xvdc1 69G 22G 44G 34% /u01
/dev/asm/data-52 25G 6.6G 19G 27% /u02
/dev/asm/fra-401 18G 3.4G 15G 19% /u03
/dev/asm/redo-358 19G 8.2G 11G 44% /u04

[oracle@rac1 ~]$ acfsutil snap info /u02
number of snapshots: 0
snapshot space usage: 0 ( 0.00 )

This is what is created by the Oracle Public Cloud for a RAC DBaaS.

I have a PDB1 pluggable database.

I create another one, PDBx, which will be totally independent.

SQL> create pluggable database PDBx admin user admin identified by "Ach1z0#d";
Pluggable database created.
SQL> alter pluggable database PDBx open;
Pluggable Database opened

I create a thin clone pluggable database PDB2, using PDB1 as the master:

SQL> create pluggable database PDB2 from PDB1 snapshot copy;
Pluggable database created.
SQL> alter pluggable database PDB2 open;
Pluggable Database opened

Here are my pluggable databases:

SQL> select pdb_name,GUID from dba_pdbs;
 
PDB_NAME GUID
-------- --------------------------------
PDB$SEED 3360B2A306C60098E053276DD60A9928
PDB1 3BDAA124651F103DE0531ADBC40A5DD3
PDBX 3BDCCBE4C1B64A5AE0531ADBC40ADBB7
PDB2 3BDCCBE4C1B74A5AE0531ADBC40ADBB7

PDB2 being a snapshot clone, a snapshot has been created on /u02, which is the ACFS filesystem where the datafiles are:

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 3588096 ( 3.42 MB )

Space usage is minimal here as no write happened yet.

datafiles

Here are the datafiles of my CDB, to see whether the PDB2 ones are on the snapshot:

RMAN> report schema;
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 780 SYSTEM YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_system_cwxwcgz4_.dbf
2 260 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/ORCL/3360B2A306C60098E053276DD60A9928/datafile/o1_mf_system_cwxwbzrd_.dbf
3 1030 SYSAUX NO /u02/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_cwxw98jl_.dbf
4 760 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/ORCL/3360B2A306C60098E053276DD60A9928/datafile/o1_mf_sysaux_cwxwdof3_.dbf
7 545 UNDOTBS1 YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_cwxwdl6s_.dbf
8 200 UNDOTBS2 YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs2_cwxwrw7y_.dbf
9 370 PDB1:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_system_cwxx3fb0_.dbf
10 800 PDB1:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_sysaux_cwxx3fbl_.dbf
11 5 USERS NO /u02/app/oracle/oradata/ORCL/datafile/o1_mf_users_cwxxop2q_.dbf
12 5 PDB1:USERS NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_users_cwxxopm9_.dbf
49 370 PDBX:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_system_cwy6688l_.dbf
50 800 PDBX:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_sysaux_cwy6688r_.dbf
51 5 PDBX:USERS NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_users_cwy6688z_.dbf
52 370 PDB2:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_system_cwy6725s_.dbf
53 800 PDB2:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_sysaux_cwy67261_.dbf
54 5 PDB2:USERS NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_users_cwy67268_.dbf

The PDB2 datafiles are actually symbolic links to the snapshot:


[oracle@rac1 ~]$ ls -l /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile
/u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile:
total 62484
lrwxrwxrwx 1 oracle oinstall 142 Sep 6 19:28 o1_mf_sysaux_cwy67261_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_sysaux_cwxx3fbl_.dbf
lrwxrwxrwx 1 oracle oinstall 142 Sep 6 19:28 o1_mf_system_cwy6725s_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_system_cwxx3fb0_.dbf
-rw-r----- 1 oracle oinstall 63971328 Sep 6 19:28 o1_mf_temp_cwy67267_.dbf
lrwxrwxrwx 1 oracle oinstall 141 Sep 6 19:28 o1_mf_users_cwy67268_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_users_cwxxopm9_.dbf

So, you have a snapshot on /u02 which contains all the CDB datafiles, but only the PDB2 datafiles will be read/written on the snapshot (through the symbolic links). The other CDB files are included in the snapshot for no reason: nothing will ever read or write them there. They are included only because ACFS cannot snapshot a folder or a set of files; it works only at the filesystem level.
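
If you want to check for yourself which datafiles are plain files and which ones are redirected into the ACFS snapshot, a quick find for symbolic links does the job (a sketch using the paths of this environment):

# list the datafiles that are symbolic links pointing into the ACFS snapshot
find /u02/app/oracle/oradata/ORCL -name "*.dbf" -type l -exec ls -l {} \;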

write on master

For the moment, the snapshot is small: the blocks are shared.

If I write 100MB on the master (PDB1), those blocks will be copied in order to persist the old image for the snapshot:

SQL> alter session set container=PDB1
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 105025536 ( 100.16 MB )

The snapshot size increased by the volume that has been written, which is expected: the old image is required by PDB2.

write on thin clone

If I write to the clone, copy has to happen as well:

SQL> alter session set container=PDB2
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 211275776 ( 201.48 MB )

So, because I've written 200MB to blocks involved in the snapshot, the snapshot size is now 200MB.

However, look at the way I did it, with truncate and insert: I'm writing to the same blocks as I did when writing on PDB1. To be sure, I checked it from DBA_EXTENTS and got the same result in both PDBs:
SQL> select file_id,listagg(block_id,',')within group(order by block_id),blocks from dba_extents where segment_name='DEMO' and segment_type='TABLE' group by file_id,blocks;
 
FILE_ID LISTAGG(BLOCK_ID,',')WITHINGROUP(ORDERBYBLOCK_ID) BLOCKS
---------- ------------------------------------------------------------------------------------------------------------------------------------------------------ ----------
64 33024,33032,33040,33048,33056,33064,33072,33080,33088,33096,33104,33112,33120,33128,33136,33144 8
64 33152,33280,33408,33536,33664,33792,33920,34048,34176,34304,34432,34560,34688,34816,34944,35072,35200,35328,35456,35584,35712,35840,35968,36096,36224, 128
36352,36480,36608,36736,36864,36992,37120,37248,37376,37504,37632,37760,37888,38016,38144,38272,38400,38528,38656,38784,38912,39040,39168,39296,39424,
39552,39680,39808,39936,40064,40192,40320,40448,40576,40704,40832,40960,41088 1024
64 41216,42240,43264,44288,45312

So why do I have an additional 100MB in my snapshot? Writing to the original blocks should have been sufficient, as they had already been redirected to new ones by the write from PDB1. But because of the ACFS snapshot, the previous image is kept: in addition to the current state of PDB1 and PDB2, the snapshot also keeps the blocks of PDB1 as they were at the time of the PDB2 clone. Who needs that?

Ok. This is not a big issue if we consider that you usually don’t write on the master, because it’s the master.

write on other PDB

Remember that multitenant is for consolidation. You don't use a CDB only for the master and its clones; you may want to host other PDBs. If the snapshot covered only PDB1 and PDB2, writes to other files would not be concerned: no write overhead and no additional storage. However, because the snapshot was made on the whole filesystem, any write to it will copy the blocks, even those that have nothing to do with the thin cloned PDB. Here I'm writing 100MB to PDBx, which has nothing in common with PDB1 or PDB2:

SQL> alter session set container=PDBx
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 311214080 ( 296.79 MB )

This is an additional 100MB. The before image of PDBx has been saved for no reason, because we will never read this previous image.

snapshot keeps growing

After a few minutes without any user activity, the snapshot has grown further:

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 332947456 ( 317.52 MB )

On the /u02 filesystem there are all the CDB files: SYSTEM, UNDO, controlfile, etc. They have activity and they are copied when written, once again for no reason.

drop thin clone

Only when I drop PDB2 is this space released:

SQL> alter pluggable database pdb2 close;
Pluggable database altered.
SQL> drop pluggable database pdb2 including datafiles;
Pluggable database dropped.

[oracle@rac1 ~]$ acfsutil snap info /u02
number of snapshots: 0
snapshot space usage: 0 ( 0.00 )

So what?

There's no bug here. It works as designed, because an ACFS snapshot is at the filesystem level. If you want to use multitenant thin provisioning, the recommendation is to dedicate a CDB to the master and its clones. Any other PDB will have its writes copied to the snapshot for no reason. Writes to common files (UNDO in 12.1, SYSTEM, SYSAUX) will also be concerned. The clones should be short-lived and refreshed frequently. And of course, thin cloning is not for production: very few snapshot/compression/clone technologies can be used in production. Look at storage vendor solutions for that (XtremIO for example).
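
If you want to watch this behaviour on your own system, a small loop around acfsutil makes the copy-on-write growth visible even when only background activity writes to the filesystem (just a sketch, adjust the mount point and the interval):

# watch the snapshot space usage grow while the CDB keeps writing to /u02
while true; do
  date
  acfsutil snap info /u02 | grep 'space usage'
  sleep 60
done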

 

The article Multitenant thin provisioning: PDB snapshots on ACFS appeared first on Blog dbi services.

Disable Red Hat 7 Transparent Hugepages

Mon, 2016-09-05 09:56
Initial situation

A customer recently asked me what the most efficient way is to configure all kernel tuning parameters for Oracle on Red Hat 7. Starting from that question I looked at which options exist beyond the long-known /etc/sysctl.conf. This analysis naturally leads to tuned: this service allows you to define profiles that contain a whole set of kernel tuning parameters. I have already published a blog post about this, “Stay tuned with kernel parameters”. tuned is enabled by default since Red Hat Linux 7!

What options are there?

On Red Hat Linux 7, THP (Transparent Hugepages) are enabled by default. If an Oracle database is installed on such a server, it is strongly recommended to disable THP: Oracle reports a considerable performance loss if THP are not switched off.

There are now different ways to disable THP (all of them end up as a kernel boot parameter):

  • Option 1: kernel boot parameter in /boot/grub2/grub.cfg
  • Option 2: kernel boot parameter in /etc/default/grub
  • Option 3: kernel boot parameter in /usr/lib/tuned/oracle/tuned.conf

As always, each of these options has its advantages and disadvantages.

Option 1: (/boot/grub2/grub.cfg)
menuentry 'Red Hat Enterprise Linux Server (3.10.0-327.28.3.el7.x86_64) 7.2 (Maipo)' --class red --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-327.28.3.el7.x86_64-advanced-5b1a0ee8-c384-47be-99be-6d09a44d9583' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  6906b305-a050-4815-8e4b-79202d7ba9fa
        else
          search --no-floppy --fs-uuid --set=root 6906b305-a050-4815-8e4b-79202d7ba9fa
        fi
        linux16 /vmlinuz-3.10.0-327.28.3.el7.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never
        initrd16 /initramfs-3.10.0-327.28.3.el7.x86_64.img
}

Here the line containing the kernel call was extended with “transparent_hugepage=never”.

Disadvantage: whenever a new kernel version is installed, this entry has to be added again manually. At best this is a quick way to run a test, not a permanent solution.

Option 2: (/etc/default/grub)

[root@redhat72 ~]# cat /etc/default/grub

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never hugepages=100"
GRUB_DISABLE_RECOVERY="true"
GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT:+$GRUB_CMDLINE_LINUX_DEFAULT }"

Here the line “GRUB_CMDLINE_LINUX=” was extended with “transparent_hugepage=never hugepages=100”. Advantage: whenever a new kernel is installed, the entry is automatically added to /boot/grub2/grub.cfg. Compared to option 1 this is already closer to the right place, but it can be done even better…

Option 3: (/usr/lib/tuned/oracle/tuned.conf)

Now, depending on the selected tuned profile, the kernel boot parameters end up in /boot/grub2/grub.cfg. Such a tuned profile for Oracle could look like this:

[root@redhat72 ~]# cat /usr/lib/tuned/oracle/tuned.conf 
#
# tuned configuration
# maintained by dbi services, GGR 24.08.2016
[main]
include=throughput-performance
[bootloader]
cmdline = "transparent_hugepage=never hugepages=100"
[sysctl]
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
kernel.shmmax = 4398046511104
#
# Half the size of physical memory in bytes
# See My Oracle Support Note 567506.1 for additional information about
# configuring shmmax
#
kernel.shmall = 1073741824
#
# 40 percent of the size of physical memory in pages
# Note: If the server supports multiple databases, or uses a large SGA,
# then set this parameter to a value that is equal to the total amount
# of shared memory, in 4K pages, that the system can use at one time
#
kernel.shmmni = 4096
kernel.msgmni = 2878
kernel.msgmnb = 65535
kernel.msgmax = 65536
# kernel.sem = 250 32000 100 128
# on oracle engineered systems
kernel.sem = 4096 256000 100 4096
fs.file-max = 6815744
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.panic_on_oops = 1

 

The special part of this configuration file is the “bootloader” section. The profile is then activated with the command tuned-adm profile oracle.

This has the big advantage that other parameters can be set simply by selecting another tuned profile. This method allows disabling THP and defining the number of HugePages to be provided in one profile, in a single place: the oracle tuned.conf. Activating and verifying the profile is sketched below.
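
A minimal sketch of activating the profile and checking which one is active (tuned-adm is part of the tuned package on RHEL 7):

# activate the custom oracle profile and verify it
tuned-adm profile oracle
tuned-adm active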

Disadvantage: after the change, /boot/grub2/grub.cfg still has to be regenerated with grub2-mkconfig -o /boot/grub2/grub.cfg.

[root@redhat72 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.28.3.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.28.3.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-327.28.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.28.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-29b38e2132e04639809cde7749271d64
Found initrd image: /boot/initramfs-0-rescue-29b38e2132e04639809cde7749271d64.img
done
[root@redhat72 ~]#

 

With this last step, several things are adjusted automatically:

  • the parameter on GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
  • the parameter on the kernel line in /boot/grub2/grub.cfg
  • the adjusted file /etc/tuned/bootcmdline

 

/boot/grub2/grub.cfg

linux16 /vmlinuz-3.10.0-327.28.3.el7.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet $tuned_params

 

/etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT:+$GRUB_CMDLINE_LINUX_DEFAULT }\$tuned_params"

 

/etc/tuned/bootcmdline

TUNED_BOOT_CMDLINE="transparent_hugepage=never hugepages=100"

For all these changes to take effect, the server must be rebooted.

Verifying the changes:

Now we want to verify all these changes. The AnonHugePages in particular are the transparent hugepages, so this is what we look at.

This is the HugePages configuration before the change:

[root@redhat72 ~]# cat /proc/meminfo | grep Huge
AnonHugePages: 6144 kB
HugePages_Total: 100
HugePages_Free: 100
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

And this is the configuration after the change:

[root@redhat72 ~]# cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 100
HugePages_Free: 100
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
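
In addition to AnonHugePages in /proc/meminfo, the runtime THP switch can be checked directly in sysfs; the selected value is shown in brackets and should be “never” (a quick additional check):

# [never] indicates that transparent hugepages are disabled
cat /sys/kernel/mm/transparent_hugepage/enabled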

 

Conclusion:

If you work with tuned and set the kernel parameters for Oracle (for example) this way, you can disable THP and configure the number of HugePages to be provided in exactly the same way, in the same profile.

 

The article Disable Red Hat 7 Transparent Hugepages appeared first on Blog dbi services.

Adaptive Plans and cost of inactive branches

Sat, 2016-09-03 06:27

Here are the details about an execution plan screenshot I tweeted recently because the numbers looked odd. It's not a big problem, or maybe not a problem at all, just something surprising. I don't like it when the numbers don't match, so I try to reproduce the case and get an explanation, just to be sure there is nothing hidden that I misunderstood.

Here is a similar test case joining two small tables DEMO1 and DEMO2 with specific stale statistics.

Hash Join

I start by forcing a full table scan to get a hash join:

select /*+ full(DEMO2) */ * from DEMO1 natural join DEMO2
Plan hash value: 3212315601
------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 130 (100)| 0 |00:00:00.01 | 3 | | | |
|* 1 | HASH JOIN | | 1 | 200 | 130 (1)| 0 |00:00:00.01 | 3 | 1696K| 1696K| 520K (0)|
| 2 | TABLE ACCESS FULL| DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 | | | |
| 3 | TABLE ACCESS FULL| DEMO2 | 0 | 100K| 127 (1)| 0 |00:00:00.01 | 0 | | | |
------------------------------------------------------------------------------------------------------------------------------

The cost of DEMO1 full table scan is 3. The cost of DEMO2 full table scan is 127. That’s a total of 130 (the cost of the hash join itself is negligible here)

Nested Loop

When forcing an index access, a nested loop will be used:

select /*+ index(DEMO2) */ * from DEMO1 natural join DEMO2
Plan hash value: 995663177
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 203 (100)| 0 |00:00:00.01 | 3 |
| 1 | NESTED LOOPS | | 1 | 200 | 203 (0)| 0 |00:00:00.01 | 3 |
| 2 | NESTED LOOPS | | 1 | 200 | 203 (0)| 0 |00:00:00.01 | 3 |
| 3 | TABLE ACCESS FULL | DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 |
|* 4 | INDEX UNIQUE SCAN | DEMOPK | 0 | 1 | 0 (0)| 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 0 | 1 | 1 (0)| 0 |00:00:00.01 | 0 |
--------------------------------------------------------------------------------------------------------------

The cost of the index access is 1 and, as it is expected to run 200 times, the total cost is 200. With the full table scan of DEMO1 the total is 203.

Adaptive plan

Here is an explain plan to see the initial plan with active and inactive branches:

SQL> explain plan for
2 select /*+ */ * from DEMO1 natural join DEMO2;
SQL> select * from table(dbms_xplan.display(format=>'adaptive'));
Plan hash value: 3212315601
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 200 | 6400 | 130 (1)| 00:00:01 |
| * 1 | HASH JOIN | | 200 | 6400 | 130 (1)| 00:00:01 |
|- 2 | NESTED LOOPS | | 200 | 6400 | 130 (1)| 00:00:01 |
|- 3 | NESTED LOOPS | | | | | |
|- 4 | STATISTICS COLLECTOR | | | | | |
| 5 | TABLE ACCESS FULL | DEMO1 | 200 | 1000 | 3 (0)| 00:00:01 |
|- * 6 | INDEX UNIQUE SCAN | DEMOPK | | | | |
|- 7 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 1 | 27 | 127 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL | DEMO2 | 100K| 2636K| 127 (1)| 00:00:01 |
------------------------------------------------------------------------------------------

The active branches (full table scan) have the correct cost: 127 + 3 = 130

However, it's not the case with the inactive ones: there is no estimation for the ‘INDEX UNIQUE SCAN’ and it seems that the ‘TABLE ACCESS BY INDEX ROWID’ gets the cost from the full table scan (here 127).

It’s just an observation here. I’ve no explanation about it and I’ve no idea about the consequences except the big surprise when you see the numbers. I guess that the cost of the inactive branches is meaningless. What is important is that the right cost has been used to determine the inflection point.

The index access having a cost of 1, the cost of the nested loop becomes higher than the full table scan (estimated at 127) when there are more than 127 loops. This is what we see in the 10053 trace:
SQL> host grep ^DP DEMO14_ora_19470_OPTIMIZER.trc
DP: Found point of inflection for NLJ vs. HJ: card = 127.34

Now, as I have no rows in the tables, the nested loop branch will be activated in place of the hash join. So if we display the plan once it is resolved, we will see the lines with an unexpected cost:

Plan hash value: 995663177
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 130 (100)| 0 |00:00:00.01 | 3 |
| 1 | NESTED LOOPS | | 1 | 200 | 130 (1)| 0 |00:00:00.01 | 3 |
| 2 | NESTED LOOPS | | 1 | | | 0 |00:00:00.01 | 3 |
| 3 | TABLE ACCESS FULL | DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 |
|* 4 | INDEX UNIQUE SCAN | DEMOPK | 0 | | | 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 0 | 1 | 127 (1)| 0 |00:00:00.01 | 0 |
--------------------------------------------------------------------------------------------------------------

I think it’s harmless, just a bit misleading. 127 is not the cost of the index access. It’s the cost of the full table scan.
I had this surprise when trying to understand why the optimizer chose a full scan instead of an index access. This is probably the only reason why I look at the cost: use hints to force the plan that I think is better, in order to understand where the optimizer thinks it is more expensive.
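
By the way, if you just want to know whether a cursor went through this adaptive resolution at all, V$SQL exposes it in the IS_RESOLVED_ADAPTIVE_PLAN column (12.1.0.2 and later). A small sketch run from the shell; the filter on the SQL text matches the demo query used above:

sqlplus -s / as sysdba <<'SQL'
-- IS_RESOLVED_ADAPTIVE_PLAN is 'Y' once the final plan has been chosen,
-- 'N' while it is still unresolved, and null for non-adaptive cursors
select sql_id, child_number, is_resolved_adaptive_plan
from v$sql
where sql_text like '%DEMO1 natural join DEMO2%'
  and sql_text not like '%v$sql%';
SQL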

 

The article Adaptive Plans and cost of inactive branches appeared first on Blog dbi services.

Initiate a local GIT repository in command line

Fri, 2016-09-02 10:08

GIT

The objective of this document is to describe how to start a development project manually, on the command line, from an existing Git repository.

Using Git for software development empowers project team management. It is intended to ease source code management in terms of versioning, branching and sharing between all team members.

    GIT platform Architecture

Git is a distributed version control system, which means developers can share source code from their workstation with others without the need for any centralized repository. However, at dbi services we made the choice to deploy a centralized repository platform: first to avoid manual synchronization between all developers, and then to benefit from a shared project collaboration platform, like GitHub or GitLab.

Before being allowed to push to the centralized repository, a developer must first make sure he has the latest source code revision in his local workstation's repository (pull). Then he can commit his changes locally, resolve merge conflicts if needed, and finally push to the centralized platform. A typical cycle is sketched below.
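
This is only an illustration of the cycle described above; the branch and remote names are the defaults used later in this article:

# get the latest revision from the shared repository
git pull origin master
# stage and commit the local changes
git add .
git commit -m "describe your change"
# publish the commits to the centralized platform
git push origin master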

GIT Architecture

 

    Manual / Command line management

This section demonstrates how to initiate a developer's local source code management against a remote Git repository (for example one hosted on a collaboration platform like GitLab), using the command line.

These commands run out of the box on a Linux operating system.
Under Windows, you must install the “Git Bash” application.

There are 2 cases for a project initialization:

–    Starting a project from your source code
–    Getting source code from a shared repository

First of all, a Git repository has to be created on the Git collaboration platform. Ask the Git platform's administrators to create the project.

Before starting, it is recommended to update your GIT personal information:

git config --global user.name user
git config --global user.email user@xxx.com

 

Check status of your GIT configuration:

git config --list

       

        Project initialization from local source code

First you must go to your project folder. It is recommended to have the “src” folder underneath.

GIT repository initialization:

git init

 

Create a “master” branch on your local and on the remote Git repository.

For local branch creation, you will need to add and commit something (like a README.txt file):

git add README.txt
git commit -m "adding README.txt"
git branch
* master

 

For remote branch creation, you must first create the local branch, add the remote repository “origin”, then push to the shared repository:

git remote add origin http://<your git server>/<your repo>.git
git push origin master

“origin” represents a pointer name to remote repository.

 

        Project initialization getting source code from shared repository

Get source code from the repository:

git clone http://<your git server>/<your repo>.git <your destination folder>

 

Congratulations, you are now ready to use Git with your new project!

 

The article Initiate a local GIT repository in command line appeared first on Blog dbi services.

Auditing in PostgreSQL

Fri, 2016-09-02 09:51

Today, especially in the pharma and banking sectors, sooner or later you will be faced with the requirement of auditing. Detailed requirements will vary, but usually at least tracking logons to the database is a must. Some companies need more information to pass their internal audits, such as: who created which objects, who ran which SQL against the database, who was given which permissions, and so on. In this post we'll look at what PostgreSQL can offer here.

PostgreSQL comes with a comprehensive logging system by default. In my 9.5.4 instance there are 28 parameters related to logging:

(postgres@[local]:5438) [postgres] > select count(*) from pg_settings where name like 'log%';
 count 
-------
    28
(1 row)

Not all of them are relevant when it comes to auditing, but some can be used for a minimal auditing setup. For logons and logoffs there are “log_connections” and “log_disconnections”:

(postgres@[local]:5438) [postgres] > alter system set log_connections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > alter system set log_disconnections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select context from pg_settings where name in ('log_dicconnections','log_connections');
      context      
-------------------
 superuser-backend
(1 row)
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

From now on, as soon as someone connects to or disconnects from the instance it is reported in the logfile:

2016-09-02 10:35:56.983 CEST - 2 - 13021 - [local] - postgres@postgres LOG:  connection authorized: user=postgres database=postgres
2016-09-02 10:36:04.820 CEST - 3 - 13021 - [local] - postgres@postgres LOG:  disconnection: session time: 0:00:07.837 user=postgres database=postgres host=[local]

Another parameter that might be useful for auditing is “log_statement”. When you set this to “ddl” all DDLs are logged, when you set it to “mod” all DDLs plus all statements that modify data will be logged. To log all statements there is the value of “all”.

(postgres@[local]:5438) [postgres] > alter system set log_statement='all';
ALTER SYSTEM

For new sessions, all statements will be logged from now on:

2016-09-02 10:45:15.859 CEST - 3 - 13086 - [local] - postgres@postgres LOG:  statement: create table t ( a int );
2016-09-02 10:46:44.064 CEST - 4 - 13098 - [local] - postgres@postgres LOG:  statement: insert into t values (1);
2016-09-02 10:47:00.162 CEST - 5 - 13098 - [local] - postgres@postgres LOG:  statement: update t set a = 2;
2016-09-02 10:47:10.606 CEST - 6 - 13098 - [local] - postgres@postgres LOG:  statement: delete from t;
2016-09-02 10:47:22.012 CEST - 7 - 13098 - [local] - postgres@postgres LOG:  statement: truncate table t;
2016-09-02 10:47:25.284 CEST - 8 - 13098 - [local] - postgres@postgres LOG:  statement: drop table t;

Be aware that your logfile can grow significantly if you turn this on and especially if you set the value to “all”.
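
If you do turn this on, it is worth pairing it with the standard log rotation parameters so the files stay manageable. A sketch with example values (this assumes the logging collector is enabled):

# rotate the server logfile daily or at 100MB, whichever comes first (example values)
psql -c "alter system set log_rotation_age='1d';"
psql -c "alter system set log_rotation_size='100MB';"
psql -c "select pg_reload_conf();"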

That's more or less it when it comes to the built-in options: you can audit logons, logoffs and SQL statements. This might be sufficient for your requirements, but it might also not be. What do you do if you need to audit on the object level, for example? With the default logging parameters you cannot do this. But, as so often with PostgreSQL, there is an extension for it: pgaudit.

If you want to install this extension you'll need the PostgreSQL source code. To show the complete procedure, here is a PostgreSQL setup from source. Obviously the first step is to download and extract the source code:

postgres@pgbox:/u01/app/postgres/software/ [PG953] cd /u01/app/postgres/software/
postgres@pgbox:/u01/app/postgres/software/ [PG953] wget https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
--2016-09-02 09:39:29--  https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
Resolving ftp.postgresql.org (ftp.postgresql.org)... 213.189.17.228, 217.196.149.55, 87.238.57.227, ...
Connecting to ftp.postgresql.org (ftp.postgresql.org)|213.189.17.228|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18496299 (18M) [application/x-bzip-compressed-tar]
Saving to: ‘postgresql-9.5.4.tar.bz2’

100%[==================================================================================>] 18'496'299  13.1MB/s   in 1.3s   

2016-09-02 09:39:30 (13.1 MB/s) - ‘postgresql-9.5.4.tar.bz2’ saved [18496299/18496299]

postgres@pgbox:/u01/app/postgres/software/ [PG953] tar -axf postgresql-9.5.4.tar.bz2 
postgres@pgbox:/u01/app/postgres/software/ [PG953] cd postgresql-9.5.4

Then do the usual configure, make and make install:

postgres@pgbox:/u01/app/postgres/software/ [PG953] PGHOME=/u01/app/postgres/product/95/db_4
postgres@pgbox:/u01/app/postgres/software/ [PG953] SEGSIZE=2
postgres@pgbox:/u01/app/postgres/software/ [PG953] BLOCKSIZE=8
postgres@pgbox:/u01/app/postgres/software/ [PG953] ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-tcl \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=16  \
            --with-extra-version=" dbi services build"
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make world
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] cd contrib
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install

Once this is done you can continue with the installation of the pgaudit extension:

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] git clone https://github.com/pgaudit/pgaudit.git
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/ [PG953] cd pgaudit/
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make -s check
============== creating temporary instance            ==============
============== initializing database system           ==============
============== starting postmaster                    ==============
running on port 57736 with PID 8635
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== running regression test queries        ==============
test pgaudit                  ... ok
============== shutting down postmaster               ==============
============== removing temporary instance            ==============

=====================
 All 1 tests passed. 
=====================

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make install
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/lib'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/install -c -m 755  pgaudit.so '/u01/app/postgres/product/95/db_4/lib/pgaudit.so'
/usr/bin/install -c -m 644 ./pgaudit.control '/u01/app/postgres/product/95/db_4/share/extension/'
/usr/bin/install -c -m 644 ./pgaudit--1.0.sql  '/u01/app/postgres/product/95/db_4/share/extension/'

That’s it. Initialize a new cluster:

postgres@pgbox:/u01/app/postgres/software/ [PG954] initdb -D /u02/pgdata/PG954 -X /u03/pgdata/PG954
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /u02/pgdata/PG954 ... ok
creating directory /u03/pgdata/PG954 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/PG954/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/PG954 -l logfile start

… and install the extension:

postgres@pgbox:/u02/pgdata/PG954/ [PG954] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create extension pgaudit;
ERROR:  pgaudit must be loaded via shared_preload_libraries
Time: 2.226 ms

(postgres@[local]:5438) [postgres] > alter system set shared_preload_libraries='pgaudit';
ALTER SYSTEM
Time: 18.236 ms

##### Restart the PostgreSQL instance

(postgres@[local]:5438) [postgres] > show shared_preload_libraries ;
 shared_preload_libraries 
--------------------------
 pgaudit
(1 row)

Time: 0.278 ms
(postgres@[local]:5438) [postgres] > create extension pgaudit;
CREATE EXTENSION
Time: 4.688 ms

(postgres@[local]:5438) [postgres] > \dx
                   List of installed extensions
  Name   | Version |   Schema   |           Description           
---------+---------+------------+---------------------------------
 pgaudit | 1.0     | public     | provides auditing functionality
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Ready. So, what can you do with it? As the documentation is quite good, here are just a few examples.

To log all statements related to roles and privileges (the ROLE audit class):

(postgres@[local]:5438) [postgres] > alter system set pgaudit.log = 'ROLE';

Altering or creating roles from now on is reported in the logfile as:

2016-09-02 14:50:45.432 CEST - 9 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,2,1,ROLE,CREATE ROLE,,,create user uu login password ,
2016-09-02 14:52:03.745 CEST - 16 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,3,1,ROLE,ALTER ROLE,,,alter user uu CREATEDB;,
2016-09-02 14:52:20.881 CEST - 18 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,4,1,ROLE,DROP ROLE,,,drop user uu;,
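
ROLE is only one of the session audit classes pgaudit understands (other documented classes are READ, WRITE, FUNCTION, DDL and MISC, plus ALL), and they can be combined. A sketch that audits all DDL plus every data-modifying statement session-wide:

# audit all DDL plus every statement that writes data
psql -c "alter system set pgaudit.log = 'ddl, write';"
psql -c "select pg_reload_conf();"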

Object level auditing can be implemented like this (check the documentation for the meaning of the pgaudit.role parameter):

(postgres@[local]:5438) [postgres] > create user audit;
CREATE ROLE
(postgres@[local]:5438) [postgres] > create table taudit ( a int );
CREATE TABLE
(postgres@[local]:5438) [postgres] > insert into taudit values ( 1 );
INSERT 0 1
(postgres@[local]:5438) [postgres] > grant select,delete on taudit to audit;
GRANT
(postgres@[local]:5438) [postgres] > alter system set pgaudit.role='audit';
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Once we touch the table:

(postgres@[local]:5438) [postgres] > select * from taudit;
 a 
---
 1
(1 row)
(postgres@[local]:5438) [postgres] > update taudit set a = 4;

… the audit information appears in the logfile:

2016-09-02 14:57:10.198 CEST - 5 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,1,1,READ,SELECT,TABLE,public.taudit,select * from taudit;,
2016-09-02 15:00:59.537 CEST - 9 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,2,1,WRITE,UPDATE,TABLE,public.taudit,update taudit set a = 4;,

Have fun with auditing …

 

The article Auditing in PostgreSQL appeared first on Blog dbi services.

Understanding Row Level Security on PostgreSQL

Fri, 2016-09-02 03:11

In this article we will talk about a nice PostgreSQL feature: Row Level Security. We are using EDB Postgres Advanced Server 9.5.
Suppose that I am a team manager and that employee bonuses are stored in a table Bonus. I want each employee to be able to see only the data related to him, and not the data of the others. How can I implement this? I can simply use Row Level Security.
Let's go on. Below is the structure of my table Bonus:

testdb=# \d Bonus
            Table "public.bonus"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 id     | numeric               | not null
 login   | character varying(20) |
 bonus  | numeric               |
Indexes:
    "bonus_pkey" PRIMARY KEY, btree (id)

Below is the data inside Bonus:

testdb=# table bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

Let’s create users with corresponding logins

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# create user "james@example.com" password 'secret';
CREATE ROLE
testdb=# create user "Moise@example.com" password 'secret';
CREATE ROLE
testdb=# create user "jennifer@example.com" password 'secret';
CREATE ROLE
testdb=# create user "Mikael@example.com" password 'secret';
CREATE ROLE

And let's grant them select on the table Bonus:

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# grant select on bonus to "james@example.com";
GRANT
testdb=# grant select on bonus to "Moise@example.com";
GRANT
testdb=# grant select on bonus to "Mikael@example.com";
GRANT
testdb=# grant select on bonus to "jennifer@example.com";
GRANT

We can verify that by default each user can see all the data (which is not what I want). For example with the user james@example.com:

testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

And with user jennifer@example.com

testdb=> select current_user;
     current_user
----------------------
 jennifer@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

To allow each user to see only his own data, I first have to create a policy on the table Bonus with an expression that filters the data.

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# create policy bonus_policy on bonus for all to public using (login=current_user);
CREATE POLICY
testdb=#
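
The catalog view pg_policies can be used to double-check what was just created; a small sketch:

# list the policies defined on the table bonus
psql -d testdb -c "select policyname, roles, cmd, qual from pg_policies where tablename='bonus';"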

After creating the policy, let's enable RLS on the table Bonus:

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# alter table bonus enable row level security;
ALTER TABLE
testdb=#

And now, bingo, we can verify that each user only sees his corresponding data:

testdb=> select current_user;
     current_user
----------------------
 jennifer@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  4 | jennifer@example.com |  3520
(1 row)
testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id |       login       | bonus
----+-------------------+-------
  1 | james@example.com |  2500
(1 row)

testdb=>

Now let’s drop the policy but let’s still keep table bonus with the RLS enabled. What happens?

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# drop policy bonus_policy on bonus;
DROP POLICY
testdb=#

Let’s Query  table bonus with user james@example.com for example

testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id | login | bonus
----+-------+-------
(0 rows)

testdb=>

But if we query the table with user enterprisedb, which is the table owner (and also a superuser), we see all the rows:

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

So we see that if RLS is enabled on a table and there is no policy defined, a default-deny policy is applied. Only the owner, superusers and users with the BYPASSRLS attribute will be able to see data in the table.
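Two related options are worth mentioning here. They are not part of the test above, so take this as a hedged sketch based on PostgreSQL 9.5 syntax (the role name reporting@example.com is just an example): FORCE ROW LEVEL SECURITY makes the policies apply to the table owner as well, and the BYPASSRLS attribute lets a dedicated role bypass them, for reporting purposes for example.

-- make the policies apply to the table owner too (superusers still bypass them)
alter table bonus force row level security;

-- a role allowed to bypass all row level security policies
create user "reporting@example.com" password 'secret' bypassrls;
grant select on bonus to "reporting@example.com";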

 

Cet article Understanding Row Level Security on PostgreSQL est apparu en premier sur Blog dbi services.

Rolling Invalidate Window Exceeded

Tue, 2016-08-30 16:05

Today I was doing a post-mortem analysis of a hard parse storm. One hypothesis was that rolling invalidation caused the invalidations, but the figures didn't match. I often reproduce the hypothesis to check the numbers and be sure I interpret them correctly, especially the timestamps in V$SQL_SHARED_CURSOR.REASON. And as it may help others (including myself in the future), I share the test case.

I create a table with one row (12c online statistics gathering, so num_rows is 1) and then insert one more row.

21:31:26 SQL> create table DEMO as select * from dual;
Table created.
21:31:26 SQL> insert into DEMO select * from dual;
1 row created.
21:31:26 SQL> commit;
Commit complete.

I run a query on the table. I don't care about the result, so let's make it something that will be useful later: the UTC time as the number of seconds since Jan 1st, 1970 (aka Epoch).

21:32:52 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585572
1472585572

The execution plan cardinality estimation is 1 row as this is what is in object statistics.

21:32:52 SQL> select * from table(dbms_xplan.display_cursor(null,null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

I gather statistics with all default attributes, so rolling invalidation occurs.

21:32:52 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.

At this time, the cursor has been parsed only once:

21:32:52 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
0 1 1 1 2016-08-30/21:32:51 2016-08-30/21:32:51 30-AUG-16 21:32:51

By default the invalidation window is 5 hours. I don't want to wait, so I set it to something shorter, 15 seconds:

21:32:54 SQL> alter system set "_optimizer_invalidation_period"=15;
System altered.

There will not be any invalidation until the next execution. To prove it I wait 20 seconds, run the query again and check the execution plan:

21:33:12 SQL> select (sysdate-date'1970-01-01')*24*3600 from DEMO;
 
(SYSDATE-DATE'1970-01-01')*24*3600
----------------------------------
1472592792
1472592792
 
21:33:12 SQL> select * from table(dbms_xplan.display_cursor('61x2h0y9zv0r6',null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

This is still the old cursor (child number 0) with old stats (num_rows=1)

However, from this point on, rolling invalidation occurs: a random timestamp is generated within the rolling window (15 seconds here, 5 hours in a default database).

I don’t know how to see this timestamp at that point (comments welcome) so I run the query several times within this 15 seconds window to see when it occurs:

21:33:16 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585596
1472585596
 
21:33:19 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585599
1472585599
 
21:33:22 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585602
1472585602
 
21:33:25 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585605
1472585605
 
21:33:28 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585608
1472585608
 
21:33:31 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585611
1472585611

After those runs, I check that I have a new execution plan with new estimation from new statistics (num_rows=2):

21:33:31 SQL> select * from table(dbms_xplan.display_cursor('61x2h0y9zv0r6',null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------
 
SQL_ID 61x2h0y9zv0r6, child number 1
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 2 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

Yes, I have a new child cursor, child number 1. A new cursor means that I have a reason in V$SQL_SHARED_CURSOR:

21:33:31 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(2)</reason><size>0x0</size><details>already_processed</details></Ch
ildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invali
date Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1472585604<
/invalidation_window><ksugctm>1472585607</ksugctm></ChildNode>
 
1

Child cursor number 0 has not been shared because of rolling invalidation. The invalidation_window number, 1472585604, is the timestamp set by rolling invalidation, set at first parse call after stats gathering, and defined within the rolling window that follows. After this one (1472585604 is 21:33:24 in my GMT+2 timezone) the cursor will not be shared and a new hard parse occurs. I think that ksugctm is the timestamp when the new cursor is created. 1472585607 is 21:33:27 here in Switzerland. You see the corresponding timestamps in V$SQL:

21:33:31 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
0 1 5 5 2016-08-30/21:32:51 2016-08-30/21:32:51 30-AUG-16 21:33:24
0 1 2 2 2016-08-30/21:32:51 2016-08-30/21:33:27 30-AUG-16 21:33:30

Ok. The important thing is that 'rolling invalidation' is not an invalidation of the cursor (V$SQL.INVALIDATIONS=0) but just non-sharing of the child.

If we gather statistics with immediate invalidation, it’s different:

21:33:31 SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.
 
21:33:34 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585614
1472585614
21:33:34 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(3)</reason><size>2x4</size><invalidation_window>1472585604</invalid
ation_window><ksugctm>1472585607</ksugctm></ChildNode><ChildNode><ChildNumber>0<
/ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><s
ize>0x0</size><details>already_processed</details></ChildNode>

I’ve only one child here, a new one, and I’m not sure the reason has a meaning.

21:33:34 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
1 2 1 1 2016-08-30/21:32:51 2016-08-30/21:33:33 30-AUG-16 21:33:33

This is an invalidation of the cursor. The old child cursors are removed and the proud parent is marked as invalidated 1 time.
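As a side note, if you prefer to control this behaviour per table rather than on each gather call, the NO_INVALIDATE statistics preference can be set once with dbms_stats. A minimal sketch on the DEMO table of this test case (the values are the standard ones for this preference):

-- always invalidate dependent cursors immediately after gathering stats on DEMO
exec dbms_stats.set_table_prefs(user,'DEMO','NO_INVALIDATE','FALSE');
-- back to the default rolling invalidation
exec dbms_stats.set_table_prefs(user,'DEMO','NO_INVALIDATE','DBMS_STATS.AUTO_INVALIDATE');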

 

Cet article Rolling Invalidate Window Exceeded est apparu en premier sur Blog dbi services.

The fastest way to get the Oracle sample schemas

Tue, 2016-08-30 00:52

Do you need the Oracle sample schemas for a quick test or demonstration? And, as always, you did not install the sample schemas when you set up your environment? Probably the fastest way to get them installed is to download them from github. Installation instructions are there as well. Have fun …

 

Cet article The fastest way to get the Oracle sample schemas est apparu en premier sur Blog dbi services.

Filenames in AWR reports

Mon, 2016-08-29 14:33

If you have read my latest blog posts, you know I’ve measured IOPS with SLOB to estimate ACFS overhead on a fast storage. This blog is about something I learned after wasting one hour on the result.

Here is how I did my tests:

  1. Create a SLOB database in ACFS
  2. Run SLOB PIO tests and tag the AWR report as ‘ACFS’
  3. Move datafile to +DATA
  4. Run SLOB PIO tests and tag the AWR report as ‘ASM’

Of course, I’ve scripted to run several tests varying the number of sessions, work unit, etc. while I was doing something more productive.

Once done, I had a set of AWR reports and the first task was to check that they were consistent. But they were not. The datafile in the 'File IO Stats' section did not match the tag I had put in the file name. First I suspected a bug in my script, with bad tagging or a failed datafile move. I had to read the alert.log to see that my tagging was good but that the filename in the AWR reports was wrong. I finally looked at the AWR views to understand why the filename was wrong and understood the problem:

SQL> desc DBA_HIST_DATAFILE;
Name Null? Type
----------------------------------------- -------- ----------------------------
DBID NOT NULL NUMBER
FILE# NOT NULL NUMBER
CREATION_CHANGE# NOT NULL NUMBER
FILENAME NOT NULL VARCHAR2(513)
TS# NOT NULL NUMBER
TSNAME VARCHAR2(30)
BLOCK_SIZE NUMBER
CON_DBID NUMBER
CON_ID NUMBER

There’s no SNAP_ID. AWR do not store the history of file names. We can suppose that it stores only the latest filename, but then my reports would be good as they were generated immediately after the snapshot. Or that the first name stays, but I had some reports with ‘+DATA’.

Then, I grepped for 'WRH$_DATAFILE' in ORACLE_HOME/rdbms/admin and came upon this:

dbmsawr.sql: -- This routine updates WRH$_DATAFILE rows for the datafile name and
dbmsawr.sql: -- WRH$_DATAFILE with the current information in database.

There is an update_datafile_info procedure here in the dbms_workload_repository and the comment says something like:
This change will be captured at max after some
-- (generally 50) snapshots. So the AWR and AWR report may be wrong with
-- respect to data file name or tablespace name for that duration.

I love to work with Oracle. All the information is there if you know where to look.

So if you want to rely on the filenames in an AWR report after a move, you should run this procedure before taking the report. And you should generate that report before the next datafile move.

Here is the example:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
6 rows selected.

My file is users01 and this is what is stored in AWR.

I rename it to users02 (thanks to 12c online move)

SQL> alter database move datafile '/u01/DEMO/oradata/DEMO14/users01.dbf' to '/u01/DEMO/oradata/DEMO14/users02.dbf';
Database altered.

but AWR is not aware of the change even after a snapshot:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users01.dbf

You have to wait for those 50 snapshots or run the update:

SQL> exec dbms_workload_repository.update_datafile_info;
PL/SQL procedure successfully completed.

SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users02.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users02.dbf

But as you can see, there is no history of the previous names.

Note that if you look at the table behind the view, there’s a SNAP_ID but it’s not part of the primary key. It is used by the purge procedures.
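To summarize, here is the minimal sequence to run after a datafile move so that the next report shows the correct names (assuming you generate the report with the standard awrrpt script):

-- refresh the datafile names stored in the workload repository
exec dbms_workload_repository.update_datafile_info;
-- take a snapshot and generate the report as usual, for example:
exec dbms_workload_repository.create_snapshot;
-- @?/rdbms/admin/awrrpt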

 

Cet article Filenames in AWR reports est apparu en premier sur Blog dbi services.

Letting GoldenGate automatically maintain the insert and update timestamps on the target

Mon, 2016-08-29 11:18

Today at one of our GoldenGate customers we were faced with the following requirement: for one of the tables on the target we needed two additional columns which hold the timestamps of when the row was inserted or updated. Just to be clear: it was not about getting the timestamps of the inserts or updates as they happened on the source, but on the target. In this post we'll look at how GoldenGate can be configured to do the requested work.

All the examples below work with the well-known scott/tiger schema. There is one extract capturing on the source and one replicat replaying the changes on the target (over sqlnet, no integrated mode here). This is the current status of the streams:

GGSCI (oelogg1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:02    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:01    


GGSCI (oelogg1) 2> view params EXTRSCO

EXTRACT extrsco
USERIDALIAS DB1 DOMAIN admin
GETUPDATEBEFORES
REPORT AT 23:40
DDL INCLUDE OPTYPE TRUNCATE OBJNAME ARBOR.*, &
    INCLUDE OPTYPE ALTER OBJNAME ARBOR.*
EXTTRAIL /u01/app/ogg/product/12.1.2.1.9/dirdat/es
TABLE SCOTT.*;

GGSCI (oelogg1) 3> view params REPLSCO

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.*, TARGET SCOTT.*;

Pretty basic, no unusual stuff here. The table we’ll use for the scope of this post is the “project” table which has the following contents in a fresh scott/tiger installation:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Of course the table looks the same on the target:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

To prove that the streams are really working, let's add an additional row to the source:

SQL> insert into project values (1005, 'my fun project 1', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

SQL> 

… and then check if the row indeed was replicated to the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1005 my fun project 1				      29-AUG-16    28-SEP-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Ok, looks fine, so back to the initial requirement. We need two additional columns on the target:

alter table SCOTT.PROJECT add create_dt timestamp with time zone;
alter table SCOTT.PROJECT add update_dt timestamp with time zone;
alter table SCOTT.PROJECT modify create_dt default to_date('01.01.2000','DD.MM.YYYY');
alter table SCOTT.PROJECT modify update_dt default to_date('01.01.2000','DD.MM.YYYY');

In our case we needed to set a default value as both columns are not allowed to contain NULL values. For the moment the content of the table on the target is:

SQL> col CREATE_DT for a20
SQL> col UPDATE_DT for a20
SQL> select * from project;

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT	     UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ -------------------- --------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

In real life, when the table to be extended holds millions of rows, the following will probably be too simple and you'll need to spend some time thinking about how to organize the updates. For the scope of this post this is fine:

update SCOTT.PROJECT set create_dt = to_date('01.01.2000','DD.MM.YYYY') where create_dt is null;
update SCOTT.PROJECT set update_dt = to_date('01.01.2000','DD.MM.YYYY') where update_dt is null;
commit;
alter table SCOTT.PROJECT modify create_dt not null;
alter table SCOTT.PROJECT modify update_dt not null;

From now on we have identical insert and update timestamps for all of the rows on the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01                01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

A final check on the source to be sure that the default values work:

SQL> insert into project values (1006, 'my fun project 2', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

All fine. Time to do the GoldenGate work. Obviously the first step is to stop the replicat:

GGSCI (oelogg1) 2> stop REPLSCO

Sending STOP request to REPLICAT REPLSCO ...
Request processed.

GGSCI (oelogg1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:09    
REPLICAT    STOPPED     REPLSCO     00:00:00      00:00:01    

Let's do the insert case first by adding the following COLMAP mapping to the replicat configuration:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Start again:

GGSCI (oelogg1) 6> start REPLSCO

Sending START request to MANAGER ...
REPLICAT REPLSCO starting


GGSCI (oelogg1) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:01    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:00    

Looks good from a configuration perspective. Time to start:

SQL> insert into project values (1007, 'my fun project 3', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target we should now see the exact insert date of the record instead of the default value of the column:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect, this works. Let's go on with the update case: all we need to do is add a mapping for the update_dt column and populate it:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)),
    update_dt = @IF (@VALONEOF (@GETENV ('GGHEADER', 'OPTYPE'), 'UPDATE', 'SQL COMPUPDATE', 'PK UPDATE' ), @DATENOW(), @COLSTAT 
(MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Then stop and start the replicat to bring the changes into effect. Let's create a new row just to see that this does not have any effect on the insert case:

SQL> insert into project values (1008, 'my fun project 4', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

What we should see on the target are two rows with an exact insert date but a default update date:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 my fun project 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect. What about the update itself?

SQL> update project set DESCRIPTION = upper(description) where PROJECTNO = 1008;

1 row updated.

SQL> commit;

Commit complete.

This should result in one exact update date for my fun project 4:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 MY FUN PROJECT 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 29-AUG-16 01.04.49.000000 PM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

8 rows selected.

Perfect, this works fine, too. Just to be sure that deletes still work, let's do a final test:

SQL> delete from project where PROJECTNO = 1008;

1 row deleted.

SQL> commit;

Commit complete.

SQL> 

The two additional columns should not prevent GoldenGate from being able to delete the rows, let's see:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

The fun project 4 is gone and all works as expected. Hope this helps ….

 

Cet article Letting GoldenGate automatically maintain the insert and update timestamps on the target est apparu en premier sur Blog dbi services.

ODA X6 database classes and shapes

Mon, 2016-08-29 08:47

On the Oracle Database Appliance, like on the Oracle public Cloud, you define the CPU capacity with ‘shapes’. On the latest ODA, the X6, we have a new interface to provision a database. Let’s look at the different shapes available.

ODACLI

You can provision a new database with the command line ODACLI which replaces the OAKCLI you used in ODA X5:

[root@odax6m ~]# odacli create-database
Usage: create-database [options] Options:
* --adminpassword, -m
Password for SYS,SYSTEM and PDB Admin
--cdb, -c
Create Container Database
Default: false
--dbclass, -cl
Database Class OLTP/DSS/IMDB
Default: OLTP
--dbconsole, -co
Enable Database Console
Default: false
--dbhomeid, -dh
Database Home ID (Use Existing DB Home)
* --dbname, -n
Database Name
--dbshape, -s
Database Shape{odb1,odb2,odb3 etc.}
Default: odb1
--dbstorage, -r
Database Storage {ACFS|ASM}
Default: ACFS
--dbtype, -y
Database Type {SI|RAC}
Default: SI
--help, -h
get help
Default: false
--instanceonly, -io
Create Instance Only (For Standby)
Default: false
--json, -j
json output
Default: false
--pdbadmin, -d
Pluggable Database Admin User
Default: pdbadmin
--pdbname, -p
Pluggable Database Name
Default: pdb1
--targetnode, -g
Node Number (for single-instance databases)
Default: 0
--version, -v
Database Version
Default: 12.1.0.2

ODA WebConsole

But the ODA X6 also has a small graphical interface, the web console.

CaptureODACreate

12c multitenant is the default, but you can choose.

Edition

You don’t have the choice when you create the database. You install the ODA in Standard or Enterprise and then you cannot change.

Versions

Two database versions are available: 11.2.0.4 and 12.1.0.2

CaptureODAVersions

You choose ODA to get a stable, certified and supported system, so it makes sense to run only supported versions with the latest PSU. If you have older versions, you must upgrade. Set optimizer_features_enable to the previous version if your application was not tuned for newer versions. Very often, when an ISV does not certify its software it's because of optimizer regressions. With proper testing and optimizer settings you should be able to upgrade any application without the risk of regression.

Templates

There are four DBCA templates available

  • Standard Edition or Enterprise Edition
  • Multitenant or non-CDB

The main differences between Enterprise Edition and Standard Edition are:

  • Options OMS, SPATIAL, CWMLITE and DV are installed in Enterprise Edition but not in Standard Edition
  • fast_start_mttr_target=300 in Enterprise Edition (feature not supported in Standard Edition)

The main differences between multitenant and non-CDB are:

  • Options JSERVER, IMEDIA, ORACLE_TEXT and APEX are installed in a CDB and not in a non-CDB
  • maxdatafiles=1024 in a CDB (100 in a non-CDB)

All templates are configured with filesystem_io_options=setall and use_large_pages=only

Following underscore parameters are set for all ODA templates:
*._datafile_write_errors_crash_instance=FALSE
*._disable_interface_checking=TRUE
*._enable_NUMA_support=FALSE
*._file_size_increase_increment=2143289344
*._gc_policy_time=0
*._gc_undo_affinity=FALSE

Note that both 12c and 11g are available in Enterprise Edition as well as Standard Edition (can even be Standard Edition One for 11g).
Of course, CDB is only for 12c.

Shapes

As in the Oracle Public Cloud, the CPU and Memory comes in shapes:

CaptureODACShape

The choice is the number of cores. The cores are hyperthreaded, which means that odb1 will have cpu_count=2, and it is set in the spfile. Note that at install time no resource manager plan is active, so instance caging will not occur except during the automatic maintenance window… My recommendation is to set a plan, as sketched below. In 12.1.0.2 Standard Edition the resource manager is implicitly activated.
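Activating a plan is a one-liner. A minimal sketch (DEFAULT_PLAN is just an example here; any plan you already use, or your own plan, will do):

-- cpu_count is set by the shape (odb1 => 2, odb2 => 4, ...)
show parameter cpu_count
-- an active resource manager plan is needed for instance caging to be effective
alter system set resource_manager_plan='DEFAULT_PLAN' scope=both;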

ODA X6-2 processors are Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz. Here is an example of the LIOPS you can reach when running on all the 40 threads of a X6-2M:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 39.9 545.6 0.00 25.43
DB CPU(s): 38.8 530.9 0.00 24.75
Logical read (blocks): 18,494,690.4 252,862,769.1

This is 18 million logical reads per second in this 2-socket (2 x 10c20t) appliance. Expect half of that on the X6-2S, which has one socket (1s10c20t).

The core factor for those processors is 0.5, which means that you can run an Enterprise Edition 'odb2' with a single processor license (public price 47,500$) and run 4 sessions on CPU, which means you can still do nearly 3 million logical reads per second, as here:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 13.80
DB CPU(s): 4.0 54.5 0.00 13.78
Logical read (blocks): 2,901,991.5 39,660,331.1

Those shapes are defined as:

CaptureODAShapes

Given the high LIOPS and the available memory, this entry-level appliance can be sufficient for most medium OLTP workloads.

Classes

Three classes are defined to derive the database parameters from the shape.

CaptureODAClasses

The SGA/PGA is calculated from the shape memory and a class factor:
OLTP gives 50% to SGA and 25% to PGA, which means that for example an odb4 has sga_target=16G and pga_aggregate_target=8G.
DSS gives 25% to SGA and 50% to PGA.

Note that OLTP is the only class available in Standard Edition. This does not mean that you can run only OLTP workloads. You can change the memory settings later (DSS usually needs more PGA than SGA; see the sketch below) and you have very good storage bandwidth and IOPS (NVMe access to SSD). This restriction is more an indication that most datawarehouses need features available only in Enterprise Edition, such as parallel query, partitioning and bitmap indexes.
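As an illustration, if a database provisioned with the OLTP class turns out to be reporting-heavy, you can rebalance the memory later. A hedged sketch for an odb4 shape, simply applying the DSS ratios above to the same 32GB of shape memory (adjust to your own sizing):

-- DSS-like split for an odb4 shape: 25% SGA, 50% PGA
alter system set sga_target=8G scope=spfile;
alter system set pga_aggregate_target=16G scope=spfile;
-- restart the instance so that the new sga_target is taken into account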

ASM or ACFS?

CaptureODAStorage

The template shapes above define a 100GB database. When you create a new database you have the choice to put it directly on +DATA and +RECO, or create a 100GB ADVM volume and ACFS filesystem that will be mounted under /u02/app/oracle/oradata. If you choose ACFS the FRA and REDO will be created under the /u03/app/oracle mount point which is a common ACFS.

The default is ACFS but you should think about it. For production, the best performance is with ASM. You have SSD to avoid disk latency. You have NVMe to reduce CPU latency. You don't want to add the ACFS layer. The maximum IOPS we observe is 10 times higher with datafiles directly on ASM:

@FranckPachot Don't know? We get 75k IOPS on ACFS and 700k IOPS on pure ASM. /cc @kevinclosson pic.twitter.com/TcLzUsOh0d

— Marco Mischke (@DBAMarco) August 29, 2016

For test databases, where you use snapshot features, especially with multitenant, you may choose ACFS. However, why not create the CDB in ASM and use ACFS for the PDBs you will want to snapshot? No need for that additional layer for the CDB files. Better to isolate the master and clones for a specific environment into its own ACFS.

And anyway, ODA X6-2S and X6-2M are very interesting for Standard Edition, and you cannot use snapshots nor any ACFS features for a database in Standard Edition.

Storage performance is truly amazing. At 100000 IOPS we have 99% of single block reads faster than 256 microseconds and 97% faster than 128 microseconds. At 800000 IOPS here are the figures:

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms 1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
db file parallel read 6.9M 4.0 74.1 21.7 .2 .0 .0 .0
db file sequential read 18.4M 83.1 16.3 .7 .0 .0 .0

So what?

It’s nice to have an easy GUI to provision a database on ODA. However there are some limits with it:

  • Be careful with the defaults. They may not fit what you want. Do you want your databases on ACFS?
  • Not all operations can be done through the GUI: you can create but not delete a database.

But there’s more. Performance is there. You can run application that need high performance.

Do you know any other solution which gives you a fully certified system installed in a few hours with databases ready? With very good hardware and managed software costs (NUP, Standard Edition in socket metric or Enterprise Edition capacity-on-demand by multiples of 1 processor license).
You need high availability? In Standard Edition you cannot use Data Guard, but you can buy Dbvisit standby, which gives you switchover and failover (Recovery Point Objective of a few minutes) to a second ODA or to a cloud service. Of course, you can also build or buy custom scripts to manage a manual standby. However, if you go for ODA you probably appreciate easy and robust software.

 

Cet article ODA X6 database classes and shapes est apparu en premier sur Blog dbi services.

ODA X6 command line and Web Console

Sat, 2016-08-27 14:33

The ODA X6 comes with a new command line (odacli) which replaces oakcli, and with a small web console which can display information about the appliance, the databases and the provisioning jobs. It also has the possibility to create a database, but that is for the next blog post. In this post I'll show which information is displayed once the ODA is installed.

The examples here come from ODA X6 version: 12.1.2.7.0

Appliance

The first screen is about the appliance information, the values that you defined when you installed the ODA:

CaptureODAWEB001

The same information can be displayed from command line with odacli describe-appliance:

[root@odax6m ~]# odacli describe-appliance
 
Appliance Information
----------------------------------------------------------------
ID: bb8f0eec-0f5c-4319-bade-75770848b923
Platform: OdaliteM
Data Disk Count: 2
CPU Core Count: 20
Created: Aug 26, 2016 2:51:26 PM
 
System Information
----------------------------------------------------------------
Name: odax6m
Domain Name: oracle.democenter.arrowcs.ch
Time Zone: Europe/Zurich
DB Edition: EE
DNS Servers: 172.22.1.9
NTP Servers: 172.22.1.9
 
Disk Group Information
----------------------------------------------------------------
DG Name Redundancy Percentage
------------------------- ------------------------- ------------
Data Normal 75
Reco Normal 25

An important thing to note here is that the choice between Standard Edition and Enterprise Edition is at appliance level: you cannot mix.
There’s also no mention of virtualization because ODA X6 2S and 2M are only bare-metal.

odacli list-networks

[root@odax6m ~]# odacli list-networks
 
ID Name NIC IP Address Subnet Mask Gateway
---------------------------------------- -------------------- ---------- ------------------ ------------------ ------------------
ffcf7d89-8074-4342-9f19-5e72ed695ce7 Private-network priv0 192.168.16.24 255.255.255.240
71a422bc-39d3-483c-b79b-ffe25129dfd2 Public-network btbond1 172.22.1.23 255.255.255.224 172.22.1.2

I’ve no Auto Service Request configured here:
[root@odax6m ~]# odacli describe-asr
Aug 27, 2016 8:56:33 PM com.oracle.oda.dcscli.commands.AsrCommand$getAsr run
SEVERE: No asr found

Databases

The second screen is about the databases:

CaptureODAWEB002

From command line you have information about the ORACLE_HOMEs and databases.

[root@odax6m ~]# odacli list-dbhomes
 
ID Name DB Version Home Location
---------------------------------------- -------------------- ---------- ---------------------------------------------
67419075-d1f9-4c2e-85b1-c74430e35120 OraDB12102_home1 12.1.0.2 /u01/app/oracle/product/12.1.0.2/dbhome_1
cf76a90b-f9e3-44b2-9b43-56111c1785e4 OraDB12102_home2 12.1.0.2 /u01/app/oracle/product/12.1.0.2/dbhome_2
adcbe8bf-f26f-4ab7-98a1-0abcd4412305 OraDB11204_home1 11.2.0.4 /u01/app/oracle/product/11.2.0.4/dbhome_1

[root@odax6m ~]# odacli list-databases
 
ID DB Name DB Version CDB Class Shape Storage Status
---------------------------------------- ---------- ---------- ---------- -------- -------- ---------- ----------
4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9 dbee1 12.1.0.2 false OLTP odb4 ACFS Configured
5564ea51-fc93-46f2-9188-c13c23caba94 odb1s 12.1.0.2 true OLTP odb1s ACFS Configured
26c2213d-5992-4b2b-94b0-2d0f4d0f9c2d dbee11g1 11.2.0.4 false OLTP odb2 ACFS Configured

You can get more detail about one database:

CaptureODAWEB0022

[root@odax6m ~]# odacli describe-dbhome -i 67419075-d1f9-4c2e-85b1-c74430e35120
 
DB Home details
----------------------------------------------------------------
ID: 67419075-d1f9-4c2e-85b1-c74430e35120
Name: OraDB12102_home1
Version: 12.1.0.2
Home Location: /u01/app/oracle/product/12.1.0.2/dbhome_1
Created: Aug 26, 2016 2:51:26 PM

[root@odax6m ~]# odacli describe-database -i 4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9
 
Database details
----------------------------------------------------------------
ID: 4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9
Description: dbee1
DB Name: dbee1
DB Version: 12.1.0.2
DBID: 2933563624
CDB: false
PDB Name:
PDB Admin User Name:
Class: OLTP
Shape: odb4
Storage: ACFS
CharacterSet: DbCharacterSet(characterSet=AL32UTF8, nlsCharacterset=AL16UTF16, dbTerritory=AMERICA, dbLanguage=AMERICAN)
Home ID: 67419075-d1f9-4c2e-85b1-c74430e35120
Console Enabled: false
Created: Aug 26, 2016 2:51:26 PM

Activity

Here is the log of what has been done on the ODA:

CaptureODAWEB003

[root@odax6m ~]# odacli list-jobs
 
ID Description Created Status
---------------------------------------- ------------------------------ ------------------------- ----------
1b99d278-6ab4-4ead-a5f8-f112c74a8f97 Provisioning service creation Aug 26, 2016 2:51:26 PM Success
f0ac9a2c-ba37-412c-8a81-9cc7cb301417 Database service creation with db name: odb1s Aug 26, 2016 4:03:39 PM Success
dec37817-feb7-46e5-b991-b23362268cb1 Database service creation with db name: dbee11g1 Aug 26, 2016 5:09:33 PM Success

And we have more info about the steps executed for one job:

CaptureODAWEB004

Same in command line:

[root@odax6m ~]# odacli describe-job -i 1b99d278-6ab4-4ead-a5f8-f112c74a8f97
 
Job details
----------------------------------------------------------------
ID: 1b99d278-6ab4-4ead-a5f8-f112c74a8f97
Description: Provisioning service creation
Status: Success
Created: 26.08.2016 14:51:26
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ------------------------- ------------------------- ----------
Setting up Network Aug 26, 2016 2:51:27 PM Aug 26, 2016 2:51:27 PM Success
Creating group :asmdba Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :asmoper Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :asmadmin Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :dba Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :dbaoper Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :oinstall Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating user :grid Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating user :oracle Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Setting up ssh equivalance Aug 26, 2016 2:51:39 PM Aug 26, 2016 2:51:39 PM Success
Gi Home creation Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:57:54 PM Success
Creating GI home directories Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:54:49 PM Success
Cloning Gi home Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:57:54 PM Success
GI stack creation Aug 26, 2016 2:57:54 PM Aug 26, 2016 3:08:44 PM Success
Configuring GI Aug 26, 2016 2:57:54 PM Aug 26, 2016 2:58:21 PM Success
Running GI root scripts Aug 26, 2016 2:58:21 PM Aug 26, 2016 3:05:03 PM Success
Running GI config assistants Aug 26, 2016 3:05:03 PM Aug 26, 2016 3:05:34 PM Success
Creating RECO disk group Aug 26, 2016 3:10:02 PM Aug 26, 2016 3:10:16 PM Success
Creating volume reco Aug 26, 2016 3:10:16 PM Aug 26, 2016 3:10:26 PM Success
Creating volume datdbee1 Aug 26, 2016 3:10:26 PM Aug 26, 2016 3:10:35 PM Success
Creating ACFS filesystem for RECO Aug 26, 2016 3:10:35 PM Aug 26, 2016 3:10:42 PM Success
Creating ACFS filesystem for DATA Aug 26, 2016 3:10:42 PM Aug 26, 2016 3:10:49 PM Success
Db Home creation Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:13:40 PM Success
Creating DbHome Directory Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:10:49 PM Success
Extract DB clones Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:12:29 PM Success
Enable DB options Aug 26, 2016 3:12:29 PM Aug 26, 2016 3:12:38 PM Success
Clone Db home Aug 26, 2016 3:12:38 PM Aug 26, 2016 3:13:37 PM Success
Run Root DB scripts Aug 26, 2016 3:13:37 PM Aug 26, 2016 3:13:37 PM Success
Database Service creation Aug 26, 2016 3:13:40 PM Aug 26, 2016 3:19:43 PM Success
Database Creation Aug 26, 2016 3:13:40 PM Aug 26, 2016 3:17:58 PM Success
Running DataPatch Aug 26, 2016 3:18:33 PM Aug 26, 2016 3:19:43 PM Success
create Users tablespace Aug 26, 2016 3:19:43 PM Aug 26, 2016 3:19:46 PM Success

Yes, this is the ODA installation. Half an hour to set up the OS, install Grid Infrastructure, set up the storage and create a first database.

Refresh

Note that the screens are not refreshed automatically, even when you navigate through them. Don't forget the 'Refresh' button (circular arrow) on the top-right.

JSON

You can also build your own interface from the JSON format:

[root@odax6m ~]# odacli list-jobs -j
[ {
"jobId" : "1b99d278-6ab4-4ead-a5f8-f112c74a8f97",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472215886601,
"description" : "Provisioning service creation"
}, {
"jobId" : "f0ac9a2c-ba37-412c-8a81-9cc7cb301417",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472220219016,
"description" : "Database service creation with db name: odb1s"
}, {
"jobId" : "dec37817-feb7-46e5-b991-b23362268cb1",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472224173747,
"description" : "Database service creation with db name: dbee11g1"
} ]

So what?

ODA is for easy and fast provisioning and the GUI that was missing is finally there. Of course, it looks very simple, but that's the goal of the appliance: quickly set up a standardized environment. The ODA X6-2S is cheap and has good performance for small databases. You may find equivalent hardware, but can you build and install stable hardware, OS and databases in 30 minutes?

 

Cet article ODA X6 command line and Web Console est apparu en premier sur Blog dbi services.

AUDIT_SYS_OPERATIONS and top-level operation

Tue, 2016-08-23 04:28

SYSDBA is powerful. You should use it only when you need to. And you should audit what is done when connected as SYSDBA. This is achieved by AUDIT_SYS_OPERATIONS. But do you have an idea how much audit log will be generated? Note that in 12c, AUDIT_SYS_OPERATIONS=true is the default.

The documentation says:
AUDIT_SYS_OPERATIONS enables or disables the auditing of top-level operations, which are SQL statements directly issued by users when connecting with the SYSASM, SYSBACKUP, SYSDBA, SYSDG, SYSKM, or SYSOPER privileges. (SQL statements run from within PL/SQL procedures or functions are not considered top-level.)

From the documentation, you can expect that only the statements that are issued by a SYSDBA user call are logged.
Not the recursive statements.
Not the scheduler jobs.

However, this is not what happens. Look at the following:

CaptureEventLog001

On Windows, the SYS audit goes to the Event log and it is full of those kinds of statements. Those are not statements that I've issued when connected as SYSDBA. They are recursive statements from the automatic statistics gathering job that runs every day. Do I want to audit that? Probably not. But it fills my system log.

On Unix, this is not better. Here is a database created a few days ago with nearly no activity since then:

[oracle@CDB adump]$ ls -alrt | head
total 2341264
drwxr-xr-x 5 oracle oinstall 4096 Jul 29 16:12 ..
-rw-r----- 1 oracle oinstall 2699 Aug 19 03:15 CDB_m000_10396_20160819031532925620143795.aud
-rw-r----- 1 oracle oinstall 1150 Aug 19 03:15 CDB_ora_10402_20160819031534799868143795.aud
-rw-r----- 1 oracle oinstall 1134 Aug 19 03:15 CDB_ora_10404_20160819031535178281143795.aud
-rw-r----- 1 oracle oinstall 1131 Aug 19 03:15 CDB_ora_10406_20160819031535558089143795.aud
-rw-r----- 1 oracle oinstall 1139 Aug 19 03:15 CDB_ora_10408_20160819031535936355143795.aud
-rw-r----- 1 oracle oinstall 1156 Aug 19 03:15 CDB_ora_10410_20160819031536306359143795.aud
-rw-r----- 1 oracle oinstall 1165 Aug 19 03:15 CDB_ora_10412_20160819031536679750143795.aud
[oracle@CDB adump]$ ls -alrt | tail
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9044_20160823083113966954143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9047_20160823083114497136143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9048_20160823083114567197143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9049_20160823083115041317143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9050_20160823083115311603143795.aud
-rw-r----- 1 oracle oinstall 54568 Aug 23 08:31 CDB_ora_9036_20160823083113459749143795.aud
-rw-r----- 1 oracle oinstall 1243 Aug 23 08:31 CDB_ora_9052_20160823083115922822143795.aud
-rw-r----- 1 oracle oinstall 54568 Aug 23 08:31 CDB_ora_9045_20160823083114267635143795.aud
drwxr-xr-x 5 oracle oinstall 64274432 Aug 23 08:31 .
[oracle@CDB adump]$ du -hs
8.5G .

Yes, that's about two million files and 8.5GB of text. What will you do with that?
That's the problem with auditing: it's activated by default to comply with security policies, but nothing purges or archives it by default. The size grows and you cannot do anything useful with it.

If I look at a subset of files:

[oracle@CDB adump]$ ls CDB_ora_*_20160823*aud | wc -l
8184

Only two of them are from user connections. The others have no terminal:

[oracle@CDB adump]$ grep -l "^CLIENT TERMINAL:\[0\]" CDB_ora_*_20160823*aud | wc -l
8182

Expected feature

It's not a bug. Oracle has decided to log everything in the SYS audit, because if it didn't, a SYSDBA connection could run a procedure or a job that looks harmless when only the top-level call is logged. That procedure may have been created by any user and so would not appear in the audit trail.

So what?

Whatever the default is, the audit trail is something you have to manage. On Windows, the Event Log has a maximum size. On Linux, it fills the adump directory (or syslog if you enabled it). So you should have housekeeping jobs. Alternatively, you can disable the statement audit so that only connections are logged; in that case you should control who has SYSDBA access.
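As an illustration, here are two hedged sketches for those two options. The 30-day retention is only an example, and the DBMS_AUDIT_MGMT calls should be checked against the documentation for your version before you rely on them:

-- Option 1: stop auditing SYS statements, keep the mandatory connection audit
-- (static parameter: needs SCOPE=SPFILE and an instance restart)
ALTER SYSTEM SET audit_sys_operations=FALSE SCOPE=SPFILE;

-- Option 2: keep the audit but purge OS audit files older than 30 days
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    last_archive_time => SYSTIMESTAMP - INTERVAL '30' DAY);
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    use_last_arch_timestamp => TRUE);
END;
/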

 

The post AUDIT_SYS_OPERATIONS and top-level operation appeared first on Blog dbi services.

Documentum Administrator UCF Troubleshooting

Fri, 2016-08-19 04:23

Maybe you have had some issues with UCF in DA like me. I had been facing this for a few days at a customer site, seemingly for no reason. The problem was that we use SSL with DA and the Unified Content Facilities (UCF) wasn't happy about it.
Thus, in this short blog I'll talk about troubleshooting UCF.

The error I got happened when trying to edit, view or create documents; I had a popup saying an error occurred with UCF.

First, we must know our enemy in order to fight it!

UCF stands for Unified Content Facilities. It's a Java applet made by EMC and used by WDK applications to manage and optimize content transfer between the application and your workstation. Thanks to UCF you can transfer large files with compression and resume transfers if the network drops some packets. The applet is downloaded to your workstation at runtime when you connect to a WDK application.
You can find the UCF configuration in your user folder like follow:
C:\Users\<USER>\Documentum\ucf

Refresh UCF Cache

Before going deeper in the debugging, maybe try to clear the ucf cache first and re-download the latest one from the server. In order to do so you’ll have to perform the following steps:
Clear your browser cache. If you use IE, go to Tools -> Delete Browsing History (or press CTRL+SHIFT+DEL).
Then check all the checkboxes and click Delete.

Capture1

Close the browser afterwards.

Now make sure that no browser is pointing to a WDK application, go to C:\Users\<USER>\Documentum and try deleting the ucf folder.
If you get an error telling you it is in use, open the Task Manager, search for javaw.exe processes and close them.
You should now be able to delete the ucf folder.

Also clear the cached ucf jar files by opening the java control panel. Go to Control Panel -> search for Java -> General Tab -> Temporary Internet Files -> Settings -> Delete Files

Now test again by opening Documentum Administrator and creating/editing a document. You shouldn’t have a popup error about UCF.

If you have reached this point in the blog, it means you didn't get rid of the problem; neither did I. At this point we have cleaned a few things up, but we still don't know what the real UCF error is; we only get a stack trace saying UCF failed. We can now enable UCF tracing to see if something more interesting is written in the logs.
You can enable tracing on both sides, the server and your workstation. The easiest is to begin with your workstation, so go back to the UCF folder C:\Users\<USER>\Documentum\ucf
Then navigate to <PCNAME>\shared\config and edit ucf.client.config.xml
Add the following options inside the <configuration> element:

<option name="tracing.enabled">
    <value>true</value>
</option>
<option name="debug.mode">
    <value>true</value>
</option>

Also edit the file: ucf.client.logging.properties by changing .level=WARNING to .level=ALL

Now reproduce the error and check what has been written in C:\Users\<USER>\Documentum\Logs

If you still can't see what the problem is, you can also activate tracing on the web server side by editing ../WEB-INF/classes/ucf.server.config.xml in the same way, but note that you need to restart the web server for the change to take effect.

The errors in the generated log should allow you to find the real cause of the ucf error. In my case it was the SSL handshake that was failing.

 

The post Documentum Administrator UCF Troubleshooting appeared first on Blog dbi services.

New Database Console Command (DBCC): CLONEDATABASE

Fri, 2016-08-19 02:52

You may not have seen it yet, but Service Pack 2 for SQL Server 2014 introduces a new DBCC command: CLONEDATABASE.

"DBCC CLONEDATABASE creates a schema- and statistics-only copy of a production database in order to investigate query performance issues." The MSDN source can be found here.

This command can only clone user databases.

Clonedatabase01
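For reference, the basic call looks like this (using the same AdventureWorks2014 database as in the rest of this post):

-- Clone the schema and statistics of AdventureWorks2014 into a new, empty, read-only database
DBCC CLONEDATABASE (AdventureWorks2014, AdventureWorks2014_clone);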

As you can see, it is not possible for the system databases; an error is raised:
Msg 12603, Level 16, State 1, Line 5
DBCC CLONEDATABASE does not support cloning system databases.

DBCC CLONEDATABASE creates a new database. It is an internal snapshot that copies the system metadata, all schemas and all statistics for all indexes. That is why the cloned database is empty and in read-only mode.
Clonedatabase02

The schemas are copied… OK, I will now run a "Schema Compare" with SQL Server Data Tools (SSDT):
Clonedatabase03

All schemas are present in the cloned database. The differences concern the full-text catalog and the data in the XML schema, such as the MileRangeType and its definition.
Clonedatabase04

I talked about this feature with my friend Michel and he asked me how it behaves with the data.
My first step is to look at the file structure after cloning:
Clonedatabase05
You can see that my cloned database has inherited the file structure of neither the source user database nor the model database.
I set up a new filegroup [FG_Employees] with a new file AdventureWorks2014_Employees.ndf
and I move my clustered index PK_Employee_BusinessEntityID to this new filegroup:

USE [master]
GO
ALTER DATABASE [AdventureWorks2014] ADD FILEGROUP [FG_Employees]
GO
ALTER DATABASE [AdventureWorks2014]
ADD FILE ( NAME = N'AdventureWorks2014_Employees',
FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.TEST\MSSQL\DATA\AdventureWorks2014_Employees.ndf'
, SIZE = 16384KB , FILEGROWTH = 8192KB ) TO FILEGROUP [FG_Employees]
GO
USE [AdventureWorks2014];
GO
CREATE UNIQUE CLUSTERED INDEX PK_Employee_BusinessEntityID
    ON HumanResources.Employee(BusinessEntityID)
WITH (DROP_EXISTING =  ON )
ON [FG_Employees]

And now I clone my database again:
Clonedatabase06
The file structure is copied, cool!
Can I switch the database to read/write and import data?

USE [master]
GO
ALTER DATABASE [AdventureWorks2014_clone] SET  READ_WRITE WITH NO_WAIT
GO
INSERT INTO [AdventureWorks2014_clone].[Sales].[Currency]
SELECT *  FROM [AdventureWorks2014].[Sales].[Currency]

No problem, the data is copied straight away.
Clonedatabase07

This new feature is really easy to use.
I hope that the CLONEDATABASE function will also be integrated in the next service packs of SQL Server 2012 and SQL Server 2016. For more information, see the MSDN link here.

 

The post New Database Console Command (DBCC): CLONEDATABASE appeared first on Blog dbi services.

Java reflection: Get classes and packages names from a root package within a custom URLClassloader

Thu, 2016-08-18 08:21

I had a case where I needed a custom classloader in order to find the list of classes and packages available across several software plugins, materialized by a set of jar files. These jar files are meant to be located in a particular place.

Some solutions found while googling suggested deploying external libraries, but I was not interested in managing another library in my software just for that purpose.
So the solution was to retrieve all jars from the specific classloader and loop over them to find the classes and packages I was looking for.

This blog post is an extension of a previous one, that only looked for classes within the JVM context Classloader:

http://blog.dbi-services.com/java-get-class-names-from-package-string-in-classpath/

 

This example shows how to create a new child classloader to work with, and how to extract package names in addition to the list of classes available under a root package name.
Below is a complete Java class solving this issue:

 

import java.io.File;
import java.io.FileFilter;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.TreeMap;
import java.util.TreeSet;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * @author Philippe Schweitzer, dbi services Switzerland
 *
 * It is a Class that demonstrates how to list Classes and Packages from a given
 * root package name.
 *
 * This class uses a specific Classloader in order to lookup into jar files
 * located at a dedicated place (and not in all JavaSE library ).
 *
 */
public class RessourceFinderExample {

    /* Root package name */
    private static final String packageName = "YOUR PACKAGE NAME";

    /* Paths list to jar files */
    private static final String[] classPaths = {"plugins", ".", "dist"};

    /**
     * Main method, it creates the Classloader and displays a consolidated
     * result.
     *
     * @param args no argument required
     */
    public static void main(String args[]) {

        URLClassLoader cl = new URLClassLoader(findJarURLsInClasspath(), Thread.currentThread().getContextClassLoader());

        System.out.println("");

        TreeSet<String> packages = RessourceFinderExample.getPackageFromPackage(packageName, cl);

        System.out.println("");
        System.out.println("-- CONSOLIDATED  RESULT --");
        System.out.println("");

        for (String packageString : packages) {
            System.out.println("   PACKAGE : " + packageString);
        }

        System.out.println("");
        System.out.println("--------------------------");
        System.out.println("");

        TreeMap<String, Class> classes = RessourceFinderExample.getClassesFromPackage(packageName, cl);

        System.out.println("");
        System.out.println("-- CONSOLIDATED  RESULT --");
        System.out.println("");

        for (Class c : classes.values()) {
            System.out.println("   CLASS   : " + packageName + "." + c.getCanonicalName());
        }

        System.out.println("");
        System.out.println("--------------------------");
        System.out.println("");
    }

    /**
     * Method that finds all jar files available in given dedicated classpath
     * places. It serves for an URLClassloader initialization.
     *
     * @return List of jar files URLs
     */
    private static URL[] findJarURLsInClasspath() {
        URL url;

        ArrayList<URL> jarURLs = new ArrayList();

        for (String path : classPaths) {

            File[] jars = new File(path).listFiles(new FileFilter() {
                public boolean accept(File pathname) {

                    return pathname.getName().toLowerCase().endsWith(".jar");
                }
            });

            if (jars != null) {
                for (int i = 0; i < jars.length; i++) {
                    try {
                        System.out.println("JAR Path: " + jars[i].getAbsolutePath());
                        url = jars[i].toURI().toURL();

                        jarURLs.add(url);

                    } catch (Exception e) {

                    }
                }
            }
        }

        URL[] urls = jarURLs.toArray(new URL[0]);
        return urls;
    }

    /**
     * Method that returns all jar files registered in the given URLClassloader
     * and which are present in dedicated classpath places.
     *
     * @return List of jar files URLs
     */
    private static URL[] getJarURLs(URLClassLoader cl) {
        URL[] result = cl.getURLs();
        ArrayList<URL> urls = new ArrayList();

        for (URL url : result) {

            try {
                Path jarPath = Paths.get(url.toURI());

                for (String classPathString : classPaths) {

                    Path classPath = Paths.get(classPathString).toAbsolutePath();

                    if (jarPath.startsWith(classPath)) {
                        urls.add(url);
                    }
                }

            } catch (URISyntaxException ex) {
            }
        }

        result = new URL[urls.size()];
        result = urls.toArray(result);

        return result;
    }

    /**
     * Method that returns all classes available underneath a given package
     * name.
     *
     * @return Set of Classes
     */
    private static TreeMap<String, Class> getClassesFromPackage(String pckgname, URLClassLoader cl) {
        TreeMap<String, Class> result = new TreeMap();

        for (URL jarURL : getJarURLs(cl)) {
            getClassesInSamePackageFromJar(result, pckgname, jarURL.getPath(), cl);
        }

        return result;
    }

    /**
     * Method that fills TreeMap with all classes available in a particular jar
     * file, underneath a given package name.
     *
     */
    private static void getClassesInSamePackageFromJar(TreeMap<String, Class> result, String packageName, String jarPath, URLClassLoader cl) {
        JarFile jarFile = null;
        try {

            System.out.println("");
            System.out.println("** IN JAR : " + jarPath);
            jarFile = new JarFile(jarPath);

            Enumeration<JarEntry> en = jarFile.entries();
            while (en.hasMoreElements()) {
                JarEntry entry = en.nextElement();
                String entryName = entry.getName();

                packageName = packageName.replace('.', '/');

                if (entryName != null && entryName.endsWith(".class") && entryName.startsWith(packageName) && !entryName.substring(packageName.length() + 1).contains("/")) {

                    try {
                        Class<?> entryClass = cl.loadClass(entryName.substring(0, entryName.length() - 6).replace('/', '.'));
                        if (entryClass != null) {
                            result.put(entryClass.getCanonicalName(), entryClass);
                            System.out.println("   CLASS   : " + packageName + "." + entryClass.getCanonicalName());
                        }
                    } catch (Throwable e) {
//                        System.err.println("Error instanciating: " + entryName + " " + e.toString());
                    }
                }
            }
        } catch (Exception e) {
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
            }
        }
    }

    /**
     * Method that returns all package names underneath a given root package
     * name.
     *
     * @return Set of Classes
     */
    private static TreeSet<String> getPackageFromPackage(String pckgname, URLClassLoader cl) {
        TreeSet<String> result = new TreeSet();

        for (URL jarURL : getJarURLs(cl)) {
            getPackageInPackageFromJar(result, pckgname, jarURL.getPath(), cl);
        }

        return result;
    }

    /**
     * Method that fills TreeMap with all package names in a particular jar
     * file, underneath a given root package name.
     *
     */
    private static void getPackageInPackageFromJar(TreeSet<String> result, String packageName, String jarPath, URLClassLoader cl) {
        JarFile jarFile = null;
        try {
            System.out.println("");
            System.out.println("** IN JAR : " + jarPath);

            jarFile = new JarFile(jarPath);

            Enumeration<JarEntry> en = jarFile.entries();
            while (en.hasMoreElements()) {
                JarEntry entry = en.nextElement();
                String entryName = entry.getName();

                packageName = packageName.replace('.', '/');

                if (entryName != null && entryName.endsWith("/") && entryName.startsWith(packageName + "/")) {

                    try {
                        String packageEntryName = entryName.substring(packageName.length() + 1);
                        packageEntryName = packageEntryName.substring(0, packageEntryName.indexOf("/"));

                        result.add(packageName.replace('/', '.') + "." + packageEntryName);

                        System.out.println("   PACKAGE : " + packageName.replace('/', '.') + "." + packageEntryName);
                    } catch (Throwable e) {
                    }
                }
            }
        } catch (Exception e) {
        } finally {
            try {
                if (jarFile != null) {
                    jarFile.close();
                }
            } catch (Exception e) {
            }
        }
    }
}

 

The post Java reflection: Get classes and packages names from a root package within a custom URLClassloader appeared first on Blog dbi services.

Generate Azure VM with Resource Manager deployment in PowerShell

Wed, 2016-08-17 09:04

Azure infrastructure can now be managed in a new way: Resource Manager. It brings many advantages compared to the classic deployment model.
The differences between the two deployment models are not covered in this blog because that is not its goal, and there is already a very good Microsoft article on the subject.

In this blog, we will generate a new Windows Azure Virtual Machine using Resource Manager deployment with PowerShell from On-Premise.

Remember, only RM objects can be listed with RM cmdlets, and only Classic objects with Classic cmdlets!

We can connect automatically to Azure Account with this command:
Select-AzureRmProfile -Path "C:\temp\AzureCert.json"

But to download this certificate, we need to connect manually to Azure Account at least once as follows:
Add-AzureRmAccount -SubscriptionId "<YourSubscriptionID>"

Enter your personal credentials and then run the following command:
Save-AzureRmProfile -Path "C:\temp\AzureCert.json"

If you want to navigate through your different attached Azure subscriptions, use the Get-AzureRmSubscription/Set-AzureRmSubscription cmdlets.

To obtain the different existing Azure Locations:
Get-AzureRmLocation | Select DisplayName

For the rest of this blog, we will work in this specific Azure location:
$location = "West Europe"

Hardware Profile

To list all the available resource groups:
Get-AzureRmResourceGroup | Select ResourceGroupName, Location

And select your specific Azure Resource Group:
$resourceGroupName = (Get-AzureRmResourceGroup).ResourceGroupName[0]

To choose the correct VM size, list all the available Azure VM sizes:
Get-AzureRmVMSize -location $location | Select Name, NumberOfCores, MemoryInMB
$vmSize = "Standard_A3"

And initialize the VM object to build. The VM name must be defined first; "Artanis" is assumed here, matching the computer name used later in this post:
$vmName = "Artanis"
$vm = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize

Image Profile

Now we want to select a specific image available from a publisher in Azure. In this case, we will choose the latest SQL Server 2016 Enterprise edition image.
The following steps describe how to find all the elements needed to select the correct available image.

Select all publishers from a specific Azure Location:
Get-AzureRmVMImagePublisher -Location $location | Select PublisherName
$publisher = "MicrosoftSQLServer"

Now select all offers from a specific Azure Publisher:
Get-AzureRmVMImageOffer -Location $location -PublisherName $publisher | Select Offer
$offer = "SQL2016-WS2012R2"

Then select all Skus from a specific Azure Offer:
Get-AzureRmVMImageSku -Location $location -PublisherName $publisher -Offer $offer | Select Skus
$skus = "Enterprise"

Finally choose your version:
(Get-AzureRmVMImage -Location $location -PublisherName $publisher -Offer $offer -Skus $skus).Version

To obtain the last version of the image:
$Version = (Get-AzureRmVMImage -Location $location -PublisherName $publisher -Offer $offer -Skus $skus | sort -Descending).version[0]

Add the image profile to the existing VM object:
$vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName $publisher -Offer $offer -Skus $skus -Version $version

OS Profile

According to the Image Profile, the Virtual Machine will be a Windows Server. So enter the specifications as follows:
$username = "dbi"
$password = ConvertTo-SecureString 'B3stPa$$w0rd3v3r' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($username, $password)
$vm = Set-AzureRmVMOperatingSystem -VM $VM -ComputerName "Artanis" -Windows -Credential $cred -ProvisionVMAgent

Disk Profile

As the VM will be created from an Azure Image, we need to specify a location and a name for the OS disk.

To list all your available Azure Storage Accounts, run this command:
Get-AzureRmStorageAccount | Select StorageAccountName, Location

To list the different containers available in your Azure Storage:
(Get-AzureRmStorageAccount | Get-AzureStorageContainer).CloudBlobContainer

And now add a disk profile to the existing VM:
$diskLocation = "https://<accountStorageName>.blob.core.windows.net/vhds/"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name "artanisVHDOS.vhd" -VhdUri ($diskLocation+"artanisVHDOS.vhd") -CreateOption FromImage

IP Profile

Here is an example of Network configuration:
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "CloudSubnet" -AddressPrefix "10.0.64.0/24"
$ain = New-AzureRmVirtualNetwork -Name "VirtualNetwork" -ResourceGroupName $resourceGroupName -Location $location -AddressPrefix "10.0.0.0/16" -Subnet $subnet
$pip = New-AzureRmPublicIpAddress -Name "AzurePublicIP" -ResourceGroupName $resourceGroupName -AllocationMethod Dynamic -Location $location
$nic = New-AzureRmNetworkInterface -Name "AzureNetInterface" -ResourceGroupName $resourceGroupName -Location $location -SubnetId $ain.Subnets[0].Id -PublicIpAddressId $pip.Id
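One step does not appear in the listing above: the network interface presumably still has to be attached to the VM configuration before deployment. A minimal sketch, assuming the same AzureRM.Compute cmdlets used throughout this post:

# Attach the network interface created above to the VM configuration
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id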

Conclusion: VM generation

Now we have entered all different profiles required to generate a new Windows Azure VM:
$azurevm = New-AzureRmVM -ResourceGroupName $resourceGroupName -Location $location -VM $vm

Use the Get-AzureRmVM cmdlet to list all available VMs.

To download the remote desktop file to connect to this new virtual machine, use the following command:
Get-AzureRmRemoteDesktopFile -ResourceGroupName $resourceGroupName -Name $vmName -LocalPath "C:\Temp\Artanis.rdp"

With all these commands, you can see how simple it is to automate the generation of a new virtual machine in Azure. Moreover, you have probably noticed that the construction of the VM object (with its different profiles) is similar to the Hyper-V structure.

I hope it helps you ;-)

 

The post Generate Azure VM with Resource Manager deployment in PowerShell appeared first on Blog dbi services.

Unplugged pluggable databases

Mon, 2016-08-15 08:35

When Oracle Multitenant came out in 12c, with pluggable databases, it was easy to draw them as USB sticks that you can plug into and unplug from your Container Database (CDB). I don't like this because it gives the false idea that an unplugged database is detached from the container.

Containers

In the Oracle documentation, in the Concepts book, the description of the multitenant architecture starts with an illustration of a CDB with two PDBs, whose text description starts like this:
This graphic depicts a CDB as a cylinder. Inside the CDB is a box labeled “Root (CDB$ROOT).” Plugged into the box is the Seed (CDB$SEED), hrpdb, and salespdb.

Let me list what I don’t like with this description:

  1. There are 5 containers here but 3 ways to draw them. The CDB itself is a container (CON_ID=0) and is a cylinder. The CDB$ROOT (CON_ID=1) is a container and is a box. The PDB$SEED and the user PDBs are cylinders with a USB plug.
  2. The CDB$ROOT does not look like a database (cylinder). However, physically it's the same: SYSTEM, SYSAUX, UNDO and TEMP tablespaces.
  3. The PDB$SEED (CON_ID=2) looks like it is pluggable (USB stick), but you never unplug the PDB$SEED.
  4. The USB plug is plugged inside the CDB$ROOT. That's wrong. All containers inside a CDB are at the same level and are 'plugged' into the CDB (CON_ID=0), not into the CDB$ROOT (CON_ID=1). They are contained by the CDB and, if they are plugged somewhere, it's in the CDB controlfile. The root is a root for metadata and object links, not for the whole PDBs.

If I had to show pluggable databases as USB sticks it would be like that:
CaptureUSBStick

Here CDB$ROOT is a container like the pluggable databases, except that you cannot unplug it. PDB$SEED is a pluggable database, but one that you never unplug. The CDB is a container but does not look like a database: it is the controlfile and the instance, and there are no datafiles directly attached to the CDB itself.

Unplugged

However, with this illustration, one could think that an unplugged pluggable database is detached from the CDB, which is wrong.

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB READ WRITE NO
SQL> alter pluggable database PDB close;
Pluggable database altered.
 
SQL> alter pluggable database PDB unplug into '/tmp/PDB.xml';
Pluggable database altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB MOUNTED

Here PDB is unplugged, but it still belongs to the CDB.

The CDB controlfile still addresses all the PDB datafiles:

RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 829 SYSTEM YES /u02/app/oracle/oradata/CDB/system01.dbf
3 1390 SYSAUX NO /u02/app/oracle/oradata/CDB/sysaux01.dbf
4 4771 UNDOTBS1 YES /u02/app/oracle/oradata/CDB/undotbs01.dbf
5 270 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/CDB/pdbseed/system01.dbf
6 2 USERS NO /u02/app/oracle/oradata/CDB/users01.dbf
7 540 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/CDB/pdbseed/sysaux01.dbf
12 280 PDB:SYSTEM NO /u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_system_ctcxz29m_.dbf
13 570 PDB:SYSAUX NO /u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_sysaux_ctcxz2bb_.dbf

The datafiles 12 and 13 are the ones from my unplugged PDB, still known and managed by the CDB.

Backup

An unplugged PDB has data, and data should have backups. Who is responsible for the unplugged PDB backups? It’s still the CDB:

RMAN> backup database;
 
...
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00013 name=/u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_sysaux_ctcxz2bb_.dbf
input datafile file number=00012 name=/u02/app/oracle/oradata/CDB/3969397A986337DCE053B6CDC40AC61C/datafile/o1_mf_system_ctcxz29m_.dbf
...

The unplugged PDB is not detached at all: the CDB still references its files and is responsible for them. This is very different from an unplugged USB stick, which has no link anymore with the host it was plugged into before.

Backup optimization

If you wonder whether it's a good idea to back up an unplugged PDB with each CDB backup, don't worry: RMAN knows that it is in a state where it cannot be modified (like read-only tablespaces) and does not back it up each time. Of course, you need to have BACKUP OPTIMIZATION configured to ON:

RMAN> backup database;
 
Starting backup at 15-AUG-16
using channel ORA_DISK_1
skipping datafile 12; already backed up 2 time(s)
skipping datafile 13; already backed up 2 time(s)
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
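If backup optimization is not already enabled on your CDB, it is a single standard RMAN configuration command:

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;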

Unplug and DROP

From what we have seen, an unplugged PDB is like a closed PDB. There's a difference though: an unplugged PDB is closed forever. You cannot open it again:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB MOUNTED
SQL> alter pluggable database PDB open;
alter pluggable database PDB open
*
ERROR at line 1:
ORA-65086: cannot open/close the pluggable database
 
SQL> host oerr ora 65086
65086, 00000, "cannot open/close the pluggable database"
// *Cause: The pluggable database has been unplugged.
// *Action: The pluggable database can only be dropped.

So, if you want to keep the USB stick analogy, unplugged does not mean 'physically unplugged' but rather something like what you do before removing a USB stick:
Screenshot 2016-08-15 10.44.46

In summary:

ALTER PLUGGABLE DATABASE … UNPLUG is like the logical ‘eject’ you do to be sure that what you will remove physically was closed forever.
ALTER PLUGGABLE DATABASE … DROP … KEEP DATAFILES is the physical removal from the CDB
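Put together, the whole sequence is simply the commands already used earlier in this post:

SQL> alter pluggable database PDB close;
SQL> alter pluggable database PDB unplug into '/tmp/PDB.xml';
SQL> drop pluggable database PDB keep datafiles;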

Because DROP is the only thing that can be done on an unplugged PDB, SQL Developer does both when you click on 'unplug':
CaptureUNPLUGDROP

The idea of dropping it just after the unplug is probably there to prevent the risk of dropping it 'including datafiles' after it has been plugged into another CDB, because then it would be lost.
However, keep in mind that once unplugged and dropped, nobody will back up the PDB datafiles until the PDB is plugged into a new CDB.

Read-Only

There's one last difference. A USB stick can be read-only. A freshly plugged PDB cannot be opened read-only right away. You may want to share a database from a read-only filesystem, as you can do with transportable tablespaces, but you can't:

SQL> drop pluggable database PDB keep datafiles;
Pluggable database dropped.
 
SQL> create pluggable database PDB using '/tmp/PDB.xml';
Pluggable database created.
 
SQL> alter pluggable database PDB open read only;
alter pluggable database PDB open read only
*
ERROR at line 1:
ORA-65085: cannot open pluggable database in read-only mode

The plugged PDB must be opened in read/write mode at least once:
SQL> host oerr ora 65085
65085, 00000, "cannot open pluggable database in read-only mode"
// *Cause: The pluggable database has been created and not opened.
// *Action: The pluggable database needs to be opened in read/write or
// restricted mode first.

And only then, it can be opened read-only:

SQL> alter pluggable database PDB open;
Pluggable database altered.
 
SQL> alter pluggable database PDB close;
Pluggable database altered.
 
SQL> alter pluggable database PDB open read only;
Pluggable database altered.

So what…

Here is the way I visualize pluggable databases:

CaptureCON_ID

Just a bunch of tablespaces, referenced by the CDB controlfile and grouped into self-contained containers. CDB$ROOT can be neither cloned nor unplugged. PDB$SEED can be cloned but not unplugged (but it's a PDB). Other PDBs can be cloned and unplugged.

I’ll talk about multitenant at Oracle Open World, DOAG Conference and UKOUG TECH16.
There’s also a book coming, probably early 2017 (depends on 12.2 availability)

 

The post Unplugged pluggable databases appeared first on Blog dbi services.

Windows: RMAN backups to a shared disk

Mon, 2016-08-08 11:09

In this blog, I will explain how to write RMAN backups to a shared disk belonging to a domain.

Ensuring data safety is one of the administrator's main tasks:

  • Protecting the sensitive files of the database:
    • Control file
    • Redo log files
  • Putting in place a backup/recovery strategy:
    • Adapted to the company's constraints
    • Tested and documented.

To learn more about the various backup and restore techniques, take a look at our Workshop Oracle Backup Recovery page.

Many of you certainly use Windows servers to run Oracle databases; however, administering them on Windows is not always as straightforward as on Linux.
That is why I propose a backup solution that writes your backups to a shared disk or a storage server, from which the backups are copied daily to disk or tape.

Here are the steps to follow:

  • Check the read/write permissions on the shared disk
  • Configure the Oracle service and the listener in the services.msc tool with the service account
  • Make sure the service account password never expires and that the account is never locked or deleted
  • Restart the services (Oracle and listener)
  • Test the backups with RMAN

Open services.msc and change the logon settings of the OracleService_[instance_name] service as well as the listener service, using the service account that runs your databases.

Right-click, select Properties, go to the Log On tab and select "This account".
Then click Browse, type the name of the service account and finally click "Check Names" to find the user in Active Directory.

Capture-13
Capture-8Capture-9

Restart the Oracle and listener services.
WARNING: do not restart the services if the database is in production!
Capture-15

Test the RMAN backups on the shared drive:
Capture-17
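For a quick test, here is a hedged sketch; the UNC path \\backupsrv\orabackup is only an illustration, so use a share that the service account can actually write to:

RMAN> backup database format '\\backupsrv\orabackup\%d_%U.bkp';
RMAN> list backup summary;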

Of course, it is preferable to script the backups via the Task Scheduler so that they run automatically. I will cover that next step in a second blog post.

 

The post Windows: RMAN backups to a shared disk appeared first on Blog dbi services.
