# Yann Neuhaus

dbi services technical blog

### A free persistent Google Cloud service with Oracle XE

Mon, 2018-02-26 01:40

In a previous post I listed several free online services which run an Oracle XE, so that you can test your SQL easily. You may want to use Oracle XE further, with full access to the database and its host, and still from a web browser. You probably have a Google account. Then you also have a virtual machine on the Google Cloud (0.5 vCPU / 1.70 GB RAM, boostable to 1 vCPU / 3.75 GB) and 5 GB of persistent storage (kept as long as you have used it within the previous 120 days). Just try the Google Cloud Shell: https://console.cloud.google.com/cloudshell.
In this post, I explain how to install Oracle XE there.

First, you need to download Oracle XE. You do that on your laptop and then upload it to the Google Cloud Shell. For legal reasons there is no automated way to download it with wget, because you have to manually accept the OTN License Terms: http://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html and choose 'Oracle Database Express Edition 11g Release 2 for Linux x64'.

You can try to upload it to the Cloud Shell directly (menu on the top right, 'Upload file') but I had problems with the size of the file, so I split it into two files:
 split -b 150M oracle-xe-11.2.0-1.0.x86_64.rpm.zip 
You should have 'split' available even on Windows (in the Ubuntu bash shell), but any tool that can split files will do; 7-Zip, for example.
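Since split produces byte-exact slices (xaa, xab, …), the cat on the other side restores the original file bit for bit, and a checksum on both ends makes that verifiable. A sketch with a small dummy file (file names and sizes are illustrative):

```shell
# Create a dummy "large" file and record its checksum
dd if=/dev/urandom of=bigfile.bin bs=1M count=4 2>/dev/null
before=$(md5sum bigfile.bin | awk '{print $1}')

# Split into 1 MB chunks named xaa, xab, ... (as with the 150M chunks above)
split -b 1M bigfile.bin

# Reassemble on the target side and compare checksums
cat xa* > reassembled.bin
after=$(md5sum reassembled.bin | awk '{print $1}')
[ "$before" = "$after" ] && echo "checksums match"
```

The same check works after the upload: run `md5sum` on the laptop before splitting and in the Cloud Shell after concatenating.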

Now on the Google Cloud shell, concatenate the files back to the .zip:
franck_pachot@cloudshell:~$ cat xa* > oracle-xe-11.2.0-1.0.x86_64.rpm.zip

Unzip it:

franck_pachot@cloudshell:~$ unzip oracle-xe-11.2.0-1.0.x86_64.rpm.zip
Archive:  ora.zip
   creating: Disk1/
   creating: Disk1/upgrade/
  inflating: Disk1/upgrade/gen_inst.sql
   creating: Disk1/response/
  inflating: Disk1/response/xe.rsp
  inflating: Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm

This .zip contains an .rpm, but the Google Cloud Shell runs Debian. In addition, I'll not follow the standard installation of Oracle XE, because only my $HOME filesystem is persistent and I want everything there. I need rpm2cpio to extract files from the .rpm, and libaio1 to install Oracle:

franck_pachot@cloudshell:~$ sudo apt-get -y install rpm2cpio libaio1

Here is the extraction:
franck_pachot@cloudshell:~$ rpm2cpio Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm | cpio -idmv

This extracted u01, etc and usr into my $HOME directory, and I'll leave the Oracle Home there.
I can remove the intermediate files:
franck_pachot@cloudshell:~$ rm -rf xa? oracle-xe-11.2.0-1.0.x86_64.rpm.zip Disk1

The Oracle XE deployment contains a 'createdb.sh' which will create the XE database. You don't have dbca here, you don't have templates: Oracle XE is built to be small. Just set ORACLE_HOME, PATH and LD_LIBRARY_PATH, and run createdb.sh:

export ORACLE_HOME=$HOME/u01/app/oracle/product/11.2.0/xe
echo "$PATH" | grep "$ORACLE_HOME" || export PATH=$PATH:$ORACLE_HOME/bin
echo "$LD_LIBRARY_PATH" | grep "$ORACLE_HOME" || export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
createdb.sh
This takes time: create database, catalog, catproc… and then you have your database.

The listener is not started. We need to create the directory for its log, and to define a listener.ora that listens on the default port:
mkdir -p ./u01/app/oracle/product/11.2.0/xe/network/log
echo "LISTENER=(DESCRIPTION_LIST=(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))))" > ./u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
lsnrctl start
export ORACLE_SID=XE
sqlplus sys/oracle as sysdba <<<'alter system register;'

You should see the XE service registered here:
franck_pachot@cloudshell:~$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 25-FEB-2018 23:01:40
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=0.0.0.0)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                25-FEB-2018 23:00:01
Uptime                    0 days 0 hr. 1 min. 38 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /home/frank_pachot/u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File         /home/frank_pachot/u01/app/oracle/product/11.2.0/xe/network/log/listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

Note that you cannot access your Google Cloud Shell from outside, so you can only connect locally. But having a listener and connecting through services is always a good idea. If your session is inactive, you may lose the connection and the VM may even be stopped. But your $HOME will still be there when you restart, so you can set up .profile to set the correct environment and start the listener and database if they are not already running:
cat >> ~/.profile<<'END'
export ORACLE_HOME=$HOME/u01/app/oracle/product/11.2.0/xe
echo "$PATH" | grep "$ORACLE_HOME" || export PATH=$PATH:$ORACLE_HOME/bin
echo "$LD_LIBRARY_PATH" | grep "$ORACLE_HOME" || export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
export ORACLE_SID=XE
ps -edf | grep [t]nslsnr || lsnrctl start
ps -edf | grep [s]mon_XE || sqlplus sys/oracle as sysdba <<< startup
END

I don't use /etc/oratab here because it is outside of the persistent area. We cannot connect '/ as sysdba' because we are not in the 'dba' group, and I don't think we can change this in Oracle XE. Of course, we could sudo to root and add the group, but that would not be persistent. However, there is no need for it: the password for SYS is "oracle" and you can create all the users you want. The database, being stored under $HOME, is persistent.
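Two shell idioms in this .profile deserve a word. The `echo "$X" | grep ... || export ...` pattern makes the appends idempotent, so re-sourcing the file on every login never duplicates the PATH entries; and the `[t]nslsnr` bracket trick prevents grep from matching its own command line in the ps output. A standalone illustration (directory names are made up):

```shell
# Idempotent append: only add the directory if it is not already in the list;
# running the same line twice leaves the variable unchanged
mypath=/usr/bin
adddir=/opt/fake/bin
echo "$mypath" | grep -q "$adddir" || mypath=$mypath:$adddir
echo "$mypath" | grep -q "$adddir" || mypath=$mypath:$adddir   # second run is a no-op
echo "$mypath"   # -> /usr/bin:/opt/fake/bin (appended only once)

# The [t]nslsnr pattern matches processes named "tnslsnr", but the grep
# command line itself shows "[t]nslsnr", so grep never matches itself
ps -ef | grep "[t]nslsnr" || echo "no listener running"
```

Without the brackets, `ps -ef | grep tnslsnr` would always "succeed" because it finds the grep process itself, and the listener would never be started.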

Here are my datafiles:
franck_pachot@cloudshell:~$ rman target sys/oracle

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Feb 25 21:28:00 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: XE (DBID=2850165315)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name XE

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    280      SYSTEM               ***     /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/system.dbf
2    190      SYSAUX               ***     /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/sysaux.dbf
3    235      UNDOTBS1             ***     /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/undotbs1.dbf
4    100      USERS                ***     /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/users.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 500         /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/temp.dbf

You find the alert.log under $ORACLE_HOME/log (all the database files are under $ORACLE_HOME/dbs):
franck_pachot@cloudshell:~$ tail $HOME/u01/app/oracle/product/11.2.0/xe/log/diag/rdbms/xe/XE/trace/alert_XE.log

Thread 1 advanced to log sequence 17 (LGWR switch)
  Current log# 2 seq# 17 mem# 0: /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/log2.dbf
Sun Feb 25 22:01:05 2018
Shared IO Pool defaulting to 44MB. Trying to get it from Buffer Cache for process 2875.
Sun Feb 25 22:09:38 2018
Thread 1 advanced to log sequence 18 (LGWR switch)
  Current log# 3 seq# 18 mem# 0: /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/log3.dbf
Sun Feb 25 22:09:43 2018
SERVER COMPONENT id=UTLRP_BGN: timestamp=2018-02-25 22:09:43
SERVER COMPONENT id=UTLRP_END: timestamp=2018-02-25 22:09:50

The limitations and features of the Google Cloud Shell are documented here: https://cloud.google.com/shell/docs/features. In addition to the command line (through 'tmux', which lets you split the screen into panes) you have a file editor in the browser. You can also install APEX, as you have browser access to port 8080 over https (icon on the top right, just before the menu).

The major limitation here comes from Oracle XE itself, which is an old version (11.2.0.2), but Oracle XE 18c, with the latest features, should come this year. Oracle XE 18c may also come with EM Express, and Google Cloud Shell gives access over https. I just hope that there will be a small image for Oracle XE 18c, as we have only 5 GB here. Maybe a Docker container will be easier then, with only the database in an external volume under $HOME. We will see, but in the meanwhile, there's already a lot we can do with Oracle XE. You can play with backup/recovery scenarios, and you will always be able to re-create the database by running createdb.sh again.

Added 26-FEB-2018: As I said that this can be a good lab to practice backup/recovery scenarios, you should run in archivelog mode:

sqlplus sys/oracle as sysdba <<END
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
END

Then, to avoid filling the recovery area, you can back up the database and archived logs frequently. You don't need to put the backups on the persistent storage, as this is a lab. I suggest putting the following at the end of the .profile:

rman target sys/oracle > last_backup.log <<<"set echo on; configure channel device type disk format '/var/tmp/%U'; configure backup optimization on; configure controlfile autobackup on; crosscheck backup; delete noprompt expired backup; backup database plus archivelog; delete noprompt obsolete;" &

This will run a backup to /var/tmp when you connect, deleting obsolete backups as well as expired ones (expired because backups under /var/tmp will be removed if the machine is reset after long inactivity).

The article A free persistent Google Cloud service with Oracle XE first appeared on Blog dbi services.

### PDB upgrade from 12c to 18c

Sun, 2018-02-25 14:20

Oracle 18c is out, in the Oracle Cloud, and the first thing I do with a new version is test how long it takes to upgrade a previous version PDB by unplug/plug.
A faster upgrade should be the benefit of having a slim dictionary, where the system objects are reduced to metadata links and data links. However, it looks like upgrading the PDB dictionary still takes the same time as upgrading CDB$ROOT.

The idea is to create a DBaaS service with a new CDB in 18.1 and plug in a PDB coming from 12.2.0.1. Actually, I'm saying 18.1 but that may be 18.0, as I'm now lost in those version numbers. The cloud service was created with version "18.0.0.0"; V$VERSION displays 18.1.0.0 for the release and 18.0.0.0 for the version:

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0

My understanding is that 18.0.0.0 is the version of the 18c dictionary, which will need a full upgrade only for 19c (19.0.0.0), and 18.1.0.0 is the release, which will be incremented by Release Updates later.

I have an unplugged PDB that I plug into the new CDB:

SQL> create pluggable database PDB0 using '/u01/app/temp/PDB0.pdb';

Pluggable database PDB0 created.

When I open it, I get a warning:

SQL> alter pluggable database pdb0 open;

ORA-24344: success with compilation error
24344. 00000 -  "success with compilation error"
*Cause:    A sql/plsql compilation error occurred.
*Action:   Return OCI_SUCCESS_WITH_INFO along with the error code

Pluggable database PDB0 altered.

Then I check PDB_PLUG_IN_VIOLATIONS:

SQL> select * from pdb_plug_in_violations;

TIME  NAME  CAUSE  TYPE  ERROR_NUMBER  LINE  MESSAGE  STATUS  ACTION  CON_ID
----  ----  -----  ----  ------------  ----  -------  ------  ------  ------
24-FEB-18 08.35.16.965295000 PM  PDB0  OPTION  ERROR  0  1   Database option APS mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.966343000 PM  PDB0  OPTION  ERROR  0  2   Database option CATALOG mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.966556000 PM  PDB0  OPTION  ERROR  0  3   Database option CATJAVA mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.966780000 PM  PDB0  OPTION  ERROR  0  4   Database option CATPROC mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.966940000 PM  PDB0  OPTION  ERROR  0  5   Database option CONTEXT mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.967096000 PM  PDB0  OPTION  ERROR  0  6   Database option DV mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.967250000 PM  PDB0  OPTION  ERROR  0  7   Database option JAVAVM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.967403000 PM  PDB0  OPTION  ERROR  0  8   Database option OLS mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.967602000 PM  PDB0  OPTION  ERROR  0  9   Database option ORDIM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.967785000 PM  PDB0  OPTION  ERROR  0  10  Database option OWM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.967939000 PM  PDB0  OPTION  ERROR  0  11  Database option SDO mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.968091000 PM  PDB0  OPTION  ERROR  0  12  Database option XDB mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.968246000 PM  PDB0  OPTION  ERROR  0  13  Database option XML mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.968398000 PM  PDB0  OPTION  ERROR  0  14  Database option XOQ mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0.  PENDING  Fix the database option in the PDB or the CDB  1
24-FEB-18 08.35.16.971138000 PM  PDB0  Parameter  WARNING  0  1  CDB parameter compatible mismatch: Previous '12.2.0' Current '18.0.0'  PENDING  Please check the parameter in the current CDB  1
24-FEB-18 08.35.17.115346000 PM  PDB0  VSN not match  ERROR  0  1  PDB's version does not match CDB's version: PDB's version 12.2.0.1.0. CDB's version 18.0.0.0.0.  PENDING  Either upgrade the PDB or reload the components in the PDB.  4

The messages are clear: all components have a 12.2.0.1 dictionary and must be upgraded to an 18.0.0.0 one.

The PDB is opened in MIGRATE mode, with only RESTRICTED sessions enabled:

SQL> show pdbs
SP2-0382: The SHOW PDBS command is not available.
SQL> pdbs

CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID  CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- --------  -----------
     2 PDB$SEED READ ONLY NO         NORMAL 1201448          2              11:42:25      NONE                                           942476327 3958500
     3 CDB1PDB  MOUNTED              NORMAL 942476327        2              19:58:55      NONE                                           942476327 3958500
     4 PDB0     MIGRATE   YES        NEW    941386968        3              20:34:50      NONE                                           942476327 3958500

Then, here is the upgrade for this newly plugged PDB0:
[oracle@DBaaS18c 18c]$ dbupgrade -c PDB0

Argument list for [/u01/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catctl.pl]
Run in                c = PDB0
Do not run in         C = 0
Input Directory       d = 0
Echo OFF              e = 1
Simulate              E = 0
Forced cleanup        F = 0
Log Id                i = 0
Child Process         I = 0
Log Dir               l = 0
Priority List Name    L = 0
Upgrade Mode active   M = 0
SQL Process Count     n = 0
SQL PDB Process Count N = 0
Open Mode Normal      o = 0
Start Phase           p = 0
End Phase             P = 0
Reverse Order         r = 0
AutoUpgrade Resume    R = 0
Script                s = 0
Serial Run            S = 0
RO User Tablespaces   T = 0
Display Phases        y = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0

catctl.pl VERSION: [18.0.0.0.0]
           STATUS: [Production]
            BUILD: [RDBMS_18.1.0.0.0_LINUX.X64_180103.1]
...

The build number mentions 18.1, built on 03-JAN-2018. Look at the summary to see the time it takes:

Oracle Database Release 18 Post-Upgrade Status Tool    02-24-2018 21:36:5
[PDB0]

Component                          Current      Full       Elapsed Time
Name                               Status       Version    HH:MM:SS

Oracle Server                      UPGRADED     18.1.0.0.0 00:13:37
JServer JAVA Virtual Machine       UPGRADED     18.1.0.0.0 00:00:51
Oracle XDK                         UPGRADED     18.1.0.0.0 00:00:21
Oracle Database Java Packages      UPGRADED     18.1.0.0.0 00:00:05
OLAP Analytic Workspace            UPGRADED     18.1.0.0.0 00:00:11
Oracle Label Security              UPGRADED     18.1.0.0.0 00:00:03
Oracle Database Vault              UPGRADED     18.1.0.0.0 00:00:34
Oracle Text                        UPGRADED     18.1.0.0.0 00:00:11
Oracle Workspace Manager           UPGRADED     18.1.0.0.0 00:00:18
Oracle Real Application Clusters   UPGRADED     18.1.0.0.0 00:00:00
Oracle XML Database                UPGRADED     18.1.0.0.0 00:00:49
Oracle Multimedia                  UPGRADED     18.1.0.0.0 00:01:03
Spatial                            UPGRADED     18.1.0.0.0 00:02:06
Oracle OLAP API                    UPGRADED     18.1.0.0.0 00:00:08
Upgrade Datapatch                                          00:00:05
Final Actions                                              00:00:09
Post Upgrade                                               00:00:02
Post Upgrade Datapatch                                     00:00:04

Total Upgrade Time: 00:20:47 [PDB0]
Database time zone version is 26. It is older than current release time zone version 31. Time zone upgrade is needed using the DBMS_DST package.

Grand Total Upgrade Time:    [0d:0h:21m:10s]

Here we see 18.1, but the important number is the time: 21 minutes… Once again, I see no improvement in the time it takes to upgrade the PDB dictionary. This was on a service with 2 OCPUs, and I've run a whole CDB upgrade with a similar shape: the time to upgrade CDB$ROOT is exactly the same – see the screenshot on the right.

Finally I open the PDB:
SQL> alter pluggable database pdb0 open;

Pluggable database PDB0 altered.

And check that the violations are resolved:
SQL> select * from pdb_plug_in_violations where status <> 'RESOLVED';

TIME  NAME  CAUSE  TYPE  ERROR_NUMBER  LINE  MESSAGE  STATUS  ACTION  CON_ID
----  ----  -----  ----  ------------  ----  -------  ------  ------  ------
24-FEB-18 09.46.25.302228000 PM  PDB0  OPTION  WARNING  0  15  Database option RAC mismatch: PDB installed version 18.0.0.0.0. CDB installed version NULL.  PENDING  Fix the database option in the PDB or the CDB  4
OK, I suppose I can ignore that, as this is not RAC.

I’ve not seen a lot of differences in the dbupgrade output. There’s a new summary of versions before and after upgrade, which was not there in 12c:

DOC>#####################################################
DOC>#####################################################
DOC>
DOC>
DOC>   DIAG OS Version: linux x86_64-linux-thread-multi 2.6.39-400.211.1.el6uek.x86_64
DOC>   DIAG Database Instance Name: CDB1
DOC>   DIAG Database Time Zone Version: 31
DOC>   DIAG Database Version Before Upgrade: 12.2.0.1.0
DOC>   DIAG Database Version After Upgrade: 18.1.0.0.0
DOC>#####################################################
DOC>#####################################################

However, be careful with this information. The OS Version is not correct:
[opc@DB ~]$ uname -a
Linux DB 4.1.12-112.14.10.el6uek.x86_64 #2 SMP Mon Jan 8 18:24:01 PST 2018 x86_64 x86_64 x86_64 GNU/Linux

It seems that this info comes from Config.pm, which records the OS version on which the perl binaries were built…

In summary, nothing changes here about the time it takes to upgrade a PDB when it is plugged into a new CDB. However, in 18c (and maybe only with later Release Updates) we should have a way to get this improved, by recording the upgrade of CDB$ROOT and replaying a trimmed version on the PDB dictionaries, in the same way as Application Containers do for application upgrades. We already see some signs of it with the '_enable_cdb_upgrade_capture' undocumented parameter and the PDB_UPGRADE_SYNC database property. This may even become automatic when the PDB is opened, with the PDB_AUTO_UPGRADE property. But that's for the future, and not yet documented.
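As a quick sanity check of the Config.pm explanation above: perl records the OS version it was built on in its %Config hash, which you can compare with the running kernel (assuming perl is installed on the box):

```shell
# Kernel version actually running
uname -r

# OS version baked into the perl binary at build time (the likely source of
# the DOC> DIAG OS Version line); guarded in case perl is not installed
if command -v perl >/dev/null; then
  perl -MConfig -e 'print $Config{osvers}, "\n"'
fi
```

On the machine above, the first line shows the 4.1.12 UEK kernel while the second would show the 2.6.39 build host, matching the mismatch in the DOC> output.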

For the moment, you still need to run a full catupgrd on each container, through catctl.pl called by the ‘dbupgrade’ script. Here on a 2 OCPU service, it takes 20 minutes.

The article PDB upgrade from 12c to 18c first appeared on Blog dbi services.

### Configuring huge pages for your PostgreSQL instance, Debian version

Sun, 2018-02-25 10:21

In the last post we had a look at how you can configure huge pages on RedHat and CentOS systems. For Debian and Debian-based systems the procedure is different, as Debian does not come with tuned. Let's see how it works there.

Checking the basic system configuration works the same in Debian as in RedHat based distributions by checking the /proc/meminfo file:

postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB


So nothing is configured for huge pages in the default configuration. Using the same procedure as in the last post, this is how you calculate the required huge pages for the PostgreSQL instance:

postgres@debianpg:/home/postgres/ [PG1] head -1 $PGDATA/postmaster.pid
6661
postgres@debianpg:/home/postgres/ [PG1] grep ^VmPeak /proc/6661/status
VmPeak:   393836 kB
postgres@debianpg:/home/postgres/ [PG1] grep ^Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB
postgres@debianpg:/home/postgres/ [PG1] echo "393836/2048" | bc
192

We'll need at least 192 pages. Let's add that to /etc/sysctl.conf:

postgres@debianpg:/home/postgres/ [PG1] sudo bash
root@debianpg:/home/postgres$ echo "vm.nr_hugepages=200" >> /etc/sysctl.conf
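One detail worth noting: bc does integer division and rounds down, so 393836/2048 prints 192 even though 192 pages (192 × 2048 kB = 393216 kB) fall just short of VmPeak; rounding up gives 193, and configuring 200 adds headroom on top of that. A small helper that rounds up (the function name is made up):

```shell
# Compute the number of huge pages needed for a given VmPeak, rounding up
# instead of truncating like "echo ... | bc" does
pages_needed() {
  vmpeak_kb=$1      # peak virtual memory of the postmaster, in kB
  pagesize_kb=$2    # huge page size from /proc/meminfo, in kB
  echo $(( (vmpeak_kb + pagesize_kb - 1) / pagesize_kb ))
}

pages_needed 393836 2048    # -> 193 (bc's truncating division printed 192)
```

The `(a + b - 1) / b` form is the usual integer-arithmetic idiom for a ceiling division.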


Notify the system about that change:

root@debianpg:/home/postgres$ sysctl -p
vm.nr_hugepages = 200

… and we have 200 huge pages available:

postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:     200
HugePages_Free:      200
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Again, let's force PostgreSQL to use huge pages and restart the instance:

postgres@debianpg:/home/postgres/ [PG1] psql -c "alter system set huge_pages=on" postgres
ALTER SYSTEM
postgres@debianpg:/home/postgres/ [PG1] pg_ctl -D $PGDATA restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....2018-02-25 17:13:59.398 CET [6918] LOG:  listening on IPv6 address "::1", port 5432
2018-02-25 17:13:59.398 CET [6918] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2018-02-25 17:13:59.403 CET [6918] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-02-25 17:13:59.421 CET [6919] LOG:  database system was shut down at 2018-02-25 17:13:59 CET
2018-02-25 17:13:59.427 CET [6918] LOG:  database system is ready to accept connections
done
server started


… and that’s it:

postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:     200
HugePages_Free:      193
HugePages_Rsvd:       64
HugePages_Surp:        0
Hugepagesize:       2048 kB


We can run the same test as in the last post to check that the number of huge pages in use increases when you put some load on the system:

postgres=# create table t1 as select * from generate_series(1,1000000);
SELECT 1000000
postgres=# select count(*) from t1;
count
---------
1000000
(1 row)

postgres=# \! cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:     200
HugePages_Free:      184
HugePages_Rsvd:       55
HugePages_Surp:        0
Hugepagesize:       2048 kB


By the way: this is on Debian 9 (not sure if it is the same for older versions):

postgres@debianpg:/home/postgres/ [PG1] cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"


The article Configuring huge pages for your PostgreSQL instance, Debian version first appeared on Blog dbi services.

### Configuring huge pages for your PostgreSQL instance, RedHat/CentOS version

Sun, 2018-02-25 04:29

Almost every PostgreSQL instance I get in touch with is not configured to use huge pages, which is quite a surprise, as they can give you a performance boost. Actually, it is not the PostgreSQL instance you need to configure, but the operating system that provides them. PostgreSQL will use huge pages by default when they are configured, and will fall back to normal pages otherwise. The parameter which controls this in PostgreSQL is huge_pages, which defaults to "try", leading to the behavior just described: try to get them, otherwise use normal pages. Let's see how you can do that on RedHat and CentOS. I'll write another post shortly about how you do that for Debian-based distributions.

What you need to know is that RedHat as well as CentOS come with tuned profiles by default. This means kernel parameters and other settings are managed dynamically through profiles, and no longer by adjusting /etc/sysctl (although that works as well). When you are in a virtualized environment (VirtualBox in my case) you will probably see something like this:

postgres@pgbox:/home/postgres/ [PG10] tuned-adm active
Current active profile: virtual-guest


The virtual-guest profile is maybe not the best choice for a database server, as it comes with these settings (especially vm.dirty_ratio and vm.swappiness):

postgres@pgbox:/home/postgres/ [PG10] cat /usr/lib/tuned/virtual-guest/tuned.conf | egrep -v "^$|^#"
[main]
summary=Optimize for running inside a virtual guest
include=throughput-performance
[sysctl]
vm.dirty_ratio = 30
vm.swappiness = 30

What we do at dbi services is provide our own profile, which adjusts the settings to be better suited for a database server:

postgres@pgbox:/home/postgres/ [PG10] cat /etc/tuned/dbi-postgres/tuned.conf | egrep -v "^$|^#"
[main]
summary=dbi services tuned profile for PostgreSQL servers
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[disk]
[sysctl]
vm.overcommit_memory=2
vm.swappiness=0
vm.dirty_ratio=2
vm.dirty_background_ratio=1


What has all this to do with huge pages, you might think. Well, tuning profiles can also be used to configure them, and for us this is the preferred method because we can do it all in one file. But before we do that, let's look at the PostgreSQL instance:

postgres=# select version();
version
----------------------------------------------------------------------------------------------------------------------------
PostgreSQL 10.0 build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

postgres=# show huge_pages;
huge_pages
------------
try
(1 row)


As said at the beginning of this post, the default behavior of PostgreSQL is to use them if available. The question now is: how can you check if huge pages are configured at the operating system level? The answer is in the virtual /proc/meminfo file:

postgres=# \! cat /proc/meminfo | grep -i huge
AnonHugePages:      6144 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB


All the "HugePages" statistics report zero, so this system definitely is not configured to provide huge pages to PostgreSQL. AnonHugePages is for Transparent Huge Pages, and it is a common recommendation to disable those for database servers. So we have two tasks to complete:

• Disable transparent huge pages
• Configure the system to provide enough huge pages for our PostgreSQL instance

For disabling transparent huge pages we just need to add the following lines to our tuning profile:

postgres@pgbox:/home/postgres/ [PG10] echo "[vm]
transparent_hugepages=never" | sudo tee -a /etc/tuned/dbi-postgres/tuned.conf
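A note on this step: `sudo echo "…" >> file` would not work here, because the `>>` redirection is performed by the calling shell, as the unprivileged user, before sudo even starts; only the echo itself runs as root. Piping through `sudo tee -a` avoids that, since tee opens the file with root privileges. The tee -a pattern in isolation (no sudo, file name is illustrative):

```shell
# tee -a appends its stdin to the named file; with "| sudo tee -a" the file
# is opened by tee (running as root), not by the calling shell
conf=tuned.conf.example          # illustrative file name
echo "[vm]
transparent_hugepages=never" | tee -a "$conf" > /dev/null

cat "$conf"   # -> [vm] / transparent_hugepages=never
```

The `> /dev/null` merely silences tee's copy of the input on stdout.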


When transparent huge pages are enabled you can see that in the following file:

postgres@pgbox:/home/postgres/ [PG10] cat /sys/kernel/mm/transparent_hugepage/enabled


Once we switch to our own profile:

postgres@pgbox:/home/postgres/ [PG10] sudo tuned-adm profile dbi-postgres
Current active profile: dbi-postgres



… you’ll notice that it is disabled from now on:

postgres@pgbox:/home/postgres/ [PG10] cat /sys/kernel/mm/transparent_hugepage/enabled
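The enabled file lists every supported mode and marks the active one in brackets, typically `always madvise [never]` once the profile above is active (the exact wording is the usual kernel format, not the output from this box). A tiny helper that extracts the active mode:

```shell
# Extract the bracketed (active) THP mode from the kernel's status line,
# e.g. "always madvise [never]" -> "never"
thp_mode() {
  sed -n 's/.*\[\(.*\)\].*/\1/p'
}

echo "always madvise [never]" | thp_mode    # -> never
echo "[always] madvise never" | thp_mode    # -> always
```

On a real system you would feed it `cat /sys/kernel/mm/transparent_hugepage/enabled` instead of the echo.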


Task one completed. To configure the operating system to provide huge pages for our PostgreSQL instance, we need to know how many huge pages it requires. How do we find out? The procedure is documented in the PostgreSQL documentation: basically, you start your instance and then check how many pages it would need. In my case, to get the PID of the postmaster process:

postgres@pgbox:/home/postgres/ [PG10] head -1 $PGDATA/postmaster.pid
1640

To get the VmPeak for that process:

postgres@pgbox:/home/postgres/ [PG10] grep ^VmPeak /proc/1640/status
VmPeak:   344340 kB

As the huge page size is 2MB on my system (which should be the default for most systems):

postgres@pgbox:/home/postgres/ [PG10] grep ^Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB

… we will require at least 344340/2048 huge pages for this PostgreSQL instance:

postgres@pgbox:/home/postgres/ [PG10] echo "344340/2048" | bc
168

All we need to do is add this to our tuning profile in the "[sysctl]" section:

postgres@pgbox:/home/postgres/ [PG10] grep nr_hugepages /etc/tuned/dbi-postgres/tuned.conf
vm.nr_hugepages=170

Re-apply the profile and we're done:

postgres@pgbox:/home/postgres/ [PG10] sudo tuned-adm profile dbi-postgres
postgres@pgbox:/home/postgres/ [PG10] cat /proc/meminfo | grep -i huge
AnonHugePages:      4096 kB
HugePages_Total:     170
HugePages_Free:      170
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

This confirms that we now have 170 huge pages, all of them free to consume. Now let's configure PostgreSQL to only start when it can get the amount of huge pages it requires, by switching the "huge_pages" parameter to "on", and restart the instance:

postgres@pgbox:/home/postgres/ [PG10] psql -c "alter system set huge_pages=on" postgres
ALTER SYSTEM
Time: 0.719 ms
postgres@pgbox:/home/postgres/ [PG10] pg_ctl -D $PGDATA restart -m fast
waiting for server to shut down.... done
server stopped
waiting for server to start....2018-02-25 11:21:29.107 CET - 1 - 3170 -  - @ LOG:  listening on IPv4 address "0.0.0.0", port 5441
2018-02-25 11:21:29.107 CET - 2 - 3170 -  - @ LOG:  listening on IPv6 address "::", port 5441
2018-02-25 11:21:29.110 CET - 3 - 3170 -  - @ LOG:  listening on Unix socket "/tmp/.s.PGSQL.5441"
2018-02-25 11:21:29.118 CET - 4 - 3170 -  - @ LOG:  redirecting log output to logging collector process
2018-02-25 11:21:29.118 CET - 5 - 3170 -  - @ HINT:  Future log output will appear in directory "pg_log".
done
server started


As the instance started all should be fine and we can confirm that by looking at the statistics in /proc/meminfo:

postgres@pgbox:/home/postgres/ [PG10] cat /proc/meminfo | grep -i huge
AnonHugePages:      4096 kB
HugePages_Total:     170
HugePages_Free:      162
HugePages_Rsvd:       64
HugePages_Surp:        0
Hugepagesize:       2048 kB


You might be surprised that not all (actually only 8) huge pages are used right now, but this will change as soon as you put some load on the system:

postgres=# create table t1 as select * from generate_series(1,1000000);
SELECT 1000000
postgres=# select count(*) from t1;
count
---------
1000000
(1 row)

postgres=# \! cat /proc/meminfo | grep -i huge
AnonHugePages:      4096 kB
HugePages_Total:     170
HugePages_Free:      153
HugePages_Rsvd:       55
HugePages_Surp:        0
Hugepagesize:       2048 kB
postgres=#
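If you want to watch consumption like this over time, the Total/Free/Rsvd lines can be condensed into a one-line summary. A small sketch that parses /proc/meminfo-style input from stdin (fed here with the sample values from the output above, so it runs anywhere; on a real box you would feed it /proc/meminfo directly):

```shell
#!/bin/sh
# Summarise huge page usage from /proc/meminfo-style input on stdin.
# On a real system: hp_summary < /proc/meminfo
hp_summary() {
  awk '/^HugePages_Total/ {t=$2}
       /^HugePages_Free/  {f=$2}
       /^HugePages_Rsvd/  {r=$2}
       END {printf "total=%d used=%d free=%d reserved=%d\n", t, t-f, f, r}'
}

# Sample values taken from the output above
hp_summary <<'EOF'
HugePages_Total:     170
HugePages_Free:      153
HugePages_Rsvd:       55
EOF
```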


Hope this helps …

Cet article Configuring huge pages for your PostgreSQL instance, RedHat/CentOS version est apparu en premier sur Blog dbi services.

### ODA Lite: What is this ‘odacli’ repository?

Fri, 2018-02-23 15:00

When ODA Lite was introduced with the ODA X6-2 S/M/L, and now with the ODA X7-2 S/M, a new ‘odacli’ was there to manage it. It will probably replace oakcli on ODA HA as well in the future. One big difference: it uses a repository to record the configuration and the operations. I don’t really like it, because when something fails you are blocked. Oracle Support can modify the repository, but they ask for access to the machine for that, and this is not easy in secured environments. Anyway, I really don’t understand why another repository had to be introduced. We already have the Oracle Inventory, the Grid Infrastructure resources, the Linux /etc files,… And now we have a closed repository which controls everything, accessible only through the very limited odacli commands, which are not the best example of automation code and error handling.

This post is about viewing what is inside. You may also want to update it in case you have a problem. I can’t tell you not to do it: this blog has readers who fixed critical issues by editing the binary data in system files, so changing some metadata in an embedded SQL database is not so dangerous. On the other hand, you take the risk of messing up everything. So better contact Oracle Support if you are not 142% sure about what you do. But when support is slow to answer, asks for remote access, or has no solution other than re-imaging the ODA, you may have to find alternatives. Just limit yourself to what you know you can do without risk.

So, this repository is stored in an embedded JavaDB, which is, as far as I understand it, an Apache Derby database recolored in red by Oracle. There’s a JDBC driver to access it.

You find the repository on your ODA in the following directory:
/opt/oracle/dcs/repo

You will probably copy the directory elsewhere to look at it. And you may do that with the DCS agent stopped.

SQuirreL SQL Client

I used SQuirreL SQL Client to read this database:

Run SQuirreL SQL Client, and add the derby.jar:

Connect to it. If you still have the repo at the original place, the URL is jdbc:derby:/opt/oracle/dcs/repo/node_0. There is no user and no password.

Then, in the ‘APP’ catalog, you can browse the tables:

SchemaSpy

You probably want to see the data model for those few tables. I did it on a 12.1.2.11.0 repository. I used SchemaSpy (http://schemaspy.org/) which is awesome because it uses the awesome Graphviz (https://www.graphviz.org/) for the visual representation. If you want to do the same, here is how I did it:

export PATH=$PATH:"/cygdrive/c/Program Files (x86)/Graphviz2.38/bin"
java -jar schemaspy*.jar -t derby -db ./repo/node_0 -dp ./derby.jar -o ODAviz -u "" -cat "APP"

Here are some of the schemas generated, if you want to have a look at what is stored in the ODA repository: ODAviz.pub.zip

The schema is very simple: only a few referential integrity constraints and very simple information.

One additional warning: modifications here are not supported by Oracle, and may even be forbidden, as the Capacity On Demand core count is also stored there.

Cet article ODA Lite: What is this ‘odacli’ repository? est apparu en premier sur Blog dbi services.

### New SHA-2 functions showing up in PostgreSQL 11

Fri, 2018-02-23 10:03

A recent commit announced new SHA-2 functions coming in PostgreSQL 11. Until now you could use the md5 function to generate hashes for test data or whatever you want. This commit adds more functions you can use for that. Let’s see how they work.

When you want to try what follows, make sure you are on the development version of PostgreSQL. You can find a little howto here.

For generating test data in PostgreSQL I often use things like this:

postgres@pgbox:/home/postgres/ [PGDEV] psql
psql (11devel)
Type "help" for help.

postgres=# \!
cat a.sql
drop table if exists t1;
create table t1 as select a.*, md5(a::varchar) from generate_series(1,1000000) a;

postgres=# \i a.sql
psql:a.sql:1: NOTICE:  table "t1" does not exist, skipping
DROP TABLE
SELECT 1000000
postgres=# select * from t1 limit 5;
 a |               md5
---+----------------------------------
 1 | c4ca4238a0b923820dcc509a6f75849b
 2 | c81e728d9d4c2f636f067f89cc14862c
 3 | eccbc87e4b5ce2fe28308fd9f2a7baf3
 4 | a87ff679a2f3e71d9181a67b7542122c
 5 | e4da3b7fbbce2345d7772b0674a318d5
(5 rows)

Now we have more functions to choose from:

postgres=# \df *sha*
                                        List of functions
   Schema   |               Name               | Result data type | Argument data types | Type
------------+----------------------------------+------------------+---------------------+------
 pg_catalog | pg_advisory_lock_shared          | void             | bigint              | func
 pg_catalog | pg_advisory_lock_shared          | void             | integer, integer    | func
 pg_catalog | pg_advisory_unlock_shared        | boolean          | bigint              | func
 pg_catalog | pg_advisory_unlock_shared        | boolean          | integer, integer    | func
 pg_catalog | pg_advisory_xact_lock_shared     | void             | bigint              | func
 pg_catalog | pg_advisory_xact_lock_shared     | void             | integer, integer    | func
 pg_catalog | pg_relation_is_publishable       | boolean          | regclass            | func
 pg_catalog | pg_stat_reset_shared             | void             | text                | func
 pg_catalog | pg_try_advisory_lock_shared      | boolean          | bigint              | func
 pg_catalog | pg_try_advisory_lock_shared      | boolean          | integer, integer    | func
 pg_catalog | pg_try_advisory_xact_lock_shared | boolean          | bigint              | func
 pg_catalog | pg_try_advisory_xact_lock_shared | boolean          | integer, integer    | func
 pg_catalog | sha224                           | bytea            | bytea               | func
 pg_catalog | sha256                           | bytea            | bytea               | func
 pg_catalog | sha384                           | bytea            | bytea               | func
 pg_catalog | sha512                           | bytea            | bytea               | func

Using the same test script as before but with the sha224 function:

postgres=# \!
cat a.sql
drop table if exists t1;
create table t1 as select a.*, sha224(a::text::bytea) from generate_series(1,1000000) a;

postgres=# \i a.sql
DROP TABLE
SELECT 1000000
postgres=# select * from t1 limit 5;
 a |                           sha224
---+------------------------------------------------------------
 1 | \xe25388fde8290dc286a6164fa2d97e551b53498dcbf7bc378eb1f178
 2 | \x58b2aaa0bfae7acc021b3260e941117b529b2e69de878fd7d45c61a9
 3 | \x4cfc3a1811fe40afa401b25ef7fa0379f1f7c1930a04f8755d678474
 4 | \x271f93f45e9b4067327ed5c8cd30a034730aaace4382803c3e1d6c2f
 5 | \xb51d18b551043c1f145f22dbde6f8531faeaf68c54ed9dd79ce24d17
(5 rows)

You can use the other functions in the same way, of course.

Cet article New SHA-2 functions showing up in PostgreSQL 11 est apparu en premier sur Blog dbi services.

### How to prevent RMAN archivelog backup from log switch

Fri, 2018-02-23 06:18

When RMAN backs up archive logs, it performs a log switch before backing them all up. In some cases this log switch should be suppressed. This can be done with the expression “until time ‘sysdate'”, which actually filters nothing. For example:

backup archivelog until time 'sysdate';

Cet article How to prevent RMAN archivelog backup from log switch est apparu en premier sur Blog dbi services.

### ODA – manually resetting a CPU alert using Fault Manager

Fri, 2018-02-23 02:16

While supporting ODAs of different generations and versions for several years, we have faced from time to time hardware alerts sent back by the ILOM. However, not all of them are related to real hardware issues; some are false positives. To get rid of them, the solution is to reset them manually.

When a hardware error occurs, the first reaction is to open a Service Request and to provide an ILOM snapshot to the support. This can easily be done using the Maintenance menu in the ILOM web interface. Based on support feedback, they may confirm that this alert is simply a false positive.
Another solution, if support’s answer is too slow, is simply to give it a try. However, this requires a server reboot to ensure the alert really disappeared.

Here is an example of a fault alarm about a CPU we faced:

Date/Time                Subsystems         Component
------------------------ ------------------ ------------
Tue Feb 13 14:00:26 2018 Power              PS1 (Power Supply 1)
    A loss of AC input power to a power supply has been detected.
    (Probability:100, UUID:84846f3c-036d-6941-eaca-de18c4c236bd, Resource:/SYS/PS1,
    Part Number:7333459, Serial Number:465824T+1734D30847,
    Reference Document:http://support.oracle.com/msg/SPX86A-8003-EL)
Thu Feb 15 14:27:04 2018 System             DBP (Disk Backplane)
    ILOM has detected that a PCIE link layer is inactive.
    (Probability:25, UUID:49015767-38b2-6372-9526-c2d2c3885a72, Resource:/SYS/DBP,
    Part Number:7341145, Serial Number:465136N+1739P2009T,
    Reference Document:http://support.oracle.com/msg/SPX86A-8009-3J)
Thu Feb 15 14:27:04 2018 System             MB (Motherboard)
    ILOM has detected that a PCIE link layer is inactive.
    (Probability:25, UUID:49015767-38b2-6372-9526-c2d2c3885a72, Resource:/SYS/MB,
    Part Number:7317636, Serial Number:465136N+1742P500BX,
    Reference Document:http://support.oracle.com/msg/SPX86A-8009-3J)
Thu Feb 15 14:27:04 2018 Processors         P1 (CPU 1)
    ILOM has detected that a PCIE link layer is inactive.
    (Probability:25, UUID:49015767-38b2-6372-9526-c2d2c3885a72, Resource:/SYS/MB/P1,
    Part Number:SR3AX, Serial Number:54-85FED07F672D3DD3,
    Reference Document:http://support.oracle.com/msg/SPX86A-8009-3J)

We can see that there are indeed 3 alerts for this issue (they share the same UUID).

In order to reset such an alert, you first need to log in on the server as root and access the IPMI tool:

[root@oda-dbi01 ~]# ipmitool -I open sunoem cli
Connected. Use ^D to exit.

Oracle(R) Integrated Lights Out Manager
Version 4.0.0.28 r121827
Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
Warning: password is set to factory default.
Warning: HTTPS certificate is set to factory default.
Hostname: oda-dbi01-ilom

Once in IPMI, you can list the Open Problems to get the same output as above using the following command:

-> ls /System/Open_Problems

In the list of the Open Problems we can find the UUID of the concerned component (see 3rd line):

Thu Feb 15 14:27:04 2018 Processors P1 (CPU 1)
    ILOM has detected that a PCIE link layer is inactive.
    (Probability:25, UUID:49015767-38b2-6372-9526-c2d2c3885a72, Resource:/SYS/MB/P1,
    Part Number:SR3AX, Serial Number:54-85FED07F672D3DD3,
    Reference Document:http://support.oracle.com/msg/SPX86A-8009-3J)

Now it is time to access the fault manager to reset all alerts related to this UUID:

-> cd SP/faultmgmt/shell/
/SP/faultmgmt/shell

-> start
Are you sure you want to start /SP/faultmgmt/shell (y/n)? y

The reset of the alert is done with the fmadm command:

faultmgmtsp> fmadm acquit 49015767-38b2-6372-9526-c2d2c3885a72

At this point the alerts are already removed from the Open Problems. However, to make sure the issue is really gone, we need to reboot the ODA and check the Open Problems afterwards.

Note that I presented here the way to check Open Problems using the IPMI command line, but the same output is also available on the ILOM web page.

Hope it helps!

Cet article ODA – manually resetting a CPU alert using Fault Manager est apparu en premier sur Blog dbi services.

### ODA 12.2.1.2.0: Some curious password management rules

Thu, 2018-02-22 09:33

While deploying an ODA based on the DCS stack (odacli), it is mandatory to provide a “master” password at appliance creation. The web GUI provides for that a small tooltip which describes the rules applied to password management. However, it looks like there is some flexibility in those rules. Let’s try to check this out with some basic tests.

First of all, here are the rules as provided by the ODA interface:

So basically it has to start with an alpha character and be at least 9 characters long.
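The two stated rules are easy to encode, which makes it simple to pre-check a candidate password before typing it into the GUI. This is a sketch of my reading of the tooltip, not the ODA’s actual validation code:

```shell
#!/bin/sh
# Check the two documented master-password rules:
#   1. must start with an alphabetic character
#   2. must be at least 9 characters long
check_master_password() {
  case "$1" in
    [a-zA-Z]*) : ;;   # rule 1 satisfied
    *) echo "reject: must start with an alpha character"; return 1 ;;
  esac
  if [ "${#1}" -lt 9 ]; then
    echo "reject: must be at least 9 characters"; return 1
  fi
  echo "accept"
}

check_master_password test        # starts with alpha, but far too short
check_master_password manager     # 7 characters -> reject
check_master_password welcome1    # 8 characters -> should be rejected
check_master_password welcome123  # passes both rules
```

By these rules, welcome1 should be rejected — which is exactly what the tests below put in question.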
My first reaction was that 9 characters is not too bad, even if 10 would be a better minimum. Unfortunately it does not request any additional complexity mixing uppercase, lowercase, numbers…

My second reaction, as for most IT guys, was to try not to respect these rules and see what happens.

I started really basically by using a “highly secured” password: test

Perfect, the ODA reacted as expected and told me I should read the rules once again. Next step: try something a bit more complicated: manager

…and don’t tell me you never used it in any Oracle environment. Fine, manager is still not 9 characters long (7 indeed) and the installer is still complaining. So far, everything is okay. Next step was to try a password respecting the rules: welcome123

Still a faultless reaction from the ODA! Then I had the strange idea to test the historical ODA password: welcome1

Oops! The password starts with an alpha character, fine, but if I’m right welcome1 is only 8 characters long. If you don’t believe me, try to count the dots on the picture above… and I swear I didn’t use Gimp to “adjust” it.

Finally, just to be sure, I tried another password of 8 characters: welcome2

Ah, looks better. This time the installer sees that the password is not long enough and shows a warning. …but would that mean that welcome1 is hard-coded somewhere??

No matter, let’s continue and run the appliance creation with welcome123. Once done, I tried to log in to my brand new ODA over SSH using my new master password: it doesn’t work! I tried multiple combinations: welcome123, welcome1, Welcome123 and much more. Unfortunately none of them worked.

At this point there are only 2 solutions to connect back to your ODA:

1. There is still a shell connected as root to the ODA, in which case the root password can easily be changed using passwd
2.
No session is open to the ODA anymore, in which case you have to open the remote console and reboot the ODA in Single User mode

As the master password should be set for the root, grid and oracle users, I tried the password for grid and oracle too: same thing there, the master password provided during the appliance creation hasn’t been set properly.

Hope it helps!

Cet article ODA 12.2.1.2.0: Some curious password management rules est apparu en premier sur Blog dbi services.

### Migrate Oracle Database(s) and ASM diskgroups from VMWARE to Oracle VM

Thu, 2018-02-22 06:45

This is a step-by-step demonstration of how to migrate ASM disk groups from one cluster to another. It may be used with or without virtualization, and may be combined with storage-layer snapshots for fast environment provisioning.

Step 01 – Shutdown source database(s) on VMWARE servers

Shut down all databases hosted in the targeted disk groups for which you want consistency. Then unmount the disk groups.

$ORACLE_HOME/bin/srvctl stop database -db cdb001

$ORACLE_HOME/bin/asmcmd umount FRA
$ORACLE_HOME/bin/asmcmd umount DATA

Step 02 – Re-route LUNs from the storage array to the new servers

Create a snapshot and make the snapshot LUNs visible to the Oracle Virtual Server (OVS), according to the third-party storage technology.

Step 03 – Add LUNs to DomUs (VMs)

Then, we refresh the storage layer from OVM Manager to present the LUNs to each OVS:

OVM> refresh storagearray name=STORAGE_ARRAY_01



Step 04 – Then, tell OVM Manager to add LUNs to the VMs in which we want our databases to be migrated

create VmDiskMapping slot=20 physicalDisk=sa01_clus01_asm_data01 name=sa01_clus01_asm_data01 on Vm name=rac001
create VmDiskMapping slot=21 physicalDisk=sa01_clus01_asm_data02 name=sa01_clus01_asm_data02 on Vm name=rac001
create VmDiskMapping slot=22 physicalDisk=sa01_clus01_asm_data03 name=sa01_clus01_asm_data03 on Vm name=rac001
create VmDiskMapping slot=23 physicalDisk=sa01_clus01_asm_data04 name=sa01_clus01_asm_data04 on Vm name=rac001
create VmDiskMapping slot=24 physicalDisk=sa01_clus01_asm_data05 name=sa01_clus01_asm_data05 on Vm name=rac001
create VmDiskMapping slot=25 physicalDisk=sa01_clus01_asm_data06 name=sa01_clus01_asm_data06 on Vm name=rac001
create VmDiskMapping slot=26 physicalDisk=sa01_clus01_asm_reco01 name=sa01_clus01_asm_reco01 on Vm name=rac001
create VmDiskMapping slot=27 physicalDisk=sa01_clus01_asm_reco02 name=sa01_clus01_asm_reco02 on Vm name=rac001

create VmDiskMapping slot=20 physicalDisk=sa01_clus01_asm_data01 name=sa01_clus01_asm_data01 on Vm name=rac002
create VmDiskMapping slot=21 physicalDisk=sa01_clus01_asm_data02 name=sa01_clus01_asm_data02 on Vm name=rac002
create VmDiskMapping slot=22 physicalDisk=sa01_clus01_asm_data03 name=sa01_clus01_asm_data03 on Vm name=rac002
create VmDiskMapping slot=23 physicalDisk=sa01_clus01_asm_data04 name=sa01_clus01_asm_data04 on Vm name=rac002
create VmDiskMapping slot=24 physicalDisk=sa01_clus01_asm_data05 name=sa01_clus01_asm_data05 on Vm name=rac002
create VmDiskMapping slot=25 physicalDisk=sa01_clus01_asm_data06 name=sa01_clus01_asm_data06 on Vm name=rac002
create VmDiskMapping slot=26 physicalDisk=sa01_clus01_asm_reco01 name=sa01_clus01_asm_reco01 on Vm name=rac002
create VmDiskMapping slot=27 physicalDisk=sa01_clus01_asm_reco02 name=sa01_clus01_asm_reco02 on Vm name=rac002
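Typing those sixteen near-identical lines by hand is error-prone; they can be generated instead. A sketch that prints the OVM CLI commands for both VMs (disk names and starting slot taken from the listing above; review the output before pasting it into the OVM CLI):

```shell
#!/bin/sh
# Emit the 'create VmDiskMapping' OVM CLI commands for every disk on every VM.
gen_mappings() {
  disks="sa01_clus01_asm_data01 sa01_clus01_asm_data02 sa01_clus01_asm_data03 \
sa01_clus01_asm_data04 sa01_clus01_asm_data05 sa01_clus01_asm_data06 \
sa01_clus01_asm_reco01 sa01_clus01_asm_reco02"
  for vm in rac001 rac002; do
    slot=20                       # first free slot, as in the listing above
    for d in $disks; do
      echo "create VmDiskMapping slot=$slot physicalDisk=$d name=$d on Vm name=$vm"
      slot=$((slot + 1))
    done
  done
}

gen_mappings
```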

At this stage we have all LUNs of our both disk groups for DATA and FRA available on both nodes of the cluster.

Step 05 – Migrate disks to AFD

We can rename the disk groups if required, or if a disk group with the same name already exists:

renamedg phase=both dgname=DATA newdgname=DATAMIG verbose=true asm_diskstring='/dev/xvdr1','/dev/xvds1','/dev/xvdt1','/dev/xvdu1','/dev/xvdv1','/dev/xvdw1'
renamedg phase=both dgname=FRA  newdgname=FRAMIG  verbose=true asm_diskstring='/dev/xvdx1','/dev/xvdy1'

Then we migrate disks into AFD configuration

$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdr1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvds1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdt1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdu1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdv1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label DATAMIG /dev/xvdw1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label FRAMIG  /dev/xvdx1 --migrate
$ORACLE_HOME/bin/asmcmd afd_label FRAMIG  /dev/xvdy1 --migrate
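The labelling commands follow an obvious pattern, so a small helper can print them per disk group instead of typing each one. A sketch (device names taken from the renamedg commands above; it only prints the commands, so you can review them first and then pipe the output to sh):

```shell
#!/bin/sh
# Print the asmcmd afd_label migration commands for a disk group and its devices.
label_cmds() {
  dg=$1; shift
  for dev in "$@"; do
    echo "\$ORACLE_HOME/bin/asmcmd afd_label $dg $dev --migrate"
  done
}

label_cmds DATAMIG /dev/xvdr1 /dev/xvds1 /dev/xvdt1 /dev/xvdu1 /dev/xvdv1 /dev/xvdw1
label_cmds FRAMIG  /dev/xvdx1 /dev/xvdy1
```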

Step 06 – Mount disk groups on the new cluster and add database(s) in the cluster

$ORACLE_HOME/bin/asmcmd mount DATAMIG
$ORACLE_HOME/bin/asmcmd mount FRAMIG

Then add database(s) to cluster (repeat for each database)

$ORACLE_HOME/bin/srvctl add database -db cdb001 \
  -oraclehome /u01/app/oracle/product/12.2.0/dbhome_1 \
  -dbtype RAC \
  -spfile +DATAMIG/CDB001/spfileCDB001.ora \
  -diskgroup DATAMIG,FRAMIG

Step 07 – Startup database

In that case, we renamed the disk groups, so we need to modify the file locations and some parameter values:

create pfile='/tmp/initcdb001.ora' from spfile='+DATAMIG/<spfile_path>';
-- modify controlfiles, recovery area and any other relevant parameters
create spfile='+DATAMIG/CDB001/spfileCDB001.ora' from pfile='/tmp/initcdb001.ora';
ALTER DATABASE RENAME FILE '+DATA/<datafile_paths>'  TO '+DATAMIG/<datafile_paths>';
ALTER DATABASE RENAME FILE '+DATA/<tempfile_paths>'  TO '+DATAMIG/<tempfile_paths>';
ALTER DATABASE RENAME FILE '+DATA/<onlinelog_paths>' TO '+DATAMIG/<onlinelog_paths>';
ALTER DATABASE RENAME FILE '+FRA/<onlinelog_paths>'  TO '+FRAMIG/<onlinelog_paths>';

Then start the database:

$ORACLE_HOME/bin/srvctl start database -db cdb001
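The RENAME FILE statements can be generated from the old file paths rather than typed by hand. A sketch of that generation step; the paths in the here-document are illustrative only (on a real system you would pull them from v$datafile, v$tempfile and v$logfile):

```shell
#!/bin/sh
# Rewrite +DATA/... and +FRA/... paths to the renamed MIG disk groups and emit
# the corresponding ALTER DATABASE RENAME FILE statements.
gen_renames() {
  while read -r old; do
    new=$(echo "$old" | sed -e 's/^+DATA\//+DATAMIG\//' -e 's/^+FRA\//+FRAMIG\//')
    echo "ALTER DATABASE RENAME FILE '$old' TO '$new';"
  done
}

# Illustrative paths only
gen_renames <<'EOF'
+DATA/CDB001/DATAFILE/system.262.967544917
+DATA/CDB001/DATAFILE/sysaux.263.967544923
+FRA/CDB001/ONLINELOG/group_1.257.967545271
EOF
```

The output can be spooled into a SQL script and run against the instance once it is mounted.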

This method can be used to easily migrate terabytes of data with almost no pain, reducing the downtime window as much as possible. For near-zero-downtime migration, just add a GoldenGate replication on top of that.

The method described here is also perfectly applicable to ASM snapshots, in order to duplicate huge volumes from one environment to another. This permits fast environment provisioning without the need to duplicate data over the network or to impact the storage layer with intensive I/Os.

I hope it may help and please do not hesitate to contact us if you have any questions or require further information.

Cet article Migrate Oracle Database(s) and ASM diskgroups from VMWARE to Oracle VM est apparu en premier sur Blog dbi services.

### ODA X7-2S/M 12.2.1.2.0: update-repository fails after re-image

Wed, 2018-02-21 00:54

While playing with a brand new ODA X7-2M, I faced a strange behaviour after re-imaging the ODA with the latest version, 12.2.1.2.0. Basically, after re-imaging and running configure-firstnet, the next step is to import the GI clone into the repository before creating the appliance. Unfortunately this command fails with the error DCS-10001:Internal error encountered: Fail to start hand shake to localhost:7070. Why not have a look at how to fix it…

First of all, re-imaging is really straightforward and works very well. I simply accessed the ILOM remote console to attach the ISO file for the ODA, in this case patch 23530609 from MOS, and restarted the box on the CDROM. After approx. 40 minutes you have a brand new ODA running the latest release.

Of course, instead of re-imaging, I could “simply” update/upgrade the DCS agent to the latest version. Let’s say that I like to start from a “clean” situation when deploying a new environment, and patching a system that is not even installed yet sounds a bit strange to me

So once re-imaged, the ODA is ready for deployment. The first step is to configure the network so that I can SSH to it and go ahead with the appliance creation. This takes only 2 minutes using the command configure-firstnet.

The last requirement before running the appliance creation is to import the GI Clone, here the patch p27119393_122120, in the repository. Unfortunately that’s exactly where the problem starts…

Hmmm… I can’t get it into the repository due to a strange handshake error. So I will check whether at least the web interface is working (…of course using Chrome…)

Same thing here: it is not possible to get into the web interface at all.

While searching a bit for this error, we finally landed in the Known Issues chapter of the ODA 12.2.1.2.0 Release Notes, which sounded promising. Unfortunately, none of the listed errors really matched our case. However, a small search in the page for the error message pointed out the following case:

Ok, the error is ODA X7-2HA related, but let’s give it a try.

Once DCS is restarted, just re-try the update-repository
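Right after the DCS restart, the first attempts may still hit the handshake error while the agent comes back up, so a small generic retry wrapper saves some re-typing. A sketch (the attempt count and pause are arbitrary, and the odacli arguments would be whatever your original failing command was):

```shell
#!/bin/sh
# Retry a command up to N times with a pause between attempts.
# Usage: retry <attempts> <pause_seconds> <command...>
retry() {
  attempts=$1; pause=$2; shift 2
  i=1
  while :; do
    "$@" && return 0
    [ "$i" -ge "$attempts" ] && return 1
    echo "attempt $i failed, retrying in ${pause}s..."
    i=$((i + 1))
    sleep "$pause"
  done
}

# e.g. retry 10 5 odacli update-repository ...   # same arguments as your failed attempt
```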

Here we go! The job has been submitted and the GI clone is imported in the repository

After that the CREATE APPLIANCE will run like a charm.

Hope it helped!

Cet article ODA X7-2S/M 12.2.1.2.0: update-repository fails after re-image est apparu en premier sur Blog dbi services.

### One command database upgrade on ODA

Tue, 2018-02-20 07:00

The 12.2 finally arrived on ODA and is now available on all generations. Modern ODAs are now supporting 11.2.0.4, 12.1.0.2 and 12.2.0.1 database engines, and these 3 versions can work together without any problem.

You probably plan to upgrade some old databases to the latest engine, at least those still running on 11.2. As you may know, 11.2 has been out of Premier Support since January 2015: it’s time to think about an upgrade. Note that Premier Support for 12.1 will end in July 2018. Actually, running 11.2 and 12.1 databases will need Extended Support this year. And this Extended Support is not free, as you can imagine. There is still an exception for 11.2.0.4: Oracle is offering Extended Support to its customers until the end of 2018.

Database upgrades have always been a lot of work, and are often paired with a platform change. You need to recreate the databases and the tablespaces, export and import the data with Data Pump, correct the problems, and so on. Sometimes you can restore the old database to the new server with RMAN, but this is only possible if the old engine is supported on your brand new server/OS combination.

As the ODA is a longer-term platform, you can think about upgrading the database directly on the appliance. A few years ago you would have been using dbua or catupgrd, but the latest ODA package now includes a tool for one-command database upgrade. Let’s try it!

odacli, the ODA Client Line Interface, has a new option: upgrade-database. Parameters are very limited:

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -h
Options:
--databaseids, -i
Default: []
* --destDbHomeId, -to
DB HOME ID of the destination
--help, -h
get help
--json, -j
json output
--sourceDbHomeId, -from
DB HOME ID of the source

You need to provide the database identifier (ODA stores a repository of all databases, db homes and jobs in a JavaDB/DerbyDB database) and the identifier of the destination db home you want to upgrade to. The source db home id is optional, as Oracle can determine it quite easily. There is no other option (for the moment): no pre-backup (although one is advised) and no storage migration (switching between ACFS and ASM), for example.

Imagine you have an 11.2.0.4 database you want to upgrade to 12.2.0.1. Look for the id of your database ODAWS11:

[root@oda-dbi01 2018-02-19]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       11.2.0.4             false      OLTP     odb2     ACFS       Configured   72023166-a39c-4a93-98b7-d552029b2eea

Note that this database is configured with acfs, as 11.2 databases cannot be stored directly in an ASM 12c.

You can only upgrade this database to an existing db home: if you want to upgrade it to a new home, create that new home first, for example:

[root@oda-dbi01 2018-02-19]# odacli create-dbhome -v 12.1.0.2.171017

If you want to use an existing home, just pick the db home id, for example here the one used by ODAWS database.

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -i de281792-1904-4536-b42c-8a55df489b73 -to 1ca87df9-4691-47ed-90a9-2a794128539d

{
"jobId" : "782e65fd-8b2b-4d16-a542-1f5b2b78d308",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "February 19, 2018 17:40:58 PM CET",
"resourceList" : [ ],
"description" : "Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]",
"updatedTime" : "February 19, 2018 17:40:58 PM CET"
}

odacli will schedule a job for that, as for other operations. You can follow the job with describe-job:

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 782e65fd-8b2b-4d16-a542-1f5b2b78d308

Job details
----------------------------------------------------------------
ID:  782e65fd-8b2b-4d16-a542-1f5b2b78d308
Description:  Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]
Status:  Running
Created:  February 19, 2018 5:40:58 PM CET
Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Success
Database Upgrade                                   February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Running

You can also look at the database alert.log file during the operation.

Be patient! A database upgrade takes time, at least 20 minutes for an empty database. And it seems that other jobs planned during the upgrade stay in waiting state (like a create-database, for example).
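Rather than re-running describe-job by hand until the task finishes, the Status line can be parsed and polled. A sketch of the parsing part, fed here with a sample of the output above so it is self-contained; a real polling loop would wrap this around `odacli describe-job -i <id>` and sleep between calls:

```shell
#!/bin/sh
# Extract the job Status field from 'odacli describe-job' output on stdin.
job_status() {
  awk -F': *' '/^Status:/ {print $2; exit}'
}

# Sample taken from the describe-job output above
status=$(job_status <<'EOF'
Job details
----------------------------------------------------------------
ID:  782e65fd-8b2b-4d16-a542-1f5b2b78d308
Description:  Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]
Status:  Running
EOF
)
echo "$status"
```

A wrapper could then loop until the status is no longer "Running" before scheduling the next job.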

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 782e65fd-8b2b-4d16-a542-1f5b2b78d308

Job details
----------------------------------------------------------------
ID:  782e65fd-8b2b-4d16-a542-1f5b2b78d308
Description:  Database service upgrade with db ids: [de281792-1904-4536-b42c-8a55df489b73]
Status:  Running
Created:  February 19, 2018 5:40:58 PM CET
Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 5:40:58 PM CET    February 19, 2018 5:40:58 PM CET    Success
Database Upgrade                                   February 19, 2018 5:40:58 PM CET    February 19, 2018 6:01:37 PM CET    Success

Now the upgrade seems OK, let’s check that:

su - oracle
. oraenv <<< ODAWS11
oracle@oda-dbi01:/home/oracle/ # sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 19 18:01:49 2018

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_name, version from v$instance;

INSTANCE_NAME    VERSION
---------------- -----------------
ODAWS11          12.2.0.1.0

SQL> sho parameter spfile

NAME                 TYPE     VALUE
-------------------- -------- ---------------------------------------------------------------
spfile               string   /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/spfileODAWS11.ora

Even the spfile has been moved to the new home, quite nice. Let’s check the repository:

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d

Everything looks fine! Now let’s test the upgrade with a 12.1 database, ODAWS12.
This one is using ASM storage:

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status       DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.1.0.2             false      OLTP     odb2     ASM        Configured   520167d7-59c8-4732-80a6-cc32ef745cec

[root@oda-dbi01 2018-02-19]# odacli upgrade-database -i 24821a48-7474-4a8b-8f36-afca399b6def -to 1ca87df9-4691-47ed-90a9-2a794128539d
{
  "jobId" : "10a2a304-4e8e-4b82-acdc-e4c0aa8b21be",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "February 19, 2018 18:36:17 PM CET",
  "resourceList" : [ ],
  "description" : "Database service upgrade with db ids: [24821a48-7474-4a8b-8f36-afca399b6def]",
  "updatedTime" : "February 19, 2018 18:36:17 PM CET"
}

[root@oda-dbi01 ~]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status       DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.1.0.2             false      OLTP     odb2     ASM        Updating     520167d7-59c8-4732-80a6-cc32ef745cec

[root@oda-dbi01 2018-02-19]# odacli describe-job -i 10a2a304-4e8e-4b82-acdc-e4c0aa8b21be

Job details
----------------------------------------------------------------
             ID:  10a2a304-4e8e-4b82-acdc-e4c0aa8b21be
    Description:  Database service upgrade with db ids: [24821a48-7474-4a8b-8f36-afca399b6def]
         Status:  Running
        Created:  February 19, 2018 6:36:17 PM CET
        Message:

Task Name                                          Start Time                          End Time                            Status
-------------------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance                         February 19, 2018 6:36:17 PM CET    February 19, 2018 6:36:17 PM CET    Success
Database Upgrade                                   February 19, 2018 6:36:17 PM CET    February 19, 2018 6:58:05 PM CET    Success

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status       DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
45ce9de7-3115-45b0-97b0-1384b8401e69     ODAWS      Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d
a948a32c-1cf2-42c8-88c6-88fd9463b297     DBTEST1    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
de281792-1904-4536-b42c-8a55df489b73     ODAWS11    Si       12.2.0.1             false      OLTP     odb2     ACFS       Configured   1ca87df9-4691-47ed-90a9-2a794128539d
0276326c-cb6d-4246-9943-8289d29d6a4f     DBTEST2    Si       12.2.0.1             false      OLTP     odb1s    ACFS       Configured   7d2bbaa0-da3c-4455-abee-6bf4ff2d2630
24821a48-7474-4a8b-8f36-afca399b6def     ODAWS12    Si       12.2.0.1             false      OLTP     odb2     ASM        Configured   1ca87df9-4691-47ed-90a9-2a794128539d

su - oracle
. oraenv <<< ODAWS12

oracle@oda-dbi01:/home/oracle/ # sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 19 18:59:08 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select instance_name, version from v$instance;

INSTANCE_NAME     VERSION
---------------- -----------------
ODAWS12      12.2.0.1.0

SQL> sho parameter spfile

NAME               TYPE       VALUE
------------------ ---------- ---------------------------------------------
spfile             string     +DATA/ODAWS12/PARAMETERFILE/spfileodaws12.ora

So the upgrade also worked fine with a 12.1 database, and it also took about 20 minutes for an empty database.
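Since odacli returns immediately with a job id, a small wrapper can extract that id from the JSON and poll odacli describe-job until the upgrade finishes. A minimal sketch: the parsing is plain shell; the polling loop (shown commented out) assumes odacli is available on the host.

```shell
parse_jobid() {
  # extract the value of the "jobId" field from JSON on stdin
  sed -n 's/.*"jobId" *: *"\([^"]*\)".*/\1/p' | head -1
}

# sample response, as printed by odacli upgrade-database above
response='{ "jobId" : "10a2a304-4e8e-4b82-acdc-e4c0aa8b21be", "status" : "Created" }'
jobid=$(printf '%s' "$response" | parse_jobid)
echo "$jobid"

# On a real ODA you could then poll until the job leaves the Running state:
# while odacli describe-job -i "$jobid" | grep -q 'Status: *Running'; do sleep 60; done
```

A JSON-aware tool such as jq would be more robust, but sed keeps the sketch dependency-free.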

You may have noticed that it is possible to upgrade several databases at the same time by providing multiple database ids. I am not sure you would do that in real life, though.

upgrade-database is also available on ODAs that still use oakcli (nowadays only virtualized ODAs, I think), but as oakcli has no repository, the database id has to be replaced by the database name, and the db home id by the home name registered in the classic oraInventory, for example:

oakcli upgrade database -db ODAWS11 -to OraDb12201_home1

This great feature will not revolutionize your DBA life, but it should help to upgrade your database with minimum effort.

The post One command database upgrade on ODA appeared first on Blog dbi services.

### ODA 12.2.1.2.0 – VLAN Management

Tue, 2018-02-20 02:48

Virtual Local Area Networks (VLANs) have been a standard in enterprise-class networks for several years now. Most enterprises segregate their network, especially for security reasons, between servers and users, production and test, or applications and backup, and so on. In the new ODA release we finally get VLAN support on the bare metal platform. This article briefly demonstrates how VLANs are managed using ODAADMCLI.

First of all, we have to remember that VLANs are not brand new on the ODA. When using an ODA HA (X5-2 or X6-2) in virtualized mode, which means with OVM, it was already possible to manage VLANs. However, this was a bit different from the new feature introduced in ODA 12.2.1.2.0.

The HA platform in virtualized mode runs with OAKCLI and not with ODACLI and the DCS agent. In the background, the real difference is that the HA in virtualized mode uses the Linux bridge control (more details here). To put it simply, you have a kind of virtual switch (bridge) to which each connected VM, including the ODA_BASE, can be attached to get an address on that particular network.

On the bare metal platform the principle is totally different as it is directly based on the VLAN implementation in Linux (802.1q standard) which allows activating VLAN on an interface and tagging the packets with the right VLANID.

The first place where VLAN can be configured on the ODA is during the first network plumbing phase, right after booting or re-imaging the ODA, using the configure-firstnet command.

The command will ask you whether you want to use VLANs or not. Answering YES will then prompt you for the VLAN ID of the primary network of the ODA and will generate a network interface btbond1.<VLANID>.

What does it look like in the background?

The first layer, as always on the ODA, is a bond of 2 physical interfaces (here em2 and em3, as I am using the copper interfaces):

[root@oak network-scripts]# cat ifcfg-em2
#File created by Oracle development
DEVICE=em2
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=ETHERNET
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
MASTER=btbond1
SLAVE=yes

Looking at the btbond1 interface, we can see that it is configured in active-backup mode BUT without any IP address:

[root@oak network-scripts]# cat ifcfg-btbond1
#File created by Oracle development
DEVICE=btbond1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
BONDING_OPTS="mode=active-backup miimon=100 primary=em2"
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no

On top of the bonding configuration we then have one virtual interface per VLAN. At the beginning there is indeed only one, as configure-firstnet generates only the “primary” network of the ODA.

[root@oak network-scripts]# cat ifcfg-btbond1.54
#ODA_VLAN_CONFIG ===
#ODA_VLAN_CONFIG Name=vlan54
#ODA_VLAN_CONFIG VlanId=54
#ODA_VLAN_CONFIG VlanInterface=btbond1
#ODA_VLAN_CONFIG Type=VlanType
#ODA_VLAN_CONFIG VlanSetupType=public
#ODA_VLAN_CONFIG VlanGateway=192.168.54.1
#ODA_VLAN_CONFIG NodeNum=0
#=== DO NOT EDIT ANYTHING ABOVE THIS LINE ===
DEVICE=btbond1.54
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
NM_CONTROLLED=no
GATEWAY=192.168.54.1

Do not look for the VLAN ID in the configuration file (well, except in the comments). It is defined by the device/file name.
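Besides the file name, the kernel itself reports the VLAN id of every tagged interface in /proc/net/vlan/config. A small sketch parsing that file's format (the sample text below mirrors the btbond1.54 interface above; on a live system you would read the file directly):

```shell
vlan_id_of() {
  # Usage: vlan_id_of <device> <config-text>
  # /proc/net/vlan/config lines look like: "btbond1.54     | 54  | btbond1"
  printf '%s\n' "$2" | awk -v dev="$1" -F'|' '$1 ~ dev { gsub(/ /, "", $2); print $2 }'
}

config='VLAN Dev name    | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
btbond1.54     | 54  | btbond1'

vlan_id_of btbond1.54 "$config"   # prints 54
```

On a real ODA: `vlan_id_of btbond1.54 "$(cat /proc/net/vlan/config)"`.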

Once you have your first VLAN, you can easily configure additional ones using the command line. Remember that on DCS-stack ODAs you have 2 different CLIs: ODACLI and ODAADMCLI. VLAN management is done using ODAADMCLI.

So let's have a look at the help:

[root@oda-dbi01 ~]
# odaadmcli -h
commands: show|manage|stordiag|power|expand
objects : disk|diskgroup|controller|server|processor|memory|iraid|
power|cooling|network|storage|fs|raidsyncstatus|env_hw|vlan

Usage: odaadmcli show - Shows disk, diskgroup, controller, server, processor,
memory, iraid, power, cooling, network,
storage, fs, raidsyncstatus, env_hw
odaadmcli manage - Manages the OAK repository, diagcollect etc.,
odaadmcli stordiag - Run storage diagnostic tool on this Node
odaadmcli power - Power on|off|status disk
odaadmcli expand - Expand storage

Hmmm, this looks strange, as there is no CREATE command.
I can SHOW the VLANs but it looks like I can't CREATE them… Let's have a look at the online documentation (here).

It turns out the CREATE VLAN command does exist after all. A good piece of advice here: even though the inline help of ODACLI and ODAADMCLI is quite good, it is still good practice to have a look at the online documentation, especially chapters 14 and 15.

The good news is that the help for the CREATE command does exist:

[root@oda-dbi01 ~]# odaadmcli create vlan -h
vlan                  -  VLAN name (unique per Node)
vlanid                -  Identifies the ID[valid range: 2 to 4094] to which the VLAN belongs to (unique per Node)
interface             -  Interface on which the VLAN is to be created [Valid interfaces are btbond1]
node                  -  Node number < 0 >
setuptype             -  the type of the VLAN setup for [Valid value are: management, database, dataguard, backup, private and other]
ip                    -  IP address for the VLAN
gateway               -  Gateway address for the VLAN
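The help states that valid VLAN ids range from 2 to 4094. If you wrap odaadmcli create vlan in your own scripts, validating the id up front gives a clearer error than a round trip to the tool; a minimal sketch:

```shell
valid_vlanid() {
  # accept only integers between 2 and 4094
  case $1 in
    ''|*[!0-9]*) return 1 ;;   # empty or not a number
  esac
  [ "$1" -ge 2 ] && [ "$1" -le 4094 ]
}

for id in 1 2 55 4094 4095 abc; do
  if valid_vlanid "$id"; then echo "$id ok"; else echo "$id invalid"; fi
done
```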

I guess it’s now time to try to create a new VLAN.

[root@oda-dbi01 ~]# odaadmcli create vlan vlan55-backup -vlanid 55 -if btbond1 -node 0 -setuptype backup -ip 192.168.55.10 -netmask 255.255.255.0 -gateway 192.168.55.1

Created Vlan : vlan55-backup

Note that even on a single-node ODA (S or M) you must provide the node number. Otherwise you will get the following error message:

ERROR : -node is not available

Let's check our newly created VLAN:

[root@oda-dbi01 network-scripts]# odaadmcli show vlan
vlan55-backup            55    btbond1     backup      192.168.55.10   255.255.255.0   192.168.54.1    0

Of course in /etc/sysconfig/network-scripts we will find the corresponding IFCFG file:

[root@oda-dbi01 network-scripts]# ls -lrt
total 248
-rw-r--r--. 1 root root 29853 Apr 12  2016 network-functions-ipv6
-rw-r--r--. 1 root root 14731 Apr 12  2016 network-functions
...
...
...
-rw-r--r--. 1 root root   264 Feb 19 11:40 ifcfg-lo
-rw-r--r--  3 root root   194 Feb 19 12:04 ifcfg-em3
-rw-r--r--  3 root root   194 Feb 19 12:04 ifcfg-em2
-rw-r--r--  3 root root   169 Feb 19 12:04 ifcfg-em1
drwxr-xr-x  2 root root  4096 Feb 19 12:04 backupifcfgFiles
-rw-r--r--. 3 root root   259 Feb 19 12:17 ifcfg-btbond1
-rw-r--r--  1 root root   538 Feb 19 14:43 ifcfg-btbond1.54
-rw-r--r--  1 root root   538 Feb 19 15:31 ifcfg-btbond1.55
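For illustration, here is roughly what such a generated file contains, rebuilt as a small script. The field values are modelled on the ifcfg-btbond1.54 file shown earlier plus the IP data passed to create vlan; this is a sketch only, and on a real ODA you should let odaadmcli manage these files rather than writing them by hand.

```shell
vlan_id=55
ip_addr=192.168.55.10
netmask=255.255.255.0
gateway=192.168.55.1

# write the interface definition (illustrative; odaadmcli also adds its
# ODA_VLAN_CONFIG comment header, omitted here)
cat > "ifcfg-btbond1.${vlan_id}" <<EOF
DEVICE=btbond1.${vlan_id}
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
NM_CONTROLLED=no
IPADDR=${ip_addr}
NETMASK=${netmask}
GATEWAY=${gateway}
EOF

cat "ifcfg-btbond1.${vlan_id}"
```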

Should a VLAN no longer be required, deleting it is straightforward using DELETE VLAN:

[root@oda-dbi01 network-scripts]# odaadmcli delete vlan vlan55-backup -node 0

Deleted Vlan : vlan55-backup

Easy, isn't it?

Last but not least, for those who deployed their ODA without VLANs: do not worry, you won't need to re-image it. Even if the ODA was deployed without VLANs during configure-firstnet, you can still create VLANs afterwards.

Enjoy!

The post ODA 12.2.1.2.0 – VLAN Management appeared first on Blog dbi services.

### 18c Read Only Oracle Home

Sun, 2018-02-18 13:49

This is the big new feature of Oracle 18c for database software installation, something that has been needed for decades to ease software deployment. Piet de Visser raised this to Oracle a long time ago, and we were talking about it recently when discussing the new excitement around deploying software in Docker containers. Docker containers are by definition immutable images. You need a Read Only Oracle Home, with all the mutable files (configuration, logs, database) in an external volume. Then, to upgrade the software, you just open this volume with an image of the new database version.

roohctl

In 12.2 you may have seen a ‘roohctl’ script in ORACLE_HOME/bin. The help explains that ‘rooh’ stands for Read-Only Oracle Home:
[oracle@VM122 ~]$ roohctl -help
Usage:  roohctl [<flag>] [<command>]
Following are the possible flags:
  -help
Following are the possible commands:
  -enable   Enable Read-only Oracle Home
  -disable  Disable Read-only Oracle Home

Note that in 18c the help no longer shows ‘-disable’, even though it is still accepted… So in 12cR2 you were able to run ‘roohctl -enable’, but the only thing it did was change the Yes/No flag in orabasetab:

[oracle@VM122 ~]$ cat $ORACLE_HOME/install/orabasetab
#orabasetab file is used to track Oracle Home associated with Oracle Base
/u01/app/oracle/product/12.2.0/dbhome_1:/u01/app/oracle:OraDB12Home1:Y:
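Since the orabasetab line is just colon-separated fields, a script can read the flag directly. A sketch parsing the 12.2 line above (the trailing empty field is ignored):

```shell
# orabasetab format: HOME:BASE:HOME_NAME:ROOH_FLAG:
line='/u01/app/oracle/product/12.2.0/dbhome_1:/u01/app/oracle:OraDB12Home1:Y:'

IFS=: read -r o_home o_base o_name rooh _ <<< "$line"

echo "home=$o_home"
echo "name=$o_name"
echo "rooh=$rooh"
```

On a real system you would feed it `$(cat $ORACLE_HOME/install/orabasetab | tail -1)` instead of a literal.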

Oracle 18

Here is an Oracle 18 that I re-installed (as an Oracle Home Clone) with the following:
 runInstaller -clone ORACLE_HOME=/u01/app/oracle/product/181 ORACLE_HOME_NAME=O181 ORACLE_BASE=/u00/app/oracle 

My idea is to be able to easily differentiate the different paths (ORACLE_HOME under /u01 and ORACLE_BASE under /u00)

The $ORACLE_HOME/install/orabasetab records the ORACLE_HOME, ORACLE_BASE and ORACLE_HOME_NAME:

[oracle@VM181 18c]$ cat $ORACLE_HOME/install/orabasetab
#orabasetab file is used to track Oracle Home associated with Oracle Base
/u01/app/oracle/product/181:/u01/app/oracle:O181:N:

• ORACLE_HOME: This may seem useless because this file is under ORACLE_HOME, so if you read it you are supposed to know the ORACLE_HOME. However, you may find it from different paths (symbolic links, /../.) and this is a good way to normalize it.
• ORACLE_BASE: This will be used to externalize the mutable files outside of the ORACLE_HOME.
• ORACLE_HOME_NAME: The name of the Oracle Home that you provide when installing, which you can find in the Oracle Inventory.

The last field is ‘N’ when the mutable files are under ORACLE_HOME and ‘Y’ when they are externalized to have an immutable Read Only Oracle Home. We are not supposed to use this file directly. It is modified by runInstaller and roohctl, and it is read by orabasehome and orabaseconfig.

orabasehome and orabaseconfig

We have two new locations derived from the orabasetab content. One is the ‘Oracle Base Config’, which is mostly there to find the configuration files (.ora, .dat) in the /dbs subdirectory. With Read Only Oracle Home, this is set to the ORACLE_BASE:

[oracle@VM181 18c]$ orabaseconfig
/u00/app/oracle
Most of the files in /dbs have the ORACLE_SID in their name, which is unique in the host, and this is why they can all go into the same directory. However, I would prefer a subdirectory per database. When you move a database from one system to another, it is easier to move a directory. You can do per-file symbolic links but be sure to maintain them as they may be re-created as files.

The other is the ‘Oracle Base Home’ which is mostly there for the /network subdirectory (with the SQL*Net configuration files, logs and trace) and the /assistant (DBCA templates) and /install ones. With Read Only Oracle Home, this goes to a /homes subdirectory of ORACLE_BASE
[oracle@VM181 18c]$ orabasehome
/u00/app/oracle/homes/O181

As you can see, there is an additional subdirectory with the name of the Oracle Home. In my opinion, it is not a good idea to put sqlnet.ora, tnsnames.ora and listener.ora here. It is better to have one common TNS_ADMIN. However, because the default was one directory per Oracle Home, the Read Only Oracle Home feature had to keep this possibility. In 12.2, an ORACLE_HOME/env.ora was introduced to set TNS_ADMIN in a consistent way. With Read Only Oracle Home enabled, I strace-ed a ‘startup’ to show which files are read:

[oracle@VM181 18c]$ ORACLE_SID=CDB18 strace -e trace=file -f sqlplus / as sysdba <<<startup 2>&1 | grep /u00
...
open("/u00/app/oracle/homes/O181/network/admin/oraaccess.xml", O_RDONLY) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/oraaccess.xml", F_OK) = -1 ENOENT (No such file or directory)
open("/u00/app/oracle/homes/O181/network/admin/oraaccess.xml", O_RDONLY) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
...
access("/u00/app/oracle/homes/O181/network/admin/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/intchg.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u00/app/oracle/homes/O181/network/admin/tnsnav.ora", F_OK) = -1 ENOENT (No such file or directory)
...
open("/u00/app/oracle/dbs/cm_CDB18.dat", O_RDONLY|O_SYNC) = -1 ENOENT (No such file or directory)
[pid 15339] access("/u00/app/oracle/homes/O181/network/admin/sqlnet.ora", F_OK) = -1 ENOENT (No such file or directory)
...
[pid 15339] stat("/u00/app/oracle/dbs/spfileCDB18.ora", 0x7ffe6a5785b8) = -1 ENOENT (No such file or directory)
[pid 15339] open("/u00/app/oracle/dbs", O_RDONLY) = 8
[pid 15339] stat("/u00/app/oracle/dbs/spfileCDB18.ora", 0x7ffe6a578010) = -1 ENOENT (No such file or directory)
[pid 15339] stat("/u00/app/oracle/homes/O181/dbs/spfile.ora", 0x7ffe6a5785b8) = -1 ENOENT (No such file or directory)
[pid 15339] open("/u00/app/oracle/homes/O181/dbs", O_RDONLY) = 8
[pid 15339] stat("/u00/app/oracle/homes/O181/dbs/spfile.ora", 0x7ffe6a578010) = -1 ENOENT (No such file or directory)
[pid 15339] access("/u00/app/oracle/dbs/initCDB18.ora", F_OK) = -1 ENOENT (No such file or directory)
[pid 15339] open("/u00/app/oracle/dbs/initCDB18.ora", O_RDONLY) = -1 ENOENT (No such file or directory)
LRM-00109: could not open parameter file '/u00/app/oracle/dbs/initCDB18.ora'

The files were not there as I’ve not created any database here. The goal is to show that there is no attempt to read any configuration file under ORACLE_HOME.

You can also see that DBCA will search for its templates in this new directory.

I mentioned network and assistant subdirectories. But it concerns all directories where the instance can write files:
[oracle@VM181 18c]$ du $ORACLE_BASE/homes
4       /u01/app/oracle/homes/O181/assistants/dbca/templates
8       /u01/app/oracle/homes/O181/assistants/dbca
12      /u01/app/oracle/homes/O181/assistants
4       /u01/app/oracle/homes/O181/network/trace
4       /u01/app/oracle/homes/O181/network/admin
4       /u01/app/oracle/homes/O181/network/log
16      /u01/app/oracle/homes/O181/network
4       /u01/app/oracle/homes/O181/dbs
4       /u01/app/oracle/homes/O181/install
64      /u01/app/oracle/homes/O181/rdbms/log
72      /u01/app/oracle/homes/O181/rdbms/audit
140     /u01/app/oracle/homes/O181/rdbms
180     /u01/app/oracle/homes/O181
184     /u01/app/oracle/homes

You may wonder why we see a /dbs subdirectory here as the instance configuration files are in the common /u01/app/oracle/dbs. The /dbs is also the current working directory for oracle processes. And this one will be set to ORACLE_BASE/homes/oracle_home_name/dbs.

We can also see /rdbms/log here. I opened a bug 2 years ago about SBTIO.LOG not going to the right place under ADR_HOME, but going to ORACLE_HOME/rdbms/log (Bug 23559013 USER_DUMP_DEST VALUE NOT IGNORED EVEN THOUGH DIAGNOSTIC_DEST IS SET). I’ve no idea about the status of the bug, but at least this will not go to Oracle Home anymore. Even if you don’t really have the need to have a Read Only Oracle Home, this feature is a good way to ensure that it will not grow and fill the filesystem.

You enable this feature with ‘roohctl -enable’ after software installation and before any creation of databases or listeners:
[oracle@VM181 18c]$ roohctl -enable
Enabling Read-Only Oracle home.
Update orabasetab file to enable Read-Only Oracle home.
Orabasetab file has been updated successfully.
Create bootstrap directories for Read-Only Oracle home.
Bootstrap directories have been created successfully.
Bootstrap files have been processed successfully.
Read-Only Oracle home has been enabled successfully.
Check the log file /u01/app/oracle/cfgtoollogs/roohctl/roohctl-180217PM111551.log.

If the utility finds an existing database or listener related to this Oracle Home, it returns this kind of error:

Cannot enable Read-Only Oracle home in a configured Oracle home.
The Oracle Home is configured with databases 'CDB18'.
The Oracle Home is configured with listeners 'LISTENER'.

There is an undocumented ‘-force’ parameter to add to ‘roohctl -enable’ which proceeds anyway, but it will not move the configuration files. I have not tested all possibilities because the General Availability of 18c is currently limited to Exadata and the Oracle Public Cloud. But it seems that roohctl will work the same on Windows (with /database instead of /dbs and with registry settings instead of orabasetab) and with Grid Infrastructure (there is a -nodeList argument). I mentioned above that ORACLE_HOME/install goes to $(orabasehome)/install. I don't know which files go there when ROOH is enabled. The orabasetab remains under ORACLE_HOME, of course. And some logs, such as those from re-running root.sh, still go to ORACLE_HOME/install:
[oracle@VM181 ~]$ sudo $ORACLE_HOME/root.sh
Check /u01/app/oracle/product/181/install/root_VM181_2018-02-18_19-06-23-833474515.log for the output of root script

This looks strange, but remember that the idea of a Read Only Oracle Home is to ship it after all changes are done. If you have something to change (patch, re-link, …) that will go to another Oracle Home. Maybe cloned from the other, then made Read Only after the changes.

?/

Do you use the question mark as a shortcut to ORACLE_HOME? This does not change and remains the ORACLE_HOME:
[oracle@VM181 ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 Production on Sun Feb 18 20:26:33 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2017, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> start ?
SP2-0310: unable to open file "/u01/app/oracle/product/181.sql"
SQL> exit
Disconnected

This is OK, as I mostly use it to read files from the software distribution (such as ?/rdbms/admin/awrrpt). If you use it in database configuration files, then be careful. Here I have enabled ROOH and defined a pfile mentioning the spfile with the ‘?’ shortcut:

[oracle@VM181 ~]$ orabaseconfig
/u00/app/oracle
[oracle@VM181 ~]$ cat $(orabaseconfig)/dbs/init$ORACLE_SID.ora
spfile=?/dbs/xxxx

However, the ‘?’ is resolved to ORACLE_HOME and not to Oracle Base Config:

[oracle@VM181 ~]$ strace -f sqlplus / as sysdba <<<startup 2>&1 | grep xxx
[pid  1898] read(10, "spfile=?/dbs/xxx\n", 4096) = 17
[pid  1898] stat("/u01/app/oracle/product/181/dbs/xxx", 0x7ffc5ac1c758) = -1 ENOENT (No such file or directory)
[pid  1898] stat("/u01/app/oracle/product/181/dbs/xxx", 0x7ffc5ac1c1b0) = -1 ENOENT (No such file or directory)
ORA-01565: error in identifying file '?/dbs/xxx'

So what?

Having a read-only Oracle Home, or at least being sure that no files are written into it, is a good idea. It is easier to manage space, easier to deploy by cloning the Oracle Home, and good practice to separate the big software directory from the small configuration files, and to have the current working directory outside of it. Having configuration files in the same place as the software is always a bad idea (and reminds me of the .ini files in the C:\WIN directory a long time ago). So, even if it is not enabled by default, Read Only Oracle Home is the way to go.

I think the risks are very limited once tested, as it is just changing the directories and any problem can be worked around with symbolic links on directories. However, this may change some habits and scripts. Not finding the right configuration file in a stressful situation may be annoying.

So, don't wait: even in 12c you can change your habits and replace all references to ${ORACLE_HOME}/dbs by $(orabaseconfig)/dbs, and other ${ORACLE_HOME} references by $(orabasehome). In 12c they will point to the same ORACLE_HOME, and then you will be ready to enable ROOH in 18c.
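A defensive way to do that in scripts is to fall back to $ORACLE_HOME when the utilities do not exist, as on older homes. A sketch, with purely illustrative default values for ORACLE_HOME and ORACLE_SID:

```shell
# default values only for illustration; a real environment sets these already
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/181}
ORACLE_SID=${ORACLE_SID:-CDB18}

# use orabaseconfig/orabasehome when present, else the classic ORACLE_HOME
base_config=$(command -v orabaseconfig >/dev/null 2>&1 && orabaseconfig || echo "$ORACLE_HOME")
base_home=$(command -v orabasehome >/dev/null 2>&1 && orabasehome || echo "$ORACLE_HOME")

pfile="$base_config/dbs/init${ORACLE_SID}.ora"
tns_admin="$base_home/network/admin"
echo "$pfile"
echo "$tns_admin"
```

With ROOH disabled (or on pre-12.2 homes) both variables resolve to $ORACLE_HOME, so existing behavior is unchanged.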

The post 18c Read Only Oracle Home appeared first on Blog dbi services.

### My personal journey with Linux on the desktop

Sun, 2018-02-18 10:19

My Linux experience started back in 1999, when we needed a router for the ISDN connection in our shared student apartment. I don't remember the hardware we used for that, but I am sure that at that time it was a SUSE operating system. I am not sure about the version, but based on this it must have been version 4-something or 5-something. The KDE desktop environment looked like this in version 1, which was released on July 12, 1998.

At the company I worked for at that time we had no choice and Windows was the only platform available. When I moved to the next company in 2002 the situation was the same: with all the infrastructure based on Microsoft products (Active Directory, Outlook, Office…) it was nearly impossible to switch to Linux on the desktop. So my Linux experience focused on server platforms (SUSE as well at that time), mainly in combination with Oracle databases. That was when I had my first experiences with Oracle RAC 9.x on SUSE. Believe it or not: SUSE already had an rpm which installed all the dependencies you need for Oracle back in the Oracle 9i days. As far as I know, Oracle came up with that for Oracle Linux years later. I did some experiments with Linux on my personal workstation, but because games were not available and everyone used Windows it was never more than playing around.

Things started to change when I moved on to the next company in 2007. Everything was based on Microsoft as well, but we had several terminal servers mainly used for VPN. That opened the door for Linux on the desktop. As the terminal servers had the complete Microsoft Office package installed, I could use them for all the Word, Excel and Outlook stuff but use Linux on my notebook. The only bits I had to figure out were:

• What software to use for remote connections to Windows machines (rdp)?
• Can I copy paste between the terminal session and my Linux workstation? (especially for pictures pasted into Outlook)
• Can I share a common drive for exchanging files between the terminal server and my workstation?

What worked really well was freerdp. Using an alias like this:

alias termserver='nohup xfreerdp --plugin rdpdr --data disk:dwetemp:/home/dwe/Downloads -- --plugin cliprdr -z -x l -g 1280x1024 -u dwe -p Rev0luti0n -k 0x00000807 192.168.22.1 &'


… I could easily connect to the terminal server, use the clipboard to copy/paste into and out of the terminal server session, and have a shared drive for exchanging files. For all the other work at that time (we had site-to-site VPN connections to our customers) I could use my Linux desktop. All the Oracle stuff was mainly based on Solaris 9 and 10, so defining aliases for all the required connections and exchanging my public ssh key brought a great speed-up compared to my colleagues who worked with PuTTY on Windows. Later on, all the Solaris machines were replaced with RedHat, but that did not change my way of working.

Maybe the biggest issue was finding the distribution that worked for me. I tried openSUSE, Fedora and pure Debian. Nothing made me really happy, as I didn't want to compile software from source or add additional software repositories just to get the basics working: music, video, Flash and so on. For a desktop, things have to work out of the box, at least for me. I never really liked Ubuntu, but I think this was mainly because of the Unity desktop. I know you can have Ubuntu with various other desktops, but somehow I did not consider them. Then I came across Linux Mint (Maya, 2012) and I must say this is one of the best distributions out there. Everything just worked out of the box and the Cinnamon desktop is really great:

I was really happy with that for the next one or two years, until I re-installed Linux Mint, this time with the KDE desktop, maybe just to see how it had evolved over time. There were big discussions when the KDE project switched from version three:

… to version four:

… and I was really interested in how the product looked and felt. Since then, KDE is what I really love and use every day. It comes with a powerful editor called Kate, which is essential for me. Besides vim, this is one of the best editors I've ever used. But then the Linux Mint project decided to drop the KDE edition, and again I had to search for a stable KDE distribution. I tried Kubuntu, Manjaro and the KDE spin of Fedora, and just a few weeks ago I gave GNOME a try with Fedora Workstation. Finally I switched to KDE neon and I am quite happy with it. This is how my desktop looks today:

Not much on it, you might think, but I feel there is no need for more. In KDE you can do almost anything with the keyboard, and all I need is KRunner. ALT-F2 brings it up by default and from there you can start whatever you want, no need to use the mouse. Define a bunch of custom keyboard shortcuts and all the required aliases, configure Dolphin the way I like it (especially the Places panel), install terminator, and the desktop is ready. For running Microsoft Office, CrossOver Linux works fine for me.

Almost all the required applications are available for Linux today, but there are still a few trade-offs. I still need a Windows VM for some things (mainly VPN clients, only available for Windows, to connect to customers). But the most important point is whether the company you work for has an environment in which you can work with Linux. As more and more software is web based today this is becoming easier and easier, but you still mostly cannot use Linux in big companies, as it is simply not supported. Some companies switch to Apple products, but I never really considered that for myself. What I recommend from my personal experience: use something that is either Debian based (Ubuntu, Mint, …) or rpm based (RedHat, CentOS, SUSE, …). A lot of software is available as rpm or deb, but nothing else. For a desktop, things must just work out of the box.

The good thing with Linux and all the desktops available on top of it: you have plenty of products to choose from. Maybe it takes years until you find the right one for you, but I am sure there is something which fits your needs.

The post My personal journey with Linux on the desktop appeared first on Blog dbi services.

### What you can do when your Veritas cluster shows interfaces as down

Sat, 2018-02-17 07:40

Recently we had the situation that the Veritas cluster (InfoScale 7.3) showed interfaces as down on two RedHat 7.3 nodes. This can happen, for example, when you change hardware. Although all service groups were up and running, this is a situation you usually want to avoid, as you never know what will happen while the cluster is in such a state. When you have something like this:

[root@xxxxx-node1 ~]$ lltstat -nvv | head
LLT node information:
Node                 State    Link  Status  Address
* 0 xxxxx-node1      OPEN
                              eth3   UP     yy:yy:yy:yy:yy:yy
                              eth1   UP     xx:xx:xx:xx:xx:xx
                              bond0  UP     rr:rr:rr:rr:rr:rr
  1 xxxxx-node2      OPEN
                              eth3   UP     ee:ee:ee:ee:ee:ee
                              eth1   DOWN   tt:tt:tt:tt:tt:tt
                              bond0  DOWN   qq:qq:qq:qq:qq:qq

… what can you do? In our configuration, eth1 and eth3 are used for the interconnect and bond0 is the public network. As you can see above, eth1 and bond0 are reported as down for the second node. Of course, the first check is the interface status at the operating system level, but that was fine in our case. Veritas comes with a tiny little utility (dlpiping) you can use to check connectivity at the Veritas level. Using the information from the lltstat command, you can start dlpiping in “send” mode on the first node:

[root@xxxxx-node1 ~]$ /opt/VRTSllt/dlpiping -vs eth1


When that is running (it will not detach from the terminal) you should start dlpiping in “receive” mode on the second node:

[root@xxxxx-node1 ~]$/opt/VRTSllt/dlpiping -vc eth1 xx:xx:xx:xx:xx:xx using packet size = 78 dlpiping: sent a request to xx:xx:xx:xx:xx:xx dlpiping: received a packet from xx:xx:xx:xx:xx:xx  This confirms that connectivity is fine for eth1. When you repeat that for the remaining interfaces (eth3 and bond0) and all is fine then you you can proceed. If not, then you have another issue than what we faced. The next step is to freeze all your service groups so the cluster will not touch them: [root@xxxxx-node1 ~]$ haconf -makerw
[root@xxxxx-node1 ~]$ hagrp -freeze SERVICE_GROUP -persistent   # do that for all service groups defined in the cluster
[root@xxxxx-node1 ~]$ haconf -dump -makero


Now the magic:

[root@xxxxx-node1 ~]$hastop -all -force  Why magic? This command will stop the cluster stack on all nodes BUT it will leave all the resources running. So you can do that without shutting down any user defined cluster services (Oracle databases in our case). Once the stack is down on all the nodes stop gab and ltt on both nodes as well: [root@xxxxx-node1 ~]$ systemctl stop gab
[root@xxxxx-node1 ~]$systemctl stop llt  Having stopped llt and gab you just need to start them again in the correct order on both systems: [root@xxxxx-node1 ~]$ systemctl start llt
```
[root@xxxxx-node1 ~]$ systemctl start gab
```

… and after that start the cluster:

```
[root@xxxxx-node1 ~]$ systemctl start vcs
```


In our case that was enough to make llt work as expected again and the cluster is fine:

```
[root@xxxxx-node1 ~]$ gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   f44203 membership 01
Port h gen   f44204 membership 01

[root@xxxxx-node1 ~]$ lltstat -nvv | head
LLT node information:
    Node                 State    Link   Status   Address
   * 0 xxxxx-node1      OPEN
                                 eth3   UP       yy:yy:yy:yy:yy:yy
                                 eth1   UP       xx:xx:xx:xx:xx:xx
                                 bond0  UP       rr:rr:rr:rr:rr:rr
     1 xxxxx-node2      OPEN
                                 eth3   UP       ee:ee:ee:ee:ee:ee
                                 eth1   UP       qq:qq:qq:qq:qq:qq
                                 bond0  UP       tt:tt:tt:tt:tt:tt
```


Hope that helps …

The article What you can do when your Veritas cluster shows interfaces as down appeared first on the dbi services blog.

### CPUs: Cores versus Threads on an Oracle Server

Sat, 2018-02-17 06:49

When doing a performance review I often talk with the DBA about the CPU utilization of the server. How reliable is the server CPU utilization reported by tools like top, or the host CPU utilization in the AWR report? E.g. on a Linux Intel x86-64 server with 8 cores and 16 logical CPUs (Intel hyperthreading), what does a utilization of 50% mean?
As I had an ODA X7-M available in a test lab, I thought I would run some tests on it.

In my old days at Oracle Support we used a small script to test the single-thread CPU performance of an Oracle DB server:

```sql
set echo on
set linesize 120
set timing on time on
with t as ( SELECT rownum FROM dual CONNECT BY LEVEL <= 60 )
select /*+ ALL_ROWS */ count(*) from t,t,t,t,t
/
```

The SQL just burns a CPU core for around 20 seconds. Depending on your CPU's single-thread performance it may take a bit longer or complete faster.
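As a side note on where those seconds go: the WITH clause produces 60 rows, and the five-way cross join multiplies that out, so COUNT(*) has to produce 60^5 rows:

```python
# The WITH clause generates 60 rows; joining it to itself five times
# yields 60**5 rows for COUNT(*) to count.
rows = 60 ** 5
print(rows)  # 777600000
```

This matches the COUNT(*) result shown further down.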

On the ODA X7-M I have 16 cores enabled, and as hyperthreading is enabled I get 32 CPUs in /proc/cpuinfo:

```
oracle@dbi-oda01:/home/oracle/cbleile/ [CBL122] grep processor /proc/cpuinfo | wc -l
32
oracle@dbi-oda01:/home/oracle/cbleile/ [CBL122] lscpu | egrep "Thread|Core|Socket|Model name"
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
Model name:            Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
```
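The 32 entries in /proc/cpuinfo follow directly from the lscpu topology, one entry per hardware thread:

```python
# Logical CPU count as /proc/cpuinfo reports it: one entry per hardware thread.
sockets = 2            # Socket(s)
cores_per_socket = 8   # Core(s) per socket
threads_per_core = 2   # Thread(s) per core (hyperthreading)

physical_cores = sockets * cores_per_socket
logical_cpus = physical_cores * threads_per_core
print(physical_cores, logical_cpus)  # 16 32
```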

The CPU speed was at 2.3 GHz all the time:

```
[root@dbi-oda01 ~]# for a in $(ls -d /sys/devices/system/cpu/cpu*/cpufreq | cut -d "/" -f6 | cut -d "u" -f2); do echo "scale=3;$(cat /sys/devices/system/cpu/cpu${a}/cpufreq/cpuinfo_cur_freq)/1000000" | bc; done
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
2.301
```

The CPU is capable of running at up to 3.7 GHz, but that did not happen on my machine. Running my SQL script on the ODA X7-M actually took 17.49 seconds:

```
18:44:00 SQL> with t as ( SELECT rownum FROM dual CONNECT BY LEVEL <= 60 )
18:44:00   2  select /*+ ALL_ROWS */ count(*) from t,t,t,t,t
18:44:00   3  /

  COUNT(*)
----------
 777600000

Elapsed: 00:00:17.49
```

I continued with the following tests (a job means running the SQL script above):

- 1 job alone
- 2 jobs concurrently
- 4 jobs concurrently
- 8 jobs concurrently
- 16 jobs concurrently
- 24 jobs concurrently
- 32 jobs concurrently
- 40 jobs concurrently
- 50 jobs concurrently
- 60 jobs concurrently
- 64 jobs concurrently
- 128 jobs concurrently

Here are the results:

| Jobs | Min Time (s) | Max Time (s) | Avg Time (s) | Jobs/Cores | Jobs/Threads | Avg/Single-Time | Thread utilization |
|-----:|-------------:|-------------:|-------------:|-----------:|-------------:|----------------:|-------------------:|
|    1 |  17.49 |  17.49 |  17.49 | 0.06 | 0.03 | 1.00 | 1.00 |
|    2 |  17.51 |  17.58 |  17.55 | 0.13 | 0.06 | 1.00 | 1.00 |
|    4 |  17.47 |  17.86 |  17.62 | 0.25 | 0.13 | 1.01 | 0.99 |
|    8 |  17.47 |  17.66 |  17.55 | 0.50 | 0.25 | 1.00 | 1.00 |
|   16 |  17.64 |  21.65 |  18.50 | 1.00 | 0.50 | 1.06 | 0.95 |
|   24 |  18.00 |  27.38 |  24.20 | 1.50 | 0.75 | 1.38 | 0.72 |
|   32 |  32.65 |  34.57 |  33.21 | 2.00 | 1.00 | 1.90 | 0.53 |
|   40 |  34.76 |  42.74 |  40.31 | 2.50 | 1.25 | 2.30 | 0.54 |
|   50 |  48.26 |  52.64 |  51.21 | 3.13 | 1.56 | 2.93 | 0.53 |
|   60 |  52.40 |  63.60 |  60.63 | 3.75 | 1.88 | 3.47 | 0.54 |
|   64 |  54.20 |  68.40 |  64.27 | 4.00 | 2.00 | 3.67 | 0.54 |
|  128 | 119.49 | 134.34 | 129.01 | 8.00 | 4.00 | 7.38 | 0.54 |

When running 16 jobs, top showed a utilization of around 50-52%. However, running more than 16 jobs increased the average time a job takes, i.e. with 16 jobs the 16-core server is already almost fully utilized.
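The two derived columns of the table can be recomputed from the raw timings. This is my reading of how they were obtained: slowdown relative to the single-job run, and effective work per busy thread, with the busy-thread count capped at the 32 logical CPUs:

```python
# Reproduce the derived columns of the results table from the raw timings.
# single = elapsed time of one job running alone; threads = 32 logical CPUs.
single, threads = 17.49, 32

def scaling(jobs, avg_time):
    slowdown = avg_time / single                        # "Avg/Single-Time"
    busy = min(jobs, threads)                           # threads actually occupied
    efficiency = (jobs * single) / (avg_time * busy)    # work done per busy thread
    return round(slowdown, 2), round(efficiency, 2)

print(scaling(16, 18.50))    # (1.06, 0.95)
print(scaling(32, 33.21))    # (1.9, 0.53)
print(scaling(128, 129.01))  # (7.38, 0.54)
```

The efficiency settling at roughly 0.53-0.54 once all 32 threads are busy is another way of seeing the ~1.9x factor discussed below the table.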
Running 32 jobs results in an average elapsed time 1.9 times that of running 16 jobs (or fewer) concurrently. As it is 1.9 times and not 2 times, I can conclude that there is an advantage to running with hyperthreading enabled, but it is only around 5-10%. So when calculating the utilization of your server, base it on the number of cores and not on the number of threads. When looking at the host CPU utilization in top or in the AWR report on a server with hyperthreading enabled, it is a good idea to multiply the reported utilization by 1.9.

The article CPUs: Cores versus Threads on an Oracle Server appeared first on the dbi services blog.

### Duplex RMAN backups between disk and tape

Fri, 2018-02-16 09:46

Below is a workaround showing how to "duplex" archivelog backups between disk and tape.

Backup on disk (the normal way):

```
backup device type disk archivelog all;
```

Immediately afterwards, backup to tape:

```
backup device type sbt archivelog until time 'sysdate' not backed up 2 times;
```

The second command backs up all archivelogs that are not yet backed up twice, i.e. exactly those backed up by the first command. As the first backup command includes a logfile switch, no logfile switch should occur between the two backup commands, otherwise the "duplexing" does not work. The UNTIL TIME clause in the second command is there to prevent RMAN from performing another logfile switch, which would lead to different contents in the two backups. The clause does not filter anything, because SYSDATE means the date and time at which the command is issued.

The article Duplex RMAN backups between disk and tape appeared first on the dbi services blog.

### SQL Server Management Studio 17.4: Vulnerability Assessment

Thu, 2018-02-15 10:01

SQL Server Management Studio is a well-known integrated environment used to manage a SQL Server infrastructure. This new version 17.4 supports SQL Server from 2008 up to 2017.
It enhances existing features like Showplan and the XE Profiler (complete list here), but also adds an interesting one: Vulnerability Assessment. Vulnerability Assessment scans a database to help you track security holes and deviations. The rules defining those deviations are based on Microsoft SQL Server best practices. Let's have a look at this new feature.

Once SSMS 17.4 is installed, choose the database you want to scan, right-click on it and select Tasks, Vulnerability Assessment, Scan For Vulnerabilities… Select where you want to save the scan report and click OK. Once you have clicked OK, the scan starts, and at the end the Vulnerability Assessment Results are displayed.

The report is shown in a Management Studio pane with the number of checks that have been run and how many issues have been found, with risk levels from Low to High; it also provides some links about SQL Server security best practices. Review all failed checks to validate whether they really are security issues for your environment and go through the results. For each failed check you get a description of the issue, its impact, the rule query that was applied and a possible remediation script.

There is also the possibility to accept results as a baseline, even if they are flagged as a potential risk; subsequent scans will then validate results that match the baseline. Once the issues are solved or the baseline is set, the Vulnerability Assessment can be run again to see the result of the performed actions.

This new feature integrated into Management Studio gives you the ability to check that all your databases have a good level of security, and to keep that level. Great new feature!

The article SQL Server Management Studio 17.4: Vulnerability Assessment appeared first on the dbi services blog.
### Backup and Restore PostgreSQL with PgBackRest II

Thu, 2018-02-15 09:41

In a previous blog I showed a basic usage of PgBackRest, a tool to backup and restore PostgreSQL databases. In this blog I am going to present some useful features of this tool through practical examples. Of course the official documentation remains the best source of knowledge.

Encryption

Nowadays encryption of backups is critical and mandatory for many companies. PgBackRest allows us to encrypt the repository where backups are stored. A passphrase is used to encrypt/decrypt the files of the repository. As you may already know, it is recommended to use a strong passphrase. In the following demonstration we use openssl to generate one:

```
[postgres@pgserver ~]$ openssl rand -base64 48
FhXg7oW2pZb9UICZ4iYZPn3X4I6fF0ni7IL6QjaB1IL8qz4LIrP+GW+XqCZqIi3w
```

Once the passphrase is generated, we can update the PgBackRest configuration file with two options: repo-cipher-pass and repo-cipher-type:

```
[postgres@pgserver clustpgserver]$ cat /etc/pgbackrest.conf
[global]
repo-path=/var/lib/pgbackrest
repo-cipher-pass=FhXg7oW2pZb9UICZ4iYZPn3X4I6fF0ni7IL6QjaB1IL8qz4LIrP+GW+XqCZqIi3w
repo-cipher-type=aes-256-cbc

[clustpgserver]
db-path=/var/lib/pgsql/10/data
retention-full=2
```
The next step is to create the stanza:
```
[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 stanza-create
2018-02-13 13:54:50.447 P00 INFO: stanza-create command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-13 13:55:04.520 P00 INFO: stanza-create command end: completed successfully
```
As we can see, the system automatically detects that the repository is encrypted and rewrites the command to include the --repo-cipher-pass and --repo-cipher-type options. After the creation of the stanza we can check its status:
```
[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 check
2018-02-13 13:56:08.999 P00 INFO: check command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-13 13:57:08.026 P00 INFO: WAL segment 00000002000000000000004C successfully stored in the archive at '/var/lib/pgbackrest/archive/clustpgserver/10-1/0000000200000000/00000002000000000000004C-f5ced60cd351d74a91c9ce2e913b761144165e28.gz'
2018-02-13 13:57:08.030 P00 INFO: check command end: completed successfully
```

Everything seems fine, so let's run a backup. Note that the output is truncated:

```
[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 backup
2018-02-13 14:01:40.012 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver
WARN: no prior backup exists, incr backup has been changed to full
2018-02-13 14:01:54.118 P00 INFO: execute non-exclusive pg_start_backup() with label "pgBackRest backup started at 2018-02-13 14:01:52": backup begins after the next regular checkpoint completes
...
2018-02-13 14:35:08.281 P00 INFO: full backup total < 2 - using oldest full backup for 10-1 archive retention
2018-02-13 14:35:08.801 P00 INFO: expire command end: completed successfully
```

In a non-encrypted repository the file backup.info can be read. Now, with encryption, if we try to read backup.info in the repository, we cannot:
```
[postgres@pgserver clustpgserver]$ less /var/lib/pgbackrest/backup/clustpgserver/backup.info
"/var/lib/pgbackrest/backup/clustpgserver/backup.info" may be a binary file.  See it anyway?
```
And using the strings command, we can see that the file is encrypted:
```
[postgres@pgserver clustpgserver]$ strings /var/lib/pgbackrest/backup/clustpgserver/backup.info
Salted__Fx
.;Ru
cz4@
do:t
\pi3"E
VUSO
}a.R*
Wx5M
,?,W
3CXWB
```
From now on, backups cannot be used unless the passphrase is provided.
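The Salted__ prefix in the strings output is, to my knowledge, the 8-byte magic header OpenSSL writes at the start of salted ciphertext, so checking for it is a cheap way to tell whether a repository file is encrypted. A small illustrative sketch (the demo files are throwaway; in practice you would point it at backup.info):

```python
import os
import tempfile

OPENSSL_MAGIC = b"Salted__"  # header of OpenSSL salted ciphertext

def looks_encrypted(path):
    """True if the file starts with the OpenSSL 'Salted__' magic bytes."""
    with open(path, "rb") as f:
        return f.read(8) == OPENSSL_MAGIC

# Demo with throwaway files standing in for repository files.
enc = tempfile.NamedTemporaryFile(delete=False)
enc.write(b"Salted__" + os.urandom(16)); enc.close()
plain = tempfile.NamedTemporaryFile(delete=False)
plain.write(b"[backrest]\nbackrest-format=5\n"); plain.close()

print(looks_encrypted(enc.name), looks_encrypted(plain.name))  # True False
os.unlink(enc.name); os.unlink(plain.name)
```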

Restore in another location
PgBackRest also allows restoring to another location. This can be useful if we want to duplicate our cluster on the same server or onto another one. In the following demonstration, let's duplicate on the same server.
The data directory of the source cluster is /var/lib/pgsql/10/data
```
postgres=# show data_directory;
     data_directory
------------------------
 /var/lib/pgsql/10/data
(1 row)
```
To duplicate to a new data directory, /u01/devdata for example, the option --db-path is used:
```
[postgres@pgserver log]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-path=/u01/devdata restore
...
2018-02-14 09:40:05.755 P01 INFO: restore file /u01/devdata/base/1/13657 (0B, 100%)
2018-02-14 09:40:05.773 P01 INFO: restore file /u01/devdata/base/1/13652 (0B, 100%)
2018-02-14 09:40:05.811 P01 INFO: restore file /u01/devdata/base/1/13647 (0B, 100%)
2018-02-14 09:40:05.983 P01 INFO: restore file /u01/devdata/base/1/13642 (0B, 100%)
2018-02-14 09:40:06.067 P00 INFO: write /u01/devdata/recovery.conf
2018-02-14 09:40:14.403 P00 INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
2018-02-14 09:40:30.187 P00 INFO: restore command end: completed successfully
```

After the duplicate, don't forget to change the port (as we are on the same server) and then start your new cluster:

```
postgres=# show data_directory ;
 data_directory
----------------
 /u01/devdata
(1 row)
```

Restore specific databases

With PgBackRest we can restore specific user databases. Note that the built-in databases (template0, template1 and postgres) are always restored. Let's show an example. In our source cluster we currently have two user databases, test and sandbox:

```
sandbox=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 sandbox   | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(5 rows)
```

In sandbox we have a table mytab with 2 rows:
sandbox=# table mytab; id ---- 1 2 (2 rows)  Now let’s restore the cluster but only with test database, the option –db-include will be used.  [postgres@pgserver log]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-path=/u01/devdata --db-include=test restore 2018-02-14 10:11:00.948 P00 INFO: restore command begin 1.28: --db-include=test=1 --db1-path=/u01/devdata --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --stanza=clustpgserver 2018-02-14 10:11:05.137 P00 INFO: restore backup set 20180214-095439F_20180214-100446I 2018-02-14 10:11:25.110 P00 INFO: remap $PGDATA directory to /u01/devdata ...  After the restore completed, let’s start the new cluster and let’s verify present databases.  [postgres@pgserver devdata]$ psql -p 5436 psql (10.1) Type "help" for help. postgres=# \l List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+----------+-------------+-------------+----------------------- postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | sandbox | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres test | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | (5 rows) 
What?! The sandbox database is still listed despite the use of the option --db-include=test. But if we try to connect to it, we get an error:
```
postgres=# \c sandbox
FATAL: relation mapping file "base/24581/pg_filenode.map" contains invalid data
Previous connection kept
```
And if we compare, at the OS level, the size of the database's files in the source cluster and in the target:
```
[postgres@pgserver log]$ du -sh /var/lib/pgsql/10/data/base/24581
7.8M	/var/lib/pgsql/10/data/base/24581
[postgres@pgserver log]$ du -sh /u01/devdata/base/24581
16K	/u01/devdata/base/24581
```

We can see that in the target cluster, sandbox uses less disk space after the selective restore than it would have if the entire database had been restored. To finish the selective restore, we have to drop the sandbox database manually. Indeed, PgBackRest cannot drop the database automatically because the cluster is not accessible until the recovery process finishes.

```
postgres=# drop database sandbox;
DROP DATABASE
```

Automatic cleanup of expired backups

Another nice feature of PgBackRest is that expired backups are removed automatically. If we check our pgbackrest.conf file, we see that retention-full is set to 2. This means that 2 full backups will be kept. So if we take a third full backup, the first full backup and all its corresponding incremental and differential backups will be expired and removed:

```
[postgres@pgserver log]$ cat /etc/pgbackrest.conf
[global]
repo-path=/var/lib/pgbackrest
repo-cipher-pass=FhXg7oW2pZb9UICZ4iYZPn3X4I6fF0ni7IL6QjaB1IL8qz4LIrP+GW+XqCZqIi3w
repo-cipher-type=aes-256-cbc

[clustpgserver]
db-path=/var/lib/pgsql/10/data
retention-full=2
```

Let's do a quick demonstration.
Currently we have 2 full backups:

```
[postgres@pgserver log]$ pgbackrest --stanza=clustpgserver info
stanza: clustpgserver
    status: ok

    db (current)
        wal archive min/max (10-1): 00000002000000000000004E / 000000020000000000000056

        full backup: 20180213-140152F
            timestamp start/stop: 2018-02-13 14:01:52 / 2018-02-13 14:32:00
            wal start/stop: 00000002000000000000004E / 00000002000000000000004E
            database size: 577MB, backup size: 577MB
            repository size: 28.8MB, repository backup size: 28.8MB

        incr backup: 20180213-140152F_20180213-152509I
            timestamp start/stop: 2018-02-14 09:31:03 / 2018-02-14 09:33:17
            wal start/stop: 000000020000000000000052 / 000000020000000000000052
            database size: 30.7MB, backup size: 285.3KB
            repository size: 3.6MB, repository backup size: 24.3KB
            backup reference list: 20180213-140152F

        full backup: 20180214-095439F
            timestamp start/stop: 2018-02-14 09:54:39 / 2018-02-14 09:58:53
            wal start/stop: 000000020000000000000054 / 000000020000000000000054
            database size: 30.7MB, backup size: 30.7MB
            repository size: 3.6MB, repository backup size: 3.6MB

        incr backup: 20180214-095439F_20180214-100446I
            timestamp start/stop: 2018-02-14 10:04:46 / 2018-02-14 10:07:43
            wal start/stop: 000000020000000000000056 / 000000020000000000000056
            database size: 38.3MB, backup size: 7.6MB
            repository size: 4.5MB, repository backup size: 928.5KB
            backup reference list: 20180214-095439F
```

And we can confirm this with a simple ls in the repository where the backups are stored:

```
[postgres@pgserver clustpgserver]$ ls -ld *
drwxr-x---. 3 postgres postgres   69 Feb 13 14:32 20180213-140152F
drwxr-x---. 3 postgres postgres   69 Feb 14 09:33 20180213-140152F_20180213-152509I
drwxr-x---. 3 postgres postgres   69 Feb 14 09:59 20180214-095439F
drwxr-x---. 3 postgres postgres   69 Feb 14 10:07 20180214-095439F_20180214-100446I
drwxr-x---. 3 postgres postgres   17 Feb 13 14:33 backup.history
-rw-r-----. 1 postgres postgres 2992 Feb 14 10:08 backup.info
-rw-r-----. 1 postgres postgres 2992 Feb 14 10:08 backup.info.copy
lrwxrwxrwx. 1 postgres postgres   33 Feb 14 10:08 latest -> 20180214-095439F_20180214-100446I
```
Now let’s do a third full backup
```
[postgres@pgserver clustpgserver]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 --type=full backup
2018-02-14 10:55:52.250 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver --type=full
...
2018-02-14 11:19:02.001 P00 INFO: backup command end: completed successfully
2018-02-14 11:19:02.107 P00 INFO: expire command begin 1.28: --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --retention-archive=2 --retention-full=2 --stanza=clustpgserver
2018-02-14 11:19:02.928 P00 INFO: expire full backup set: 20180213-140152F, 20180213-140152F_20180213-152509I
2018-02-14 11:22:08.759 P00 INFO: remove expired backup 20180213-140152F_20180213-152509I
2018-02-14 11:22:09.000 P00 INFO: remove expired backup 20180213-140152F
2018-02-14 11:22:49.387 P00 INFO: expire command end: completed successfully
```
We can see that at the end of the backup the old backups are expired and removed. We can also confirm this by listing the files in the repository:
```
[postgres@pgserver clustpgserver]$ ls -ld *
drwxr-x---. 3 postgres postgres   69 Feb 14 09:59 20180214-095439F
drwxr-x---. 3 postgres postgres   69 Feb 14 10:07 20180214-095439F_20180214-100446I
drwxr-x---. 3 postgres postgres   69 Feb 14 11:13 20180214-105603F
drwxr-x---. 3 postgres postgres   17 Feb 13 14:33 backup.history
-rw-r-----. 1 postgres postgres 2320 Feb 14 11:19 backup.info
-rw-r-----. 1 postgres postgres 2320 Feb 14 11:20 backup.info.copy
lrwxrwxrwx. 1 postgres postgres   16 Feb 14 11:14 latest -> 20180214-105603F
```
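The expiry behaviour can be modeled in a few lines: keep the newest retention-full full backups, and expire older fulls together with the differential/incremental sets whose labels reference them. This is a simplified model of the logic, not PgBackRest's actual code:

```python
# Model pgBackRest's retention-full=N expiry: keep the N newest full
# backups; expire older fulls plus the diff/incr sets that depend on them.
# A dependent set's label is prefixed with its full backup's label, and
# labels sort chronologically, which keeps this simple.

def expire(labels, retention_full=2):
    fulls = sorted(l for l in labels if "_" not in l)   # e.g. 20180213-140152F
    keep_fulls = set(fulls[-retention_full:])
    return sorted(l for l in labels
                  if l.split("_")[0] not in keep_fulls)

backups = ["20180213-140152F",
           "20180213-140152F_20180213-152509I",
           "20180214-095439F",
           "20180214-095439F_20180214-100446I",
           "20180214-105603F"]
print(expire(backups))
# ['20180213-140152F', '20180213-140152F_20180213-152509I']
```

The result matches the two "remove expired backup" lines in the expire output above.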

Point-in-Time Recovery
PgBackRest can also perform a point-in-time recovery. Let's drop the table article in the database test:
```
test=# table article;
   nom
---------
 printer
(1 row)

test=# select now();
              now
-------------------------------
 2018-02-14 11:39:28.024378+01
(1 row)

test=# drop table article;
DROP TABLE

test=# table article;
ERROR:  relation "article" does not exist
LINE 1: table article;
              ^
```
And now let’s restore until just before we drop the table let’s say 2018-02-14 11:39:28.
But as we have many backup sets we have to restore from a backup done before the table was dropped.
If we check our backups, we have to restore from the full backup: 20180214-105603F
which was taken before table article was dropped.
```
[postgres@pgserver devdata]$ pgbackrest --stanza=clustpgserver --log-level-console=info info
stanza: clustpgserver
    status: ok

    db (current)
        wal archive min/max (10-1): 000000020000000000000054 / 00000002000000000000005A

        full backup: 20180214-095439F
            timestamp start/stop: 2018-02-14 09:54:39 / 2018-02-14 09:58:53
            wal start/stop: 000000020000000000000054 / 000000020000000000000054
            database size: 30.7MB, backup size: 30.7MB
            repository size: 3.6MB, repository backup size: 3.6MB

        incr backup: 20180214-095439F_20180214-100446I
            timestamp start/stop: 2018-02-14 10:04:46 / 2018-02-14 10:07:43
            wal start/stop: 000000020000000000000056 / 000000020000000000000056
            database size: 38.3MB, backup size: 7.6MB
            repository size: 4.5MB, repository backup size: 928.5KB
            backup reference list: 20180214-095439F

        full backup: 20180214-105603F
            timestamp start/stop: 2018-02-14 10:56:03 / 2018-02-14 11:12:26
            wal start/stop: 000000020000000000000058 / 000000020000000000000058
            database size: 38.3MB, backup size: 38.3MB
            repository size: 4.5MB, repository backup size: 4.5MB

        incr backup: 20180214-105603F_20180214-121044I
            timestamp start/stop: 2018-02-14 12:10:44 / 2018-02-14 12:15:14
            wal start/stop: 00000002000000000000005A / 00000002000000000000005A
            database size: 38.3MB, backup size: 1.1MB
            repository size: 4.5MB, repository backup size: 140.8KB
            backup reference list: 20180214-105603F
```
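Picking the right backup set for a PITR can also be scripted: the labels embed start timestamps, so we can select the newest set that started before the recovery target (strictly, you should also check the backup's stop time against the target, as the info output shows). A sketch with a hypothetical helper name:

```python
from datetime import datetime

# Pick the newest backup set whose start time precedes the recovery target.
# pgBackRest labels embed start times as YYYYMMDD-HHMMSS; for an incremental
# ("...F_...I") the part after '_' is the incremental's own start time.

def pick_backup_set(labels, target):
    def started(label):
        stamp = label.rsplit("_", 1)[-1].rstrip("FI")
        return datetime.strptime(stamp, "%Y%m%d-%H%M%S")
    usable = [l for l in labels if started(l) < target]
    return max(usable, key=started)

sets = ["20180214-095439F",
        "20180214-095439F_20180214-100446I",
        "20180214-105603F",
        "20180214-105603F_20180214-121044I"]   # started after the DROP TABLE
target = datetime(2018, 2, 14, 11, 39, 28)
print(pick_backup_set(sets, target))  # 20180214-105603F
```

This selects the same backup set, 20180214-105603F, that the post restores from below.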
For the restore we use the option --set, which allows us to specify the backup set to restore from. Note also the use of --type=time and --target:
```
[postgres@pgserver log]$ pgbackrest --stanza=clustpgserver --log-level-console=info --type=time "--target=2018-02-14 11:39:28.024378+01" --db-path=/u01/devdata --set=20180214-105603F restore
2018-02-14 13:36:50.848 P00 INFO: restore command begin 1.28: --db1-path=/u01/devdata --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --set=20180214-105603F --stanza=clustpgserver "--target=2018-02-14 11:39:28.024378+01" --type=time
2018-02-14 13:37:03.406 P00 INFO: restore backup set 20180214-105603F
...
```

At the end of the restore, let's look at the contents of the recovery.conf file that was written:

```
[postgres@pgserver devdata]$ cat recovery.conf
restore_command = '/usr/bin/pgbackrest --db1-path=/u01/devdata --log-level-console=info --stanza=clustpgserver archive-get %f "%p"'
recovery_target_time = '2018-02-14 11:39
```
If we start our new cluster, we can see in the log files that the point-in-time recovery is starting:

```
2018-02-14 13:54:23.824 CET [10049] LOG:  starting point-in-time recovery to 2018-02-14 11:39:28.024378+01
```
And once the recovery has finished, we can verify that the table article is present:

```
postgres=# \c test
You are now connected to database "test" as user "postgres".
test=# \d article
                        Table "public.article"
 Column |         Type          | Collation | Nullable | Default
--------+-----------------------+-----------+----------+---------
 nom    | character varying(50) |           |          |

test=# table article;
   nom
---------
 printer
(1 row)
```
Conclusion:
In this blog I talked about some of PgBackRest's features. But as already mentioned, there are many, many other options in this wonderful tool; the official documentation gives more information. In future blogs we will explore more advanced configurations.

The article Backup and Restore PostgreSQL with PgBackRest II appeared first on the dbi services blog.