
Feed aggregator

getting started with postgres plus advanced server (2) – setting up a backup and recovery server

Yann Neuhaus - Wed, 2015-04-29 23:50

The first post in this series explained how to get ppas installed on a linux system. Now that the database cluster is up and running we should immediately take care of backup and recovery. For this I'll use another system where I'll install and configure bart (the EDB Backup And Recovery Tool). So, the system overview for now is:

server     ip address       purpose
ppas       192.168.56.243   ppas database cluster
ppasbart   192.168.56.245   backup and recovery server


As bart requires the postgres binaries I'll just repeat the ppas installation on the bart server. Check the first post on how to do that.

tip: there is a "--extract-only" switch which only extracts the binaries without bringing up a database cluster.
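For example, the extraction on the bart server could look like this (a sketch; check the installer's --help for the exact switch syntax of your version):

# extract only the binaries, without initializing a database cluster
ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run --extract-only yes --prefix /opt/PostgresPlus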

After that just install the bart rpm:

yum localinstall edb-bart-1.0.2-1.rhel6.x86_64.rpm

All the files will be installed under:

ls -la /usr/edb-bart-1.0/
total 20
drwxr-xr-x.  4 root root    44 Apr 23 13:41 .
drwxr-xr-x. 14 root root  4096 Apr 23 13:41 ..
drwxr-xr-x.  2 root root    17 Apr 23 13:41 bin
drwxr-xr-x.  2 root root    21 Apr 23 13:41 etc
-rw-r--r--.  1 root root 15225 Jan 27 15:24 license.txt

Having a dedicated user for bart is a good idea:

# groupadd bart
# useradd -g bart bart
# passwd bart
Changing password for user bart.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.

As backups need some space a top level directory for all the bart backups needs to be created:

# mkdir /opt/backup
# chown bart:bart /opt/backup
# chmod 700 /opt/backup
# mkdir -p /opt/backup/ppas94/archived_wals

Now everything is in place to start the bart configuration. A minimal configuration file would look like this:

cat /usr/edb-bart-1.0/etc/bart.cfg
[BART]
bart-host = bart@192.168.56.245
backup_path = /opt/backup
pg_basebackup_path = /opt/PostgresPlus/9.4AS/bin/pg_basebackup
logfile = /var/tmp/bart.log
xlog-method = fetch

[PPAS94]
host = 192.168.56.243
port = 5444
user = enterprisedb
description = "PPAS 94 server"

The BART section is the global section while the following sections are specific to the database clusters to back up and restore. As bart requires passwordless ssh authentication between the bart host and the database host to be backed up, let's set this up. On the bart host (ppasbart):

su - bart
ssh-keygen -t rsa

On the host where database runs ( ppas ):

su -
cd /opt/PostgresPlus/9.4AS
mkdir .ssh
chown enterprisedb:enterprisedb .ssh/
chmod 700 .ssh/
su - enterprisedb
ssh-keygen -t rsa

As the public keys are now available we'll need to make them available on each host. On the ppas host:

cat .ssh/id_rsa.pub > .ssh/authorized_keys
chmod 600 .ssh/authorized_keys

Add the public key from the bart host to the authorized_keys file above. Example: get the public key from the bart host:

[bart@ppasbart ~]$ id
uid=1001(bart) gid=1001(bart) groups=1001(bart) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[bart@ppasbart ~]$ cat .ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN bart@ppasbart.local

Copy/paste this key into the authorized_keys file for the enterprisedb user on the database host, so that the file looks similar to this:

cat .ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN bart@ppasbart.local
[bart@ppasbart ~]$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQZWeegLpqVB20c3cIN0Bc7pN6OjFM5pBsunDbO6SQ0+UYxZGScwjnX9FSOlmYzqrlz62jxV2dOJBHgaJj/mbFs5XbmvFw6Z4Zj224aBOXAfej4nHqVnn1Tpuum4HIrbsau3rI+jLCNP+MKnumwM7JiG06dsoG4PeUOghCLyFrItq2/uCIDHWoeQCqqnLD/lLG5y1YXQCSR4VkiQm62tU0aTUBQdZWnvtgskKkHWyVRERfLOmlz2puvmmc5YxmQ5XBVMN5dIcIZntTfx3JC3imjrUl10L3hkiPkV0eAt3KtC1M0n9DDao3SfHFfKfEfp5p69vvpZM2uGFbcpkQrtN
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCn+DN//ia+BocR6kTfHkPoXfx3/HRU5KM1Bqy1nDeGnUn98CSl3kbRkUkiyumDfj4XOIoxOxnVJw6Invyi2VjzeQ12XMMILBFRBAoePDpy4kOQWY+SaS215G72DKzNYY8nGPUwjaQdFpFt3eQhwLP4D5uqomPIi9Dmv7Gp8ZHU0DBgJfrDaqrg8oF3GrzF50ZRjZTAkF3pDxJnrzIEEme+QQFKVxBnSU2ClS5XHdjMBWg+oSx3XSEBHZefP9NgX22ru52lTWmvTscUQbIbDo8SaWucIZC7uhvljteN4AuAdMv+OUblOm9ZUtO2Y8vX8hNMJvqRBlYh9RGl+m6wUZLN
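By the way, if ssh-copy-id is available on your distribution, it automates exactly this copy/paste; a sketch (each command prompts once for the target user's password):

# as bart on the ppasbart host:
ssh-copy-id enterprisedb@192.168.56.243
# as enterprisedb on the ppas host:
ssh-copy-id bart@192.168.56.245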

Make sure the file is the same on the bart host, then test that you can connect without a password:

[bart@ppasbart ~]$ hostname
ppasbart.local
[bart@ppasbart ~]$ ssh bart@ppasbart
Last login: Thu Apr 23 14:24:39 2015 from ppas
[bart@ppasbart ~]$ logout
Connection to ppasbart closed.
[bart@ppasbart ~]$ ssh enterprisedb@ppas
Last login: Thu Apr 23 14:24:47 2015 from ppas
-bash-4.2$ logout
Connection to ppas closed.

Do the same test on the ppas host:

bash-4.2$ hostname
ppas.local
-bash-4.2$ ssh bart@ppasbart
Last login: Thu Apr 23 14:22:07 2015 from ppasbart
[bart@ppasbart ~]$ logout
Connection to ppasbart closed.
-bash-4.2$ ssh enterprisedb@ppas
Last login: Thu Apr 23 14:22:18 2015 from ppasbart
-bash-4.2$ logout
Connection to ppas closed.
-bash-4.2$ 

Once this works we need to set up a replication user in the database being backed up. So create the user in the database which runs on the ppas host (I'll do that with the enterprisedb user instead of the postgres user, as we'll need to adjust the pg_hba.conf file right after creating the user):

[root@ppas 9.4AS]# su - enterprisedb
Last login: Thu Apr 23 14:25:50 CEST 2015 from ppasbart on pts/1
-bash-4.2$ . pgplus_env.sh
-bash-4.2$ psql -U enterprisedb
psql.bin (9.4.1.3)
Type "help" for help.

edb=# CREATE ROLE bart WITH LOGIN REPLICATION PASSWORD 'bart';       
CREATE ROLE
edb=# exit
-bash-4.2$ echo "host    all     bart         192.168.56.245/32          md5" >> data/pg_hba.conf
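Note that the new pg_hba.conf entry only becomes effective after a configuration reload; a minimal sketch, still as the enterprisedb user with the environment sourced:

# reload the server configuration so the new pg_hba.conf entry is picked up
-bash-4.2$ pg_ctl reload -D data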

Make sure that the IP matches your bart host. Then adjust the bart.cfg file on the bart host to match your configuration:

cat /usr/edb-bart-1.0/etc/bart.cfg
[BART]
bart-host = bart@192.168.56.245
backup_path = /opt/backup
pg_basebackup_path = /opt/PostgresPlus/9.4AS/bin/pg_basebackup
logfile = /var/tmp/bart.log
xlog-method = fetch

[PPAS94]
host = 192.168.56.243
port = 5444
user = bart
remote-host = enterprisedb@192.168.56.243
description = "PPAS 94 remote server"

Another requirement is that the bart database user must be able to connect to the database without being prompted for a password. Thus we create the .pgpass file on the bart host, from which the password is read:

[bart@ppasbart ~]$ cat .pgpass 
192.168.56.243:5444:*:bart:bart
[bart@ppasbart ~]$ chmod 600 .pgpass
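A quick check that the password is indeed picked up from .pgpass (a sketch, assuming the ppas binaries were installed on the bart host as described above); the query should run without any password prompt:

[bart@ppasbart ~]$ /opt/PostgresPlus/9.4AS/bin/psql -h 192.168.56.243 -p 5444 -U bart -d edb -c "select 1"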

As a last step we need to enable wal archiving on the database that should be backed up. The following parameters need to be set in the postgresql.conf file:

wal_level = archive  # or higher
archive_mode = on
archive_command = 'scp %p bart@192.168.56.245:/opt/backup/ppas94/archived_wals/%f'
max_wal_senders = 1  # or higher

Once done, restart the database cluster:

su -
service ppas-9.4 restart
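To verify that archiving works, you can force a WAL switch on the database host and check that the file arrives on the bart host (a sketch; pg_switch_xlog() is the function for this in 9.4):

# on the ppas host, as enterprisedb:
psql -U enterprisedb -c "select pg_switch_xlog()"
# on the ppasbart host:
ls -la /opt/backup/ppas94/archived_wals/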

Let's see if bart can see anything on the bart server:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg SHOW-SERVERS -s PPAS94 
Server name         : ppas94
Host name           : 192.168.56.243
User name           : bart
Port                : 5444
Remote host         : enterprisedb@192.168.56.243
Archive path        : /opt/backup/ppas94/archived_wals
WARNING: xlog-method is empty, defaulting to global policy
Xlog Method         : fetch
Tablespace path(s)  : 
Description         : "PPAS 94 remote server"

Looks fine. So let's do a backup:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg BACKUP -s PPAS94

INFO:  creating backup for server 'ppas94'
INFO:  backup identifier: '1429795268774'
WARNING: xlog-method is empty, defaulting to global policy
56357/56357 kB (100%), 1/1 tablespace

INFO:  backup checksum: 6e614f981902c99326a7625a9c262d98
INFO:  backup completed successfully

Cool. Let's see what is in the backup catalog:

[root@ppasbart tmp]# ls -la /opt/backup/
total 0
drwx------. 3 bart bart 19 Apr 23 15:02 .
drwxr-xr-x. 4 root root 38 Apr 23 13:49 ..
drwx------. 4 bart bart 46 Apr 23 15:21 ppas94
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/
total 4
drwx------. 4 bart bart   46 Apr 23 15:21 .
drwx------. 3 bart bart   19 Apr 23 15:02 ..
drwx------. 2 bart bart   36 Apr 23 15:21 1429795268774
drwx------. 2 bart bart 4096 Apr 23 15:21 archived_wals
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/1429795268774/
total 56364
drwx------. 2 bart bart       36 Apr 23 15:21 .
drwx------. 4 bart bart       46 Apr 23 15:21 ..
-rw-rw-r--. 1 bart bart       33 Apr 23 15:21 base.md5
-rw-rw-r--. 1 bart bart 57710592 Apr 23 15:21 base.tar
[root@ppasbart tmp]# ls -la /opt/backup/ppas94/archived_wals/
total 81928
drwx------. 2 bart bart     4096 Apr 23 15:21 .
drwx------. 4 bart bart       46 Apr 23 15:21 ..
-rw-------. 1 bart bart 16777216 Apr 23 15:10 000000010000000000000002
-rw-------. 1 bart bart 16777216 Apr 23 15:13 000000010000000000000003
-rw-------. 1 bart bart 16777216 Apr 23 15:20 000000010000000000000004
-rw-------. 1 bart bart 16777216 Apr 23 15:21 000000010000000000000005
-rw-------. 1 bart bart 16777216 Apr 23 15:21 000000010000000000000006
-rw-------. 1 bart bart      304 Apr 23 15:21 000000010000000000000006.00000028.backup

Use the SHOW-BACKUPS switch to get an overview of the available backups:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg SHOW-BACKUPS 
 Server Name   Backup ID       Backup Time           Backup Size  
                                                                  
 ppas94        1429795268774   2015-04-23 15:21:23   55.0371 MB   
 ppas94        1429795515326   2015-04-23 15:25:18   5.72567 MB   
 ppas94        1429795614916   2015-04-23 15:26:58   5.72567 MB   
                                                                  

A backup without a restore proves nothing, so let's try to restore one of the backups to a different directory on the ppas server:

[root@ppas 9.4AS]# mkdir /opt/PostgresPlus/9.4AS/data2
[root@ppas 9.4AS]# chown enterprisedb:enterprisedb /opt/PostgresPlus/9.4AS/data2

On the ppasbart host do the restore:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg RESTORE -s PPAS94 -i 1429795614916 -r enterprisedb@ppas -p /opt/PostgresPlus/9.4AS/data2
INFO:  restoring backup '1429795614916' of server 'ppas94'
INFO:  restoring backup to enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
INFO:  base backup restored
INFO:  archiving is disabled
INFO:  backup restored successfully at enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2

Looks good. Let's see what is in the data2 directory on the ppas host:

[root@ppas 9.4AS]# ls /opt/PostgresPlus/9.4AS/data2
backup_label  dbms_pipe  pg_clog      pg_hba.conf    pg_log      pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc    PG_VERSION  postgresql.auto.conf
base          global     pg_dynshmem  pg_ident.conf  pg_logical  pg_notify     pg_serial    pg_stat       pg_subtrans  pg_twophase  pg_xlog     postgresql.conf
[root@ppas 9.4AS]# ls /opt/PostgresPlus/9.4AS/data2/pg_xlog
000000010000000000000008  archive_status

Looks good, too. As this is all on the same server, we need to change the port before bringing up the database:

-bash-4.2$ grep port postgresql.conf  | head  -1
port = 5445				# (change requires restart)
-bash-4.2$ pg_ctl start -D data2/
server starting
-bash-4.2$ 2015-04-23 16:01:30 CEST FATAL:  data directory "/opt/PostgresPlus/9.4AS/data2" has group or world access
2015-04-23 16:01:30 CEST DETAIL:  Permissions should be u=rwx (0700).

Ok, fine. Change it:

-bash-4.2$ chmod 700 /opt/PostgresPlus/9.4AS/data2
-bash-4.2$ pg_ctl start -D data2/
server starting
-bash-4.2$ 2015-04-23 16:02:00 CEST LOG:  redirecting log output to logging collector process
2015-04-23 16:02:00 CEST HINT:  Future log output will appear in directory "pg_log".

Seems ok, let's connect:

-bash-4.2$ psql -p 5445 -U bart
Password for user bart: 
psql.bin (9.4.1.3)
Type "help" for help.

edb=> \l
                                           List of databases
   Name    |    Owner     | Encoding |   Collate   |    Ctype    | ICU |       Access privileges       
-----------+--------------+----------+-------------+-------------+-----+-------------------------------
 edb       | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | 
 postgres  | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | 
 template0 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
 template1 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
(4 rows)

Cool. Works. But: archiving is disabled and you'll need to enable it again. This is the default behavior of bart, as it appends "archive_mode = off" to the end of the postgresql.conf of the restored cluster. Take care to adjust the archive_command parameter as well, because otherwise the clone would scp its archived WALs to the same directory on the ppasbart server as the original database does.
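A minimal sketch of what to put back into data2/postgresql.conf before restarting, assuming you want the clone's WALs in a separate (hypothetical) directory on the bart host:

archive_mode = on     # remove the 'archive_mode = off' line appended by bart
archive_command = 'scp %p bart@192.168.56.245:/opt/backup/ppas94_clone/archived_wals/%f'

Can we do a point in time recovery? Let's try (I'll destroy the restored database cluster and reuse the same data2 directory):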

-bash-4.2$ pg_ctl -D data2 stop -m fast
waiting for server to shut down.... done
server stopped
-bash-4.2$ rm -rf data2/*
-bash-4.2$ 

Let's try the restore to a specific point in time:

[bart@ppasbart ~]$ /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg RESTORE -s PPAS94 -i 1429795614916 -r enterprisedb@ppas -p /opt/PostgresPlus/9.4AS/data2 -g '2015-04-03 15:23:00'
INFO:  restoring backup '1429795614916' of server 'ppas94'
INFO:  restoring backup to enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2
INFO:  base backup restored
INFO:  creating recovery.conf file
INFO:  archiving is disabled
INFO:  backup restored successfully at enterprisedb@ppas:/opt/PostgresPlus/9.4AS/data2

Seems ok, but what is the difference? When specifying a point in time, a recovery.conf file is created for the restored database cluster:

-bash-4.2$ cat data2/recovery.conf
restore_command = 'scp -o BatchMode=yes -o PasswordAuthentication=no bart@192.168.56.245:/opt/backup/ppas94/archived_wals/%f %p'
recovery_target_time = '2015-04-03 15:23:00'

Let's start the database (after changing the port again in postgresql.conf):

-bash-4.2$ pg_ctl -D data2 start
server starting
-bash-4.2$ 2015-04-23 16:16:12 CEST LOG:  redirecting log output to logging collector process
2015-04-23 16:16:12 CEST HINT:  Future log output will appear in directory "pg_log".

Are we able to connect?

-bash-4.2$ psql -U bart -p 5445 
Password for user bart: 
psql.bin (9.4.1.3)
Type "help" for help.

edb=>

Works, too. So now we have a central backup server for our postgresql infrastructure from which backups and restores can be executed. Combine this with backup software (like NetBackup) which picks up the backups from the bart server and you should be fine. In the next post we'll set up a hot standby database server.

A migration pitfall with ALL COLUMN SIZE AUTO

Yann Neuhaus - Wed, 2015-04-29 13:05

When you migrate, you should be prepared to face some execution plan changes. That's not new. But here I'll show you a case where you get several bad execution plans because a lot of histograms are missing. The version is the same. The system is the same. You've migrated with DataPump, importing all statistics. You have the same automatic job to gather statistics with all default options. You have repeated the migration several times on a system where you constantly reproduce the load, and have done a lot of regression tests. Everything was ok.
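For reference, this is the statistics gathering option the title refers to; a minimal sketch on a hypothetical table T:

-- T is a hypothetical table name; 'for all columns size auto' lets Oracle decide where to create histograms
exec dbms_stats.gather_table_stats(user, 'T', method_opt => 'for all columns size auto');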

SQL Server Tips: How to know if In-Memory Feature is supported by your server?

Yann Neuhaus - Wed, 2015-04-29 00:21


A customer asked me how to know whether the In-Memory feature is supported by his SQL Server instance.

An easy way is to check the edition, version, etc., but now there is a property for that directly.

 

On MSDN (here), you find all the properties that you can query with the T-SQL function SERVERPROPERTY.

 

But if you try to run it through all your servers with CMS (Central Management Server), for all SQL Server versions below SQL Server 2014 you get a NULL value.


 InMemory_Property01.png

 

I quickly wrote this script to avoid the NULL value and get usable info:

SELECT CASE CAST(ServerProperty('IsXTPSupported') AS INT)
         WHEN 0 THEN 'Not Supported'
         WHEN 1 THEN 'Supported'
         ELSE 'Not Available Version (must be SQL2014 or Higher)'
       END AS In_Memory_Supported

 


InMemory_Property02.png
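If you want to run the check from a shell before rolling it out via CMS, a sketch with sqlcmd (MYSERVER is a placeholder for your instance name):

sqlcmd -S MYSERVER -Q "SELECT CASE CAST(ServerProperty('IsXTPSupported') AS INT) WHEN 0 THEN 'Not Supported' WHEN 1 THEN 'Supported' ELSE 'Not Available Version (must be SQL2014 or Higher)' END AS In_Memory_Supported"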


I hope this script can help you, and if you want to know more about SQL Server In-Memory technology, come to our event in June 2015: registration & details here.




getting started with postgres plus advanced server (1) - setting up ppas

Yann Neuhaus - Tue, 2015-04-28 10:35

I did several posts around postgresql and postgres plus advanced server in the past. What is missing is a beginner's guide on how to get postgres plus advanced server up and running, including a solution for backup and recovery, high availability and monitoring. So I thought I'd write a guide on how to do that, consisting of:

  1. setting up postgres plus advanced server
  2. setting up a backup and recovery server
  3. setting up a hot standby database
  4. setting up monitoring

As this is the first post of the series, it is about getting ppas installed and creating the first database cluster.

Obviously the first thing to do is to install an operating system. Several are supported, just choose the one you like. An example setup can be found here. Once ppas has been downloaded and transferred to the system where it is supposed to be installed, we can start. There are several ways to get ppas installed on the system, but before you begin, Java should be installed. For yum based distributions this is done by:

yum install java
Using the standalone installer in interactive mode

Starting the installation is just a matter of extracting the file and executing it:

[root@oel7 tmp]# ls
ppasmeta-9.4.1.3-linux-x64.tar.gz
[root@oel7 tmp]# tar -axf ppasmeta-9.4.1.3-linux-x64.tar.gz 
[root@oel7 tmp]# ls
ppasmeta-9.4.1.3-linux-x64  ppasmeta-9.4.1.3-linux-x64.tar.gz
[root@oel7 tmp]# ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run 

Click through the installer screens, providing the username and password you used for downloading the product when prompted. Done.

Using the standalone installer in interactive text mode

If you do not want to use the graphical user interface you can launch the installer in interactive text mode:

# ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run --mode text

Either go with the default options or adjust what you like. The questions should be self-explanatory:

Language Selection

Please select the installation language
[1] English - English
[2] Japanese - 日本語
[3] Simplified Chinese - 简体中文
[4] Traditional Chinese - 繁体中文
[5] Korean - 한국어
Please choose an option [1] : 1
----------------------------------------------------------------------------
Welcome to the Postgres Plus Advanced Server Setup Wizard.

----------------------------------------------------------------------------
Please read the following License Agreement. You must accept the terms of this 
agreement before continuing with the installation.

Press [Enter] to continue:
.....
.....
Press [Enter] to continue:

Do you accept this license? [y/n]: y

----------------------------------------------------------------------------
User Authentication

This installation requires a registration with EnterpriseDB.com. Please enter 
your credentials below. If you do not have an account, Please create one now on 
https://www.enterprisedb.com/user-login-registration

Email []: 

Password : xxxxx

----------------------------------------------------------------------------
Please specify the directory where Postgres Plus Advanced Server will be 
installed.

Installation Directory [/opt/PostgresPlus]: 

----------------------------------------------------------------------------
Select the components you want to install.

Database Server [Y/n] :y

Connectors [Y/n] :y

Infinite Cache [Y/n] :y

Migration Toolkit [Y/n] :y

Postgres Enterprise Manager Client [Y/n] :y

pgpool-II [Y/n] :y

pgpool-II Extensions [Y/n] :y

EDB*Plus [Y/n] :y

Slony Replication [Y/n] :y

PgBouncer [Y/n] :y

Is the selection above correct? [Y/n]: y

----------------------------------------------------------------------------
Additional Directories

Please select a directory under which to store your data.

Data Directory [/opt/PostgresPlus/9.4AS/data]: 

Please select a directory under which to store your Write-Ahead Logs.

Write-Ahead Log (WAL) Directory [/opt/PostgresPlus/9.4AS/data/pg_xlog]: 

----------------------------------------------------------------------------
Configuration Mode

Postgres Plus Advanced Server always installs with Oracle(R) compatibility features and maintains full PostgreSQL compliance. Select your style preference for installation defaults and samples.

The Oracle configuration will cause the use of certain objects  (e.g. DATE data types, string operations, etc.) to produce Oracle compatible results, create the same Oracle sample tables, and have the database match Oracle examples used in the documentation.

Configuration Mode

[1] Oracle Compatible
[2] PostgreSQL Compatible
Please choose an option [1] : 1

----------------------------------------------------------------------------
Please provide a password for the database superuser (enterprisedb). A locked 
Unix user account (enterprisedb) will be created if not present.

Password :
Retype Password :
----------------------------------------------------------------------------
Additional Configuration

Please select the port number the server should listen on.

Port [5444]: 

Select the locale to be used by the new database cluster.

Locale

[1] [Default locale]
......
Please choose an option [1] : 1

Install sample tables and procedures. [Y/n]: Y

----------------------------------------------------------------------------
Dynatune Dynamic Tuning:
Server Utilization

Please select the type of server to determine the amount of system resources 
that may be utilized:

[1] Development (e.g. a developer's laptop)
[2] General Purpose (e.g. a web or application server)
[3] Dedicated (a server running only Postgres Plus)
Please choose an option [2] : 2

----------------------------------------------------------------------------
Dynatune Dynamic Tuning:
Workload Profile

Please select the type of workload this server will be used for:

[1] Transaction Processing (OLTP systems)
[2] General Purpose (OLTP and reporting workloads)
[3] Reporting (Complex queries or OLAP workloads)
Please choose an option [1] : 2

----------------------------------------------------------------------------
Advanced Configuration

----------------------------------------------------------------------------
PgBouncer Listening Port [6432]: 

----------------------------------------------------------------------------
Service Configuration

Autostart PgBouncer Service [Y/n]: n

Autostart pgAgent Service [Y/n]: n

Update Notification Service [Y/n]: n

The Update Notification Service informs, downloads and installs whenever 
security patches and other updates are available for your Postgres Plus Advanced 
Server installation.

----------------------------------------------------------------------------
Pre Installation Summary

Following settings will be used for installation:

Installation Directory: /opt/PostgresPlus
Data Directory: /opt/PostgresPlus/9.4AS/data
WAL Directory: /opt/PostgresPlus/9.4AS/data/pg_xlog
Database Port: 5444
Database Superuser: enterprisedb
Operating System Account: enterprisedb
Database Service: ppas-9.4
PgBouncer Listening Port: 6432

Press [Enter] to continue:

----------------------------------------------------------------------------
Setup is now ready to begin installing Postgres Plus Advanced Server on your 
computer.

Do you want to continue? [Y/n]: Y

----------------------------------------------------------------------------
Please wait while Setup installs Postgres Plus Advanced Server on your computer.

 Installing Postgres Plus Advanced Server
 0% ______________ 50% ______________ 100%
 ########################################
 Installing Database Server ...
 Installing pgAgent ...
 Installing Connectors ...
 Installing Migration Toolkit ...
 Installing EDB*Plus ...
 Installing Infinite Cache ...
 Installing Postgres Enterprise Manager Client ...
 Installing Slony Replication ...
 Installing pgpool-II ...
 Installing pgpool-II Extensions ...
 Installing PgBouncer ...
 Installing StackBuilder Plus ...
 #

----------------------------------------------------------------------------
Setup has finished installing Postgres Plus Advanced Server on your computer.

done.

Using the standalone installer in unattended mode

Another option is to use the unattended mode by providing all the parameters on the command line or by creating a configuration file. This is an example of providing the parameters on the command line. Most of the parameters can be skipped and the defaults are applied:

ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run --mode unattended 
   --enable-components dbserver,connectors,infinitecache,edbmtk,pem_client,
                       pgpool,pgpoolextension,edbplus,replication,pgbouncer 
   --installer-language en --superaccount enterprisedb 
   --servicename ppas-9.4 --serviceaccount enterprisedb 
   --prefix /opt/PostgresPlus --datadir /opt/PostgresPlus/9.4AS/data 
   --xlogdir /opt/PostgresPlus/9.4AS/data/pg_xlog 
   --databasemode oracle --superpassword enterprisedb 
   --webusername xx.xx@xx.xxx --webpassword xxxxx

 Installing Database Server ...
 Installing pgAgent ...
 Installing Connectors ...
 Installing Migration Toolkit ...
 Installing EDB*Plus ...
 Installing Infinite Cache ...
 Installing Postgres Enterprise Manager Client ...
 Installing Slony Replication ...
 Installing pgpool-II ...
 Installing pgpool-II Extensions ...
 Installing PgBouncer ...
 Installing StackBuilder Plus ...X11 connection rejected because of wrong authentication.
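The same parameters can also be kept in a file instead of being typed on the command line; a sketch, assuming the installer supports the --optionfile switch that BitRock-based installers usually provide:

# /tmp/ppas.options is a hypothetical option file with one key=value per line, e.g.:
#   mode=unattended
#   prefix=/opt/PostgresPlus
#   datadir=/opt/PostgresPlus/9.4AS/data
ppasmeta-9.4.1.3-linux-x64/ppasmeta-9.4.1.3-linux-x64.run --optionfile /tmp/ppas.options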

Done. No matter which installation method was chosen, the result is that ppas is installed and the database cluster is initialized. You might check the processes:

# ps -ef | grep postgres
enterpr+ 12759     1  0 12:03 ?        00:00:00 /opt/PostgresPlus/9.4AS/bin/edb-postgres -D /opt/PostgresPlus/9.4AS/data
enterpr+ 12760 12759  0 12:03 ?        00:00:00 postgres: logger process   
enterpr+ 12762 12759  0 12:03 ?        00:00:00 postgres: checkpointer process   
enterpr+ 12763 12759  0 12:03 ?        00:00:00 postgres: writer process   
enterpr+ 12764 12759  0 12:03 ?        00:00:00 postgres: wal writer process   
enterpr+ 12765 12759  0 12:03 ?        00:00:00 postgres: autovacuum launcher process   
enterpr+ 12766 12759  0 12:03 ?        00:00:00 postgres: stats collector process   
enterpr+ 12882 12759  0 12:03 ?        00:00:00 postgres: enterprisedb edb ::1[45984] idle
root     13866  2619  0 12:15 pts/0    00:00:00 grep --color=auto postgres

Or the services that got created:

# chkconfig --list | grep ppas

Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration.

      If you want to list systemd services use 'systemctl list-unit-files'.
      To see services enabled on particular target use
      'systemctl list-dependencies [target]'.

ppas-9.4       	0:off	1:off	2:on	3:on	4:on	5:on	6:off
ppas-agent-9.4 	0:off	1:off	2:on	3:on	4:on	5:on	6:off
ppas-infinitecache	0:off	1:off	2:off	3:off	4:off	5:off	6:off
ppas-pgpool    	0:off	1:off	2:off	3:off	4:off	5:off	6:off
ppas-replication-9.4	0:off	1:off	2:off	3:off	4:off	5:off	6:off
# ls -la /etc/init.d/ppas*
-rwxr-xr-x. 1 root root 3663 Apr 23 12:03 /etc/init.d/ppas-9.4
-rwxr-xr-x. 1 root root 2630 Apr 23 12:03 /etc/init.d/ppas-agent-9.4
-rwxr-xr-x. 1 root root 1924 Apr 23 12:04 /etc/init.d/ppas-infinitecache
-rwxr-xr-x. 1 root root 3035 Apr 23 12:04 /etc/init.d/ppas-pgpool
-rwxr-xr-x. 1 root root 3083 Apr 23 12:04 /etc/init.d/ppas-replication-9.4

As the account which installed the software should not be used to work with the database, let's create an os account for connecting to the database:

# groupadd postgres
# useradd -g postgres postgres
# passwd postgres
Changing password for user postgres.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.

ppas ships an environment file which sets all the environment variables. Let's source it in the profile so it is available for future logins:

su - postgres
echo ". /opt/PostgresPlus/9.4AS/pgplus_env.sh" >> .bash_profile

Once you log in to the postgres account the environment is there:

$ env | grep PG
PGPORT=5444
PGDATABASE=edb
PGLOCALEDIR=/opt/PostgresPlus/9.4AS/share/locale
PGDATA=/opt/PostgresPlus/9.4AS/data
$ env | grep EDB
EDBHOME=/opt/PostgresPlus/9.4AS

Now we are ready to log in to the database:

$ psql -U enterprisedb
Password for user enterprisedb: 
psql.bin (9.4.1.3)
Type "help" for help.

edb=# \l
                                           List of databases
   Name    |    Owner     | Encoding |   Collate   |    Ctype    | ICU |       Access privileges       
-----------+--------------+----------+-------------+-------------+-----+-------------------------------
 edb       | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | 
 postgres  | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | 
 template0 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
 template1 | enterprisedb | UTF8     | en_US.UTF-8 | en_US.UTF-8 |     | =c/enterprisedb              +
           |              |          |             |             |     | enterprisedb=CTc/enterprisedb
(4 rows)

Mission completed. The next post will set up a backup and recovery server for backing up and restoring the ppas database cluster.

Azure DocumentDB: Microsoft goes to NoSQL

Yann Neuhaus - Mon, 2015-04-27 02:26

Azure DocumentDB is a document database service and, as its name suggests, it is a Microsoft Azure cloud service.

b2ap3_thumbnail_Blog_DocumentDB01.jpg

Developers have had access to this new service since last summer, and now it is available to all.
The Redmond company offers a "NoSQL" database for the first time.

List listeners and services from the instance

Yann Neuhaus - Fri, 2015-04-24 11:35

Want to know all your listeners - including scan listeners - and the services they listen for? It is possible from the instance, with the - undocumented - view V$LISTENER_NETWORK, which has been there since 11.2.
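A minimal sketch of querying it, assuming the usual TYPE and VALUE columns of this view:

SQL> select type, value from v$listener_network order by type;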

ALTER TABLE INMEMORY

Yann Neuhaus - Thu, 2015-04-23 06:52

In-Memory Column Store is amazing. It brings very good performance to full table scans. It's easy: just 'flip a switch' and you accelerate all reporting queries on your table, without thinking about what to index and how. But in this post, I would like to warn you about the consequences when you just flip that switch. The new full table scan plans will replace the old ones... even before the table is populated in memory...
I'm not sure that this is the expected behaviour. In my opinion the CBO should consider INMEMORY plans only once the population is done. But here is the example.

Test case

Here is the test case. I have a table DEMO with bitmap indexes on its columns:

12:04:54 SQL> create table DEMO compress as
12:04:54   2  with M as (select substr(dbms_random.string('U',1),1,1) U from dual connect by 10>=level)
12:04:54   3  select M1.U U1, M2.U U2, M3.U U3, M4.U U4 from M M1,M M2, M M3, M M4, (select * from dual connect by 1000>=level)
12:04:54   4  /
Table created.

12:05:00 SQL> create bitmap index DEMO_U1 on DEMO(U1);
Index created.
12:05:01 SQL> create bitmap index DEMO_U2 on DEMO(U2);
Index created.
12:05:03 SQL> create bitmap index DEMO_U3 on DEMO(U3);
Index created.
12:05:04 SQL> create bitmap index DEMO_U4 on DEMO(U4);
Index created.
And my test query on those columns:
12:05:05 SQL> alter session set statistics_level=all;
Session altered.
12:05:05 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected
with its execution plan:
12:05:06 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
SQL_ID  64skw45ghn5a0, child number 0
-------------------------------------
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 3881032911

---------------------------------------------------------------------------------------
| Id  | Operation                      | Name    | Starts | E-Rows | A-Rows | Buffers |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |         |      1 |        |      0 |       2 |
|   1 |  HASH UNIQUE                   |         |      1 |      2 |      0 |       2 |
|   2 |   TABLE ACCESS BY INDEX ROWID  | DEMO    |      1 |   4070 |      0 |       2 |
|   3 |    BITMAP CONVERSION TO ROWIDS |         |      1 |        |      0 |       2 |
|   4 |     BITMAP AND                 |         |      1 |        |      0 |       2 |
|   5 |      BITMAP MERGE              |         |      1 |        |      0 |       2 |
|*  6 |       BITMAP INDEX RANGE SCAN  | DEMO_U2 |      1 |        |      0 |       2 |
|*  7 |      BITMAP INDEX SINGLE VALUE | DEMO_U1 |      1 |        |      0 |       0 |
|*  8 |      BITMAP INDEX SINGLE VALUE | DEMO_U4 |      1 |        |      0 |       0 |
|   9 |      BITMAP OR                 |         |      1 |        |      0 |       0 |
|* 10 |       BITMAP INDEX SINGLE VALUE| DEMO_U3 |      1 |        |      0 |       0 |
|* 11 |       BITMAP INDEX SINGLE VALUE| DEMO_U3 |      1 |        |      0 |       0 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("U2">'X')
       filter("U2">'X')
   7 - access("U1"='A')
   8 - access("U4"='B')
  10 - access("U3"='A')
  11 - access("U3"='E')


34 rows selected.
Good. I'm happy with that plan. But I have the In-Memory option, so probably I can get rid of those bitmap indexes.

alter table INMEMORY

Let's put that table in memory:

12:05:06 SQL> alter table DEMO inmemory priority none memcompress for query high;
Table altered.
and run that query again
12:05:06 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:05:07 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  64skw45ghn5a0, child number 0
-------------------------------------
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers | Reads  |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |   13740 |  13736 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |   13740 |  13736 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   4070 |      0 |   13740 |  13736 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - inmemory(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
       filter(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
Here is my problem. Now that I have defined the table to be populated into the In-Memory Column Store, the CBO chooses an In-Memory plan for my query.

This is a FULL TABLE SCAN because you can only do full table scans from the In-Memory Column Store. But I have a problem. The column store is not yet populated:

12:05:07 SQL> select segment_name,inmemory_size,bytes_not_populated from v$im_segments;
no rows selected
So the FULL TABLE SCAN occurred on the row store. Look at the statistics above: 13740 logical reads from the buffer cache, and 13736 physical reads because that table is not in the buffer cache. I always used index access for it before, so the table blocks are not in the buffer cache. And the full table scan has a good chance to be done in direct-path.
I still have very good access from the bitmap indexes - which are still there - but now I'm doing a very expensive full table scan.

Population

Look at the same query two seconds later:

12:05:09 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:05:09 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  64skw45ghn5a0, child number 0
-------------------------------------
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers | Reads  |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |   11120 |  11117 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |   11120 |  11117 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   4070 |      0 |   11120 |  11117 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - inmemory(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
       filter(("U2">'X' AND "U1"='A' AND "U4"='B' AND INTERNAL_FUNCTION("U3")))
It is just a bit better: 11117 physical reads instead of 13736. This is because some In-Memory Compression Units are already there in the In-Memory Column Store:
12:05:10 SQL> select segment_name,inmemory_size,bytes_not_populated from v$im_segments;

SEGMENT_NA INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------- ---------- -------------------
DEMO             6815744  117440512            88973312
Of the 117440512 bytes (which is 14336 8k blocks), 88973312 (10861 8k blocks) are not yet populated. This is why a bit earlier the query still had to read 11120 blocks from the buffer cache.

Let's wait 1 minute for the population. Remember that during that time, the population uses a lot of CPU in order to read the row store blocks, transform them into columns, compress them and store them into the column store.

12:06:04 SQL> select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E') and U4='B';
no rows selected

12:06:04 SQL> select * from table(dbms_xplan.display_cursor(format=>'iostats last'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------
SQL_ID  64skw45ghn5a0, child number 0
-------------------------------------
select distinct * from DEMO where U1='A' and U2>'X' and U3 in ('A','E')
and U4='B'

Plan hash value: 51067428

---------------------------------------------------------------------------------
| Id  | Operation                   | Name | Starts | E-Rows | A-Rows | Buffers |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |       3 |
|   1 |  HASH UNIQUE                |      |      1 |      2 |      0 |       3 |
|*  2 |   TABLE ACCESS INMEMORY FULL| DEMO |      1 |   2546 |      0 |       3 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - inmemory(("U1"='A' AND "U4"='B' AND "U2">'X' AND INTERNAL_FUNCTION("U3")))
       filter(("U1"='A' AND "U4"='B' AND "U2">'X' AND INTERNAL_FUNCTION("U3")))
Ok, now only 3 blocks were read from the buffer cache. I now have good performance that I can compare with what I had with the bitmap indexes.

This is because population is completed:

12:06:15 SQL> select segment_name,inmemory_size,bytes,bytes_not_populated from v$im_segments;

SEGMENT_NA INMEMORY_SIZE      BYTES BYTES_NOT_POPULATED
---------- ------------- ---------- -------------------
DEMO            31195136  117440512                   0

Conclusion

My conclusion is that altering a table to populate it into the In-memory Column Store looks like an easy operation. But it is not. When you do that:

  • You change the plans to FULL TABLE SCAN which will not be optimal until the table is fully populated.
  • You trigger the population, which will significantly increase your server CPU usage
  • You risk getting tables that are only partially populated if you're in RAC, or if you don't have enough space in the inmemory_size
So this is something to plan and to monitor. And you will also need to think about what happens if your instance crashes and you have to restart it. How long will it take to get back to correct performance?
And that's even without asking yourself yet whether you can drop those bitmap indexes that are superseded by the In-Memory Column Store now.
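Monitoring the population is straightforward with the view used throughout this post; a minimal sketch to check that population is complete before relying on the new plans:

select segment_name, bytes_not_populated from v$im_segments;

Once BYTES_NOT_POPULATED is 0 for the segment, the full table scan runs entirely from the column store.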

Of course, there are solutions for any problem. If you are on Exadata, then SmartScan will come to the rescue until the IMCS is populated. Full table scan is offloaded to the storage nodes, and database node CPU resources are available for quick population. In that way, they are complementary.

Successful launch dbi services Zürich

Yann Neuhaus - Wed, 2015-04-22 12:50

Yesterday evening dbi services (headquartered in Delémont) officially launched its third branch, in Zürich (besides Basel and Lausanne). Five years after its take-off, the "Oracle Database Partner of the Year 2014" employs more than 40 consultants. I would like to use this opportunity to thank all the customers and partners who trust dbi services. A particular thanks goes to the customers and partners who helped us enjoy a very pleasant inauguration party yesterday.

Thanks also to Mr Thomas Salzmann (KKG), who presented our successful collaboration, and to Massimo Castelli (Oracle), who presented the challenges of recruitment in the IT branch. I was pleased to see that large players like Oracle do, sometimes, have the same challenges as mid-sized companies :-) .

All this adventure would not have been possible without our incredible teams, working hard every day to transform ideas and problems into projects and solutions. dbi services will continue to leverage the skills of its employees and to look for opportunities, in order to remain a top-level provider for the operating system, database and middleware layers.

A final thanks goes to Kurt Meier, who will lead the dbi services branch in Zürich, for the very good organisation of this party. Having already won the first customers, Kurt proved that dbi services will succeed and manage this new challenge.

b2ap3_thumbnail_rsz_dsc_2404.jpg

 

b2ap3_thumbnail_rsz_dsc_2390.jpg

 

D2 xPlore search on apostrophes

Yann Neuhaus - Wed, 2015-04-22 03:32

When using D2 searches you are likely to run into trouble with special characters such as apostrophes. The goal of this blog is to show you how to configure searches for new special characters.

In many documents you'll have text with apostrophes or special characters that you want to search on. But unfortunately, by default, the D2 search will return nothing when you enter apostrophes directly into the search bar. xPlore replaces special characters by spaces and stores the two words one after the other, to match them more easily in this order.

In fact this is not a D2 issue; your xPlore is likely not set up to handle these special characters. By default xPlore is set to recognize apostrophes, but in Word, for example, you have different kinds of apostrophes. These characters have different character codes, so xPlore doesn't recognize them.

To solve this issue you simply have to tell xPlore to handle Word's apostrophes (or whatever character you want to search on).

In order to do this, log in to your xPlore server, then edit the following file:

$DSEARCH_HOME/config/indexserverconfig.xml

Find the line with:

special-characters="@#$%^_~`&:.()-+='/\[]{}" context-characters="!,;?""

Then add your apostrophes or special characters as follows (copy and paste them directly from Word into the file):

New Line"@#$%^_~`&:.()-+='/\[]{}’‘„“"

And save the file.

Newly indexed documents can now be searched with apostrophes. But note that if you want older documents to be searchable as well, you will need to re-index the whole repository.

Weblogic ThreadPool has stuck threads

Yann Neuhaus - Wed, 2015-04-22 02:49

In Weblogic it is common to see this warning: ThreadPool has stuck threads. Here we will see a way to determine the cause.

When monitoring Weblogic you can notice that, from time to time, your servers go into Warning mode. And when clicking on the warnings you see this screen:

Warning-reason.png

The reason is presented as "ThreadPool has stuck threads", which doesn't help a lot. But we can take a deeper look and maybe find the real cause.

Now click on the server name from your list, then go to Monitoring -> Threads.

The Hogging Thread Count column shows how many threads seem stuck. The Pending User Request Count column shows the number of requests not yet delivered to the users. If it is different from 0, your users are impacted.

In order to visualize the real state of threads click on Dump Thread Stacks:

Monitoring-threads-Edited.png

Some threads are marked as stuck whereas they aren't. If the process handled by the thread takes too long to complete, Weblogic will flag it as stuck. By default Weblogic detects stuck threads after 600 seconds (10 minutes) of waiting (this parameter can be changed).

When the thread dump is displayed you can search for threads with STUCK status:

Thread-dump.png

Here you can see that the thread is stuck in the java.lang.Object.wait() function. It means that the thread is waiting for a result or for another process to end. In this particular case we can see that the function com.crystaldecisions.sdk.occa.managedreports.ras.internal.CECORBACommunicationAdapter.request() was executed just before waiting, so the thread is likely waiting for the result of this "request" function.

As we suspected, the issue came from a reporting server that could not deliver the reports anymore. That's why we had some stuck threads.

Stuck threads are generally generated by the application itself or by other components which have nothing to do with Weblogic.
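If you prefer the command line to the console's Dump Thread Stacks button, the standard JDK tools give you the same information; a sketch (the pid is that of the managed server's JVM):

jps -l                           # find the pid of the managed server JVM
jstack <pid> > /tmp/threads.txt  # or: kill -3 <pid>, the dump then goes to the stdout log
grep -c STUCK /tmp/threads.txt   # count the threads flagged as STUCK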

Last point, you can check if the thread is stuck in the previous view like this:

Threads-stuck-Edited.png

SQL Server - Change Management: list all updates

Yann Neuhaus - Mon, 2015-04-20 01:29

I want to list all SQL Server updates installed on a server, including Service Packs, Cumulative Updates and other fixes, as we can see them in the uninstall panel of Windows.

installed-update.png

 

C15LV: Nothing to BLOG About - Think Again

Yann Neuhaus - Wed, 2015-04-15 19:58

Before going to the airport to fly back to Europe, the last session I attended at IOUG Collaborate was a panel: Nothing to BLOG About - Think Again.

Some famous bloggers were there to help, motivate and inspire people who are afraid to blog. The idea was good. The session was good. The panelists shared ideas, tips and experience. It was very good.

I'm already a blogger, but that inspired me anyway.

Note that tomorrow I speak about participating in the community (read, share, write, talk, blog, tweet, link,...) at our internal event dbi xChange. Thus, waiting at the airport, it's a good occasion to explain how I started blogging, in case it gives some ideas to people who want to start.

SQL Monitor report little improvement in 12c

Yann Neuhaus - Wed, 2015-04-15 14:35

This is a very short post about something I thought I had already blogged about.

I like SQL Monitoring active reports. They are graphical, and still very light. There is only one thing I don't like about them in 11g, and it has been improved in 12c.

QlikView – Load data in QlikView / QlikSense – Best practices and tricks

Yann Neuhaus - Tue, 2015-04-14 07:45

In this blog, I will give you some best practices and tricks for loading tables to generate your data. But first, I will review the different ways to load data into a QlikView or QlikSense report.


1. How to retrieve data in QlikView

You have 2 different possibilities to load your data into your report:

- Database connector

- File connector

Picture1.png

   a) Database connector:

     If your data is located in a database, you must use this kind of connector.

To connect to a database:

Open "Script Editor"

Picture2.png

Click on “Tab” and “Add Tab…”

Picture3.png

Give a name to the new tab and click “OK”

Picture4.png

Select the data source

Picture5.png 

Select your connection (for OLE DB / ODBC, connections should be created in the ODBC administrator tool of Windows)

Remark: You can use a special connection to the QlikView ETL tool named QlikView Expressor

Picture6.png

For this example, I want to connect to an Oracle database:

Select the connection and click “OK”

Picture7.png

Select a table (1), then select the fields you want to see in your report (2) and click “OK” (3)

Picture8.png

TRICK 1: If you use the “Connect” option and you add the User ID and the password in the connection interface, they will be put into the SQL script in an encrypted format

Picture28.png

Picture9.png

   b) Data file:

You have 4 different options:

Picture11.png

(1) Table Files: you can select the following kinds of files

Picture12.png

(2) QlikView File: You can load a .qvw file (QlikView file)

(3) Web File: You can load a file coming from a website

(4) Field data: you can load specific rows coming from a field located in a database

In this example, we select a .qvd file using option 1 (Table Files)

Picture13.png

You have 2 options:

Click “Next” and “Next”: you reach the column selection interface

Remark: To remove a column, click on the cross. Then click “Next”

Picture14.png

Check the SQL. If it’s ok, click on “Finish”

Picture15.png

Click on “Reload” to load the data

Picture16.png

Best Practices: create a variable path

If you must load data coming from files located in a specific repository, and if this repository is going to change after the report is published to different environments, it is recommended to create a variable defining the folder path.

Go to the Main tab and create the variable “FilePath”. Don’t forget the “;” at the end

Picture17.png

On the other tabs where you load data coming from files located in the same folder, add the variable before the name of the file.

Picture18.png

After the deployment to another environment, you just have to update the variable and, of course, reload the data.
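A minimal sketch of what this looks like in the load script (folder and file names are hypothetical):

// Main tab: define the folder once
SET FilePath = 'C:\Data\Sources\';

// any other tab: prefix the file name with the variable
LOAD * FROM [$(FilePath)sales.qvd] (qvd);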

 

2. Optimize data retrieval in QlikView / QlikSense

In this example, some tables are loaded just for one field. We can optimize this schema by using a mapping function. The goal is to limit the number of tables used directly in the schema.

Picture19.png

Warning: The mapping function can only be used to add one field to a table.

In our example, we want to add the field “Product Group desc” to the table “ITEM_MASTER”.

To create a mapping tab:

Add a new Tab just after the main tab

Picture20.png

Use the function “MAPPING” in the script as follows:

Picture21.png

In the destination table, add the field with the function “ApplyMap” as follows:

Picture22.png

(1) Put the name of the Mapping table you have created

(2) Put the name of the key field

(3) Put the name of the field you want to show in your table

Don’t forget to comment out the original load script of your mapped table.
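A sketch of the two script parts, with hypothetical field and file names (in a MAPPING LOAD the first column is the key and the second is the value returned by ApplyMap):

ProductGroupMap:
MAPPING LOAD
    ProductGroupKey,
    ProductGroupDesc
FROM [$(FilePath)product_groups.qvd] (qvd);

// the third ApplyMap argument is the default used when no key matches
ITEM_MASTER:
LOAD
    *,
    ApplyMap('ProductGroupMap', ProductGroupKey, 'Unknown') AS [Product Group desc]
FROM [$(FilePath)item_master.qvd] (qvd);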

After refreshing the data, you will see that the table has disappeared and the field has been added to the main table.

Picture23.png

 

3. How to add a Master Calendar table

You can generate automatically a master calendar if you need to have all the days located in a period. This calendar will be generated in 3 different steps:

   a) Creation of the Min / Max date temporary table.

Create a new tab and add the following script:

Picture24.png

   b) Creation of the date temporary table.

Add the following script to create a temporary table with all the dates between the MIN and MAX dates you have defined, using the function “AUTOGENERATE”

Picture25.png

Note that we drop the MIN / MAX table at the end of the creation of the temporary table.

   c) Creation of the Master Calendar table.

After the generation of the temporary table, add the following script to create all the different date fields you need (year, month, week, ...)

Picture26.png

Remark: to join your tables, you must give your newly generated field the same name as the field you used to create your calendar. The result should look like this.

Picture27.png
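Since the scripts above are only shown as screenshots, here is a minimal sketch of the three steps, assuming a hypothetical fact table Orders with a date field OrderDate:

// a) min / max date temporary table
MinMaxTemp:
LOAD Num(Min(OrderDate)) AS MinDate, Num(Max(OrderDate)) AS MaxDate
RESIDENT Orders;

LET vMinDate = Peek('MinDate', 0, 'MinMaxTemp');
LET vMaxDate = Peek('MaxDate', 0, 'MinMaxTemp');
DROP TABLE MinMaxTemp;

// b) one row per day between min and max, via AUTOGENERATE
DateTemp:
LOAD $(vMinDate) + IterNo() - 1 AS DateNum
AUTOGENERATE 1
WHILE $(vMinDate) + IterNo() - 1 <= $(vMaxDate);

// c) the master calendar itself; the key keeps the name of the fact-table date field
MasterCalendar:
LOAD
    Date(DateNum)  AS OrderDate,
    Year(DateNum)  AS Year,
    Month(DateNum) AS Month,
    Week(DateNum)  AS Week
RESIDENT DateTemp;
DROP TABLE DateTemp;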

I hope these best practices and tricks will help you!

Yesterday's AWR Straight to the Goal

Yann Neuhaus - Tue, 2015-04-14 06:43

Yesterday I was a speaker at Collaborate 15 and gave my presentation about reading an AWR report. That was great: many people (and not enough seats).

Here are some answers to questions that came up later.

Cloning a PDB from a standby database

Yann Neuhaus - Mon, 2015-04-13 05:14

Great events like IOUG Collaborate are a good way to meet experts we know through blogs, twitter, etc. Yesterday evening, with nice music in the background, I was talking with Leighton Nelson about cloning pluggable databases. Don't miss his session today if you are in Las Vegas. The big problem with PDB cloning is that the source must be read-only. The reason is that it works like transportable tablespaces (except that it can transport the datafiles through a database link, and that it transports SYSTEM as well instead of having to import metadata). There is no redo shipping/apply here, so the datafiles must be consistent.

Obviously, being read-only is a problem when you want to clone from production.

But if you have a standby database, can you open it read-only and clone a pluggable database from there? From what we know it should be possible, but it is better to test it.

Here is my source - a single tenant standby database opened in read-only:

SQL> connect sys/oracle@//192.168.78.105/STCDB as sysdba
Connected.
SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
STCDB     READ ONLY            PHYSICAL STANDBY

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
STDB1                          MOUNTED

Then from the destination I define a database link to it:

SQL> connect sys/oracle@//192.168.78.113/CDB as sysdba
Connected.
SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
CDB       READ WRITE

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB                            READ WRITE

SQL>
SQL> create database link DBLINK_TO_STCDB connect to system identified by oracle using '//192.168.78.105/STCDB';

Database link created.

and create a pluggable database from it:

SQL> create pluggable database STDB2 from STDB1@DBLINK_TO_STCDB;

Pluggable database created.

SQL> alter pluggable database STDB2 open;

Pluggable database altered.

So yes, this is possible. And you don't need Active Data Guard for that. As long as you can stop the apply for the time it takes to transfer the datafiles, this is a solution for cloning. Of course, just do one clone, and if you need others you can do them from that first clone. And within the same CDB they can be thin clones if you can use snapshots.
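For instance, a thin clone of the first clone could look like this (a sketch; STDB3 is a hypothetical name, and SNAPSHOT COPY requires storage that supports snapshots, or clonedb enabled):

SQL> create pluggable database STDB3 from STDB2 snapshot copy;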

Ok, it's 5 a.m. here. As usual, the jetlag woke me up a bit early, so that was a good occasion to test what we discussed yesterday...

Data Mining Scoring Development Process

Dylan Wan - Thu, 2015-04-02 23:39

I think that the process of building a data mining scoring engine is similar to developing an application.


We have the requirement analysis, functional design, technical design, coding, testing, deployment, etc. phases.




New blog to handle the PJC/Bean articles

Francois Degrelle - Mon, 2015-03-30 12:06
Here is the link to another place that stores the PJCs/Beans articles without ads: http://forms.pjc.bean.blog.free.fr/ Francois

dramatic differences of in memory scanning performance on range queries

Karl Reitschuster - Tue, 2015-02-17 11:07
Given the following two identical tables - one created out of the other - on which the same SQL is run, replicated in memory with the Oracle In-Memory option.
Each table covers 24M rows.
    
Same structure ...

Setting up Xubuntu in Lenovo Flex2 14D

Vattekkat Babu - Tue, 2014-09-16 00:29

Lenovo Flex2 14D is a good laptop with decent build quality, light weight, 14" screen and touch screen for those who like it. With AMD A6 processor version, it is reasonably priced too.

It comes pre-loaded wth Windows 8.1 and a bunch of Lenovo software. If you want to get this to dual boot with Ubuntu Linux, here are the specific fixes you need to do.