Yann Neuhaus

dbi services technical blog

Installing PostgreSQL 9.6.1 with EnterpriseDB Installer

Thu, 2016-12-08 14:20

As you may know, PostgreSQL 9.6 was released a few months ago, and EnterpriseDB (EDB) offers a package installer for installing the community version. The EDB package can be downloaded from the EnterpriseDB website.
In this blog we are going to install PostgreSQL using the EDB installer. The installation is done with a graphical tool. A user with root privileges must be used, otherwise we will get an error: the default installation creates a systemd service, which needs root privileges. For example, launching the installer as the user postgres fails:

[postgres@pgserver1 96]$ chmod +x postgresql-9.6.1-1-linux-x64.run
[postgres@pgserver1 96]$

[postgres@pgserver1 96]$ ./postgresql-9.6.1-1-linux-x64.run

(Screenshot: the installer reports an error because it was not started as root)
An alternative is to launch the installer with the option "--extract-only", which does not require root (but then we do not get the systemd service).

Now, logged in as root, let's start:

[root@pgserver1 96]# ./postgresql-9.6.1-1-linux-x64.run

The graphical wizard then walks through the following screens (screenshots omitted):
- Click on Next
- Specify PGHOME
- Specify the PGDATA
- Specify the password for user postgres
- The default port is 5432
- Choose your Locale
- And launch the installation process
Once the installation is done we can verify that PostgreSQL is running:

[postgres@pgserver1 ~]$ ps -ef | grep postgres
root 3135 976 0 09:53 ? 00:00:00 sshd: postgres [priv]
postgres 3140 3135 0 09:54 ? 00:00:00 sshd: postgres@pts/0
postgres 3142 3140 0 09:54 pts/0 00:00:00 -bash
root 3657 976 0 09:55 ? 00:00:00 sshd: postgres [priv]
postgres 3661 3657 0 09:55 ? 00:00:00 sshd: postgres@pts/2
postgres 3664 3661 0 09:55 pts/2 00:00:00 -bash
postgres 6255 1 0 10:23 ? 00:00:00 /u01/app/PostgreSQL/9.6/bin/postgres -D /u01/app/PostgreSQL/9.6/data
postgres 6256 6255 0 10:23 ? 00:00:00 postgres: logger process
postgres 6258 6255 0 10:23 ? 00:00:00 postgres: checkpointer process
postgres 6259 6255 0 10:23 ? 00:00:00 postgres: writer process
postgres 6260 6255 0 10:23 ? 00:00:00 postgres: wal writer process
postgres 6261 6255 0 10:23 ? 00:00:00 postgres: autovacuum launcher process
postgres 6262 6255 0 10:23 ? 00:00:00 postgres: stats collector process
postgres 6332 3664 0 10:26 pts/2 00:00:00 ps -ef
postgres 6333 3664 0 10:26 pts/2 00:00:00 grep --color=auto postgres
[postgres@pgserver1 ~]$

A file pg_env.sh is created during the installation and can be used to set up all the variables we need for the connection:

[postgres@pgserver1 9.6]$ pwd
/u01/app/PostgreSQL/9.6
[postgres@pgserver1 9.6]$ cp pg_env.sh /home/postgres/
[postgres@pgserver1 9.6]$ cd
[postgres@pgserver1 ~]$


[postgres@pgserver1 ~]$ chmod +x pg_env.sh
[postgres@pgserver1 ~]$ . ./pg_env.sh
[postgres@pgserver1 ~]$ env | grep PG
PGPORT=5432
PGUSER=postgres
PGDATABASE=postgres
PGLOCALEDIR=/u01/app/PostgreSQL/9.6/share/locale
PGDATA=/u01/app/PostgreSQL/9.6/data
[postgres@pgserver1 ~]$

And now we can connect

[postgres@pgserver1 ~]$ psql
Password:
psql.bin (9.6.1)
Type "help" for help.

postgres=#
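
Once connected, a quick sanity check can confirm what has been installed (a sketch; the expected values correspond to the installation shown above):

postgres=# select version();
-- should report 9.6.1
postgres=# show data_directory;
-- should report /u01/app/PostgreSQL/9.6/data
postgres=# show port;
-- should report 5432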

As we can see, the installation of PostgreSQL 9.6 with the EDB installer is very easy.

 


Can I do it with PostgreSQL? – 7 – Partitioning

Thu, 2016-12-08 04:19

PostgreSQL supports tables up to 32TB. Do you want to be the one responsible for managing such a table? I guess not. Usually you start to partition your tables when they grow very fast and consume more than hundreds of gigabytes. Can PostgreSQL do this? Do you know what table inheritance is? No? PostgreSQL implements partitioning by using table inheritance and constraint exclusion. Sounds strange? Let's have a look …

As usual I am running the currently latest version of PostgreSQL:

postgres@pgbox:/home/postgres/ [PG961] psql postgres
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
(1 row)

Time: 0.513 ms
(postgres@[local]:5439) [postgres] > 

So, what is table inheritance? In PostgreSQL you can do things like this:

(postgres@[local]:5439) [postgres] > create table databases ( name varchar(10), vendor varchar(10) );
CREATE TABLE
Time: 20.477 ms
(postgres@[local]:5439) [postgres] > create table databases_rdbms ( rdbms boolean ) inherits (databases);
CREATE TABLE
Time: 20.080 ms
(postgres@[local]:5439) [postgres] > create table databases_nosql ( nosql boolean ) inherits (databases);
CREATE TABLE
Time: 22.048 ms

What we've done here is: we created three tables in total. The "databases_rdbms" and "databases_nosql" tables inherit from the "databases" table. What does that mean? Let's insert some data into the tables that inherit from the "databases" table:

(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('PostgreSQL','Community',true);
INSERT 0 1
Time: 20.215 ms
(postgres@[local]:5439) [postgres] > insert into databases_rdbms values ('MariaDB','MariaDB',true);
INSERT 0 1
Time: 1.666 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('MongoDB','MongoDB',true);
INSERT 0 1
Time: 1.619 ms
(postgres@[local]:5439) [postgres] > insert into databases_nosql values ('Cassandra','Apache',true);
INSERT 0 1
Time: 0.833 ms

Note that we did not insert any data into the “databases” table, but when we query the “databases” table we get this result:

(postgres@[local]:5439) [postgres] > select * from databases;
    name    |  vendor   
------------+-----------
 PostgreSQL | Community
 MariaDB    | MariaDB
 MongoDB    | MongoDB
 Cassandra  | Apache
(4 rows)

All the data from all child tables has been retrieved (of course without the additional column on the child tables). We can still query the child tables:

(postgres@[local]:5439) [postgres] > select * from databases_rdbms;
    name    |  vendor   | rdbms 
------------+-----------+-------
 PostgreSQL | Community | t
 MariaDB    | MariaDB   | t
(2 rows)

Time: 0.224 ms
(postgres@[local]:5439) [postgres] > select * from databases_nosql;
   name    | vendor  | nosql 
-----------+---------+-------
 MongoDB   | MongoDB | t
 Cassandra | Apache  | t
(2 rows)

But when we query only the master table (using the ONLY keyword), there is no result:

(postgres@[local]:5439) [postgres] > select * from only databases;
 name | vendor 
------+--------
(0 rows)

Of course for this specific example it would be better to add an additional column to the master table which specifies if a database is a NoSQL database or not. This is just to show how it works. There is a good example for another use case in the documentation.
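
A minimal sketch of that alternative, just to illustrate the point (it drops the inheritance example first, so only run it if you do not need those tables anymore):

drop table if exists databases cascade;
create table databases ( name varchar(10), vendor varchar(10), nosql boolean );
insert into databases values ('PostgreSQL','Community',false);
insert into databases values ('MongoDB','MongoDB',true);
select * from databases where nosql;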

What does all this have to do with partitioning? When you want to partition your tables in PostgreSQL you’ll do exactly the same thing:

(postgres@[local]:5439) [postgres] > create table log_data ( id int, some_data varchar(10), ts date );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2016() inherits ( log_data );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create table log_data_2015() inherits ( log_data );
CREATE TABLE

We want to partition our log data by year, so we create a child table for each year we know we have data for. What we additionally need is a check constraint on each of the child tables, as sketched below:
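
A minimal sketch of how these constraints can be added (the names and date ranges match the table definitions shown right below):

alter table log_data_2016 add constraint cs1
  check ( ts >= date '2016-01-01' and ts < date '2017-01-01' );
alter table log_data_2015 add constraint cs1
  check ( ts >= date '2015-01-01' and ts < date '2016-01-01' );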

(postgres@[local]:5439) [postgres] > \d+ log_data_2016
                             Table "public.log_data_2016"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2016-01-01'::date AND ts  \d+ log_data_2015
                             Table "public.log_data_2015"
  Column   |         Type          | Modifiers | Storage  | Stats target | Description 
-----------+-----------------------+-----------+----------+--------------+-------------
 id        | integer               |           | plain    |              | 
 some_data | character varying(10) |           | extended |              | 
 ts        | date                  |           | plain    |              | 
Check constraints:
    "cs1" CHECK (ts >= '2015-01-01'::date AND ts < '2016-01-01'::date)
Inherits: log_data

This guarantees that the child tables only get data for a specific year. So far so good. But how does PostgreSQL know that inserts into the master table should get routed to the corresponding child table? This is done by using triggers:

(postgres@[local]:5439) [postgres] > CREATE OR REPLACE FUNCTION log_data_insert_trigger()
RETURNS TRIGGER AS $$
BEGIN
  IF ( NEW.ts >= DATE '2015-01-01' AND
       NEW.ts < DATE '2016-01-01' ) THEN
    INSERT INTO log_data_2015 VALUES (NEW.*);
  ELSIF ( NEW.ts >= DATE '2016-01-01' AND
          NEW.ts < DATE '2017-01-01' ) THEN
    INSERT INTO log_data_2016 VALUES (NEW.*);
  ELSE
    RAISE EXCEPTION 'Date out of range!';
  END IF;
  RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER insert_log_data_trigger
BEFORE INSERT ON log_data
FOR EACH ROW EXECUTE PROCEDURE log_data_insert_trigger();

When there are inserts against the master table, from now on these go to the corresponding child table:

(postgres@[local]:5439) [postgres] > insert into log_data values ( 1, 'aaaa', date('2016.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > insert into log_data values ( 2, 'aaaa', date('2015.03.03'));
INSERT 0 0
(postgres@[local]:5439) [postgres] > select * from log_data;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
  2 | aaaa      | 2015-03-03
(2 rows)
(postgres@[local]:5439) [postgres] > select * from log_data_2015;
 id | some_data |     ts     
----+-----------+------------
  2 | aaaa      | 2015-03-03
(1 row)

(postgres@[local]:5439) [postgres] > select * from log_data_2016;
 id | some_data |     ts     
----+-----------+------------
  1 | aaaa      | 2016-03-03
(1 row)

Selects against the master table that use the ts column in the where condition now only scan the matching child table (plus the empty master table), thanks to constraint exclusion:

(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2016.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.006..0.006 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2016-03-03'::date)
   ->  Seq Scan on log_data_2016  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2016-03-03'::date)
 Planning time: 0.131 ms
 Execution time: 0.019 ms
(7 rows)
(postgres@[local]:5439) [postgres] > explain analyze select * from log_data where ts = date ('2015.03.03');
                                                  QUERY PLAN                                                   
---------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..23.75 rows=7 width=46) (actual time=0.007..0.007 rows=1 loops=1)
   ->  Seq Scan on log_data  (cost=0.00..0.00 rows=1 width=46) (actual time=0.002..0.002 rows=0 loops=1)
         Filter: (ts = '2015-03-03'::date)
   ->  Seq Scan on log_data_2015  (cost=0.00..23.75 rows=6 width=46) (actual time=0.004..0.004 rows=1 loops=1)
         Filter: (ts = '2015-03-03'::date)
 Planning time: 0.102 ms
 Execution time: 0.019 ms
(7 rows)
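
Note that this pruning relies on the constraint_exclusion parameter, which defaults to 'partition' and therefore applies to inheritance children (a quick check, output as expected on a default configuration):

(postgres@[local]:5439) [postgres] > show constraint_exclusion;
 constraint_exclusion
----------------------
 partition
(1 row)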

Of course you can create indexes on the child tables as well. This is how partitioning basically works in PostgreSQL. To be honest, this is not the most beautiful way to do partitioning and this can become tricky to manage. But as always there are projects that assist you, e.g. pg_partman or pg_pathman.

Wouldn’t it be nice to have a SQL syntax to do table partitioning? Exactly this was committed yesterday and will probably be there in PostgreSQL 10 next year. The development documentation already describes the syntax:

[ PARTITION BY { RANGE | LIST } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]
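
Based on that development documentation, the log_data example above might then look roughly like this (a sketch only; the syntax may still change before the final PostgreSQL 10 release):

create table log_data ( id int, some_data varchar(10), ts date )
  partition by range ( ts );

create table log_data_2015 partition of log_data
  for values from ('2015-01-01') to ('2016-01-01');

create table log_data_2016 partition of log_data
  for values from ('2016-01-01') to ('2017-01-01');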
 


OEL 7 – Project Quotas on Oracle Homes with XFS on Oracle Linux 7

Thu, 2016-12-08 02:43

User and group quotas have existed for quite a while for all kinds of file systems, like ext4 or vxfs and maybe many others.

However, for my use case, I need quotas for different directories on the same file system. My mount point is /u01 and in that file system I have different Oracle Homes which belong to the same user, oracle. However, I do not want my 11.2.0.4 Oracle Home to influence my 12.1.0.2 Oracle Home in terms of file system usage, e.g. if the 11.2.0.4 home is core dumping, it should not fill up the space of the 12.1.0.2 home, and the other way around. But how can I do that?

The idea is to create XFS project quotas, and this is how it works.

First of all, we need to enable project quotas on the XFS file system.  /u01 is currently mounted with XFS default options, which are (rw,relatime,attr2,inode64,noquota). As you can see, the default is “noquota”.

[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,noquota)

We can enable project quotas by adding the “prjquota” mount option to the /etc/fstab and afterwards remounting the file system.


Unfortunately, the remount option does not work with the XFS filesystem. Meaning, I can remount the file system, but my new option “prjquota” is not reflected.

[root@dbidg01 ~]# mount -o remount,rw,relatime,attr2,inode64,prjquota /u01
[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,noquota)

So I have to do an umount and a mount. Not a good thing from my point of view, because it means that I cannot enable project quotas online for my /u01 directory, where I have different Oracle homes located. In other words, I need to shut down all Oracle databases.

[root@dbidg01 ~]# umount /u01
[root@dbidg01 ~]# mount /u01
[root@dbidg01 ~]# mount | grep u01
/dev/mapper/vg_ora_u01-lv_ora_u01 on /u01 type xfs (rw,relatime,attr2,inode64,prjquota)

Ok. Now it looks better. The next step is to define unique project IDs for the different directory hierarchies in the /etc/projects file.

For example, to set a project ID of 11 for the directory hierarchy /u01/app/oracle/product/11.2.0, and the project ID of 12 for the directory hierarchy /u01/app/oracle/product/12.1.0 we can do the following.

# echo "11:/u01/app/oracle/product/11.2.0" >> /etc/projects
# echo "12:/u01/app/oracle/product/12.1.0" >> /etc/projects

[root@dbidg01 ~]# cat /etc/projects
11:/u01/app/oracle/product/11.2.0
12:/u01/app/oracle/product/12.1.0

If you don't want to work with IDs, you have the possibility to map project names to the project IDs in your /etc/projid file. This makes reporting quota usage much easier, as we will see later.

For example, to map the project name oracle11gR2 to the project with ID 11 or to map the project name oracle12cR1 to 12 do the following.

# echo "oracle11gR2:11" >> /etc/projid
# echo "oracle12cR1:12" >> /etc/projid

Now use the project subcommand of xfs_quota to define a managed tree in the XFS file system for the different projects.

For example, to define a managed tree in the /u01 file system for the project oracle11gR2, which corresponds to the directory hierarchy /u01/app/oracle/product/11.2.0, do the following.

# xfs_quota -x -c 'project -s oracle11gR2' /u01

[root@dbidg01 etc]# xfs_quota -x -c 'project -s oracle11gR2' /u01
Setting up project oracle11gR2 (path /u01/app/oracle/product/11.2.0)...
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/bin/lbuilder
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libagtsh.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libodm11.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.10.1
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libocci.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/liborasdk.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/lib/liborasdkbase.so
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/ldap.ora
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLCA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/ORACA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLDA.H
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/SQLCA.COB
xfs_quota: skipping special file /u01/app/oracle/product/11.2.0/dbhome_1/precomp/public/ORACA.COB
Processed 1 (/etc/projects and cmdline) paths for project oracle11gR2 with recursion depth infinite (-1).

The same applies to project oracle12cR1.

# xfs_quota -x -c 'project -s oracle12cR1' /u01

[root@dbidg01 etc]# xfs_quota -x -c 'project -s oracle12cR1' /u01
Setting up project oracle12cR1 (path /u01/app/oracle/product/12.1.0)...
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/bin/lbuilder
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/classes.bin
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/cbp.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/admin/libjtcjt.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/US_export_policy.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/cacerts
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/java.security
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/security/local_policy.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/jce.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/javavm/lib/sunjce_provider.jar
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/bin/ControlPanel
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/bin/ControlPanel
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/javaws/javaws
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/jdk/jre/lib/amd64/server/libjsig.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libagtsh.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libjavavm12.a
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libodm12.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libocci.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntshcore.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so.10.1
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/lib/libclntsh.so.11.1
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/sqlnet.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/ldap.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/tnsnames.ora
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLCA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/ORACA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLDA.H
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/SQLCA.COB
xfs_quota: skipping special file /u01/app/oracle/product/12.1.0/dbhome_1/precomp/public/ORACA.COB
Processed 1 (/etc/projects and cmdline) paths for project oracle12cR1 with recursion depth infinite (-1).

Now the fun part begins, and we can start using the limit subcommand to set the limits on disk usage for the different projects. My 11.2.0.4 Oracle home is currently 5.7G in size, and the 12.1.0.2 Oracle home is 7.1G. I want to configure a soft limit of 8G and a hard limit of 12G for the 11g home, and a soft limit of 10G and a hard limit of 16G for the 12c home.

oracle@dbidg01:/u01/app/oracle/product/ [rdbms112] du -hs 11.2.0
5.7G    11.2.0

oracle@dbidg01:/u01/app/oracle/product/ [rdbms112] du -hs 12.1.0
7.1G    12.1.0

# xfs_quota -x -c 'limit -p bsoft=8g bhard=12g oracle11gR2' /u01
# xfs_quota -x -c 'limit -p bsoft=10g bhard=16g oracle12cR1' /u01

[root@dbidg01 ~]# xfs_quota -x -c 'report -p' /u01
Project quota on /u01 (/dev/mapper/vg_ora_u01-lv_ora_u01)
                               Blocks
Project ID       Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
#0            8776636          0          0     00 [--------]
oracle11gR2    5883604    8388608   12582912     00 [--------]
oracle12cR1    7415292   10485760   16777216     00 [--------]

[root@dbidg01 ~]# xfs_quota -x -c 'report -h -p' /u01
Project quota on /u01 (/dev/mapper/vg_ora_u01-lv_ora_u01)
                        Blocks
Project ID   Used   Soft   Hard Warn/Grace
---------- ---------------------------------
#0           8.4G      0      0  00 [------]
oracle11gR2   5.6G     8G    12G  00 [------]
oracle12cR1   7.1G    10G    16G  00 [------]

The quotas are immediately seen by the df command. df on /u01 shows the full filesystem size, which is 50G; however, if I navigate to my 11g or 12c home, I see the soft limit which I have configured beforehand.

[root@dbidg01 product]# df -h /u01
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01   50G   22G   29G  43% /u01

[root@dbidg01 ~]# cd /u01/app/oracle/product/11.2.0
[root@dbidg01 11.2.0]# df -h .
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01  8.0G  5.7G  2.4G  71% /u01

[root@dbidg01 ~]# cd /u01/app/oracle/product/12.1.0
[root@dbidg01 12.1.0]# df -h .
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/vg_ora_u01-lv_ora_u01   10G  7.1G  3.0G  71% /u01
Conclusion

Project quotas with XFS are a quite cool feature. Limiting file system usage per project gives you huge flexibility. Setting quotas on individual Oracle homes is one thing, but you could also limit filesystem usage for your Oracle databases in the DIAG directory based on the ORACLE_SID, so that /u01/app/oracle/diag/rdbms/<SID1> can never fill up /u01/app/oracle/diag/rdbms/<SID2>. The only drawback I see is that enabling project quotas is not an online operation. You need to umount and mount the file system to activate this feature.

Have fun with XFS quotas.

Cheers,
William

 


UKOUG 2016 DAY 4

Wed, 2016-12-07 09:29


Today is the last day at UKOUG in Birmingham; the first session I attended this morning was presented by Julian Dyke about installing and upgrading Oracle 12c release 2 Grid infrastructure and RAC.

He had the possibility to test the installation and upgrade phases during 5 days at Oracle last spring. The following tests were done:

single instance: install 12.2.0.1, create a database with dbca, upgrade 12.1.0.2 to 12.2.0.1 with dbua

RAC: install 12.2.0.1 Grid Infrastructure, install 12.2.0.1 RDBMS software, create ASM disk groups (ASMCA), create a 12.2.0.1 RAC database (DBCA), upgrade the 12.1.0.2 Grid Infrastructure to 12.2.0.1 (gridSetup.sh), upgrade the 12.1.0.2 RAC database to 12.2.0.1.

He showed us the main screenshots describing the installation phases and told us that they did not encounter a lot of problems during the installation or upgrade phases. To upgrade the Grid Infrastructure, it is important to run the CVU connected as the grid user, for example:

runcluvfy.sh -src_crshome=/u00/app/12.1.0.2 -dest_crshome=/u00/app/12.2.0.1
 -dest_version=12.2.0.1 -fixupnoexec

Afterwards you have the possibility to resolve any issues detected by using the generated fixup script.

In his opinion, the use of DBUA is sufficiently robust for most upgrades, especially when the upgrade concerns non-critical databases, databases with fast recovery times or databases on virtual machines. By the way, he also mentioned that Oracle still recommends using scripts for upgrades of large or business-critical databases.

He encountered some issues during the Grid Infrastructure upgrade phase, in particular with the memory_target parameter setting: because the ASM and GIMR instances use more memory than in 12.1.0.2, he received the classical ORA-00845 error message. He also encountered problems with invalid objects and had to extend the root file system of his virtual machine.

Then I attended Franck Pachot's session about Statistics Gathering Best Practices and the Statistics Advisor:


His session described his findings and recommendations about how to gather statistics, with a lot of technical demonstrations done on the Cloud. A lot of cases were shown, for example volatile tables, preferences for partitioned tables and index statistics gathering.

He showed us the Oracle 12c Release 2 Statistics Advisor, which might be a useful tool; I will check if it is available in Enterprise Manager 13.2.

He finished his session by giving us his own recommendations: use the automatic job for most of the tables, customize the statistics gathering for volatile tables, gather statistics for the tables that you load, and, importantly, customize the maintenance window for the statistics gathering job.

Finally I wanted to attend the OEM round table, but unfortunately the session has been cancelled :=((


Well, this was a very interesting week with a lot of exchanges and shared experiences with other Oracle DBAs. I hope to come back to UKOUG next year!

 

 

 


Oracle 12cR2: Statistics Advisor

Wed, 2016-12-07 04:48

Today at #ukoug_tech16 12:30 in hall 11A I’ll talk about Statistics Gathering Best Practice & 12cR2 Statistics Advisor
Rather than taking the Optimizer Statistics Advisor rules one by one, I'll show the things to take care of (some people may call them best practices) when gathering statistics, and mention the corresponding Statistics Advisor rule for each.
If you need a reference about all rules, you can get it from V$STATS_ADVISOR_RULES

09:41:19 SQL> select to_char(rule_id,99)||' '||description||' ('||name||')' from V$STATS_ADVISOR_RULES where rule_id>0 order by rule_id;
TO_CHAR(RULE_ID,99)||''||DESCRIPTION||'('||NAME||')'
----------------------------------------------------------------------------------------------------

I’ll explain them briefly here.

You should always enable the automatic statistics gathering job. You may manage special cases manually, but do not disable it.
1 Use Auto Job for Statistics Collection (UseAutoJob)
For sure if the job does not complete successfully, the advisor detects the probable cause.
2 Auto Statistics Gather Job should complete successfully (CompleteAutoJob)
Statistics history may save your life in case of a regression. But be sure the retention is not too large and that the purge occurs, or SYSAUX will grow.
3 Maintain Statistics History (MaintainStatsHistory)
The faster the statistics gathering runs, the more statistics you can gather. Use all your server resources for it.
4 Use Concurrent preference for Statistics Collection (UseConcurrent)
Default options for global preferences are what the optimizer developers think are the best for most cases.
5 Use Default Preference for Stats Collection (UseDefaultPreference)
Humm.. this one was introduced before the decision not to activate SPD by default
(see http://blog.dbi-services.com/oracle-12cr2-optimizer-adaptive-statistics/)
6 SQL Plan Directives should not be disabled (TurnOnSQLPlanDirective)
Setting statistics manually may be used as a workaround but not the general case
7 Avoid Set Statistics Procedures (AvoidSetProcedures)
When you run dbms_stats.gather_…_stats manually, default options are what the optimizer developers think are the best for most cases.
8 Use Default Parameters in Statistics Collection Procedures (UseDefaultParams)
And in those cases, better to run it for a schema so that you are sure to include newly created tables
9 Use gather_schema_stats procedure (UseGatherSchemaStats)
You waste time and resources if you gather statistics in addition to what is already done with online statistics gathering
10 Avoid inefficient statistics operation sequences (AvoidInefficientStatsOprSeq)
You waste time and resources if you gather statistics when nothing has changed
11 Avoid unnecessary statistics collection (AvoidUnnecessaryStatsCollection)
You need statistics for all tables
12 Avoid objects with stale or no statistics (AvoidStaleStats)
Statistics gathered before bulk inserts will be immediately stale
13 Do not gather statistics right before bulk DML (GatherStatsAfterBulkDML)
You don't want the automatic statistics gathering to run on a table between a truncate and an insert
14 Statistics for objects with volatile data should be locked (LockVolatileTable)
But let it run for tables with no massive change
15 Statistics for objects with non-volatile should not be locked (UnlockNonVolatileTable)
16 Statistics of dependent objects should be consistent (MaintainStatsConsistency)

Better truncate, make indexes unusable, and insert /*+ append */ than drop and recreate the table (which removes statistics).
17 Avoid drop and recreate object sequences (AvoidDropRecreate)
The Statistics Advisor may detect when incremental statistics gathering is more efficient for partitioned tables
18 Statistics should be maintained incrementally when it is beneficial (UseIncremental)
19 Statistics should not be maintained incrementally when it is not beneficial (NotUseIncremental)

Stale statistics may lead to under-estimation because of linear decay
20 Avoid Out of Range Histogram endpoints (AvoidOutOfRange)
Large tables can be scanned in parallel; the recommendation is the default degree
21 Use Auto Degree for statistics collection (UseAutoDegree)
As we have seen about global preferences, table preference should be default for most cases (rolling invalidation, auto sample size, auto histogram size)
22 Use Default Object Preference for statistics collection (UseDefaultObjectPreference)
And for sure dbms_stats is the way to gather statistics for the optimizer. ANALYZE is deprecated for that since 9i
23 Avoid using analyze table commands for statistics collection (AvoidAnalyzeTable)
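
To give a concrete example, acting on rules 14 and 15 (LockVolatileTable / UnlockNonVolatileTable) is just a DBMS_STATS call (a sketch; the schema and table names are hypothetical):

exec dbms_stats.lock_table_stats(ownname => 'APP', tabname => 'VOLATILE_QUEUE');
exec dbms_stats.unlock_table_stats(ownname => 'APP', tabname => 'REGULAR_TABLE');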

Those are only my interpretation. 12.2 is new (and cloud first) and I've not observed all those recommendations yet. But they are properly described by the advisor.
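
For reference, such a report can be produced with the DBMS_STATS advisor task procedures (a minimal sketch; the task name matches the output below, and the formatting options may vary):

set long 1000000 longchunksize 1000000
variable t varchar2(4000)
exec :t := dbms_stats.create_advisor_task('MY_TASK');
exec :t := dbms_stats.execute_advisor_task('MY_TASK');
select dbms_stats.report_advisor_task('MY_TASK') from dual;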
This is the kind of output we can get:

----------------------------------------------------------------------------------------------------
GENERAL INFORMATION
-------------------------------------------------------------------------------
 
Task Name : MY_TASK
Execution Name : EXEC_52
Created : 12-07-16 11:31:40
Last Modified : 12-07-16 11:32:37
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_52 of task MY_TASK, the Statistics Advisor has 6
finding(s). The findings are related to the following rules: USECONCURRENT,
AVOIDSETPROCEDURES, USEDEFAULTPARAMS, USEGATHERSCHEMASTATS, AVOIDSTALESTATS,
UNLOCKNONVOLATILETABLE. Please refer to the finding section for detailed
information.
-------------------------------------------------------------------------------
FINDINGS
-------------------------------------------------------------------------------
Rule Name: UseConcurrent
Rule Description: Use Concurrent preference for Statistics Collection
Finding: The CONCURRENT preference is not used.
 
Recommendation: Set the CONCURRENT preference.
Example:
dbms_stats.set_global_prefs('CONCURRENT', 'ALL');
Rationale: The system's condition satisfies the use of concurrent statistics
gathering. Using CONCURRENT increases the efficiency of statistics
gathering.
----------------------------------------------------
...

 


OEL 7 – How to disable IPv6 on Oracle Linux 7

Wed, 2016-12-07 02:36

In case you are not interested in IPv6, you can use the following HowTo to disable it on Oracle Linux 7. Unless you have something very very special on your System, these 10 Steps should do it.

  1. First of all, check if IPv6 is active at all
  2. Add the disable_ipv6 = 1 entries to the /etc/sysctl.conf file
  3. Disable IPv6 in all /etc/sysconfig/network-scripts/ifcfg-* files, e.g.
  4. Disable IPv6 in /etc/sysconfig/network
  5. Remove the "::1" line from the /etc/hosts file
  6. Remove the "restrict -6" line from the /etc/ntp.conf
  7. Add ipv6.disable=1 to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub file
  8. Regenerate a GRUB configuration file and overwrite the existing one
  9. Reboot the server
  10. Confirm if IPV6 is disabled

 

First of all, check if IPv6 is active at all

[root@dbidg01 ~]# /sbin/ip -6 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::ad02:9b6a:bf40:5a3a/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::a00:27ff:feb8:3544/64 scope link
       valid_lft forever preferred_lft forever

 

Add the disable_ipv6 = 1 entries to the /etc/sysctl.conf file

#-- Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

 

Disable IPv6 in all /etc/sysconfig/network-scripts/ifcfg-* files, e.g.

cat /etc/sysconfig/network-scripts/ifcfg-enp0s3 | grep IPV6INIT
IPV6INIT=no

 

Disable IPv6 in /etc/sysconfig/network

cat /etc/sysconfig/network | grep NETWORKING_IPV6
NETWORKING_IPV6=no

 

Remove the following line from the /etc/hosts file

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

 

Remove the following line from the /etc/ntp.conf

cat /etc/ntp.conf | egrep ' -6'
restrict -6 default kod nomodify notrap nopeer noquery
restrict -6 ::1

 

Add ipv6.disable=1 to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub file

[root@dbidg01 /]# cat /etc/default/grub | grep GRUB_CMDLINE_LINUX
GRUB_CMDLINE_LINUX="ipv6.disable=1 crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet numa=off transparent_hugepage=never"

 

Regenerate a GRUB configuration file and overwrite the existing one

[root@dbidg01 /]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.1.12-61.1.19.el7uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-61.1.19.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-4.1.12-61.1.18.el7uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-61.1.18.el7uek.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-547c48bd53614a2ca2d16909b3c14419
Found initrd image: /boot/initramfs-0-rescue-547c48bd53614a2ca2d16909b3c14419.img
done

 

Reboot the server

init 6

 

Confirm if IPV6 is disabled

[root@dbidg01 ~]# /sbin/ip -6 addr
[root@dbidg01 ~]# lsmod | grep -i v6

 

In case the ip and lsmod commands do not return anything, you have successfully disabled IPv6.

Cheers, William

 

 

 

 


UKOUG 2016 – Third day

Tue, 2016-12-06 11:41

Birmingham

Today, it’s the third day in Birmingham for the UKOUG Tech16 event. We had a good time yesterday evening in English pubs.

Today, I attended several sessions. The sessions I was most interested in were "Application Express 5.1 New Features" part 1 and part 2.

The 1st session was presented by David Peake from Oracle. He presented the main new features that will come with APEX 5.1 and demonstrated new capabilities that improve developer productivity. In APEX 5.1, we will be able to arrange the Page Designer as we want, customising the tab order and displaying the Page Designer in different pane layouts and page renderings. He also presented the Interactive Grid and quickly created a master detail, detail, detail, detail view. The number of detail levels is unlimited, but he strongly advises to carefully minimise it.

The 2nd session, APEX 5.1 part 2, was presented by Patrick Wolf. He concentrated his session on the improvements made to the Universal Theme that was already rolled out with APEX 5.0 and that has been further improved in APEX 5.1. In my opinion, the important information is the upgrade of the Universal Theme for any existing 5.0 application: you will have to refresh the theme in order to use the improvements made in APEX 5.1. This is done by going to the theme section of the Shared Components and clicking on refresh theme. This will upgrade the already used UT with the 5.1 capabilities. There are lots of new capabilities and I would wait until the final rollout date to do some tests on my side.

Another session I followed was "APEX, Meet the Rest of the Web – Modern Web Technologies in Your APEX Application". It was good to learn how to create a Google-like search with auto-complete support depending on the value entered in the search field. The presenter also showed us how to quickly integrate Google Charts using the APEX_JSON package, how to integrate Google Maps, and how to call the Facebook and Twitter APIs in our application in order to follow tweets and so on. It is quite easy to integrate modern web technologies in any APEX application.

See you tomorrow for the last day in Birmingham.

 


UKOUG 2016 DAY 3

Tue, 2016-12-06 11:25


Today at UKOUG 2016, the Cloud has won against the sun :=)

The first session I attended this morning was animated by Kamil Stawiarski from the ORA-600 company: Securing the Database Against Unauthorized Attacks, but the real title was Oracle Hacking Session.

The session was amazing, as usual with Kamil: no slides, only technical demos :=))

He first showed us that after creating a standard user in an Oracle database with the classical privileges connect, resource and create any index, and using a simple function he created, the standard user could receive the DBA privilege.

The second demonstration was about DirtyCow (a Linux kernel vulnerability that allows a local unprivileged user to escalate privileges to root). He showed us how easy it is to get connected as root under Linux.

In the last demo he showed us how it is possible to read the data of a particular table directly from the data file, only by using one of his C programs and the data_object_id of the table.

He finished his session by asking himself why a lot of money is wasted to protect data, and whether it would not be more intelligent to spend less money and write correct applications with correct privileges.

The second session was more corporate: Oracle Database 12cR2, the overview, by Dominic Giles from Oracle. He talked to us about Oracle 12cR2 on the cloud. What is available now: Exadata Express Cloud Service and Database Cloud Service. Coming soon: Exadata Cloud Machine.

Then he talked about the new features of Oracle Database 12cR2:

Performance: the main idea for 12cR2 is "go faster". He gave us some examples: a high compression rate for indexes (subject to a licensing option of course) which might result in I/O improvements and significant space savings.

Security: Oracle 12cR2 introduces online encryption of existing data files. There is also the possibility of full encryption of internal database structures such as SYSTEM, SYSAUX or UNDO. Also a Database Vault simulation mode which defines and tests security protection profiles through the application lifecycle.

Developers: AL32UTF8 is the default character set for databases. Object names for tables or columns can now be 128 bytes long.

Manageability: the number of PDBs per container increased from 252 to 4096. PDBs are optimized for RAC. And interestingly, it will be possible to perform PDB hot clones, PDB refreshes and PDB relocations without downtime.

Availability: a lot of improvements for RAC: RAC reader nodes, ASM flex disk groups, Autonomous Health Framework (identifies issues, notifies with corrective actions). For Active Data Guard, the diagnostic and tuning advisors will be available on the standby side, no user disconnection on failover, high speed block comparison between primary and standby database. And finally there will be the possibility to use SSL redo transport to be more secure.

Finally, I attended the last session of the day, and one of the most animated, essentially because of the speaker's talent and of course the subject: Upgrade to the next generation of Oracle Database, live and uncensored!

He talked to us about the different ways to upgrade to 12.1.0.2 or 12.2.0.1, covering subjects like extended support, direct upgrade and DBUA.

A new upgrade script is available: preupgrade.jar executes checks in the source environment, generates detailed recommendations, also generates fixup scripts and, last but not least, is rerunnable :=))

He showed us that the upgrade process is faster and has less downtime, and that we have the possibility to run the database upgrade in parallel (by using catctl.pl with the -n 8 option for example). It deals with non-CDBs and CDBs. During his upgrade from 11.2.0.4 to 12.1.0.2 he interrupted the upgrade process by typing CTRL-C ... and he proved that the upgrade process is rerunnable by running catctl.pl with the -R option :=)

He is not a great fan of DBUA for multiple reasons: for him it is hard to debug, the parallel option defaults to cpu_count, the progress bar is unpredictable and sometimes we have to wait a long time without knowing what is happening in the source database, and we have to be careful with datapatch in version 12.1. For me the only advantage is the automatic timezone upgrade when using DBUA.

Well this was another exciting day at UKOUG 2016, tomorrow is the last day with other interesting sessions and an OEM round table :=)

 


From MySQL (Oracle) to Postgres using the EDB Migration Toolkit

Tue, 2016-12-06 11:24

Why should you migrate?
If your current MySQL database does not offer some functionalities that your business needs, such as:
– more security
– more high availability options (hot standby)
– strong Data Warehouse capabilities
If you want to consolidate the number of different instances (Postgres, MySQL, MS-SQL, …)
If you want to reduce administrative costs by using fewer database platforms
Which tool should you use?
The command-line Migration Toolkit from EnterpriseDB, which can be found below:
http://www.enterprisedb.com/products-services-training/products-overview/postgres-plus-solution-pack/migration-toolkit
Why?
It is really easy to use.
Which MySQL Objects are supported for the migration?
– Schemas
– Tables
– Constraints
– Indexes
– Table Data
What about partitioned tables?
You have to remove the partitioning before the migration:
mysql> ALTER TABLE Table_name REMOVE PARTITIONING;
My environment:
MySQL: 5.7.14 on Oracle Linux Server 7.1
PostgreSQL: 9.6.1.4 on Oracle Linux Server 7.1
What are the prerequisites?
– download the migration toolkit from EnterpriseDB
Note that it can be only installed by registered users but the registration is free and can be done directly on the EnterpriseDB website.
– Install it and follow the instructions
./edb-migrationtoolkit-50.0.1-2-linux-x64.run
– download the MySQL JDBC driver: mysql-connector-java-5.1.40-bin.jar
http://www.enterprisedb.com/downloads/third-party-jdbc-drivers
– Install the driver by moving it to the right directory:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64/jre/lib/ext
– To facilitate the migration, you have to prepare the configuration file toolkit.properties located in your installation directory.
The most important thing is to associate the right JDBC URL with the SRC_DB_URL parameter:
SRC_DB_URL=jdbc:mysql://hostname[:port]/database
Following is the content of the config file:
SRC_DB_URL=jdbc:mysql://192.168.56.200:33001/employees
SRC_DB_USER=root
SRC_DB_PASSWORD=manager
TARGET_DB_URL=jdbc:edb://192.168.56.200:5433/employees # the database must be created in Postgres before
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=manager

In case you get MySQL connection problems (SSL), modify the parameter SRC_DB_URL
SRC_DB_URL=jdbc:mysql://192.168.56.200:33001/employees?autoReconnect=true&useSSL=false
This will disable SSL and also suppress SSL errors.
Before starting the Migration, it is mandatory to create a blank target database in the Postgres instance
Which options for the migration?
-sourcedbtype is mysql
-targetdbtype is enterprisedb
-fetchSize is 1, to avoid an 'out of heap space' error and force the toolkit to load data one row at a time
How to start the migration?
[root@pg_essentials_p1 mtk]# bin/runMTK.sh -sourcedbtype mysql -targetdbtype enterprisedb -fetchSize 1 employees
Running EnterpriseDB Migration Toolkit (Build 50.0.1) ...
Source database connectivity info...
conn =jdbc:mysql://192.168.56.200:33001/employees?autoReconnect=true&useSSL=false
user =root
password=******
Target database connectivity info...
conn =jdbc:edb://192.168.56.200:5433/employees
user =postgres
password=******
Connecting with source MySQL database server...
Connected to MySQL, version '5.7.14-enterprise-commercial-advanced-log'
Connecting with target EDB Postgres database server...
Connected to EnterpriseDB, version '9.6.1.4'
Importing mysql schema employees...
Creating Schema...employees
Creating Tables...
Creating Table: departments
..........................
Created 6 tables.
Loading Table Data in 8 MB batches...
Loading Table: departments ...
[departments] Migrated 9 rows.
..............................
Loading Table: salaries ...
[salaries] Migrated 246480 rows.
................................
[salaries] Migrated 2844047 rows.
[salaries] Table Data Load Summary: Total Time(s): 20.143 Total Rows: 2844047 Total Size(MB): 94.1943359375
Loading Table: titles ...
[titles] Migrated 211577 rows.
[titles] Migrated 419928 rows.
[titles] Migrated 443308 rows.
[titles] Table Data Load Summary: Total Time(s): 3.898 Total Rows: 443308 Total Size(MB): 16.8955078125
Data Load Summary: Total Time (sec): 33.393 Total Rows: 3919015 Total Size(MB): 138.165
Creating Constraint: PRIMARY
Creating Constraint: dept_name
................................
Creating Index: dept_no1
Schema employees imported successfully.
Migration process completed successfully.
Migration logs have been saved to /root/.enterprisedb/migration-toolkit/logs
******************** Migration Summary ********************
Tables: 6 out of 6
Constraints: 11 out of 11
Indexes: 2 out of 2
Total objects: 19
Successful count: 19
Failed count: 0
Invalid count: 0
************************************************************
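
Once the toolkit reports success, a quick sanity check on the Postgres side can confirm the migrated row counts (a sketch; the schema and figures correspond to the employees sample database migrated above):

-- connected to the target database, e.g. psql -p 5433 employees
select count(*) from employees.salaries;   -- the toolkit reported 2844047 rows
select count(*) from employees.titles;     -- the toolkit reported 443308 rows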

So as you can see, this migration process is really easy and you can immediately take advantage of all the standard features.

 


UKOUG 2016 – Second day

Mon, 2016-12-05 13:51


This second day at UKOUG was quite good. I slept well at the Jurys Inn hotel and this morning I enjoyed one more time a real English breakfast with beans, bacon, eggs and sausages. I like that to be fit for the whole day ;)

Today, I attended the general keynote and several sessions around integration, APEX & Database Development, and the Database. My colleague Franck Pachot also presented today and I attended his session "12c Multitenant: Not a Revolution, Just an Evolution". His session reminded me of the article I wrote some years ago about the Oracle Multitenant architecture and APEX.

Early in the morning, I followed the session "Application Container Cloud Service: Backend Integration Using Node.js". The presenter described what Node.js is and presented JavaScript frameworks that can easily be integrated with Node.js, such as Express.js to create an HTTP server and retrieve data. He also presented an architecture where Node.js is hosted in Docker on the cloud.

After that, I attended the session "APEX Version Control & Team Working". During that session, I learned more about APEX version control best practices and which commands can be run through the SQL CLI, the APEX Java utility and so on. I was quite happy to learn that for internal development we were not so bad: we already properly version control and back up APEX workspaces, applications and themes. I now have information to improve our internal work around APEX development activities, such as ATAF (APEX Test Automation Framework).

The next session was "Interactive Grids in Application Express 5.1". This was a demonstration-oriented session in which the presenter showed us amazing new features that will be incorporated in APEX 5.1. Most of the demonstration was based on the sample package application.

The next session was "Real Time Vehicle Tracking with APEX5". For me it was great to see the power of APEX and the Oracle Database to store and display data in real time through the APEX5 MapViewer. The application uses Oracle Spatial to get data from each vehicle's GPS, with PL/SQL converting the data into geospatial information.

During the last session, "A RESTful MicroService for JSON Processing in the Database", I learned how to execute JavaScript directly from the database. In fact, with Java 8 and the Nashorn project it is now possible to execute JavaScript code from the JVM and so directly in the database, avoiding data shipping.

This is all for today and see you tomorrow, we will now take time with my blog reviewer to drink some pints in an English pub.

 


SQL Server 2016: distributed availability groups and cascaded replicas

Mon, 2016-12-05 12:28

During the last MVP summit, we had some interesting discussions about availability groups with the SQL Server team, and I remember someone asked about managing scenarios like Oracle cascaded destinations. The good news is that SQL Server 2016 already addresses this kind of scenario with distributed availability groups. For example, let's say you have to manage heavy reporting activity on your application and a solution would be to offload this activity across several secondary read-only replicas. So, a typical architecture is as follows:

(Figure: initial availability group infrastructure)

We basically want to achieve high availability in the primary datacenter (DATACENTER1), to use the secondary datacenter as DR, and at the same time to offload reporting activity to the secondary replicas. But let's say you get a low network bandwidth (a WAN with roughly 150 to 200 Mbps) between your two datacenters, which are geographically dispersed. Depending on your workload against the availability group, we may experience high network traffic when the number of secondary replicas increases on the DR site. Indeed, the amount of log blocks to replicate is directly proportional to the number of secondary replicas for the same payload.

I decided to simulate this scenario in my lab environment, which reflects the above architecture (2 replicas in the first datacenter and four other replicas in the second datacenter). I used two Lenovo T530 laptops with Hyper-V to simulate the whole environment, with the cross-datacenter network connection handled by two RRAS servers.

In addition, for the sake of precision, let's describe the test protocol:

  • I used a script which inserts a bunch of data from the primary replica (~ 900MB of data)
  • I ran the same workload test after adding one asynchronous read-only replica at a time, up to 4 replicas.
  • I collected performance data from various perfmon counters focused on the availability group network stack (both primary site and DR site)

Here is the output of the whole test.

(Figure: network usage trend as secondary replicas are added)

The picture above is pretty clear. We notice that the network bandwidth grows when adding secondary replicas. In the last test, the network bandwidth reached 400 Mbps (received traffic) on the remote datacenter while it reached 600 Mbps (send traffic) for the primary replica. Why is there a difference in network bandwidth consumption between the primary replica and the remote datacenter? Well, the answer is simple: the network bandwidth consumption on the remote datacenter does not include the traffic sent to the secondary located in the first datacenter for high availability.

We may also notice that the third iteration of the test (1 primary + 1 secondary sync + 2 secondaries async) shows a critical point if we face a scenario that includes a WAN connection between the two datacenters with a maximum network bandwidth of 200 Mbps. Indeed, in this case the network bandwidth could quickly be saturated by the replication traffic between all the replicas, and here are probably the first symptoms you may encounter:

(Figure: availability group DMV monitoring)

A continuous high log send queue size for each concerned secondary replica on the remote datacenter (250 MB on average in my case)…

(Figure: availability group wait statistics)

You may minimize the network overhead by isolating the replication traffic to its own network but in some cases if you’re unlucky it will not be enough. This is a situation which may be solved by introducing distributed availability groups and the cascaded destinations principle as shown below:

(Figure: distributed availability group infrastructure)

Distributed availability group feature will permit to offload the replication traffic from the primary to the read-only secondaries by using a replica on the second datacenter. Thus, we are able to reduce drastically the network bandwidth from 4 replicas to only one. In addition, adding one or several other replicas may be considered because this new architecture is more scalable and we will only impact local network bandwidth on the second datacenter.

Here my new lab configuration after applying distributed availability groups on the previous architecture.

  • In the first datacenter, one availability group AdvGrp that includes two replicas in synchronous replication and automatic failover for HA purpose
  • In the second datacenter, one availability group AdvGrpDR that includes four replicas enrolled as read-only.
  • One distributed availability group AdvDistGrp which makes the cascade between the two aforementioned availability groups

blog 110 - 5 - distributed cfg AG
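For reference, here is roughly what the distributed availability group definition looks like in T-SQL. This is only a sketch: the availability group names come from the configuration above, but the listener URLs are hypothetical and the seeding and failover options have to be adapted to your environment. The statement below is run on the primary replica of AdvGrp; the same definition, with ALTER AVAILABILITY GROUP ... JOIN instead of CREATE, has to be run on the primary replica of AdvGrpDR (the forwarder).

-- Sketch only: the listener URLs below are hypothetical
CREATE AVAILABILITY GROUP [AdvDistGrp]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AdvGrp' WITH
      (
         LISTENER_URL = 'tcp://lst-advgrp:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'AdvGrpDR' WITH
      (
         LISTENER_URL = 'tcp://lst-advgrpdr:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );
GO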

Let’s run the same workload test on the new architecture and let’s have a look at the new output:

The log send queue size got back to normal at the primary replica level in the first datacenter, since all the replication traffic to the read-only replicas is now cascaded from the primary replica of the availability group located in the second datacenter (AdvGrpDR).

blog 110 - 6 - AG DMV monitoring distrib

From a wait statistics perspective, we got rid of HADR_DATABASE_FLOW_CONTROL, meaning we did not saturate the network link between the 2 datacenters.

blog 110 - 7 - AG wait stats distributed

The picture below confirms the replication traffic dropped drastically with this new configuration (150 Mbps vs 400 Mbps from the first architecture).

blog 110 - 8 - perfmon AG distributed

Bottom line

In this blog post I tried to demonstrate that using distributed availability groups to cascade the replication traffic through another replica may be a good idea to address scenarios with many secondary replicas in a remote location and low network bandwidth. However, introducing distributed availability groups has a cost in terms of management, because we have to deal with an additional layer of complexity. But if the rewards make the effort worthwhile, we should consider this kind of architecture.


Cet article SQL Server 2016: distributed availability groups and cascaded replicas est apparu en premier sur Blog dbi services.

UKOUG 2016 Day 2

Mon, 2016-12-05 12:07

uk2

Today I attended a first session about one of my favorite tools: Upgrade to EM 13c now. The session was presented by Phil Gric from Red Stack Tech.

At the beginning he described the most common mistakes made while implementing Enterprise Manager:

- EM 13c is an enterprise application

- It is a critical part of your infrastructure

- It is designed to help you

- EM 13c is not a glorified db console

- IT managers should not see EM as a job for the DBA

He described the main prerequisites before performing an EM 13c upgrade (for example disabling optimizer_adaptive_features). He also talked about issues such as the fact that the upgrade will create users with the sysman password, so we should ensure that the repository password policy accepts such a password.

There is also an issue when upgrading the agent on AIX to version 13.2: a problem securing the agent due to SHA encryption (Metalink Note 1965676.1).

To complete his presentation, he described the main new features in EM 13c: export and import of incident rules, incident compression, always-on monitoring, more than 300 new verbs and generally improved functionality in emcli, system broadcast, comparison and drift management.

He finally explained why, for him, it is important to regularly upgrade to the latest EM 13c version: it is easy to upgrade, and the longer you wait, the closer you are to the next upgrade :=))

The second presentation was about the 12c upgrade: the good, the bad and the ugly, presented by Niall Litchfield. He talked about his experience of upgrading to 12c a huge infrastructure composed of more than 100 servers, with database versions from 10.1 to 11.2.0.3, both RAC and single instances.

His first advice was to read the Mike Dietrich documentation (Update, Migrate, Consolidate to 12c), and to have a look at the Oracle recommended patch list.

A good reason to upgrade is that the support for 11g ends at the end of the year, and the extended support is expensive.

The good news after this huge upgrade was that there were no upgrade failures (tens of clusters, hundreds of servers and databases), and a performance benchmark showed a 50% improvement.

The bad and ugly news concerns the number of patches, and the JSON bundle patches which require database bundle patches. He also advised us to turn off optimizer_adaptive_features (also recommended to be disabled with EM 13c, PeopleSoft and EBS). A last ugly point is the documentation: there is not one single place to read it but many. He also recommended allowing significant time for testing the database and the applications after the upgrade to 12c.

Then I attended a session about Oracle Database 12c on Windows, presented by Christian Shay of Oracle.

He showed us the database certification on 64-bit Windows. In short, Oracle 12.2 is certified on Windows Server 2012, Windows Server 2012 R2, Windows 10 and Windows Server 2016, while Oracle 12.1 is certified on the same versions except Windows Server 2016.

In Windows 8 and Windows Server 2012, Microsoft introduced the Group Managed Service Account (GMSA), i.e. a domain-level account which can be used by multiple servers in that domain to run their services. A GMSA can be the Oracle Home user for Oracle Database Real Application Clusters (Oracle RAC), single instance, and client installations. It has similarities with the ‘oracle’ user on Linux, as you are able to connect on Windows with this user and perform administrative tasks like creating a database, installing Oracle or upgrading databases.

In Windows 7 and Windows Server 2008 R2, Microsoft introduced virtual accounts. A virtual account can be the Oracle home user for Oracle Database single instance and client installations.

The recommendations are the following: for a DB server (single instance) use a virtual account to avoid password management (12.2); for 12.1 specify a Windows user account during installation. For RAC databases and Grid Infrastructure, use a domain user or a group managed service account; with a GMSA you do not need to provide the password for any database operation.

He also talked about large page support for Windows. When large page support is enabled, the CPU is able to access the Oracle database buffers in RAM more quickly, because it addresses the buffers in 2 MB pages instead of 4 KB increments.

Large pages can be used in two modes: regular or mixed mode. Regular mode means an attempt is made to allocate the whole SGA in large pages; if the required amount of large pages is not available, the database will not come up. That is the reason why using the mixed mode is perhaps better: if the whole SGA cannot be allocated in large pages, the rest will be allocated with regular pages and the instance will still come up.

I finished my UKOUG day by attending Franck Pachot’s session about 12c Multitenant (not a revolution but an evolution). He clearly explained that we do not have to be afraid of 12c multitenant; since the beginning of Oracle there have been a lot of new features that people feared, but now they are implemented and work correctly. For the moment the patch upgrade optimization is only partially implemented; we will see how 12c multitenant evolves in the next years.


Cet article UKOUG 2016 Day 2 est apparu en premier sur Blog dbi services.

UKOUG – Tech16 – Super Sunday

Mon, 2016-12-05 04:38

Screen Shot 2016-12-05 at 11.31.49

 

This year, I had the opportunity to attend UKOUG 2016, which took place in Birmingham. This event normally begins on Monday, but each year there is a complimentary afternoon of highly technical presentations, called Super Sunday, for those who are registered for Tech16.

For this first Super Sunday afternoon at UKOUG 2016, I followed 2 sessions and participated in a hands-on lab around the cloud.

The 1st session was very interesting, with lots of useful information about APEX and Node.js.

This session was called NodeJS & Oracle – A Match Made in Heaven, and the presenter, Mr Trond Enstad, focused on demonstrating the power of Node.js.

He installed Node.js and an Oracle Database client, and created a Node.js config file to extract sar command information and store it in a remote Oracle Database. After that, he quickly created D3 charts in APEX showing real-time monitoring of the stored information. I’m really enthusiastic about doing some tests myself.

The 2nd session, “Auto REST Enabling Your Tables, Views, & More With SQL Developer & REST Data Services” from Mr Jeff Smith, was also interesting, providing useful information about Oracle’s ORDS product.

After these 2 interesting sessions, I followed an Oracle Cloud Platform Hands On Lab called “Cloud Native Stack on Oracle Bare Metal Compute”.

In this lab, we created a Virtual Cloud Network (VCN) in which we were able to create a bare metal instance with Oracle Linux 7.2. Once it was launched, we installed MongoDB and set up Node.js and MEAN.js. At the end, we were able to access the MEAN home page.

It was interesting to see how fast we were able to provision a bare metal instance and install application components on it.

See you tomorrow for other interesting sessions.

 

Cet article UKOUG – Tech16 – Super Sunday est apparu en premier sur Blog dbi services.

Multitenant internals – Summary

Mon, 2016-12-05 02:02

Today at UKOUG TECH16 conference I’m presenting the internals of the new multitenant architecture: 12c Multitenant: Not a Revolution, Just an Evolution. My goal is to show how it works, that metadata links and object links are not blind magic.
Here are the links to the blog posts I’ve published about multitenant internals.

Fichier 05.12.16 07 39 43
The dictionary separation, METADATA LINK and OBJECT LINK (now called DATA LINK):
http://blog.dbi-services.com/multitenant-dictionary-what-is-stored-only-in-cdbroot/
http://blog.dbi-services.com/oracle-12c-cdb-metadata-a-object-links-internals/
http://blog.dbi-services.com/oracle-multitenant-dictionary-metadata-links/
http://blog.dbi-services.com/oracle-multitenant-dictionary-object-links/
http://blog.dbi-services.com/multitenant-internals-how-object-links-are-parsedexecuted/
http://blog.dbi-services.com/multitenant-internals-object-links-on-fixed-tables/
An example with the AWR views:
http://blog.dbi-services.com/12c-multitenant-internals-awr-tables-and-views/
How the upgrades should work:
http://blog.dbi-services.com/oracle-multitenant-dictionary-upgrade/
What about shared pool rowcache and library cache:
http://blog.dbi-services.com/oracle-multitenant-dictionary-rowcache/
http://blog.dbi-services.com/12c-multitenant-cursor-sharing-in-cdb/
And how to see when session switches to CDB$ROOT:
http://blog.dbi-services.com/oracle-12cr2-multitenant-containers-in-sql_trace/

If you are in Birmingham, I’m speaking on Monday and Wednesday.

CaptureUKOUGFeaturedSpeaker

 

Cet article Multitenant internals – Summary est apparu en premier sur Blog dbi services.

UKOUG Super Sunday

Sun, 2016-12-04 16:30

uk1

Today at the UKOUG Super Sunday in Birmingham, I had the opportunity to attend some interesting sessions.

The first presentation was about Oracle RAC internals and its new features in version 12.2.0.1 on Oracle Cloud. The main new features concern the cache fusion, the undo header hash table, the leaf nodes, and the hang manager.

In 12c Release 2, in a RAC environment, cache fusion automatically chooses an optimal path; it collects and maintains statistics on the private network and uses this information to decide whether the network or the disk is the optimal path to serve blocks. We can consider that flash storage will provide better access time to data than the private network in case of high load.

In order to reduce remote lookups, each instance maintains a hash table of recent transactions (active and committed). The Undo Header Hash Table thus improves scalability by eliminating remote lookups.

Flex Cluster and leaf nodes were introduced in 12cR1. With 12cR2, it is now possible to run read-only workloads on instances running on leaf nodes.

Hang Manager has been introduced with 12cR2. It determines which sessions hold the resources other sessions are waiting for. Hang Manager is able to detect hangs across layers.

The second session was about the use of strace, perf and gdb. This was a very fun presentation with no slides, only technical demos, about how to use strace, perf or gdb without being an expert. The speaker showed us the different analyses we can do with strace, gdb or perf when running a SQL query against a table in a file system tablespace or an ASM tablespace.

Using those tools allowed us to understand the mechanics of physical reads and asynchronous I/O, and showed us the differences in asynchronous I/O and direct path reads between ASM and file system storage.

It showed us that the use of strace and gdb is very simple but not recommended in a production environment.

The last session was about dba_feature_usage_statistics, and the speaker described the components behind the scenes.

This view, as its name indicates, displays information about database feature usage statistics. It gives an overview of each option and pack that has been used in the database and is currently in use. It also provides information about when the product was first used and when it was used for the last time.

It is not very easy to find information in the Oracle documentation about how this view is populated. But the speaker gave us important information about wri$_dbu_usage_sample, wri$_dbu_feature_usage and wri$_dbu_feature_metadata, the underlying tables behind the dba_feature_usage_statistics view.

He also showed us a method to manually refresh the view dba_feature_usage_statistics.
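As a starting point for your own checks, a simple query like the following (my own sketch, not the one shown during the session) lists the features that have been detected as used at least once:

SELECT name, version, detected_usages, currently_used,
       first_usage_date, last_usage_date
  FROM dba_feature_usage_statistics
 WHERE detected_usages > 0
 ORDER BY name, version;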

Tomorrow another day of interesting sessions is waiting for us!

 

 

Cet article UKOUG Super Sunday est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 6 – Server programming

Sat, 2016-12-03 05:43

Today we’ll continue this series with another topic: What does PostgreSQL provide when it comes to server programming, that is: Writing functions and triggers to support your application? In Oracle you can either use PL/SQL or Java, in MariaDB you can use stored procedures written in SQL, MS SQL Server provides Transact SQL and with DB2 you can write stored procedures in a host language or SQL.

We’ll use the same sample data as in the last post:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1

So, what can you do? To begin with you can create functions containing pure SQL commands. These are called “query language functions”. You can for example do things like this (although this function is not very useful, as you can do the same by just selecting the whole table):

CREATE FUNCTION select_all_from_t1() RETURNS SETOF t1 AS '
  SELECT * 
    FROM t1;
' LANGUAGE SQL;

There are two important points here: the “LANGUAGE SQL” part, which means that the function is written in pure SQL, and the keyword “SETOF”, which means that we want to return a whole set of rows of t1. Once the function is created you can use it in SQL:

(postgres@[local]:5439) [ddl] > select select_all_from_t1();
 select_all_from_t1 
--------------------
 (1,1)
 (2,2)
 (3,3)
...

When you want to do something where it does not make sense to return anything you can do it by using the “VOID” keyword:

CREATE FUNCTION update_t1() RETURNS VOID AS '
  UPDATE t1
     SET a = 5
   WHERE a < 10
' LANGUAGE SQL;

When you execute this you do not get a result:

(postgres@[local]:5439) [ddl] > select update_t1();
 update_t1 
-----------
 NULL
(1 row)
(postgres@[local]:5439) [ddl] > select count(*) from t1 where a = 5;
 count 
-------
     9
(1 row)

What about parameters? You can do this as well:

CREATE FUNCTION do_the_math(anumber1 numeric, anumber2 numeric ) RETURNS numeric AS '
  SELECT do_the_math.anumber1 * do_the_math.anumber2;
' LANGUAGE SQL;

Execute it:

(postgres@[local]:5439) [ddl] > select do_the_math(1.1,1.2);
 do_the_math 
-------------
        1.32

Another great feature is that you can have a variable/dynamic number of input parameters when you declare the input parameter as an array:

CREATE FUNCTION dynamic_input(VARIADIC arr numeric[]) RETURNS int AS $$
    SELECT array_length($1,1);
$$ LANGUAGE SQL;

(postgres@[local]:5439) [ddl] > select dynamic_input( 1,2,3,4 );
 dynamic_input 
---------------
             4

So far for the SQL functions. What can you do when you need more than SQL? Then you can use the so-called “procedural language functions”. One of these languages, available by default, is PL/pgSQL:

(postgres@[local]:5439) [ddl] > \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

By using PL/pgSQL you can add control structures around your SQL very much as you can do it in PL/SQL (except that you cannot create packages).

CREATE FUNCTION f1(int,int) RETURNS text AS $$
DECLARE
    t_row t1%ROWTYPE;
    result text;
BEGIN
    SELECT * 
      INTO t_row
      FROM t1
     WHERE a = 99;
    IF t_row.b > 0
    THEN
        result := 'aaaaaa';
    ELSE
        result := 'bbbbbb';
    END IF;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
(postgres@[local]:5439) [ddl] > select f1(1,1);
   f1   
--------
 aaaaaa

You can also use anonymous blocks:

(postgres@[local]:5439) [ddl] > DO $$
BEGIN
  FOR i IN 1..10
  LOOP
    raise notice 'blubb';
  END LOOP;
END$$ LANGUAGE plpgsql;
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
DO

Of course there is more than IF-THEN-ELSE, all of which is documented here.
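The introduction also mentioned triggers: a trigger is just another PL/pgSQL function that returns the pseudo-type trigger and is attached to a table. A minimal sketch (the audit table and all the names are mine) could look like this:

create table t1_audit ( a int, b int, changed_at timestamp );

CREATE FUNCTION t1_audit_func() RETURNS trigger AS $$
BEGIN
    -- record every new or changed row of t1 together with a timestamp
    INSERT INTO t1_audit (a, b, changed_at)
         VALUES (NEW.a, NEW.b, now());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t1_audit_trg
  AFTER INSERT OR UPDATE ON t1
  FOR EACH ROW EXECUTE PROCEDURE t1_audit_func();

From now on every insert or update on t1 would also write a row into t1_audit.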

So by now we know two options for writing functions in PostgreSQL. Is there more we can do? Of course: you prefer to write your functions in Perl?

(postgres@[local]:5439) [ddl] > create extension plperl;
CREATE EXTENSION
(postgres@[local]:5439) [ddl] > \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plperl  | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language


CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    my ($x, $y) = @_;
    if (not defined $x) {
        return undef if not defined $y;
        return $y;
    }
    return $x if not defined $y;
    return $x if $x > $y;
    return $y;
$$ LANGUAGE plperl;

(postgres@[local]:5439) [ddl] > select perl_max(1,2);
 perl_max 
----------
        2

You prefer Python?

(postgres@[local]:5439) [ddl] > create extension plpythonu;
CREATE EXTENSION
Time: 327.434 ms
(postgres@[local]:5439) [ddl] > \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description                
-----------+---------+------------+------------------------------------------
 plperl    | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpythonu | 1.0     | pg_catalog | PL/PythonU untrusted procedural language

CREATE FUNCTION pymax (a integer, b integer)
  RETURNS integer
AS $$
  if a > b:
    return a
  return b
$$ LANGUAGE plpythonu;

(postgres@[local]:5439) [ddl] > select pymax(1,1);
 pymax 
-------
     1

… or rather Tcl?

(postgres@[local]:5439) [ddl] > create extension pltclu;
CREATE EXTENSION
Time: 382.982 ms
(postgres@[local]:5439) [ddl] > \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description                
-----------+---------+------------+------------------------------------------
 plperl    | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpythonu | 1.0     | pg_catalog | PL/PythonU untrusted procedural language
 pltclu    | 1.0     | pg_catalog | PL/TclU untrusted procedural language

And these are only the default extensions. There is much more you can do:

  • Java
  • PHP
  • R
  • Ruby
  • Scheme
  • Unix shell

You see: PostgreSQL gives you the maximum flexibility :)

 

Cet article Can I do it with PostgreSQL? – 6 – Server programming est apparu en premier sur Blog dbi services.

Histograms on character strings between 11.2.0.3 and 11.2.0.4

Wed, 2016-11-30 15:33

In short, when you have statistics gathered in 11.2.0.3 but the query is now running with the 11.2.0.4 optimizer, you may get wrong cardinality estimations from histograms, leading to sub-optimal plans.

I had a table with a flag that has two values, ‘Y’ and ‘N’, with an even distribution between them. It’s a good case for frequency histograms. I had frequency histograms and expected an exact cardinality estimation for a WHERE FLAG=’Y’ predicate. But that was not the case: a very low estimation led to a very bad execution plan. Because the cardinality estimation was far from what we have in the histograms, and also far from what we would have without histograms, I checked a 10053 trace and this is what I found:
Using prorated density: 0.004557 of col #97 as selectivity of out-of-range/non-existent value pred

This is the linear decay applied to out-of-range predicates (I’ll show that next Wednesday at UKOUG TECH16), but I don’t expect an out-of-range condition when I provide one of the two values that are in the frequency histogram.

Here is my testcase


SQL> create table DEMO ( flag char);
Table created.
 
SQL> select count(*) from DEMO where flag='Y';
 
COUNT(*)
----------
0
 
SQL> insert into DEMO select 'Y' from xmltable('1 to 100000');
100000 rows created.
 
SQL> insert into DEMO select 'N' from xmltable('1 to 1000');
1000 rows created.
 
SQL> select flag,to_char(ascii(flag),'XX'),count(*) from DEMO group by flag;
 
FLAG TO_CHAR(ASCII(FLAG),'XX') COUNT(*)
---- ------------------------- --------
Y 59 100000
N 4E 1000

100000 rows with ‘Y’ and 1000 rows with ‘N’.

11.2.0.3

I gather statistics in 11.2.0.3


SQL> alter session set optimizer_features_enable='11.2.0.3';
Session altered.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And run a query looking for rows where flag is ‘Y’


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

The estimate is perfect thanks to the histograms.

11.2.0.4

Now the CBO is running in 11.2.0.4 but statistics have not been gathered since 11.2.0.3


SQL> alter session set optimizer_features_enable='11.2.0.4';
Session altered.
 
SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 500 |
-------------------------------------------

Bad estimation here. Where do those 500 come from?
It’s not from the histogram, which knows that 100K rows have the value ‘Y’.
Without histograms the estimation would be based on 2 distinct values among 101000 rows, so that would be 50500.
The CBO trace says:

Access path analysis for DEMO
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for DEMO[DEMO] SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
 
kkecdn: Single Table Predicate:"DEMO"."FLAG"='Y'
Column (#1):
NewDensity:0.004950, OldDensity:0.000005 BktCnt:101000.000000, PopBktCnt:101000.000000, PopValCnt:2, NDV:2
Column (#1): FLAG(CHARACTER)
AvgLen: 2 NDV: 2 Nulls: 0 Density: 0.004950
Histogram: Freq #Bkts: 2 UncompBkts: 101000 EndPtVals: 2 ActualVal: no
Using density: 0.004950 of col #1 as selectivity of pred having unreasonably low value
Table: DEMO Alias: DEMO
Card: Original: 101000.000000 Rounded: 500 Computed: 500.000000 Non Adjusted: 500.000000

Predicate having unreasonably low value…

Time to look at the histograms.

ENDPOINT_VALUE

You can get the histogram entries. For the CHAR datatype (my case here) the endpoint value is computed from the ASCII representation of the value, encoded as a decimal number, or at least from its first characters.

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 405650737316592000000000000000000000 4E20202020203A7BB119D5F6000000 N :{��
101000 462766002760475000000000000000000000 59202020202034D998FF0B5AE00000 Y 4٘�
Z�

Looks good. I can see the ‘N’ and ‘Y’ values here. But obviously the CBO sees that as different from ‘Y’.

Let’s gather statistics again (I’m now with 11.2.0.4 CBO):

SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And look at the difference:

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 404999154965717000000000000000000000 4E0000000000181F436C7BBB200000 NCl{�
101000 462114420409600000000000000000000000 590000000000127D2B51B120000000 Y}+Q�

In 11.2.0.3 the characters (‘Y’ is ASCII 0x59) were padded with spaces (ASCII 0x20). But in 11.2.0.4 they are padded with nulls (ASCII 0x00).
This is the reason why the value was considered different: the ENDPOINT_VALUE for ‘Y’ calculated by the 11.2.0.4 version of the CBO is different from the one calculated by the 11.2.0.3 dbms_stats.

Now, the estimation is good again:


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

char_value

The ENDPOINT_VALUE computation has been described by Jonathan Lewis in Cost-Based Oracle Fundamentals, and the script to encode a character string can be downloaded from http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip. I use it (as an inline function because I’m running this in 12c) to show the values calculated from ‘Y’ and ‘N’ as well as the values calculated from the same strings right-padded with spaces:


SQL> with function char_value(i_vc varchar2) return number
-- function coming from Jonathan Lewis char_value.sql http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip
is
m_vc varchar2(15) := substr(rpad(i_vc,15,chr(0)),1,15);
m_n number := 0;
begin
for i in 1..15 loop
m_n := m_n + power(256,15-i) * ascii(substr(m_vc,i,1));
end loop;
m_n := round(m_n, -21);
return m_n;
end;
select flag,char_value(flag),cast(flag as char(32)),char_value(cast(flag as char(32))),count(*) from DEMO.DEMO group by flag
/
 
FLAG CHAR_VALUE(FLAG) CAST(FLAGASCHAR(32)) CHAR_VALUE(CAST(FLAGASCHAR(32))) COUNT(*)
---- ------------------------------------ -------------------------------- ------------------------------------ --------
Y 462114420409600000000000000000000000 Y 462766002760475000000000000000000000 100000
N 404999154965717000000000000000000000 N 405650737316592000000000000000000000 1000

Comparing the calculated values with the ENDPOINT_VALUE entries above, we can see that the function has not changed, but the input string used to be padded with spaces, which is not the case anymore.

So what?

Be careful when upgrading from 11.2.0.3 or earlier to a higher version. You should re-gather the statistics (but then read http://blog.dbi-services.com/a-migration-pitfall-with-all-column-size-auto/ as there's another pitfall). Don't set optimizer_features_enable to a lower version, as its scope is very wide and many things may behave differently. It is always better to take the time to focus on the features you want to disable. And never give up: the weirdest situations can be troubleshot. Oracle Database gives you a lot of ways to understand what happens, from the debugging tools provided with the software to the excellent literature about it.
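If you want to get an idea of what may need re-gathering, a simple sketch (assuming you know the date of your upgrade; the date below is just an example) is to list the tables whose statistics still date from before it:

SELECT owner, table_name, last_analyzed
  FROM dba_tab_statistics
 WHERE last_analyzed < DATE '2016-01-01'   -- replace with your upgrade date
   AND owner NOT IN ('SYS','SYSTEM')
 ORDER BY last_analyzed;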

 

Cet article Histograms on character strings between 11.2.0.3 and 11.2.0.4 est apparu en premier sur Blog dbi services.

Switchover and Failover with Dbvisit 8

Wed, 2016-11-30 12:19

In this blog we will talk about how to do a switchover and how to do a failover. We suppose that dbvisit is already installed and that a standby database is already configured. Our instance is named DBPRIMA.

Switchover
A SWITCHOVER changes the database roles: the primary becomes the standby and the standby becomes the primary. This can be useful for many reasons.
Before performing a switchover, we first have to send any archived logs not already sent from the primary server. For this we use the magic command dbvctl

[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 9201)
dbvctl started on dbvisit2: Tue Nov 29 14:46:15 2016
=============================================================
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 30 (1_30_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:46:17 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

After let’s apply all archived logs on the standby

[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 9201)
dbvctl started on dbvisit2: Tue Nov 29 14:46:15 2016
=============================================================
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 30 (1_30_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:46:17 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

Note that the commands above can be scheduled on both servers, for example using crontab on a Linux system.
On the Primary Server:

00,10,20,30,40,50 * * * * cd /u01/app/dbvisit/standby; ./dbvctl -d DBPRIMA >>/dev/null 2>&1

On the Standby Server:

00,10,20,30,40,50 * * * * cd /u01/app/dbvisit/standby; ./dbvctl -d DBPRIMA >>/dev/null 2>&1

After sending and applying archived logs, we can check the status of the synchronization

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -i
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4420)
dbvctl started on dbvisit1: Tue Nov 29 14:26:11 2016
=============================================================
Dbvisit Standby log gap report for DBPRIMA thread 1 at 201611291426:
-------------------------------------------------------------
Destination database on dbvisit2 is at sequence: 29.
Source database on dbvisit1 is at log sequence: 30.
Source database on dbvisit1 is at archived log sequence: 29.
Dbvisit Standby last transfer log sequence: 29.
Dbvisit Standby last transfer at: 2016-11-29 14:24:16.
Archive log gap for DBPRIMA: 0.
Transfer log gap for DBPRIMA: 0.
Standby database time lag (DAYS-HH:MI:SS): +0:01:37.
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:26:12 2016
=============================================================
[oracle@dbvisit1 ~]$

If all is ok, we can now go for the SWITCHOVER. We can do it either from the graphical console or from the command line. We are using the command line method. The command should be run only on the primary server.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 5081)
dbvctl started on dbvisit1: Tue Nov 29 14:47:32 2016
=============================================================
>>> Starting Switchover between dbvisit1 and dbvisit2
Running pre-checks ... done
Pre processing ... done
Processing primary ... done
Processing standby ... done
Converting standby ... done
Converting primary ... done
Completing ... done
Synchronizing ... done
Post processing ... done
>>> Graceful switchover completed.
Primary Database Server: dbvisit2
Standby Database Server: dbvisit1
>>> Dbvisit Standby can be run as per normal:
dbvctl -d DBPRIMA
PID:5081
TRACE:5081_dbvctl_switchover_DBPRIMA_201611291447.trc
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:50:23 2016
=============================================================
[oracle@dbvisit1 ~]$

Failover
The FAILOVER process happens when the primary database is no longer working. In this case the standby should be activated and becomes the new primary. Unlike a SWITCHOVER, a FAILOVER is NOT reversible.
A good practice before activating the standby database is to run a quick test to ensure that the standby database is in a consistent state (datafile headers and controlfile in sync) and ready to be activated. This can be done by opening the standby database read-only.

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o read
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2542)
dbvctl started on dbvisit2: Wed Nov 30 09:40:50 2016
=============================================================
Open Standby Database DBPRIMA in READ ONLY mode...
Standby Database DBPRIMA opened in READ ONLY mode.
Log files cannot be applied to Database while in READ ONLY mode.
Database tempfile(s) may need to be added to this database.
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:40:55 2016
=============================================================
[oracle@dbvisit2 ~]$

As we don’t get any error and we know we can open the standby read-only, let’s start it back in recovery mode (mount state).

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o restart
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2667)
dbvctl started on dbvisit2: Wed Nov 30 09:45:45 2016
=============================================================
Stopping instance DBPRIMA...
Standby Instance DBPRIMA shutdown successfully on dbvisit2.
Starting instance DBPRIMA...
Standby Instance DBPRIMA started
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:45:57 2016
=============================================================
[oracle@dbvisit2 ~]$

And proceed with the FAILOVER. Dbvisit will ask for confirmation; we can use the --force option to avoid this.

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o activate
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2796)
dbvctl started on dbvisit2: Wed Nov 30 09:47:12 2016
=============================================================
Activating means this database will become a Primary Database.
It will no longer be a Standby Database for DBPRIMA on dbvisit1.
Activation cannot be reversed.
=>Activate Standby Database on dbvisit2? [No]: yes
Are you sure? [No]: yes
>>> Activating now...
File /u01/app/dbvisit/standby/conf/dbv_DBPRIMA.env copied to
/u01/app/dbvisit/standby/conf/dbv_DBPRIMA.env.201611300947.
>>> Activation complete. Please ensure a backup of this Database is made
>>> Old archives from before the activation should be removed to avoid mix-up between new
and old archive logs
>>> If the Dbvisit Standby process is to be reversed, then database on dbvisit1 will need
to be rebuilt as a Standby Database
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:48:07 2016
=============================================================
[oracle@dbvisit2 ~]$

In this article we have seen how easy it is to do a switchover or a failover with Dbvisit 8.

 

Cet article Switchover and Failover with Dbvisit 8 est apparu en premier sur Blog dbi services.

Creating standby database with Dbvisit 8

Wed, 2016-11-30 12:13

The new version of Dbvisit standby is now released. In this blog we are going to see how to install dbvisit 8 and how to create a standby database.
For the installation, after downloading the software, just uncompress it and then run the install-dbvisit command from the installer directory. The installation process is divided into 2 parts:
1- Core Components (Dbvisit Standby CLI, Dbvnet, Dbvagent), on both servers
2- Dbvserver console installation. Just note that the console now needs to be installed on one server only.

We will not show all the output; more information about the installation can be found in my previous blog about upgrading Dbvisit 7 to Dbvisit 8.

[oracle@dbvisit1 installer]$ pwd
/home/oracle/dbvisit/dbvisit/dbvisit/installer
[oracle@dbvisit1 installer]$ ls
install-dbvisit
[oracle@dbvisit1 installer]$
[oracle@dbvisit1 installer]$ ./install-dbvisit
-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------
It is recommended to make a backup of our current Dbvisit software
location (Dbvisit Base location) for rollback purposes.
Installer Directory /home/oracle/dbvisit/dbvisit/dbvisit
>>> Please specify the Dbvisit installation directory (Dbvisit Base).
The various Dbvisit products and components - such as Dbvisit Standby,
Dbvisit Dbvnet will be installed in the appropriate subdirectories of
this path.
Enter a custom value or press ENTER to accept default [/usr/dbvisit]:
> /u01/app/dbvisit
DBVISIT_BASE = /u01/app/dbvisit
-----------------------------------------------------------
Component Installer Version Installed Version
-----------------------------------------------------------
standby 8.0.04.18184 not installed
dbvnet 2.0.04.18184 not installed
dbvagent 2.0.04.18184 not installed
dbvserver 2.0.04.18184 not installed
-----------------------------------------------------------
What action would you like to perform?
1 - Install component(s)
2 - Uninstall component(s)
3 - Terminate
Your choice: 1
Which component do you want to install?
1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
2 - Dbvisit Standby Core (Command Line Interface)
3 - Dbvnet (Dbvisit Network Communication)
4 - Dbvagent (Dbvisit Agent)
5 - Dbvserver (Dbvisit Central Console)
6 - Exit Installer
Your choice: 1

And then follow the instructions.
At the end of the process we can start the different components: dbvagent and dbvnet


[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvagent/dbvagent -d start


[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvnet/dbvnet -d start


[oracle@dbvisit1 installer]$ ps -ef|egrep 'dbvagent|dbvnet' | grep -v grep
oracle 4064 1 0 10:46 ? 00:00:00 /u01/app/dbvisit/dbvagent/dbvagent -d start
oracle 4140 1 0 10:47 ? 00:00:00 /u01/app/dbvisit/dbvnet/dbvnet -d start
[oracle@dbvisit1 installer]$

After the core components installation, we can proceed with the Dbvisit Standby central console installation (dbvserver). Just launch the install-dbvisit command again and follow the instructions.

[oracle@dbvisit1 installer]$ ./install-dbvisit
-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------
It is recommended to make a backup of our current Dbvisit software
location (Dbvisit Base location) for rollback purposes.
Installer Directory /home/oracle/dbvisit/dbvisit/dbvisit
.....
.....
What action would you like to perform?
1 - Install component(s)
2 - Uninstall component(s)
3 - Terminate
Your choice: 1
Which component do you want to install?
1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
2 - Dbvisit Standby Core (Command Line Interface)
3 - Dbvnet (Dbvisit Network Communication)
4 - Dbvagent (Dbvisit Agent)
5 - Dbvserver (Dbvisit Central Console)
6 - Exit Installer
Your choice: 5

At the end of the installation, we can start dbvserver

[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvserver/dbvserver -d start


[root@dbvisit1 ~]# netstat -taupen | grep dbvser
tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN 1000 37848 5348/dbvserver
[root@dbvisit1 ~]#

We should now be able to connect to the dbvserver console on port 4433 with the admin user. The default password is admin.
dbvserverconsole1
dbvserverconsole2
For example, using MANAGE USERS, we can change the default admin password.
Now that Dbvisit Standby is installed on both servers, the first thing to do is to test the connectivity. For this we will use dbvctl, the main command of Dbvisit 8. The Dbvisit documentation says that “the first command you need to be familiar with is the dbvctl -h command, which will display the syntax and usage options for the Dbvisit Standby command line interface”, and it’s really true. The command dbvctl -f system_readiness will be used here.

[oracle@dbvisit1 installer]$ /u01/app/dbvisit/standby/dbvctl -f system_readiness
Please supply the following information to complete the test.
Default values are in [].
Enter Dbvisit Standby location on local server: [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Enter the name of the remote server: []: dbvisit2
Your input: dbvisit2
Is this correct? <Yes/No> [Yes]:
Enter Dbvisit Standby location on remote server: [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Enter the name of a file to transfer relative to local install directory
/u01/app/dbvisit: [standby/doc/README.txt]:
Your input: standby/doc/README.txt
Is this correct? <Yes/No> [Yes]:
Choose copy method:
1) /u01/app/dbvisit/dbvnet/dbvnet
2) /usr/bin/scp
Please enter choice [1] : 1
Is this correct? <Yes/No> [Yes]:
Enter port for method /u01/app/dbvisit/dbvnet/dbvnet: [7890]:
Your input: 7890
Is this correct? <Yes/No> [Yes]:
-------------------------------------------------------------
Testing the network connection between local server and remote server dbvisit2.
-------------------------------------------------------------
Settings
========
Remote server =dbvisit2
Dbvisit Standby location on local server =/u01/app/dbvisit
Dbvisit Standby location on remote server =/u01/app/dbvisit
Test file to copy =/u01/app/dbvisit/standby/doc/README.txt
Transfer method =/u01/app/dbvisit/dbvnet/dbvnet
port =7890
-------------------------------------------------------------
Checking network connection by copying file to remote server dbvisit2...
-------------------------------------------------------------
Trace file /u01/app/dbvisit/standby/trace/5589_dbvctl_system_readiness_201611291139.trc
File copied successfully. Network connection between local and dbvisit2
correctly configured.
[oracle@dbvisit1 installer]$

If everything is fine with the installation, we can now create a standby database using dbvctl. The configuration we use is the following:
Primary Server: dbvisit1
Standby Server: dbvisit2
Database Instance: DBPRIMA (we suppose that the database is in archivelog mode and force logging mode)
First let’s create the Dbvisit Standby Database Configuration (DDC) file. The name of the DDC file is generally the name of the database instance. The command dbvctl -o setup is used.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -o setup
========================================================
Dbvisit Standby Database Technology (8.0.04.18184)
=========================================================
=>dbvctl only needs to be run on the primary server.
Is this the primary server? <Yes/No> [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:
DDC
===
1) Create New DDC
2) Cancel
Please enter choice [] : 1
Is this correct? <Yes/No> [Yes]:
END USER LICENSE AGREEMENT
PLEASE READ THIS END USER LICENSE AGREEMENT (AGREEMENT) CAREFULLY BEFORE
DOWNLOADING THE LICENSED PRODUCT. BY CLICKING I AGREE BELOW, YOU ARE
REPRESENTING THAT YOU HAVE THE RIGHT AND AUTHORITY TO LEGALLY BIND YOURSELF
…….
…….
of it on future occasions. Headings are for reference purposes only and
have no substantive effect.
Continue ? <Yes/No> [No]: yes
=========================================================
Dbvisit Standby setup begins.
=========================================================
The following Oracle instance(s) have been found on this server:
SID ORACLE_HOME
=== ===========
1) DBPRIMA /u01/app/oracle/product/12.1.0.2/dbhome_1
2) Enter own ORACLE_SID and ORACLE_HOME
Please enter choice [] : 1
Is this correct? <Yes/No> [Yes]:
=>ORACLE_SID will be: DBPRIMA
=>ORACLE_HOME will be: /u01/app/oracle/product/12.1.0.2/dbhome_1
------------------------------------------------------------------------------
Enter the primary server name.
Note: The hostname is case sensitive on UNIX based systems.
=>SOURCE is [dbvisit1]:
Your input: dbvisit1
Is this correct? <Yes/No> [Yes]:
Choice is dbvisit1
------------------------------------------------------------------------------
Please enter a filesystem directory that Dbvisit Standby use to store (archive) log files. This directory
is not the same as the database recovery area or archive destinations and should not be located in these
areas.
....
Please ensure that this directory exists on
=>ARCHSOURCE is [/u01/app/oracle/dbvisit_arch/DBPRIMA]: /u01/app/archivedbvisit/DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/archivedbvisit/DBPRIMA
------------------------------------------------------------------------------
Do you want to use SSH to connect to the standby server? Note that if you are using SSH, passwordless SSH
authentication between the hosts must already be configured. By default Dbvnet will be used.
=>USE_SSH is [N]:
Your input: N
Is this correct? <Yes/No> [Yes]:
Choice is N
------------------------------------------------------------------------------
Enter the standby database hostname.
If the standby database will be Oracle RAC enabled:
Enter the the Virtual Hostname (linked to a Virtual IP) for standby database. This virtual hostname is
....
For non-RAC configurations specify the standby database server name here.
=>DESTINATION is []: dbvisit2
Your input: dbvisit2
Is this correct? <Yes/No> [Yes]:
Choice is dbvisit2
------------------------------------------------------------------------------
Specify the DBVNET or SSH port number on the standby server. The default value supplied is the dbvnet port
7890. If you specified the use of SSH, please specify the SSH port here.
=>NETPORT is [7890]:
Your input: 7890
Is this correct? <Yes/No> [Yes]:
Choice is 7890
------------------------------------------------------------------------------
Enter Dbvisit Standby installation directory on the standby server
=>DBVISIT_BASE_DR is [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/dbvisit
------------------------------------------------------------------------------
Enter ORACLE_HOME directory on the standby server
=>ORACLE_HOME_DR is [/u01/app/oracle/product/12.1.0.2/dbhome_1]:
Your input: /u01/app/oracle/product/12.1.0.2/dbhome_1
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/oracle/product/12.1.0.2/dbhome_1
------------------------------------------------------------------------------
Enter DB_UNIQUE_NAME on the standby server
=>DB_UNIQUE_NAME_DR is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Please enter the directory where Dbvisit Standby will transfer the (archive) log files to on standby
server.
...
Please ensure that this directory exists on the standby server
=>ARCHDEST is [/u01/app/archivedbvisit/DBPRIMA]:
Your input: /u01/app/archivedbvisit/DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/archivedbvisit/DBPRIMA
------------------------------------------------------------------------------
Enter ORACLE_SID on the standby server
=>ORACLE_SID_DR is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Enter ASM instance name on the standby server, if your standby is using ASM. If you are not using ASM on
the standby leave the value blank.
=>ORACLE_SID_ASM_DR is []:
Is this correct? <Yes/No> [Yes]:
Choice is null
------------------------------------------------------------------------------
Please specify the name of the Dbvisit Database configuration (DDC) file.
The DDC file is a plain text file that contains all the Dbvisit Standby settings.
=>ENV_FILE is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Below are the list of configuration variables provided during the setup process:
Configuration Variable Value Provided
====================== ==============
ORACLE_SID DBPRIMA
ORACLE_HOME /u01/app/oracle/product/12.1.0.2/dbhome_1
SOURCE dbvisit1
ARCHSOURCE /u01/app/archivedbvisit/DBPRIMA
RAC_DR N
USE_SSH N
DESTINATION dbvisit2
NETPORT 7890
DBVISIT_BASE_DR /u01/app/dbvisit
ORACLE_HOME_DR /u01/app/oracle/product/12.1.0.2/dbhome_1
DB_UNIQUE_NAME_DR DBPRIMA
ARCHDEST /u01/app/archivedbvisit/DBPRIMA
ORACLE_SID_DR DBPRIMA
ENV_FILE DBPRIMA
Are these variables correct? <Yes/No> [Yes]:
>>> Dbvisit Database configuration (DDC) file DBPRIMA created.
>>> Dbvisit Database repository (DDR) DBPRIMA created.
Repository Version 8.0
Software Version 8.0
Repository Status VALID
PID:2330
TRACE:dbvisit_install.log
[oracle@dbvisit1 ~]$

The DDC file we created will be used to create the standby database. But first we must enter the license key, otherwise we will get an error. The following command is used: dbvctl -d DDC -l License_Key

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -l 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 3835)
dbvctl started on dbvisit1: Tue Nov 29 14:12:08 2016
=============================================================
=>Update with license key: 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq? <Yes/No> [Yes]:
>>> Dbvisit Standby License
License Key : 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq
customer_number : 1
dbname :
expiry_date : 2016-12-29
product_id : 8
sequence : 1
status : VALID
updated : YES
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:12:17 2016
=============================================================
[oracle@dbvisit1 ~]$

And then we can proceed with the standby database creation using the command dbvctl -d DDC --csd

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA --csd
>>> Running pre-checks please wait... done
What would you like to do:
1 - Create standby database (and optionally save settings in template)
2 - Help
3 - Terminate processing
Please enter your choice [1]: 1
-------------------------------------------------------------------------------
Answer Yes to transfer the datafiles directly to the standby server (not
recommended for large database or slow networks)
Note tablespaces will be placed in backup mode for direct copy.
Answer No (recommended) to create an RMAN backup of primary datafiles in a
temporary location and transfer the backup to the standby server
=>Do you want to copy database files directly to the standby server? [N]: y
Your input: Y
Is this correct? <Yes/No> [Yes]: yes
The following oracle database parameters will be set in the standby database pfile or spfile:
-------------------------------------------------------------------------------
SID NAME VALUE
* audit_file_dest /u01/app/oracle/admin/DBPRIMA/adump
* compatible 12.1.0.2.0
* control_files /u01/app/oracle/oradata/DBPRIMA/control01.ctl,/u01/app/oracle/fast_recovery_area/DBPRIMA/control02.ctl
* db_block_size 8192
* db_domain
* db_name DBPRIMA
* db_recovery_file_dest /u01/app/oracle/fast_recovery_area
* db_recovery_file_dest_size 10485760000
* db_unique_name DBPRIMA
* diagnostic_dest /u01/app/oracle
* dispatchers (PROTOCOL=TCP) (SERVICE=DBPRIMAXDB)
* log_archive_format %t_%s_%r.dbf
* open_cursors 300
* pga_aggregate_target 304087040
* processes 300
* remote_login_passwordfile EXCLUSIVE
* sga_target 912261120
* spfile OS default
* undo_tablespace UNDOTBS1
-------------------------------------------------------------------------------
What would you like to do:
1 - Proceed with creating the standby database
2 - Edit oracle database parameters for the standby database pfile/spfile
3 - Terminate processing
Please enter your choice [1]: 1
......
=>Create standby database template for DBPRIMA using provided answers? [Y]:
Your input: 1
Is this correct? <Yes/No> [Yes]:
-------------------------------------------------------------------------------
=>Continue with creating a standby database? (If No processing will terminate,
the saved template will be available for future use) [Y]:
Is this correct? <Yes/No> [Yes]:
>>> dbvctl will now run a pre-flight check for standby database creation. An attempt will
be made to create a standby (s)pfile using oracle standby database parameters, followed
by trying to start the standby instance. If this step fails, then please double-check
the following items before re-running dbvctl again:
1) Review the standby database parameters you have supplied and provide valid values
unless a template is used.
2) Recreate the template to provide valid values for standby database parameters if a
template is used.
>>> Running pre-flight check for standby creation, please wait... done
>>> Total database size for DBPRIMA is 1.57GB
>>> Creating standby control file... done
>>> Transferring datafiles from dbvisit1 to dbvisit2...
Transferring /u01/app/oracle/oradata/DBPRIMA/system01.dbf...
Transferring /u01/app/oracle/oradata/DBPRIMA/sysaux01.dbf... done
Transferring /u01/app/oracle/oradata/DBPRIMA/undotbs01.dbf... done
Transferring /u01/app/oracle/oradata/DBPRIMA/users01.dbf... done
>>> Restoring standby control files... done
>>> Starting standby database DBPRIMA on dbvisit2 mount... done
>>> Restoring datafiles on dbvisit2...
>>> Renaming standby redo logs and tempfiles on dbvisit2... done
>>> Performing checkpoint and archiving logs... done
>>> Finishing standby database creation... done
>>> Standby database created.
To complete creating standby database please run dbvctl on the primary server first,
then on the standby server, to ensure the standby database is in sync with the primary
database.
PID:3915
TRACE:3915_dbvctl_csd_DBPRIMA_201611291413.trc

Let’s now run dbvctl -d DDC on the primary server to send all archived logs and on the standby server to apply all archived logs.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4354)
dbvctl started on dbvisit1: Tue Nov 29 14:24:11 2016
=============================================================
>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
>>> Sending heartbeat message... skipped
Performing a log switch...
>>> Transferring Log file(s) from DBPRIMA on dbvisit1 to dbvisit2 for thread 1:
thread 1 sequence 29 (o1_mf_1_29_d3v083x6_.arc)
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:24:16 2016
=============================================================
[oracle@dbvisit1 ~]$


[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 8665)
dbvctl started on dbvisit2: Tue Nov 29 14:25:07 2016
=============================================================
>>> Sending heartbeat message... skipped
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 29 (1_29_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:25:09 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

We can check the synchronization status with dbvctl -d DDC -i (here: dbvctl -d DBPRIMA -i):

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -i
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4420)
dbvctl started on dbvisit1: Tue Nov 29 14:26:11 2016
=============================================================
Dbvisit Standby log gap report for DBPRIMA thread 1 at 201611291426:
-------------------------------------------------------------
Destination database on dbvisit2 is at sequence: 29.
Source database on dbvisit1 is at log sequence: 30.
Source database on dbvisit1 is at archived log sequence: 29.
Dbvisit Standby last transfer log sequence: 29.
Dbvisit Standby last transfer at: 2016-11-29 14:24:16.
Archive log gap for DBPRIMA: 0.
Transfer log gap for DBPRIMA: 0.
Standby database time lag (DAYS-HH:MI:SS): +0:01:37.
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:26:12 2016
=============================================================
[oracle@dbvisit1 ~]$
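
In a real setup you would not run these two dbvctl commands by hand every time. A common approach is to schedule them, for example with cron, on both servers. The crontab entries below are only a sketch: the 5 minute interval and the log file paths are assumptions, so adapt them to your environment.

# on the primary server (dbvisit1): ship newly archived logs every 5 minutes
*/5 * * * * /u01/app/dbvisit/standby/dbvctl -d DBPRIMA >> /tmp/dbvctl_dbvisit1.log 2>&1

# on the standby server (dbvisit2): apply the shipped logs every 5 minutes
*/5 * * * * /u01/app/dbvisit/standby/dbvctl -d DBPRIMA >> /tmp/dbvctl_dbvisit2.log 2>&1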

Now it’s time to import our configuration into the graphical console. For this, let’s use the MANAGE HOSTS tab
hosts

And then add hosts using the option NEW
hosts2

Fill in the host information and click on CREATE NEW HOST
hosts3
After all hosts have been added
hosts4

We can now import our standby configuration using the MANAGE CONFIGURATIONS tab.
config1

Using IMPORT
config2
config3

Now we can manage our configuration either from the command line using dbvctl or through the graphical console. In a future blog post we will cover how to perform a SWITCHOVER and a FAILOVER.

 

Cet article Creating standby database with Dbvisit 8 est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 5 – Generating DDL commands

Wed, 2016-11-30 11:17

From time to time it is very useful that you can generate the DDL commands for existing objects (Tables, Indexes, whole Schema …). In Oracle you can either use the dbms_metadata PL/SQL package for this or use expdp/impdp to generate the statements out of a dump file. What options do you have in PostgreSQL? Note: We’ll not look at any third party tools you could use for that, only plain PostgreSQL.

As always we’ll need some objects to test with, so here we go:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1
CREATE FUNCTION add(integer, integer) RETURNS integer
    AS 'select $1 + $2;'
    LANGUAGE SQL
    IMMUTABLE
    RETURNS NULL ON NULL INPUT;

PostgreSQL comes with a set of system information functions which can be used to query all kinds of metadata. Some of them return the definitions of your objects.

You can get the definition of a view:

(postgres@[local]:5439) [ddl] > select pg_get_viewdef('v1'::regclass, true);
 pg_get_viewdef 
----------------
  SELECT t1.a  +
    FROM t1;
(1 row)
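
If you want the definitions of all views in a schema at once, the pg_views catalog view exposes them as well; a quick sketch:

(postgres@[local]:5439) [ddl] > select viewname, definition
                                  from pg_views
                                 where schemaname = 'public';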

You can get the definition of a constraint:

(postgres@[local]:5439) [ddl] > SELECT conname
                                     , pg_get_constraintdef(r.oid, true) as definition
                                  FROM pg_constraint r
                                 WHERE r.conrelid = 't1'::regclass;
 conname |     definition      
---------+---------------------
 con1    | CHECK (a < 2000000)
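
The same function works across tables, too. A small variation (just a sketch, assuming all your tables live in the public schema) lists every table constraint in that schema:

(postgres@[local]:5439) [ddl] > SELECT conrelid::regclass as table_name
                                     , conname
                                     , pg_get_constraintdef(oid, true) as definition
                                  FROM pg_constraint
                                 WHERE connamespace = 'public'::regnamespace
                                   AND conrelid <> 0;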

You can get the definition of a function:

(postgres@[local]:5439) [ddl] > SELECT proname
     , pg_get_functiondef(a.oid)
  FROM pg_proc a
 WHERE a.proname = 'add';
 proname |                   pg_get_functiondef                    
---------+---------------------------------------------------------
 add     | CREATE OR REPLACE FUNCTION public.add(integer, integer)+
         |  RETURNS integer                                       +
         |  LANGUAGE sql                                          +
         |  IMMUTABLE STRICT                                      +
         | AS $function$select $1 + $2;$function$                 +
         | 
--OR
(postgres@[local]:5439) [ddl] > SELECT pg_get_functiondef(to_regproc('add'));
                   pg_get_functiondef                    
---------------------------------------------------------
 CREATE OR REPLACE FUNCTION public.add(integer, integer)+
  RETURNS integer                                       +
  LANGUAGE sql                                          +
  IMMUTABLE STRICT                                      +
 AS $function$select $1 + $2;$function$                 +
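
If you just want to look at the function source interactively, psql also provides the \sf meta-command, which prints the same definition without querying pg_proc yourself (\ef opens it in an editor instead):

(postgres@[local]:5439) [ddl] > \sf add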

You can get the definition of an index:

(postgres@[local]:5439) [ddl] > select pg_get_indexdef('i1'::regclass);
            pg_get_indexdef            
---------------------------------------
 CREATE INDEX i1 ON t1 USING btree (a)
(1 row)
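
And if you want all index definitions of a table in one go, the pg_indexes catalog view already exposes them as text:

(postgres@[local]:5439) [ddl] > select indexname, indexdef
                                  from pg_indexes
                                 where tablename = 't1';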

But, surprisingly, you cannot get the DDL for a table this way. There is just no function available to do this. How can you get it without concatenating the definitions yourself out of the PostgreSQL catalog? The only option I am aware of is pg_dump:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -s -t t1 ddl | egrep -v "^--|^$"
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
CREATE TABLE t1 (
    a integer,
    b integer,
    CONSTRAINT con1 CHECK ((a < 2000000))
);
ALTER TABLE t1 OWNER TO postgres;
CREATE INDEX i1 ON t1 USING btree (a);
CREATE UNIQUE INDEX i2 ON t1 USING btree (b);

Using the “-s” (schema only) and “-t” (table) options you get the DDL for the complete table. It is not as handy as in Oracle, where you can do this directly in sqlplus, but it works and produces a result you can work with.
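
The same approach scales up: with “-n” instead of “-t” you get the DDL for a whole schema, and leaving both out dumps the definitions of the whole database. A quick sketch (the output file names are just examples):

postgres@pgbox:/home/postgres/ [PG961] pg_dump -s -n public ddl > ddl_public_schema.sql
postgres@pgbox:/home/postgres/ [PG961] pg_dump -s ddl > ddl_database.sql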

Of course you can always build the DDL on your own by querying the catalog, e.g. pg_attribute, which holds the column definitions for all tables:

    Table "pg_catalog.pg_attribute"
    Column     |   Type    | Modifiers 
---------------+-----------+-----------
 attrelid      | oid       | not null
 attname       | name      | not null
 atttypid      | oid       | not null
 attstattarget | integer   | not null
 attlen        | smallint  | not null
 attnum        | smallint  | not null
 attndims      | integer   | not null
 attcacheoff   | integer   | not null
 atttypmod     | integer   | not null
 attbyval      | boolean   | not null
 attstorage    | "char"    | not null
 attalign      | "char"    | not null
 attnotnull    | boolean   | not null
 atthasdef     | boolean   | not null
 attisdropped  | boolean   | not null
 attislocal    | boolean   | not null
 attinhcount   | integer   | not null
 attcollation  | oid       | not null
 attacl        | aclitem[] | 
 attoptions    | text[]    | 
 attfdwoptions | text[]    | 
Indexes:
    "pg_attribute_relid_attnam_index" UNIQUE, btree (attrelid, attname)
    "pg_attribute_relid_attnum_index" UNIQUE, btree (attrelid, attnum)

One nasty way, which is even documented on the PostgreSQL wiki, is this:

(postgres@[local]:5439) [ddl] > create extension plperlu;
CREATE EXTENSION
Time: 90.074 ms
(postgres@[local]:5439) [ddl] > \dx
                      List of installed extensions
  Name   | Version |   Schema   |              Description               
---------+---------+------------+----------------------------------------
 plperlu | 1.0     | pg_catalog | PL/PerlU untrusted procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [ddl] > CREATE OR REPLACE FUNCTION system(text) RETURNS text 
AS 'my $cmd=shift; return `cd /tmp;$cmd`;' LANGUAGE plperlu;
CREATE FUNCTION

(postgres@[local]:5439) [ddl] > select system('pg_dump -s -t t1 ddl | egrep -v "^--|^$"');
                    system                     
-----------------------------------------------
 SET statement_timeout = 0;                   +
 SET lock_timeout = 0;                        +
 SET idle_in_transaction_session_timeout = 0; +
 SET client_encoding = 'UTF8';                +
 SET standard_conforming_strings = on;        +
 SET check_function_bodies = false;           +
 SET client_min_messages = warning;           +
 SET row_security = off;                      +
 SET search_path = public, pg_catalog;        +
 SET default_tablespace = '';                 +
 SET default_with_oids = false;               +
 CREATE TABLE t1 (                            +
     a integer,                               +
     b integer,                               +
     CONSTRAINT con1 CHECK ((a < 2000000))    +
 );                                           +
 ALTER TABLE t1 OWNER TO postgres;            +
 CREATE INDEX i1 ON t1 USING btree (a);       +
 CREATE UNIQUE INDEX i2 ON t1 USING btree (b);+
 

It can serve as a workaround. Hope this helps…

 

Cet article Can I do it with PostgreSQL? – 5 – Generating DDL commands est apparu en premier sur Blog dbi services.
