Yann Neuhaus

dbi services technical blog

SQL Server 2016 : Basic availability groups may fit to your HA needs

Fri, 2016-09-16 13:22

In this blog post let's talk about SQL Server and availability groups in Standard Edition. I had some customer questions about the existing limitations, and this is a good opportunity to review them here. First of all, Microsoft has already documented the restrictions here, and from my side I had already written a blog post on the subject at the time of the CTP version of SQL Server 2016. In the meantime, some changes occurred and are now included in the RTM version.

As a reminder, the Standard Edition offers limited availability group capabilities and is intended as a replacement for the older database mirroring feature. This is good news for customers who may finally upgrade their old environments.

The first main limitation is the number of replicas, capped at only 2, with no possibility to use the secondary replica for reporting or backup offloading. But we want to use Standard Edition, and it is supposed to be sold at a lower cost, right?

Let's have a quick check by using a T-SQL script. Generally speaking, I'm not a big fan of the GUI! If I try to create an availability group with 3 replicas, I obviously face the following error message:

Msg 35233, Level 16, State 0, Line 2
Cannot create an availability group containing 3 availability replica(s). 
The maximum number of availability replicas in an availability group  with (basic) is 2. 
Reenter your CREATE AVAILABILITY GROUP command specifying fewer availability replicas.

Let's try to use the secondary replica as a read-only replica (ALLOW_CONNECTIONS = ALL):

-- primary replica
CREATE AVAILABILITY GROUP [testGrpStd]   
FOR  
REPLICA ON
N'WIN20121SQL16\SQL16STD'
WITH (ENDPOINT_URL = N'TCP://WIN20121SQL16.dbi-services.test:5023',  
    FAILOVER_MODE = AUTOMATIC,  
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = MANUAL)
,
N'WIN20124SQL16\SQL16STD'
WITH (ENDPOINT_URL = N'TCP://WIN20124SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = AUTOMATIC,   
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL),   
    SEEDING_MODE = MANUAL);   
GO

Well, here is another error message I got:

Msg 41199, Level 16, State 8, Line 2
The specified command is invalid because the Always On Availability Groups allow_connections feature
is not supported by this edition of SQL Server. For information about features supported by the editions
of SQL Server, see SQL Server Books Online.

However, I noticed a big change compared to my first test with the CTP version: it is now possible to add a listener to the availability group. From my point of view, this is a very good improvement compared to the mirroring feature, because no changes, such as additional options at the connection string level, will be required at the application layer. Let me prove it with the following script:

USE [master]
GO
ALTER AVAILABILITY GROUP [testGrpStd]
ADD LISTENER N'Test-GrpSTD' (
WITH IP
((N'192.168.5.122', N'255.255.255.0')
)
, PORT=1433);
GO

We can check the listener creation by using DMVs to get the listener configuration as shown below:

[Image: listener configuration retrieved from the DMVs]
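A query along the following lines can be used to retrieve the same information from the DMVs (a sketch; the column selection is illustrative):

-- List listener name, port and IP configuration (illustrative column choice)
SELECT agl.dns_name,
       agl.port,
       aglia.ip_address,
       aglia.ip_subnet_mask,
       aglia.state_desc
FROM sys.availability_group_listeners AS agl
JOIN sys.availability_group_listener_ip_addresses AS aglia
    ON agl.listener_id = aglia.listener_id;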

According to my discussions with some customers at our last SQL Server 2016 event, availability groups in Standard Edition may also suggest that more complex infrastructures like multi-subnet availability groups, or new features like direct seeding, will not be supported. A first look at the Microsoft documentation does not provide any clues. You can argue that the Microsoft documentation is black-list oriented and that if something is not explicitly excluded it should work, and you are probably right. But you know, I'm curious and I was interested in checking it myself on my lab environment!

For both suppositions, I can confirm that I did not face any issue when creating an availability group on top of a multi-subnet WSFC or when using direct seeding. Here is the script used to create my availability group with a listener and two IP addresses on different subnets. I also enabled the direct seeding feature by setting the SEEDING_MODE parameter to AUTOMATIC, as shown below.

-- Availability groups with SEEDING_MODE = AUTOMATIC
CREATE AVAILABILITY GROUP [testGrpStd]   
FOR  
REPLICA ON
N'WIN20121SQL16\SQL16STD'
WITH (ENDPOINT_URL = N'TCP://WIN20121SQL16.dbi-services.test:5023',  
    FAILOVER_MODE = AUTOMATIC,  
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC)
,
N'WIN20124SQL16\SQL16STD'
WITH (ENDPOINT_URL = N'TCP://WIN20124SQL16.dbi-services.test:5022',   
    FAILOVER_MODE = AUTOMATIC,   
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   
    BACKUP_PRIORITY = 50,   
    SECONDARY_ROLE(ALLOW_CONNECTIONS = NO),   
    SEEDING_MODE = AUTOMATIC);   
GO

-- Listener with two IP addresses on a different subnet 
ALTER AVAILABILITY GROUP [testGrpStd]
ADD LISTENER N'Test-GrpSTD' (
WITH IP
((N'192.168.5.122', N'255.255.255.0'),
 (N'192.168.50.122', N'255.255.255.0')
)
, PORT=1433);
GO

Now let's have a look at the availability group configuration and the direct seeding parameter value:

[Image: availability group configuration showing SEEDING_MODE = AUTOMATIC]

Here is the listener configuration retrieved from the sys.availability_group_listeners DMV:

[Image: listener configuration with both IP addresses from sys.availability_group_listeners]

Let's take this opportunity to change the replication type to ASYNCHRONOUS. I still had in mind the mirroring feature, where asynchronous replication was only available with Enterprise Edition. This is another interesting point: we may now benefit from both asynchronous and synchronous replication modes regardless of the edition. By the way, this is clearly documented in the BOL.
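As a sketch, the change can be done with ALTER AVAILABILITY GROUP; asynchronous commit requires manual failover, so that option has to be adjusted first (replica name taken from the scripts above, and the same change would normally be applied to the other replica as well):

-- Switch the failover mode first, because asynchronous commit requires manual failover
ALTER AVAILABILITY GROUP [testGrpStd]
MODIFY REPLICA ON N'WIN20124SQL16\SQL16STD'
WITH (FAILOVER_MODE = MANUAL);
GO
-- Then switch the replica to asynchronous commit
ALTER AVAILABILITY GROUP [testGrpStd]
MODIFY REPLICA ON N'WIN20124SQL16\SQL16STD'
WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);
GO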

[Image: replica configuration after switching to asynchronous replication]

Finally, let's check if direct seeding works as expected. If you don't know what direct seeding is, please refer to my previous blog. Just as a reminder, we must first configure direct seeding on the secondary by granting the correct permission to the availability group itself.

ALTER AVAILABILITY GROUP [testGrpStd] GRANT CREATE ANY DATABASE 
GO

Here we go. Let's finally add the database to my existing availability group and see if direct seeding comes into play:

ALTER AVAILABILITY GROUP testGrpStd ADD DATABASE test_db;  
GO

To get data about direct seeding session states we may use DMVs like sys.dm_hadr_automatic_seeding and sys.dm_hadr_physical_seeding_stats.
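A minimal sketch of how these DMVs can be queried on the primary (sys.dm_hadr_physical_seeding_stats shows data for seeding operations that are currently running):

-- History of automatic seeding attempts for the availability group
SELECT * FROM sys.dm_hadr_automatic_seeding;
-- Statistics of seeding operations that are currently running
SELECT * FROM sys.dm_hadr_physical_seeding_stats;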

[Image: direct seeding session state retrieved from the seeding DMVs]

Sounds good right?

Bottom line

The basic availability groups feature ships with limited capabilities. Yes, but it will still cover a large number of customer scenarios where we have to find a balance between high availability options and costs. Referring to discussions I had with some customers, many of them do not need the special features provided by availability groups like readable secondaries or backup offloading. They just want to benefit from high availability for a few critical applications. I'm looking forward to seeing basic availability groups at customer sites!

The dbi Open Infrastructure Technology Division

Fri, 2016-09-16 07:37

As you might already know, dbi is (on a technical level) organized into several technology divisions. Traditionally there is one for Oracle, one for Microsoft and one for middleware products. Almost a year ago a new technology division was born to reflect the new requirements we see on the market. This does not mean that we did not provide expertise in these areas before, but the goal was to organize ourselves more clearly so we can rapidly adopt new technologies and provide the best expertise for our current and future customers. In this post I'll take a look at our Open Infrastructure Technology Division: What is it about, what do we believe in, and what can you expect now and in the future?

To start with: What do we call a technology division? As the name already suggests, it is all about technology. You could also say: the technology divisions are the heart of dbi. This is where people work on the different technologies, organize technical events, do research, define and plan new workshops, adopt new products and have lots of fun. Each of these technology divisions is led by a so-called technology leader who is responsible for his division. Currently dbi has four of these divisions, which reflect the areas we work in:

[Image: overview of the dbi technology divisions]

On the top there is Hervé who is our CTO and then we have the four technology divisions:

  • We have Franck who is responsible for all aspects around the Oracle database
  • We have Stéphane who is responsible for the Microsoft stuff
  • We have Gérard who is responsible for everything around the middleware products we support
  • And finally we have the Open Infrastructure Technology Division which I am happy to represent

A very important point to notice here is that each of these divisions is cross-connected to every other. We are not working in isolated silos but work together wherever we can. Nowadays you can never be sure what happens, and things have recently happened that you never thought would be possible in the past, e.g.

… just to name a few. Do you know if you will need a SQL Server expert on Linux tomorrow? Are you running Oracle on Windows? Are you planning to run Docker on Windows? These are just a few examples where you might need expertise which goes beyond the traditional "I only do Windows" or "I only do Oracle" way of working. This is what the technology divisions are there for: build expertise inside the divisions but, even more important, build a kind of "backbone" where everyone can get help from others (inside and outside of dbi). Share what you learned, be open, be communicative, be mindful and keep an open eye on what is going on around you. No matter which technology division you take: all of them share their knowledge by writing blogs, attending, organizing and speaking at events (e.g. DOAG 2016, Oracle Open World 2016, IT-Tage 2016, SWISS PGDay, PASS SQLSaturdays and many more), writing articles, creating hands-on workshops and even coming together twice a year to exchange knowledge across the technology divisions.

So much for the overview (which is far longer than it was intended to be, sorry). Let's focus on the Open Infrastructure Division from now on. Probably the most important point in any division is what it stands for, so what do we stand for? As you can see from the name, the word "Open" plays a central role. We do not only believe in open source software but, even more important, believe that being and thinking open is key to success. There are plenty of wonderful open source projects out there, each of them having an active community and each of them supporting business critical applications. Our idea is not to favor one over another but rather to select the right technology for the right requirement. This might range from one specific product to a whole set of products. It could even result in extending (or interfacing) a proprietary solution with open source solutions when it makes sense (remember the cross-connected divisions described above).

On the other hand we cannot support the whole open source world, because this is just not possible, so we have to select a technology set we want to focus on (more on that later). This set is and will be changing over time, and we are absolutely aware of that. New technologies will come in, others will go out and some will stay. That's the way of life. Important for us is the "Open Infrastructure", because when things change you need to adapt. When you keep your infrastructure open it is much easier to adapt than when you have to adapt your closed systems. When you focus on real open standards, open formats and open software you are free to decide in which direction you want to go, and even if it was the wrong decision you are not on a …

[Image: traffic sign]

… you are free to adjust the direction :) This is more or less what we believe in and work on every day: making infrastructures more open, easier to maintain, easier to adapt, faster to profit from and more fun to work with. Yes, work can be fun when you have the right tools and the right people to work with.

Going deeper into our division: what do we actually provide? We had various internal discussions about that and it was not easy to come to a final conclusion. Indeed there is no final set or conclusion, but we do provide what we call a "core" set. These are the core technologies we currently work on, provide best practices for, provide trainings for (some), provide consulting for (all), provide SLAs for (some), implement (all) and do reviews or feasibility studies for (all). So, finally, here they are (bottom up):

On the Linux level:

Red Hat, CentOS, SUSE, Debian, Ubuntu

On the rdbms level:

MariaDB, PostgreSQL, MySQL

On the NoSQL level:

MongoDB, Cassandra

On the provisioning and deployment level:

Ansible

On the monitoring level:

Nagios

On the DevOps and OpenStack level:

Atomic Host, Docker, Mirantis

These are quite a few, but not too many, technologies. As already stated above, we do not provide everything for every product, but we are constantly extending our knowledge and best practices, so products which are not fully supported by us today might be in the future. Other products we are working on are not even listed above (e.g. the E(lastic search) L(ogstash) K(ibana) stack). Believe me: it is really hard to select the right tools for the right job. This is what we are here for.

Result cache side effects on number of calls

Tue, 2016-09-13 04:38

During the execution of a SQL statement, you cannot guess how many times an operation, a predicate, or a function will be executed. This depends on the execution plan, on some caching at execution, and some other execution time decisions. Here is an example where result cache may bring some overhead by calling a function multiple times.

Here is my function:
SQL> create or replace function F return number is
2 begin
3 dbms_lock.sleep(5);
4 dbms_output.put_line('Hello World');
5 return 255;
6 end;
7 /
Function created.

The function displays ‘Hello World’ so that I can check how many times it is executed (I’ve set serveroutput on).

Obviously, on a one row table, it is called only once:
SQL> select f from dual;
 
F
----------
255
 
Hello World

Query result cache miss

I’ll run now the same query but with the result cache hint. The first execution will have to execute the query because the cache is empty at that point:

SQL> exec dbms_result_cache.flush;
PL/SQL procedure successfully completed.
 
SQL> select /*+ result_cache */ f from dual;
 
F
----------
255
 
Hello World
Hello World

Here is what I wanted to show: ‘Hello World’ is displayed two times instead of one. If your function is an expensive one, then the first execution, or every cache miss, will have a performance overhead.

Query result cache hit

Now that the result is in the cache:

SQL> select id, type, status, name from v$result_cache_objects;
 
ID TYPE STATUS NAME
---------- ---------- --------- ------------------------------------------------------------
33 Dependency Published DEMO.F
34 Result Published select /*+ result_cache */ f from dual

and as the table has not changed (it's DUAL here ;) ), further executions do not call the function anymore, which is the expected result.

SQL> select /*+ result_cache */ f from dual ;
 
F
----------
255

Bug or not?

Bug 21484570 has been opened for that and closed as 'Not a bug'. There is no guarantee that the function is evaluated once, twice, more often, or never.
Ok, why not. That's an implementation decision. Just keep it in mind: if you want to work around an expensive function called for each row, then the query result cache may not be the right solution (except if all tables are static and you always have cache hits).

Note that if the function is declared as deterministic, it is executed only once.
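As a sketch, this is the same function as above with the DETERMINISTIC clause added (keep in mind that a function which sleeps and writes output is not really deterministic, this is only for the demo):

-- Same demo function, now declared deterministic
create or replace function F return number DETERMINISTIC is
begin
  dbms_lock.sleep(5);
  dbms_output.put_line('Hello World');
  return 255;
end;
/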

You can work around the issue by using the result cache at function level (in place of, or in addition to, the query result cache if you need it).

SQL> create or replace function F return number RESULT_CACHE is
2 begin
3 dbms_lock.sleep(5);
4 dbms_output.put_line('Hello World');
5 return 255;
6 end;
7 /
Function created.
 
SQL> select /*+ result_cache */ f from dual;
 
F
----------
255
 
Hello World
 
SQL> select id, type, status, name from v$result_cache_objects;
 
ID TYPE STATUS NAME
---------- ---------- --------- ------------------------------------------------------------
64 Dependency Published DEMO.F
66 Result Published "DEMO"."F"::8."F"#e17d780a3c3eae3d #1
65 Result Published select /*+ result_cache */ f from dual

So, not a big problem, just something to know. And anyway, the right design is NOT to call a function for each row, because that does not scale. Pipelined functions should be used for that.
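As a rough sketch of that design (illustrative names, not related to the demo above), a pipelined table function streams its rows to the calling query instead of being invoked once per row:

-- Illustrative pipelined table function
create or replace type t_num_tab as table of number;
/
create or replace function f_rows return t_num_tab pipelined is
begin
  for i in 1 .. 3 loop
    pipe row (i);   -- each row is streamed back to the caller
  end loop;
  return;
end;
/
-- The function is queried like a table
select * from table(f_rows());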

 


Sharding with PostgreSQL

Mon, 2016-09-12 14:02

In this article we are going to talk about sharding in PostgreSQL. What is sharding? Sharding is like partitioning. The difference is that with traditional partitioning the partitions are stored in the same database, while with sharding the shards (partitions) are stored on different servers.
Below is an example of the sharding configuration we will use for our demonstration:

[Image: sharding configuration with one master (pgshard0) and two workers (pgshard1 and pgshard2)]
PostgreSQL does not provide a built-in tool for sharding. We will use citus, which extends PostgreSQL's capabilities to do sharding and replication.
We will use 3 servers:
pgshard0: 192.168.1.50
pgshard1: 192.168.1.51
pgshard2: 192.168.1.52
First let's install PostgreSQL + the citus extension on all servers (pgshard0, pgshard1, pgshard2). We suppose of course that the network is configured so that all servers can communicate.
[root@pgshard0 ~]# yum install -y https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-7-x86_64/pgdg-oraclelinux95-9.5-2.noarch.rpm
[root@pgshard0 ~]# yum install -y citus_95.x86_64
Now let's initialize a new database system on all servers (pgshard0, pgshard1, pgshard2):
[root@pgshard0 ~]# /usr/pgsql-9.5/bin/postgresql95-setup initdb
Initializing database ... OK

To load the citus extension we have to edit the /var/lib/pgsql/9.5/data/postgresql.conf file and add the following line:
shared_preload_libraries = 'citus'
[root@pgshard0 data]# grep shared_preload_libraries /var/lib/pgsql/9.5/data/postgresql.conf
#shared_preload_libraries = '' # (change requires restart)
shared_preload_libraries = 'citus'
[root@pgshard0 data]#

To allow PostgreSQL connections between the servers we have to adapt two configuration files: /var/lib/pgsql/9.5/data/postgresql.conf and /var/lib/pgsql/9.5/data/pg_hba.conf
[root@pgshard0 data]# grep listen postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
[root@pgshard0 data]#

[root@pgshard0 data]# grep trust pg_hba.conf
# METHOD can be "trust", "reject", "md5", "password", "gss", "sspi",
host all all 127.0.0.1/32 trust
host all all 192.168.1.0/24 trust
host all all ::1/128 trust
[root@pgshard0 data]#

Let's now start the database on all servers (pgshard0, pgshard1, pgshard2):

[root@pgshard0 data]# service postgresql-9.5 start
Redirecting to /bin/systemctl start postgresql-9.5.service
[root@pgshard0 data]#

We need to inform the master (pgshard0) about its workers (pgshard1 and pgshard2). For this we have to create a configuration file pg_worker_list.conf like this:

[root@pgshard0 data]# pwd
/var/lib/pgsql/9.5/data
[root@pgshard0 data]# cat pg_worker_list.conf
pgshard1 5432
pgshard2 5432
[root@pgshard0 data]#

Let’s now reload the master pgshard0  to take into account the modification

[root@pgshard0 ~]# service postgresql-9.5 reload
Redirecting to /bin/systemctl reload postgresql-9.5.service
[root@pgshard0 ~]#

Very important: we must create the citus extension on all servers:

postgres=# create extension citus;
CREATE EXTENSION
postgres=# \dx
List of installed extensions
Name | Version | Schema | Description
---------+---------+------------+------------------------------
citus | 5.2-1 | pg_catalog | Citus distributed database
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(2 rows)

The last step before sharding is to verify that the master is ready. The following SELECT command, run in the psql shell, should output the worker nodes mentioned in the pg_worker_list.conf file:

postgres=# SELECT * FROM master_get_active_worker_nodes();
node_name | node_port
-----------+-----------
pgshard2 | 5432
pgshard1 | 5432
(2 rows)
postgres=#

Everything is fine so far, so we can create the table to be sharded on the master. Let's call it sales:

postgres=# CREATE TABLE sales
postgres-# (deptno int not null,
postgres(# deptname varchar(20),
postgres(# total_amount int,
postgres(# CONSTRAINT pk_sales PRIMARY KEY (deptno)) ;
CREATE TABLE

We now have to inform Citus that the data of the table sales will be distributed among pgshard1 and pgshard2. In our case we choose a hash distribution.

postgres=# SELECT master_create_distributed_table('sales', 'deptno', 'hash');
master_create_distributed_table
---------------------------------
(1 row)

In our example we are going to create one shard on each worker. We will specify:
  • the table name: sales
  • the total shard count: 2
  • the replication factor: 1 (no replication)
postgres=# SELECT master_create_worker_shards('sales', 2, 1);
master_create_worker_shards
-----------------------------
(1 row)
postgres=#

And that's it: the sharding is done. Connecting to pgshard1 and pgshard2, we can verify that the shards were created with the same structure as the base table.

[postgres@pgshard1 bin]$ psql -c "\dt"
List of relations
Schema | Name | Type | Owner
--------+--------------+-------+----------
public | sales_102026 | table | postgres
(1 row)

[postgres@pgshard2 ~]$ psql -c "\dt"
List of relations
Schema | Name | Type | Owner
--------+--------------+-------+----------
public | sales_102027 | table | postgres
(1 row)
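The shard placement can also be checked from the master by looking at the Citus metadata (a sketch; catalog and column names may differ between Citus versions):

-- Which shards exist for the sales table and where are they placed?
SELECT s.logicalrelid::regclass AS table_name,
       s.shardid,
       p.nodename,
       p.nodeport
FROM pg_dist_shard s
JOIN pg_dist_shard_placement p USING (shardid)
WHERE s.logicalrelid = 'sales'::regclass;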

Now let's insert some rows into the table from the master:
postgres=# insert into sales (deptno,deptname,total_amount) values (1,'french_dept',10000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (2,'german_dept',15000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (3,'china_dept',21000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (4,'gambia_dept',8750);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (5,'japan_dept',12010);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (6,'china_dept',35000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (7,'nigeria_dept',10000);
INSERT 0 1
postgres=# insert into sales (deptno,deptname,total_amount) values (8,'senegal_dept',33000);
INSERT 0 1
postgres=#

We can see that data are distributed between pgshard1 and pgshard2

[postgres@pgshard1 bin]$ ./psql
psql (9.5.4)
Type "help" for help.
postgres=# select * from sales_102026;
deptno | deptname | total_amount
--------+--------------+--------------
1 | french_dept | 10000
3 | india_dept | 21000
4 | gambia_dept | 8750
5 | japan_dept | 12010
7 | nigeria_dept | 10000
8 | senegal_dept | 33000
(6 rows)

[postgres@pgshard2 bin]$ ./psql
psql (9.5.4)
Type "help" for help.
postgres=# select * from sales_102027;
deptno | deptname | total_amount
--------+-------------+--------------
2 | german_dept | 15000
6 | china_dept | 35000
(2 rows)

If we run an explain from the master pgshard0, we can see that the query is routed to a different shard depending on the data:
postgres=# explain verbose select * from sales where deptno=5;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------
Distributed Query
Executor: Router
Task Count: 1
Tasks Shown: All
-> Task
Node: host=pgshard1 port=5432 dbname=postgres
-> Index Scan using pk_sales_102026 on public.sales_102026 sales (cost=0.15..8.17 rows=1 width=66)
Output: deptno, deptname, total_amount
Index Cond: (sales.deptno = 5)
(9 rows)

postgres=# explain verbose select * from sales where deptno=6;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------
Distributed Query
Executor: Router
Task Count: 1
Tasks Shown: All
-> Task
Node: host=pgshard2 port=5432 dbname=postgres
-> Index Scan using pk_sales_102027 on public.sales_102027 sales (cost=0.15..8.17 rows=1 width=66)
Output: deptno, deptname, total_amount
Index Cond: (sales.deptno = 6)
(9 rows)
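Multi-shard queries are handled transparently as well. As a quick sketch, an aggregate like the following can be run on the master and Citus gathers the partial results from both workers:

-- Run on the master: the aggregation is distributed over both shards
SELECT deptname, sum(total_amount) AS total
FROM sales
GROUP BY deptname
ORDER BY total DESC;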

Conclusion:
In this article we have shown that PostgreSQL allows you to do many interesting things. The use of extensions can considerably extend PostgreSQL's capabilities.

What the hell are these template0 and template1 databases in PostgreSQL?

Sun, 2016-09-11 00:45

When people start to work with PostgreSQL, especially when they are used to Oracle, some things might be very confusing. A few of the questions we usually get asked are:

  • Where is the listener and how can I configure it?
  • When you talk about a PostgreSQL cluster where are the other nodes?
  • Why do we have these template databases and what are they used for?
  • …and some others…

In this post we'll look at the third point: Why do we have two template databases (template0 and template1) and additionally a database called "postgres"? That makes three databases by default. In Oracle we only have one, well, two when you use pluggable databases (the root and pdb$seed). Why does PostgreSQL need three by default? Isn't that only overhead? Let's see …

To begin with: Assuming these databases are really not required we can drop them, can’t we?

(postgres@[local]:5432) [postgres] > \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres

Can we drop the “postgres” database?

(postgres@[local]:5432) [postgres] > drop database postgres;
ERROR:  cannot drop the currently open database
Time: 1.052 ms

Ok, this is the first point to remember: you cannot drop a database to which users are currently connected (in this case it is my own connection). So let's try to connect to template1 and then drop the "postgres" database:

postgres@pgbox:/home/postgres/ [PG960] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > drop database postgres;
DROP DATABASE
Time: 489.287 ms
(postgres@[local]:5432) [template1] > 

Uh, our default “postgres” database is gone. Does it matter? Not really from a PostgreSQL perspective but probably all your clients (pgadmin, monitoring scripts, …) will have a problem now:

postgres@pgbox:/home/postgres/ [postgres] psql
psql: FATAL:  database "postgres" does not exist

The "postgres" database is meant as a default database for clients to connect to. When you administer a PostgreSQL instance which runs under the postgres operating system user, the default database used for a connection is the same as the username => postgres. Now that this database does not exist anymore, you can no longer connect if you do not provide a database name in your connection request. But we can still connect to "template1":

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

Second thing to remember: The "postgres" database is meant as a default database for connections. It is not required, and you can drop it, but probably a lot of the tools you use will need to be adjusted because they assume that the "postgres" database is there by default.

Luckily we can easily recover from that:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create database postgres;
CREATE DATABASE
(postgres@[local]:5432) [template1] > \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

What happened? We connected to template1 again and re-created the "postgres" database. Of course everything we added to the "postgres" database before we dropped it is no longer available. This brings us to the next question: When we create a new database, what or who defines the initial contents?

Third thing to remember: When you create a new database by using the syntax “create database [DB_NAME]” you get an exact copy of template1.

Really? What happens when I modify template1? Let's add one table and one extension:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create table my_test_tab ( a int );
CREATE TABLE
(postgres@[local]:5432) [template1] > create extension hstore;
CREATE EXTENSION
Time: 123.722 ms
(postgres@[local]:5432) [template1] > \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

If the above statement is true every new database created with the above syntax should contain the table and the extension, right?

(postgres@[local]:5432) [postgres] > \c db_test
You are now connected to database "db_test" as user "postgres".
(postgres@[local]:5432) [db_test] > \d
            List of relations
 Schema |    Name     | Type  |  Owner   
--------+-------------+-------+----------
 public | my_test_tab | table | postgres
(1 row)

(postgres@[local]:5432) [db_test] > \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Whatever you put into template1 will be available in a new database if you use the syntax "create database [DB_NAME];". This can simplify your deployments a lot if you rely on pre-installed objects, e.g. for monitoring or development.

Ok, I got it, but what is template0 for then? To understand this we first take a look at pg_database, especially at two columns: datallowconn and datistemplate:

(postgres@[local]:5432) [db_test] > select datname,datallowconn,datistemplate from pg_database order by 3;
  datname  | datallowconn | datistemplate 
-----------+--------------+---------------
 postgres  | t            | f
 db_test   | t            | f
 template1 | t            | t
 template0 | f            | t
(4 rows)

When you take a look at "datallowconn", the only database that has this set to false is "template0". Do you remember the beginning of this post when we tried to delete the "postgres" database? You can only delete a database when there are no connections to it. But: you can also only create a database from another database if there are no connections to the source. Really? Why then can I create a new database while I am connected to template1, with template1 being the source for the new database?

(postgres@[local]:5432) [template1] > create database db_test_2;
CREATE DATABASE

Confusing? This does not work anymore if there is another session connected to template1.

Let's try to create another new database, but this time we use db_test as the source. Yes, this is possible if you slightly adjust the syntax. But before we create the database we open another connection to db_test:

(postgres@[local]:5432) [template1] > \q
postgres@pgbox:/home/postgres/ [PG960] psql db_test
psql (9.6rc1 dbi services build)
Type "help" for help.

In another session we try to create new database with db_test as the source:

postgres@pgbox:/home/postgres/ [PG960] psql postgres
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [postgres] > create database db_test_3 template db_test;
ERROR:  source database "db_test" is being accessed by other users
DETAIL:  There is 1 other session using the database.

Fourth point to remember: For creating new databases you can use whatever database you like as the source when you specify the template explicitly.
Fifth point to remember: There must be no connections to a database when you want to drop it or when you want to use it as the source for a new database.

Coming back to the "datistemplate" and "datallowconn" settings: template0 is the only database that has "datallowconn" set to false. Why? Because template0 is meant as the default, unmodifiable database. You should never make any changes there. In a brand new PostgreSQL instance template0 and template1 are exactly the same. But why do I need both of them then? Assume you messed up template1 somehow (installed too many objects, for example). Using template0 you can still recover from that easily:

postgres@pgbox:/home/postgres/ [PG960] psql postgres
psql (9.6rc1 dbi services build)
Type "help" for help.
(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = false where datname = 'template1';
UPDATE 1
(postgres@[local]:5432) [postgres] > drop database template1;
DROP DATABASE
(postgres@[local]:5432) [postgres] > create database template1 template template0;
CREATE DATABASE
(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = true where datname = 'template1';
UPDATE 1

What happened here? I modified template1 so that it is no longer flagged as a template, because you cannot drop a database flagged as a template. Then I dropped and re-created the template1 database by using template0 as the template. Now my template1 is an exact copy of template0 again and all the things I messed up are gone. Another use case for this: Imagine you modified template1 to include the stuff you rely on, but at some point in the future you need a new database without your modifications (e.g. for restoring a dump). For this you can always use template0 as the template, because template0 is always clean. And this is why connections to template0 are not allowed.

Of course you can create your own template database(s) by setting the “datistemplate” to true:

(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = true where datname = 'db_test';
UPDATE 1
(postgres@[local]:5432) [postgres] > drop database db_test;
ERROR:  cannot drop a template database
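If you ever want to get rid of such a custom template again, the same trick as with template1 above applies (a sketch):

-- Remove the template flag first, then the database can be dropped
update pg_database set datistemplate = false where datname = 'db_test';
drop database db_test;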

What about the overhead:

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'template0' ));
 pg_size_pretty 
----------------
 7233 kB
(1 row)

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'template1' ));
 pg_size_pretty 
----------------
 7233 kB
(1 row)

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'postgres' ));
 pg_size_pretty 
----------------
 7343 kB
(1 row)

This should not really be an issue on today's hardware. I hope this sheds some light on these default databases.

Securing your connections to PostgreSQL by using SSL

Fri, 2016-09-09 13:30

Security is a big topic today and in the news almost every day. As the database usually holds sensitive data, this data must be well protected. In most cases this is done by encrypting critical data inside the database and decrypting it only when requested. But this is not all: when a client reads the data, it is decrypted inside the database and then sent back over the network unencrypted. What do you gain with such a setup? The only risk it protects you from is the theft of either your disks or the whole server. Even more important is that all the connections to your database are encrypted, so the traffic from and to your database cannot be read by someone else. In this post we'll look at how you can do this with PostgreSQL.

Obviously, for securing the connections to the database by using SSL we'll need a server certificate. As I am on Linux this can be generated very easily by using openssl to create a self-signed certificate. Be aware that your PostgreSQL binaries need to be compiled with "--with-openssl" for the following to work. You can check this by using pg_config:

postgres@pgbox:/u01/app/postgres/local/dmk/ [PG960] pg_config | grep CONFIGURE
CONFIGURE = '--prefix=/u01/app/postgres/product/96/db_0' '--exec-prefix=/u01/app/postgres/product/96/db_0' '--bindir=/u01/app/postgres/product/96/db_0/bin' '--libdir=/u01/app/postgres/product/96/db_0/lib' '--sysconfdir=/u01/app/postgres/product/96/db_0/etc' '--includedir=/u01/app/postgres/product/96/db_0/include' '--datarootdir=/u01/app/postgres/product/96/db_0/share' '--datadir=/u01/app/postgres/product/96/db_0/share' '--with-pgport=5432' '--with-perl' '--with-python' '--with-tcl' '--with-openssl' '--with-pam' '--with-ldap' '--with-libxml' '--with-libxslt' '--with-segsize=2' '--with-blocksize=8' '--with-wal-segsize=16' '--with-extra-version= dbi services build'

To create a self-signed certificate with openssl, simply execute the following command:

postgres@pgbox:/home/postgres/ [PG960] openssl req -new -text -out server.req

This creates a new certificate request based on the information you provide. The only important point here (for the scope of this post) is that the "Common Name" must match the server name your PostgreSQL instance is running on, e.g.:

Generating a 2048 bit RSA private key
............................................................................................................................................+++
..................................................................................................+++
writing new private key to 'privkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CH
State or Province Name (full name) []:BS
Locality Name (eg, city) [Default City]:Basel
Organization Name (eg, company) [Default Company Ltd]:dbi services
Organizational Unit Name (eg, section) []:dba
Common Name (eg, your name or your server's hostname) []:pgbox
Email Address []:xx@xx@com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

This created two files in the directory where you executed the command (the first one is the certificate request and the second one is the private key):

-rw-r--r--. 1 postgres postgres  3519 Sep  9 13:24 server.req
-rw-r--r--. 1 postgres postgres  1821 Sep  9 13:24 privkey.pem

If you want your PostgreSQL instance to start automatically you should remove the pass phrase from the generated private key:

postgres@pgbox:/home/postgres/ [PG960] openssl rsa -in privkey.pem -out server.key
Enter pass phrase for privkey.pem:
writing RSA key
postgres@pgbox:/home/postgres/ [PG960] rm privkey.pem

The password which is asked for is the one you provided when you generated the certificate request above. The new key is now in “server.key”. Now you can create your certificate:

postgres@pgbox:/home/postgres/ [PG960] openssl req -x509 -in server.req -text -key server.key -out server.crt

If everything went well your brand new certificate should be available:

postgres@pgbox:/home/postgres/ [PG960] ls -l server.crt 
-rw-r--r--. 1 postgres postgres 4473 Sep  9 13:32 server.crt
postgres@pgbox:/home/postgres/ [PG960] cat server.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 12528845138836301488 (0xaddf6645ea37a6b0)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=CH, ST=BS, L=Basel, O=dbi services, OU=dba, CN=pgbox/emailAddress=xx@xx@com
        Validity
            Not Before: Sep  9 11:32:42 2016 GMT
            Not After : Oct  9 11:32:42 2016 GMT
        Subject: C=CH, ST=BS, L=Basel, O=dbi services, OU=dba, CN=pgbox/emailAddress=xx@xx@com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:cb:4f:d1:b7:81:c4:83:22:2f:fb:9f:4b:fa:6a:
                    16:77:fd:62:37:91:f1:09:cc:c4:e1:04:e1:de:f2:
                    3f:77:35:ec:e5:8f:5a:03:1d:7b:53:8e:5a:72:76:
                    42:2a:cb:95:9a:35:4a:98:1d:78:3c:21:85:3d:7c:
                    59:f6:e8:7b:20:d0:73:db:42:ff:38:ca:0c:13:f6:
                    cc:3e:bc:b0:8f:41:29:f1:c7:33:45:79:c7:04:33:
                    51:47:0b:23:f8:d6:58:68:2d:95:83:c9:ad:40:7c:
                    95:9a:0c:ff:92:bd:d6:4f:b2:96:6c:41:45:0d:eb:
                    19:57:b3:9a:fc:1c:82:01:9c:2d:e5:2e:1b:0f:47:
                    ab:84:fa:65:ed:80:e7:19:da:ab:89:09:ed:6a:2c:
                    3a:aa:fe:dc:ba:53:e5:52:3f:1c:db:47:4c:4a:d6:
                    e5:0f:76:12:df:f4:6c:fd:5a:fb:a5:70:b4:7b:06:
                    c3:0c:b1:4d:cf:04:8e:5c:b0:05:cb:f2:ac:78:a6:
                    12:44:55:07:f9:88:55:59:23:11:0f:dd:53:14:6a:
                    e8:c4:bb:6a:94:af:1e:54:e8:7d:4f:10:8a:e5:7e:
                    31:3b:cf:28:28:80:37:62:eb:5e:49:26:9d:10:17:
                    33:bc:a7:3f:2a:06:a4:f0:37:a5:b3:07:6d:ce:6a:
                    b7:17
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Key Identifier: 
                EA:63:B1:7F:07:DF:31:3F:55:28:77:CC:FB:F2:1F:3A:D6:45:3F:55
            X509v3 Authority Key Identifier: 
                keyid:EA:63:B1:7F:07:DF:31:3F:55:28:77:CC:FB:F2:1F:3A:D6:45:3F:55

            X509v3 Basic Constraints: 
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
         18:2b:96:b6:01:d8:3e:7f:bb:35:0c:4b:53:c2:9c:02:22:41:
         25:82:d3:b6:a9:88:6e:0e:5d:5b:d3:ac:00:43:0a:04:f4:12:
         6e:22:fd:3f:77:63:0e:42:28:e3:09:6b:16:67:5f:b7:08:08:
         74:a3:55:1f:49:09:69:96:e8:f6:2e:9c:8a:d6:a0:e2:f7:d8:
         30:62:06:f0:5e:1a:85:fe:ff:2d:39:64:f7:f1:e9:ce:21:02:
         f3:86:5f:3b:f6:12:1d:61:cd:a8:bf:36:e2:98:d4:99:b6:95:
         5e:05:87:8d:ab:2f:30:38:b2:fe:68:ac:50:8d:98:fd:aa:4d:
         79:e2:f5:71:92:d6:e5:1d:59:42:02:49:7a:2e:e0:f3:ba:41:
         4d:f4:15:33:44:36:14:43:3b:7a:41:1b:61:6c:ff:78:fb:13:
         4a:a4:e0:96:6c:45:80:0e:30:e3:63:9d:dc:f1:77:16:22:9c:
         7a:c9:92:96:53:3b:62:87:ca:cb:e8:4a:a4:4f:69:a6:a0:5a:
         a9:eb:be:58:7f:c1:da:d4:d7:41:d4:54:06:fb:5b:8b:ea:46:
         68:f5:e6:1e:2b:6a:0b:65:f9:66:5a:a2:14:ec:eb:05:2f:99:
         46:bc:bb:d8:11:f6:3f:2e:6e:15:48:ac:70:1f:18:2d:e2:78:
         4b:a3:cb:ef
-----BEGIN CERTIFICATE-----
MIIDxTCCAq2gAwIBAgIJAK3fZkXqN6awMA0GCSqGSIb3DQEBCwUAMHkxCzAJBgNV
BAYTAkNIMQswCQYDVQQIDAJCUzEOMAwGA1UEBwwFQmFzZWwxFTATBgNVBAoMDGRi
aSBzZXJ2aWNlczEMMAoGA1UECwwDZGJhMQ4wDAYDVQQDDAVwZ2JveDEYMBYGCSqG
SIb3DQEJARYJeHhAeHhAY29tMB4XDTE2MDkwOTExMzI0MloXDTE2MTAwOTExMzI0
MloweTELMAkGA1UEBhMCQ0gxCzAJBgNVBAgMAkJTMQ4wDAYDVQQHDAVCYXNlbDEV
MBMGA1UECgwMZGJpIHNlcnZpY2VzMQwwCgYDVQQLDANkYmExDjAMBgNVBAMMBXBn
Ym94MRgwFgYJKoZIhvcNAQkBFgl4eEB4eEBjb20wggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDLT9G3gcSDIi/7n0v6ahZ3/WI3kfEJzMThBOHe8j93Nezl
j1oDHXtTjlpydkIqy5WaNUqYHXg8IYU9fFn26Hsg0HPbQv84ygwT9sw+vLCPQSnx
xzNFeccEM1FHCyP41lhoLZWDya1AfJWaDP+SvdZPspZsQUUN6xlXs5r8HIIBnC3l
LhsPR6uE+mXtgOcZ2quJCe1qLDqq/ty6U+VSPxzbR0xK1uUPdhLf9Gz9WvulcLR7
BsMMsU3PBI5csAXL8qx4phJEVQf5iFVZIxEP3VMUaujEu2qUrx5U6H1PEIrlfjE7
zygogDdi615JJp0QFzO8pz8qBqTwN6WzB23OarcXAgMBAAGjUDBOMB0GA1UdDgQW
BBTqY7F/B98xP1Uod8z78h861kU/VTAfBgNVHSMEGDAWgBTqY7F/B98xP1Uod8z7
8h861kU/VTAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQAYK5a2Adg+
f7s1DEtTwpwCIkElgtO2qYhuDl1b06wAQwoE9BJuIv0/d2MOQijjCWsWZ1+3CAh0
o1UfSQlpluj2LpyK1qDi99gwYgbwXhqF/v8tOWT38enOIQLzhl879hIdYc2ovzbi
mNSZtpVeBYeNqy8wOLL+aKxQjZj9qk154vVxktblHVlCAkl6LuDzukFN9BUzRDYU
Qzt6QRthbP94+xNKpOCWbEWADjDjY53c8XcWIpx6yZKWUztih8rL6EqkT2mmoFqp
675Yf8Ha1NdB1FQG+1uL6kZo9eYeK2oLZflmWqIU7OsFL5lGvLvYEfY/Lm4VSKxw
Hxgt4nhLo8vv
-----END CERTIFICATE-----

For PostgreSQL to accept the key when it starts up you’ll need to modify the permissions:

postgres@pgbox:/home/postgres/ [PG960] chmod 600 server.key
postgres@pgbox:/home/postgres/ [PG960] ls -l server.key
-rw-------. 1 postgres postgres 1675 Sep  9 13:30 server.key

Both files (server.key and server.crt) need to be copied to your data directory (you can adjust this by using the ssl_cert_file and ssl_key_file configuration parameters):

postgres@pgbox:/home/postgres/ [PG960] mv server.key server.crt $PGDATA/

Now you can turn on ssl…

(postgres@[local]:5432) [postgres] > alter system set ssl='on';
ALTER SYSTEM
Time: 5.427 ms

… and restart your instance:

postgres@pgbox:/home/postgres/ [PG960] pg_ctl -D $PGDATA restart -m fast
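Once the instance is up again you can quickly verify the setting from any session (this should return "on"):

-- Verify that SSL is enabled for the instance
show ssl;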

How can you test if SSL connections do work? Add the following line to pg_hba.conf for your instance:

hostssl  all             all             127.0.0.1/32            md5

Reload your server and then create a new connection:

postgres@pgbox:/u02/pgdata/PG1/ [PG960] psql -h localhost -p 5432 postgres
psql (9.6rc1 dbi services build)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

This works as expected. For everything which is not just for testing you'll need a real certificate, of course. Just in case you expected to configure another port: PostgreSQL listens for normal and SSL connections on the same port. When the client supports SSL, an SSL connection will be established, otherwise a normal connection. When you want to force the use of SSL connections you can do this by adjusting your pg_hba.conf (deny all connections which are not SSL).
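A minimal sketch of such a rule: a hostnossl entry combined with the reject method turns away every client that does not use SSL (adjust the address range to your environment):

# pg_hba.conf: reject all non-SSL connections from this range
hostnossl  all             all             0.0.0.0/0               reject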

 


EDB Failover Manager 2.1, (two) new features

Fri, 2016-09-09 00:08

In the last post we upgraded EDB EFM from version 2.0 to 2.1. In this post we’ll look at the new features. Actually we’ll look only at two of the new features in this post:

  • Failover Manager now simplifies cluster startup with the auto.allow.hosts property
  • efm promote now includes a -switchover option; the -switchover option instructs Failover Manager to perform a failover, promoting a Standby to Master, and then return the Master node to the cluster as a Standby node

Let's go …

My failover cluster status is still fine:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3C000220       
	Standby     192.168.22.243       0/3C000220       

	Standby database(s) in sync with master. It is safe to promote.

The first bit we're going to change is auto.allow.hosts on the database servers. According to the documentation this should eliminate the need to explicitly allow the hosts to join the cluster, and registration should happen automatically. So, let's change it from "false" to "true" on all nodes:

[root@ppasstandby efm-2.1]$ grep allow.hosts efm.properties
auto.allow.hosts=true

… and then let's add all nodes to the efm.nodes file on the witness:

[root@edbbart efm-2.1]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.244:9998 192.168.22.243:9998 192.168.22.245:9998

When we now shut down the EFM service on all hosts and bring it up again on the witness, what is the result?

[root@edbbart efm-2.1]$ systemctl stop efm-2.1.service  # do this on all hosts

Let's start on the witness again:

[root@edbbart efm-2.1]$ systemctl start efm-2.1.service
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

So far so good, all nodes are in the “Allowed” list. What happens when we start EFM on the current primary node:

[root@ppasstandby efm-2.1]$  systemctl start efm-2.1.service
[root@ppasstandby efm-2.1]$ 

We should see the node as a member now without explicitly allowing it to join:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3D000060       

	No standby databases were found.

Cool, same on the standby node:

[root@edbppas edb-efm]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.244:9998
[root@edbppas edb-efm]$  systemctl start efm-2.1.service

What is the status:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.245       UP     UP        
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3D000060       
	Standby     192.168.22.243       0/3D000060       

	Standby database(s) in sync with master. It is safe to promote.

Perfect. This makes it a bit easier, with fewer things to remember, to bring up a failover cluster.

Coming to the “big” new feature (at least in my opinion): Switching to the standby and making the old master automatically a new standby which follows the new master. According to the docs all we need to do is this:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm promote efm -switchover

Does it really work?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.

Hm, let's check the status:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       
	Master      192.168.22.243       UP     UP        
	Standby     192.168.22.245       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.245

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.243       0/410000D0       
	Standby     192.168.22.245       0/410000D0       

	Standby database(s) in sync with master. It is safe to promote.

It really worked! And backwards:

[root@edbbart ~]$ /usr/edb-efm/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.

[root@edbbart ~]$ /usr/edb-efm/bin/efm cluster-status efm
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

    Agent Type  Address              Agent  DB       Info
    --------------------------------------------------------------
    Witness     192.168.22.244       UP     N/A       
    Standby     192.168.22.243       UP     UP        
    Master      192.168.22.245       UP     UP        

Allowed node host list:
    192.168.22.244 192.168.22.245 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
    192.168.22.243

Promote Status:

    DB Type     Address              XLog Loc         Info
    --------------------------------------------------------------
    Master      192.168.22.245       0/480001A8       
    Standby     192.168.22.243       0/480001A8       

    Standby database(s) in sync with master. It is safe to promote.

Cool, that is really a great new feature.

 


EDB Failover Manager 2.1, upgrading

Thu, 2016-09-08 05:53

Some days ago EnterpriseDB released a new version of its EDB Failover Manager which brings one feature that really sounds great: "Controlled switchover and switchback for easier maintenance and disaster recovery tests". This is exactly what you want when you are used to operating Oracle DataGuard: switching back and forth as you like without caring much about the old master. The old master shall just be converted to a standby which follows the new master automatically. This post is about upgrading EFM from version 2.0 to 2.1.

As I still have the environment available which was used for describing the maintenance scenarios with EDB Failover Manager (Maintenance scenarios with EDB Failover Manager (1) – Standby node, Maintenance scenarios with EDB Failover Manager (2) – Primary node and Maintenance scenarios with EDB Failover Manager (3) – Witness node), I will use the same environment to upgrade to the new release. Let's start …

This is the current status of my failover cluster:

[root@edbbart ~]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.245 192.168.22.243

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3B01C5E0       
	Standby     192.168.22.243       0/3B01C5E0       

	Standby database(s) in sync with master. It is safe to promote.
[root@edbbart ~]$ 

Obviously you have to download the new version to begin the upgrade. Once the rpm is available, simply install it on all the nodes:

[root@edbppas tmp]$ yum localinstall efm21-2.1.0-1.rhel7.x86_64.rpm

EFM 2.1 comes with a utility command that helps in upgrading a cluster. You should invoke it on each node:

[root@edbbart tmp]$ /usr/efm-2.1/bin/efm upgrade-conf efm
Processing efm.properties file.
Setting new property node.timeout to 40 (sec) based on existing timeout 5000 (ms) and max tries 8.

Processing efm.nodes file.

Upgrade of files is finished. Please ensure that the new file permissions match those of the template files before starting EFM.
The db.service.name property should be set before starting a non-witness agent.

This created new configuration files in the new directory under /etc, which was created when the new version was installed:

[root@edbbart tmp]$ ls /etc/efm-2.1
efm.nodes  efm.nodes.in  efm.properties  efm.properties.in

All the values from the old EFM cluster should be there in the new configuration files:

[root@edbbart efm-2.1]$ pwd
/etc/efm-2.1
[root@edbbart efm-2.1]$ cat efm.properties | grep daniel
user.email=daniel.westermann...

Before going further check the new configuration parameters for EFM 2.1, which are:

auto.allow.hosts
auto.resume.period
db.service.name
jvm.options
minimum.standbys
node.timeout
promotable
recovery.check.period
script.notification
script.resumed

I’ll leave everything as it was before for now. Notice that a new service got created:

[root@edbppas efm-2.1]$ systemctl list-unit-files | grep efm
efm-2.0.service                             enabled 
efm-2.1.service                             disabled

Let's try to shut down the old service on all nodes and then start the new one. Step 1 (on all nodes):

[root@edbppas efm-2.1]$ systemctl stop efm-2.0.service
[root@edbppas efm-2.1]$ systemctl disable efm-2.0.service
rm '/etc/systemd/system/multi-user.target.wants/efm-2.0.service'

Then enable the new service:

[root@edbppas efm-2.1]$ systemctl enable efm-2.1.service
ln -s '/usr/lib/systemd/system/efm-2.1.service' '/etc/systemd/system/multi-user.target.wants/efm-2.1.service'
[root@edbppas efm-2.1]$ systemctl list-unit-files | grep efm
efm-2.0.service                             disabled
efm-2.1.service                             enabled 

Make sure your efm.nodes file contains all the nodes which make up the cluster, in my case:

[root@edbppas efm-2.1]$ cat efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.22.243:9998 192.168.22.244:9998 192.168.22.245:9998

Let's try to start the new service on the witness node first:

[root@edbbart efm-2.1]$ systemctl start efm-2.1.service
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

Looks good. Are we really running the new version?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm -v
Failover Manager, version 2.1.0

Looks fine as well. Time to add the other nodes:

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm add-node efm 192.168.22.243
add-node signal sent to local agent.
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm add-node efm 192.168.22.245
add-node signal sent to local agent.
[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Witness     192.168.22.244       UP     N/A       

Allowed node host list:
	192.168.22.244 192.168.22.243

Membership coordinator: 192.168.22.244

Standby priority host list:
	(List is empty.)

Promote Status:

Did not find XLog location for any nodes.

Proceed on the master:

[root@ppasstandby efm-2.1]$ systemctl start efm-2.1.service
[root@ppasstandby efm-2.1]$ systemctl status efm-2.1.service
efm-2.1.service - EnterpriseDB Failover Manager 2.1
   Loaded: loaded (/usr/lib/systemd/system/efm-2.1.service; enabled)
   Active: active (running) since Thu 2016-09-08 12:04:11 CEST; 25s ago
  Process: 4020 ExecStart=/bin/bash -c /usr/efm-2.1/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 4075 (java)
   CGroup: /system.slice/efm-2.1.service
           └─4075 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/bin/java -cp /usr/e...

Sep 08 12:04:07 ppasstandby systemd[1]: Starting EnterpriseDB Failover Manager 2.1...
Sep 08 12:04:08 ppasstandby sudo[4087]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-... efm
Sep 08 12:04:08 ppasstandby sudo[4098]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-... efm
Sep 08 12:04:08 ppasstandby sudo[4114]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAND=/usr/... efm
Sep 08 12:04:08 ppasstandby sudo[4125]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAND=/usr/... efm
Sep 08 12:04:10 ppasstandby sudo[4165]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-...9998
Sep 08 12:04:10 ppasstandby sudo[4176]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/efm-...4075
Sep 08 12:04:11 ppasstandby systemd[1]: Started EnterpriseDB Failover Manager 2.1.
Hint: Some lines were ellipsized, use -l to show in full.

And then continue on the standby:

[root@edbppas efm-2.1]$ systemctl start efm-2.1.service
[root@edbppas efm-2.1]$ systemctl status efm-2.1.service
efm-2.1.service - EnterpriseDB Failover Manager 2.1
   Loaded: loaded (/usr/lib/systemd/system/efm-2.1.service; enabled)
   Active: active (running) since Thu 2016-09-08 12:05:28 CEST; 3s ago
  Process: 3820 ExecStart=/bin/bash -c /usr/efm-2.1/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 3875 (java)
   CGroup: /system.slice/efm-2.1.service
           └─3875 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.77-0.b03.el7_2.x86_64/jre/bin/jav...

Sep 08 12:05:24 edbppas systemd[1]: Starting EnterpriseDB Failover Manager 2.1...
Sep 08 12:05:25 edbppas sudo[3887]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...efm
Sep 08 12:05:25 edbppas sudo[3898]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...efm
Sep 08 12:05:25 edbppas sudo[3914]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:25 edbppas sudo[3925]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:25 edbppas sudo[3945]: efm : TTY=unknown ; PWD=/ ; USER=postgres ; COMMAN...efm
Sep 08 12:05:28 edbppas sudo[3981]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...998
Sep 08 12:05:28 edbppas sudo[3994]: efm : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/u...875
Sep 08 12:05:28 edbppas systemd[1]: Started EnterpriseDB Failover Manager 2.1.
Hint: Some lines were ellipsized, use -l to show in full.

What is the cluster status now?

[root@edbbart efm-2.1]$ /usr/edb-efm/bin/efm cluster-status efm 
Cluster Status: efm
VIP: 192.168.22.250
Automatic failover is disabled.

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      192.168.22.245       UP     UP        
	Witness     192.168.22.244       UP     N/A       
	Standby     192.168.22.243       UP     UP        

Allowed node host list:
	192.168.22.244 192.168.22.243 192.168.22.245

Membership coordinator: 192.168.22.244

Standby priority host list:
	192.168.22.243

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      192.168.22.245       0/3B01C7A0       
	Standby     192.168.22.243       0/3B01C7A0       

	Standby database(s) in sync with master. It is safe to promote.

Cool. Back in operation on the new release. Quite easy.

PS: Remember to re-point your symlinks in /etc and /usr if you created symlinks for ease of use.
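
If you followed the same convention as in this post (/usr/edb-efm pointing to the current EFM home), re-pointing could look like this (a sketch; adjust the link names and targets if yours differ):

# point the generic locations to the new 2.1 directories
ln -sfn /usr/efm-2.1 /usr/edb-efm
ln -sfn /etc/efm-2.1 /etc/edb-efm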

 

Cet article EDB Failover Manager 2.1, upgrading est apparu en premier sur Blog dbi services.

Multitenant thin provisioning: PDB snapshots on ACFS

Tue, 2016-09-06 16:07

Database on ACFS is a long story. At first, the main reason for ASM was to bypass a filesystem layer that is not required by the database. ACFS was for the non-database files that had to be accessed by all cluster nodes. But then, storage vendors and other providers came with snapshots, compression and thin provisioning and Oracle had to answer: they implemented those storage features in ACFS and allowed database files on it.

When you create a database on an ODA X5, datafiles are created on an ACFS mount. There is only one ACFS mount for many databases. You probably want to snapshot at database level, but ACFS snapshots are only at filesystem level. To avoid that any write on the filesystem is copied when you need the snapshot for a single database only, they implemented it this way: at installation, the ACFS mount is created and a snapshot is taken while it is still empty. Then each database created will create its own snapshot. This means that in each snapshot you access only one database. There is no overhead and no additional copies because the master is empty.
Then came multitenant, where you can snapshot at PDB level for thin cloning (create PDB from … snapshot copy). But a multitenant CDB cannot be created on a snapshot: the CDB must be at the root level of the ACFS filesystem. On ODA X6, an ACFS filesystem is created for each database. But then, when you thin clone a PDB, a snapshot is taken for the whole database. And this one is not empty: any write will incur an additional copy and overhead.

There’s more info about ACFS, copy-on-write which is not copy-on-write but redirect-on-write, and performance overhead in the excellent presentation and demo from Ludovico Caldara. Here I’ll show the snapshot overhead in multitenant when writing to the master, the clone, and the others.

PDB snapshots on ACFS

I start with a brand new CDB on ACFS with no snapshots:

[oracle@rac1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvdb3 16G 5.0G 9.9G 34% /
tmpfs 7.3G 631M 6.7G 9% /dev/shm
/dev/xvdb1 477M 118M 330M 27% /boot
/dev/xvdc1 69G 22G 44G 34% /u01
/dev/asm/data-52 25G 6.6G 19G 27% /u02
/dev/asm/fra-401 18G 3.4G 15G 19% /u03
/dev/asm/redo-358 19G 8.2G 11G 44% /u04

[oracle@rac1 ~]$ acfsutil snap info /u02
number of snapshots: 0
snapshot space usage: 0 ( 0.00 )

This is what is created by the Oracle Public Cloud for a RAC DBaaS.

I have a PDB1 pluggable database.

I create another one, PDBx, which will be totally independent.

SQL> create pluggable database PDBx admin user admin identified by "Ach1z0#d";
Pluggable database created.
SQL> alter pluggable database PDBx open;
Pluggable Database opened

I create a thin clone pluggable database PDB2, using PDB1 as the master:

SQL> create pluggable database PDB2 from PDB1 snapshot copy;
Pluggable database created.
SQL> alter pluggable database PDB2 open;
Pluggable Database opened

Here are my pluggable databases:

SQL> select pdb_name,GUID from dba_pdbs;
 
PDB_NAME GUID
-------- --------------------------------
PDB$SEED 3360B2A306C60098E053276DD60A9928
PDB1 3BDAA124651F103DE0531ADBC40A5DD3
PDBX 3BDCCBE4C1B64A5AE0531ADBC40ADBB7
PDB2 3BDCCBE4C1B74A5AE0531ADBC40ADBB7

PDB2 being a snapshot clone, it has created a snapshot on /u02, which is the ACFS filesystem where the datafiles are:

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 3588096 ( 3.42 MB )

Space usage is minimal here as no write happened yet.

datafiles

Here are the datafiles of my CDB, to see whether the PDB2 ones are on the snapshot:

RMAN> report schema;
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 780 SYSTEM YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_system_cwxwcgz4_.dbf
2 260 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/ORCL/3360B2A306C60098E053276DD60A9928/datafile/o1_mf_system_cwxwbzrd_.dbf
3 1030 SYSAUX NO /u02/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_cwxw98jl_.dbf
4 760 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/ORCL/3360B2A306C60098E053276DD60A9928/datafile/o1_mf_sysaux_cwxwdof3_.dbf
7 545 UNDOTBS1 YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_cwxwdl6s_.dbf
8 200 UNDOTBS2 YES /u02/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs2_cwxwrw7y_.dbf
9 370 PDB1:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_system_cwxx3fb0_.dbf
10 800 PDB1:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_sysaux_cwxx3fbl_.dbf
11 5 USERS NO /u02/app/oracle/oradata/ORCL/datafile/o1_mf_users_cwxxop2q_.dbf
12 5 PDB1:USERS NO /u02/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_users_cwxxopm9_.dbf
49 370 PDBX:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_system_cwy6688l_.dbf
50 800 PDBX:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_sysaux_cwy6688r_.dbf
51 5 PDBX:USERS NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B64A5AE0531ADBC40ADBB7/datafile/o1_mf_users_cwy6688z_.dbf
52 370 PDB2:SYSTEM NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_system_cwy6725s_.dbf
53 800 PDB2:SYSAUX NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_sysaux_cwy67261_.dbf
54 5 PDB2:USERS NO /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile/o1_mf_users_cwy67268_.dbf

The PDB2 datafiles are actually symbolic links to the snapshot:


[oracle@rac1 ~]$ ls -l /u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile
/u02/app/oracle/oradata/ORCL/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/datafile:
total 62484
lrwxrwxrwx 1 oracle oinstall 142 Sep 6 19:28 o1_mf_sysaux_cwy67261_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_sysaux_cwxx3fbl_.dbf
lrwxrwxrwx 1 oracle oinstall 142 Sep 6 19:28 o1_mf_system_cwy6725s_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_system_cwxx3fb0_.dbf
-rw-r----- 1 oracle oinstall 63971328 Sep 6 19:28 o1_mf_temp_cwy67267_.dbf
lrwxrwxrwx 1 oracle oinstall 141 Sep 6 19:28 o1_mf_users_cwy67268_.dbf -> /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7/app/oracle/oradata/ORCL/3BDAA124651F103DE0531ADBC40A5DD3/datafile/o1_mf_users_cwxxopm9_.dbf

So, you have a snapshot on /u02 which contains all the CDB datafiles, but only the PDB2 datafiles will be read/written on the snapshot (through the symbolic links). The other CDB files are included in the snapshot, but for no reason: nothing will read or write them. They are there only because ACFS cannot snapshot a folder or a set of files. It works only at filesystem level.

write on master

For the moment, the snapshot is small: the blocks are shared.
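
If you want to follow the snapshot growth during the tests below, a simple loop around acfsutil is enough (a quick sketch relying on the output format shown above):

# print the snapshot space usage every 10 seconds
while true; do
  acfsutil snap info /u02 | grep "space usage"
  sleep 10
done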

If I write 100MB on the master (PDB1), those blocks will be copied in order to persist the old image for the snapshot:

SQL> alter session set container=PDB1
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 105025536 ( 100.16 MB )

The snapshot size increased by the volume that has been written, which is expected: the old image is required by PDB2.

write on thin clone

If I write to the clone, a copy has to happen as well:

SQL> alter session set container=PDB2
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 211275776 ( 201.48 MB )

So, because I’ve written 200MB to blocks involved in the snapshots, the snapshot size is now 200MB.

However, look at the way I did it, with truncate and insert: I'm writing to the same blocks as I did when writing on PDB1. To be sure, I checked it from DBA_EXTENTS and had the same result in both PDBs:
SQL> select file_id,listagg(block_id,',')within group(order by block_id),blocks from dba_extents where segment_name='DEMO' and segment_type='TABLE' group by file_id,blocks;
 
FILE_ID LISTAGG(BLOCK_ID,',')WITHINGROUP(ORDERBYBLOCK_ID) BLOCKS
---------- ------------------------------------------------------------------------------------------------------------------------------------------------------ ----------
64 33024,33032,33040,33048,33056,33064,33072,33080,33088,33096,33104,33112,33120,33128,33136,33144 8
64 33152,33280,33408,33536,33664,33792,33920,34048,34176,34304,34432,34560,34688,34816,34944,35072,35200,35328,35456,35584,35712,35840,35968,36096,36224, 128
36352,36480,36608,36736,36864,36992,37120,37248,37376,37504,37632,37760,37888,38016,38144,38272,38400,38528,38656,38784,38912,39040,39168,39296,39424,
39552,39680,39808,39936,40064,40192,40320,40448,40576,40704,40832,40960,41088 1024
64 41216,42240,43264,44288,45312

So why do I have an additional 100MB on my snapshot? Writing to the original blocks should have been sufficient, as they had already been redirected to new ones by the write from PDB1. But because of the ACFS snapshot, the previous image is kept: in addition to the current state of PDB1 and PDB2, the snapshot also keeps the blocks of PDB1 as they were at the time of the PDB2 clone. Who needs that?

Ok. This is not a big issue if we consider that you usually don’t write on the master, because it’s the master.

write on other PDB

Remember that multitenant is for consolidation. You don't use a CDB only for the master and its clones; you may want to host other PDBs. If the snapshot covered only PDB1 and PDB2, writes to other files would not be concerned: no write overhead and no additional storage. However, because the snapshot was made on the whole filesystem, any write to it will copy the blocks, even those that have nothing to do with the thin cloned PDB. Here I'm writing 100MB to PDBx, which has nothing in common with PDB1 nor PDB2:

SQL> alter session set container=PDBx
Session altered.
SQL> truncate table DEMO;
Table truncated.
SQL> insert /*+ append */into DEMO select lpad('b',900,'b') x from xmltable('1 to 100000');
100000 rows created.

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 311214080 ( 296.79 MB )

This is an additional 100MB. The before image of PDBx has been saved for no reason, because we will never read this previous image.

snapshot keeps growing

After a few minutes without any user activity, the snapshot has grown further:

[oracle@rac1 ~]$ acfsutil snap info /u02
snapshot name: 3BDCCBE4C1B74A5AE0531ADBC40ADBB7
snapshot location: /u02/.ACFS/snaps/3BDCCBE4C1B74A5AE0531ADBC40ADBB7
RO snapshot or RW snapshot: RW
parent name: /u02
snapshot creation time: Tue Sep 6 19:28:35 2016
 
number of snapshots: 1
snapshot space usage: 332947456 ( 317.52 MB )

On the /u02 filesystem there are all the CDB files: SYSTEM, UNDO, controlfile, etc. They have activity and their blocks are copied when written, once again for no reason.

drop thin clone

Only when I drop PDB2 is this space released:

SQL> alter pluggable database pdb2 close;
Pluggable database altered.
SQL> drop pluggable database pdb2 including datafiles;
Pluggable database dropped.

[oracle@rac1 ~]$ acfsutil snap info /u02
number of snapshots: 0
snapshot space usage: 0 ( 0.00 )

So what?

There’s no bug here. It works as designed because ACFS snapshots are at filesystem level. If you want to use multitenant thin provisioning, the recommendation is to dedicate a CDB to the master and its clones. Any other PDB will have its writes copied to the snapshot for no reason. Writes to common files (UNDO in 12.1, SYSTEM, SYSAUX) are also concerned. The clones should be short-lived and refreshed frequently. And of course, thin cloning is not for production. Very few snapshot/compression/clone technologies can be used in production. Look at storage vendor solutions for that (XtremIO for example).

 

Cet article Multitenant thin provisioning: PDB snapshots on ACFS est apparu en premier sur Blog dbi services.

Disable Red Hat 7 Transparent Hugepages

Mon, 2016-09-05 09:56
Initial situation

A customer recently asked me what the most efficient way is to configure all kernel tuning parameters for Oracle on Red Hat 7. Based on that question, I investigated which options exist besides the long-known /etc/sysctl.conf. This analysis naturally leads to tuned. This service allows you to define profiles that contain a whole set of kernel tuning parameters. I already covered this in the blog post “Stay tuned with kernel parameters”. Since Red Hat Linux 7, tuned is enabled by default!

Which options are there?

On Red Hat Linux 7, THP (Transparent Hugepages) are enabled by default. If an Oracle database is installed on such a server, it is strongly recommended to disable THP. Oracle reports a considerable performance loss if THP are not disabled.

There are several ways to disable THP (all of them go through kernel boot parameters):

  • Variant 1: kernel boot parameter in /boot/grub2/grub.cfg
  • Variant 2: kernel boot parameter in /etc/default/grub
  • Variant 3: kernel boot parameter in /usr/lib/tuned/oracle/tuned.conf

As always, each of these variants has its pros and cons.

Variant 1: (/boot/grub2/grub.cfg)
menuentry 'Red Hat Enterprise Linux Server (3.10.0-327.28.3.el7.x86_64) 7.2 (Maipo)' --class red --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-327.28.3.el7.x86_64-advanced-5b1a0ee8-c384-47be-99be-6d09a44d9583' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  6906b305-a050-4815-8e4b-79202d7ba9fa
        else
          search --no-floppy --fs-uuid --set=root 6906b305-a050-4815-8e4b-79202d7ba9fa
        fi
        linux16 /vmlinuz-3.10.0-327.28.3.el7.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never
        initrd16 /initramfs-3.10.0-327.28.3.el7.x86_64.img
}

Here the line containing the kernel call was extended with “transparent_hugepage=never”.

Disadvantage: if a new kernel version is installed, this entry has to be added again manually. This is at best a quick option for a test and not a permanent solution.

Variant 2: (/etc/default/grub)
[root@redhat72 ~]# cat /etc/default/grub

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never hugepages=100"
GRUB_DISABLE_RECOVERY="true"
GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT:+$GRUB_CMDLINE_LINUX_DEFAULT }"

Here the line “GRUB_CMDLINE_LINUX=” was extended with “transparent_hugepage=never hugepages=100″. Advantage: for every new kernel that gets installed, the entry is automatically added to /boot/grub2/grub.cfg. Compared to variant 1 this is already closer to the right place, but it can still be done a bit better…

Variant 3: (/usr/lib/tuned/oracle/tuned.conf)

Now, depending on the selected tuned profile, the kernel boot parameters end up in /boot/grub2/grub.cfg. Such a tuned profile, e.g. for Oracle:

[root@redhat72 ~]# cat /usr/lib/tuned/oracle/tuned.conf 
#
# tuned configuration
# maintained by dbi services, GGR 24.08.2016
[main]
include=throughput-performance
[bootloader]
cmdline = "transparent_hugepage=never hugepages=100"
[sysctl]
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
kernel.shmmax = 4398046511104
#
# Half the size of physical memory in bytes
# See My Oracle Support Note 567506.1 for additional information about
# configuring shmmax
#
kernel.shmall = 1073741824
#
# 40 percent of the size of physical memory in pages
# Note: If the server supports multiple databases, or uses a large SGA,
# then set this parameter to a value that is equal to the total amount
# of shared memory, in 4K pages, that the system can use at one time
#
kernel.shmmni = 4096
kernel.msgmni = 2878
kernel.msgmnb = 65535
kernel.msgmax = 65536
# kernel.sem = 250 32000 100 128
# on oracle engineered systems
kernel.sem = 4096 256000 100 4096
fs.file-max = 6815744
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.panic_on_oops = 1

 

The special section in this configuration file is the “bootloader” section. The profile is then activated with the command tuned-adm profile oracle.
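
For example (the second command is only an optional check of the currently active profile):

tuned-adm profile oracle
tuned-adm active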

This has the big advantage that if another tuned profile is selected, other parameters can be set. This method allows disabling THP and defining the HugePages to be provided in one profile and in only one place: oracle/tuned.conf.

Disadvantage: after the change, /boot/grub2/grub.cfg still has to be regenerated with grub2-mkconfig -o /boot/grub2/grub.cfg.

[root@redhat72 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.28.3.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.28.3.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-327.28.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.28.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-29b38e2132e04639809cde7749271d64
Found initrd image: /boot/initramfs-0-rescue-29b38e2132e04639809cde7749271d64.img
done
[root@redhat72 ~]#

 

With this last step, several things are adjusted automatically:

  • the parameters on GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
  • the parameters on the kernel line in /boot/grub2/grub.cfg
  • the adjusted file /etc/tuned/bootcmdline

 

/boot/grub2/grub.cfg

linux16 /vmlinuz-3.10.0-327.28.3.el7.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet $tuned_params

 

/etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT:+$GRUB_CMDLINE_LINUX_DEFAULT }\$tuned_params"

 

/etc/tuned/bootcmdline

TUNED_BOOT_CMDLINE="transparent_hugepage=never hugepages=100"

For all these changes to take effect, the server must be rebooted.

Checking the changes:

Now we want to verify all these changes. The AnonHugePages in particular are the Transparent Hugepages, so that is what we look at:

This is the HugePage configuration before the change:

[root@redhat72 ~]# cat /proc/meminfo | grep Huge
AnonHugePages: 6144 kB
HugePages_Total: 100
HugePages_Free: 100
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

And this is the configuration after the change:

[root@redhat72 ~]# cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 100
HugePages_Free: 100
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
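
Besides /proc/meminfo, the active THP setting can also be checked directly in sysfs (the value in brackets is the one in effect):

# with transparent_hugepage=never on the kernel line, [never] is marked as active
cat /sys/kernel/mm/transparent_hugepage/enabled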

 

Conclusion:

If you work with tuned and set the kernel parameters for e.g. Oracle this way, you can use the very same mechanism to also disable THP and to configure the number of HugePages that should be provided.

 

Cet article Disable Red Hat 7 Transparent Hugepages est apparu en premier sur Blog dbi services.

Adaptive Plans and cost of inactive branches

Sat, 2016-09-03 06:27

Here are the details about an execution plan screenshot I tweeted recently because the numbers looked odd. It’s not a big problem, or maybe not a problem at all, just something surprising. I don’t like it when the numbers don’t match, so I try to reproduce and get an explanation, just to be sure there is not something hidden that I misunderstood.

Here is a similar test case joining two small tables DEMO1 and DEMO2 with specific stale statistics.

Hash Join

I start by forcing a full table scan to get a hash join:

select /*+ full(DEMO2) */ * from DEMO1 natural join DEMO2
Plan hash value: 3212315601
------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 130 (100)| 0 |00:00:00.01 | 3 | | | |
|* 1 | HASH JOIN | | 1 | 200 | 130 (1)| 0 |00:00:00.01 | 3 | 1696K| 1696K| 520K (0)|
| 2 | TABLE ACCESS FULL| DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 | | | |
| 3 | TABLE ACCESS FULL| DEMO2 | 0 | 100K| 127 (1)| 0 |00:00:00.01 | 0 | | | |
------------------------------------------------------------------------------------------------------------------------------

The cost of DEMO1 full table scan is 3. The cost of DEMO2 full table scan is 127. That’s a total of 130 (the cost of the hash join itself is negligible here)

Nested Loop

When forcing an index access, a nested loop will be used:

select /*+ index(DEMO2) */ * from DEMO1 natural join DEMO2
Plan hash value: 995663177
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 203 (100)| 0 |00:00:00.01 | 3 |
| 1 | NESTED LOOPS | | 1 | 200 | 203 (0)| 0 |00:00:00.01 | 3 |
| 2 | NESTED LOOPS | | 1 | 200 | 203 (0)| 0 |00:00:00.01 | 3 |
| 3 | TABLE ACCESS FULL | DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 |
|* 4 | INDEX UNIQUE SCAN | DEMOPK | 0 | 1 | 0 (0)| 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 0 | 1 | 1 (0)| 0 |00:00:00.01 | 0 |
--------------------------------------------------------------------------------------------------------------

The cost of the index access is 1 and, as it is expected to run 200 loops, the total cost is 200. With the full table scan of DEMO1 the total is 203.

Adaptive plan

Here is an explain plan to see the initial plan with active and inactive branches:

SQL> explain plan for
2 select /*+ */ * from DEMO1 natural join DEMO2;
SQL> select * from table(dbms_xplan.display(format=>'adaptive'));
Plan hash value: 3212315601
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 200 | 6400 | 130 (1)| 00:00:01 |
| * 1 | HASH JOIN | | 200 | 6400 | 130 (1)| 00:00:01 |
|- 2 | NESTED LOOPS | | 200 | 6400 | 130 (1)| 00:00:01 |
|- 3 | NESTED LOOPS | | | | | |
|- 4 | STATISTICS COLLECTOR | | | | | |
| 5 | TABLE ACCESS FULL | DEMO1 | 200 | 1000 | 3 (0)| 00:00:01 |
|- * 6 | INDEX UNIQUE SCAN | DEMOPK | | | | |
|- 7 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 1 | 27 | 127 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL | DEMO2 | 100K| 2636K| 127 (1)| 00:00:01 |
------------------------------------------------------------------------------------------

The active branches (full table scan) have the correct cost: 127 + 3 = 130

However, it’s not the case with the inactive ones: there are no estimations for the ‘INDEX UNIQUE SCAN’ and it seems that the ‘TABLE ACCESS BY INDEX ROWID’ gets the cost from the full table scan (here 127).

It’s just an observation here. I have no explanation for it and no idea about the consequences, except the big surprise when you see the numbers. I guess that the cost of the inactive branches is meaningless. What is important is that the right cost has been used to determine the inflection point.

The index access having a cost of 1, the cost of the nested loop will be higher than the full table scan (estimated at 127) when there are more than 127 loops. This is what we see in the 10053 trace:
SQL> host grep ^DP DEMO14_ora_19470_OPTIMIZER.trc
DP: Found point of inflection for NLJ vs. HJ: card = 127.34
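
In case you want to look at this yourself: one common way to get such an optimizer trace is the 10053 event (one of several possibilities, shown here as a reminder rather than as the exact commands used above):

alter session set tracefile_identifier='OPTIMIZER';
alter session set events '10053 trace name context forever, level 1';
-- run the statement here; it must be hard parsed to produce the trace
alter session set events '10053 trace name context off';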

Now, as I have no rows in the tables, the nested loop branch will be activated in place of the hash join. So if we display the plan once it is resolved, we will see the lines with an unexpected cost:

Plan hash value: 995663177
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 130 (100)| 0 |00:00:00.01 | 3 |
| 1 | NESTED LOOPS | | 1 | 200 | 130 (1)| 0 |00:00:00.01 | 3 |
| 2 | NESTED LOOPS | | 1 | | | 0 |00:00:00.01 | 3 |
| 3 | TABLE ACCESS FULL | DEMO1 | 1 | 200 | 3 (0)| 0 |00:00:00.01 | 3 |
|* 4 | INDEX UNIQUE SCAN | DEMOPK | 0 | | | 0 |00:00:00.01 | 0 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEMO2 | 0 | 1 | 127 (1)| 0 |00:00:00.01 | 0 |
--------------------------------------------------------------------------------------------------------------

I think it’s harmless, just a bit misleading: 127 is not the cost of the index access, it’s the cost of the full table scan.
I had this surprise when trying to understand why the optimizer chose a full scan instead of an index access. This is probably the only reason why I look at the cost: I use hints to force the plan that I think is better, in order to understand where the optimizer thinks it is more expensive.

 

Cet article Adaptive Plans and cost of inactive branches est apparu en premier sur Blog dbi services.

Initiate a local GIT repository in command line

Fri, 2016-09-02 10:08

GIT

The objective of this document is to describe how to start a development project manually, with command lines, from an existing GIT repository.

Using the GIT protocol for software development empowers project team management. It is intended to ease source code management in terms of versioning, branching and sharing between all team members.

    GIT platform Architecture

GIT is a distributed version control system, which means developers can share source code from their workstation with others without the need for any centralized repository. However, at dbi services we made the choice to deploy a centralized repository platform, first in order to avoid manual synchronization between all developers, and then to benefit from a shared project collaboration platform, like GitHub or GitLab.

Before being allowed to push to the centralized repository, a developer must first make sure he has the latest source code revision in his local workstation’s repository (pull). Then he can commit his changes locally, resolve merge conflicts if any, and finally push them to the centralized platform.
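
As a sketch, a typical cycle on the developer’s workstation therefore looks like this (branch, remote and file names are only examples):

git pull origin master              # get the latest revision from the central repository
git add src/new_feature.sql         # stage the local changes
git commit -m "describe the change" # commit locally
git push origin master              # share the changes with the team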

GIT Architecture

 

    Manual / Command line management

This section demonstrates how to initiate a developer’s local source code management with a remote GIT repository (for example on a collaboration platform like GitLab), using the command line.

These commands run out of the box on a Linux operating system.
Under Windows, you must install the “git-bash” application.

There are 2 cases for a project initialization:

–    Starting a project from your source code
–    Getting source code from a shared repository

First of all a GIT repository has to be created in the GIT collaboration platform. Do ask GIT platform’s administrators for project creation.

Before starting, it is recommended to update your GIT personal information:

git config --global user.name user
git config --global user.email user@xxx.com

 

Check status of your GIT configuration:

git config --list

       

        Project initialization from local source code

First you must go to your project folder. It is recommended to have the “src” folder underneath.

GIT repository initialization:

git init

 

Create a “master” branch on your local and on the remote GIT repository

For local branch creation, you will need to add and commit something (like a README.txt file):

git add README.txt
git commit -m "adding README.txt"
git branch
* master

 

For remote branch creation, you must first create the local branch, add the remote repository “origin”, then push to the shared repository:

git remote add origin http://<your git server>/<your repo>.git
git push origin master

“origin” is simply a name pointing to the remote repository.
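
To verify which remotes are configured, you can list them at any time:

git remote -v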

 

        Project initialization getting source code from shared repository

Get source code from the repository:

git clone http://<your git server>/<your repo>.git <your destination folder>

 

Congratulations, you are now ready to use GIT with your new project !

 

Cet article Initiate a local GIT repository in command line est apparu en premier sur Blog dbi services.

Auditing in PostgreSQL

Fri, 2016-09-02 09:51

Today, especially in the Pharma and Banking sectors, sooner or later you will be faced with the requirement of auditing. Detailed requirements will vary, but usually at least tracking logons to the database is a must. Some companies need more information to pass their internal audits, such as: who created which objects, who fired which SQL against the database, who was given which permissions, and so on. In this post we’ll look at what PostgreSQL can offer here.

PostgreSQL comes with a comprehensive logging system by default. In my 9.5.4 instance there are 28 parameters related to logging:

(postgres@[local]:5438) [postgres] > select count(*) from pg_settings where name like 'log%';
 count 
-------
    28
(1 row)

Not all of them are relevant when it comes to auditing, but some can be used for a minimal auditing setup. For logons and logoffs there are “log_connections” and “log_disconnections”:

(postgres@[local]:5438) [postgres] > alter system set log_connections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > alter system set log_disconnections=on;
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select context from pg_settings where name in ('log_disconnections','log_connections');
      context      
-------------------
 superuser-backend
 superuser-backend
(2 rows)
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

From now on, as soon as someone connects to or disconnects from the instance it is reported in the logfile:

2016-09-02 10:35:56.983 CEST - 2 - 13021 - [local] - postgres@postgres LOG:  connection authorized: user=postgres database=postgres
2016-09-02 10:36:04.820 CEST - 3 - 13021 - [local] - postgres@postgres LOG:  disconnection: session time: 0:00:07.837 user=postgres database=postgres host=[local]

Another parameter that might be useful for auditing is “log_statement”. When you set this to “ddl” all DDLs are logged; when you set it to “mod” all DDLs plus all statements that modify data will be logged. To log all statements there is the value “all”.

(postgres@[local]:5438) [postgres] > alter system set log_statement='all';
ALTER SYSTEM

For new sessions all statements will be logged from now on:

2016-09-02 10:45:15.859 CEST - 3 - 13086 - [local] - postgres@postgres LOG:  statement: create table t ( a int );
2016-09-02 10:46:44.064 CEST - 4 - 13098 - [local] - postgres@postgres LOG:  statement: insert into t values (1);
2016-09-02 10:47:00.162 CEST - 5 - 13098 - [local] - postgres@postgres LOG:  statement: update t set a = 2;
2016-09-02 10:47:10.606 CEST - 6 - 13098 - [local] - postgres@postgres LOG:  statement: delete from t;
2016-09-02 10:47:22.012 CEST - 7 - 13098 - [local] - postgres@postgres LOG:  statement: truncate table t;
2016-09-02 10:47:25.284 CEST - 8 - 13098 - [local] - postgres@postgres LOG:  statement: drop table t;

Be aware that your logfile can grow significantly if you turn this on, especially if you set the value to “all”.
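
If logfile growth is a concern, the standard log rotation parameters can help to keep it under control (a sketch, assuming the logging collector is enabled; adjust the values to your needs):

alter system set log_rotation_age = '1d';
alter system set log_rotation_size = '100MB';
alter system set log_truncate_on_rotation = on;
select pg_reload_conf();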

That’s it, more or less, when it comes to the built-in logging: you can audit logons, logoffs and SQL statements. This might be sufficient for your requirements, but it also might not be. What do you do if you need to audit on object level, for example? With the default logging parameters you cannot do this. But, as always in PostgreSQL, there is an extension: pgaudit.

If you want to install this extension you’ll need the PostgreSQL source code. To show the complete procedure, here is a PostgreSQL setup from source. Obviously the first step is to download and extract the source code:

postgres@pgbox:/u01/app/postgres/software/ [PG953] cd /u01/app/postgres/software/
postgres@pgbox:/u01/app/postgres/software/ [PG953] wget https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
--2016-09-02 09:39:29--  https://ftp.postgresql.org/pub/source/v9.5.4/postgresql-9.5.4.tar.bz2
Resolving ftp.postgresql.org (ftp.postgresql.org)... 213.189.17.228, 217.196.149.55, 87.238.57.227, ...
Connecting to ftp.postgresql.org (ftp.postgresql.org)|213.189.17.228|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18496299 (18M) [application/x-bzip-compressed-tar]
Saving to: ‘postgresql-9.5.4.tar.bz2’

100%[==================================================================================>] 18'496'299  13.1MB/s   in 1.3s   

2016-09-02 09:39:30 (13.1 MB/s) - ‘postgresql-9.5.4.tar.bz2’ saved [18496299/18496299]

postgres@pgbox:/u01/app/postgres/software/ [PG953] tar -axf postgresql-9.5.4.tar.bz2 
postgres@pgbox:/u01/app/postgres/software/ [PG953] cd postgresql-9.5.4

Then do the usual configure, make and make install:

postgres@pgbox:/u01/app/postgres/software/ [PG953] PGHOME=/u01/app/postgres/product/95/db_4
postgres@pgbox:/u01/app/postgres/software/ [PG953] SEGSIZE=2
postgres@pgbox:/u01/app/postgres/software/ [PG953] BLOCKSIZE=8
postgres@pgbox:/u01/app/postgres/software/ [PG953] ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-tcl \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=16  \
            --with-extra-version=" dbi services build"
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make world
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] cd contrib
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] make install

Once this is done you can continue with the installation of the pgaudit extension:

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/ [PG953] git clone https://github.com/pgaudit/pgaudit.git
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/ [PG953] cd pgaudit/
postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make -s check
============== creating temporary instance            ==============
============== initializing database system           ==============
============== starting postmaster                    ==============
running on port 57736 with PID 8635
============== creating database "contrib_regression" ==============
CREATE DATABASE
ALTER DATABASE
============== running regression test queries        ==============
test pgaudit                  ... ok
============== shutting down postmaster               ==============
============== removing temporary instance            ==============

=====================
 All 1 tests passed. 
=====================

postgres@pgbox:/u01/app/postgres/software/postgresql-9.5.4/contrib/pgaudit/ [PG953] make install
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/lib'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/95/db_4/share/extension'
/usr/bin/install -c -m 755  pgaudit.so '/u01/app/postgres/product/95/db_4/lib/pgaudit.so'
/usr/bin/install -c -m 644 ./pgaudit.control '/u01/app/postgres/product/95/db_4/share/extension/'
/usr/bin/install -c -m 644 ./pgaudit--1.0.sql  '/u01/app/postgres/product/95/db_4/share/extension/'

That’s it. Initialize a new cluster:

postgres@pgbox:/u01/app/postgres/software/ [PG954] initdb -D /u02/pgdata/PG954 -X /u03/pgdata/PG954
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /u02/pgdata/PG954 ... ok
creating directory /u03/pgdata/PG954 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/PG954/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/PG954 -l logfile start

… and install the extension:

postgres@pgbox:/u02/pgdata/PG954/ [PG954] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create extension pgaudit;
ERROR:  pgaudit must be loaded via shared_preload_libraries
Time: 2.226 ms

(postgres@[local]:5438) [postgres] > alter system set shared_preload_libraries='pgaudit';
ALTER SYSTEM
Time: 18.236 ms

##### Restart the PostgreSQL instance

(postgres@[local]:5438) [postgres] > show shared_preload_libraries ;
 shared_preload_libraries 
--------------------------
 pgaudit
(1 row)

Time: 0.278 ms
(postgres@[local]:5438) [postgres] > create extension pgaudit;
CREATE EXTENSION
Time: 4.688 ms

(postgres@[local]:5438) [postgres] > \dx
                   List of installed extensions
  Name   | Version |   Schema   |           Description           
---------+---------+------------+---------------------------------
 pgaudit | 1.0     | public     | provides auditing functionality
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Ready. So, what can you do with it? As the documentation is quite good, here are just a few examples.

To log all statements related to roles and privileges:

(postgres@[local]:5438) [postgres] > alter system set pgaudit.log = 'ROLE';

Altering or creating roles from now on is reported in the logfile as:

2016-09-02 14:50:45.432 CEST - 9 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,2,1,ROLE,CREATE ROLE,,,create user uu login password ,
2016-09-02 14:52:03.745 CEST - 16 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,3,1,ROLE,ALTER ROLE,,,alter user uu CREATEDB;,
2016-09-02 14:52:20.881 CEST - 18 - 13353 - [local] - postgres@postgres LOG:  AUDIT: SESSION,4,1,ROLE,DROP ROLE,,,drop user uu;,
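
pgaudit.log accepts a comma-separated list of classes (READ, WRITE, FUNCTION, ROLE, DDL, MISC, ALL), so a broader session level audit configuration could look like this (just a sketch, see the pgaudit documentation for the exact meaning of each class):

alter system set pgaudit.log = 'ddl, write';
select pg_reload_conf();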

Object level auditing can be implemented like this (check the documentation for the meaning of the pgaudit.role parameter):

(postgres@[local]:5438) [postgres] > create user audit;
CREATE ROLE
(postgres@[local]:5438) [postgres] > create table taudit ( a int );
CREATE TABLE
(postgres@[local]:5438) [postgres] > insert into taudit values ( 1 );
INSERT 0 1
(postgres@[local]:5438) [postgres] > grant select,delete on taudit to audit;
GRANT
(postgres@[local]:5438) [postgres] > alter system set pgaudit.role='audit';
ALTER SYSTEM
(postgres@[local]:5438) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Once we touch the table:

(postgres@[local]:5438) [postgres] > select * from taudit;
 a 
---
 1
(1 row)
(postgres@[local]:5438) [postgres] > update taudit set a = 4;

… the audit information appears in the logfile:

2016-09-02 14:57:10.198 CEST - 5 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,1,1,READ,SELECT,TABLE,public.taudit,select * from taudit;,
2016-09-02 15:00:59.537 CEST - 9 - 13708 - [local] - postgres@postgres LOG:  AUDIT: OBJECT,2,1,WRITE,UPDATE,TABLE,public.taudit,update taudit set a = 4;,

Have fun with auditing …

 

Cet article Auditing in PostgreSQL est apparu en premier sur Blog dbi services.

Understanding Row Level Security on PostgreSQL

Fri, 2016-09-02 03:11

In this article we will talk about a nice feature: Row Level Security in PostgreSQL. We are using EDB Postgres Advanced Server 9.5.
Suppose that I am a team manager and that employee bonuses are stored in a table Bonus. I want each employee to be able to see only the data related to him and not the data of others. How can I implement this? I can simply use Row Level Security.
Let’s go on. Below is the structure of my table Bonus

testdb=# \d Bonus
            Table "public.bonus"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 id     | numeric               | not null
 login   | character varying(20) |
 bonus  | numeric               |
Indexes:
    "bonus_pkey" PRIMARY KEY, btree (id)

Below data inside Bonus

testdb=# table bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

Let’s create users with corresponding logins

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# create user "james@example.com" password 'secret';
CREATE ROLE
testdb=# create user "Moise@example.com" password 'secret';
CREATE ROLE
testdb=# create user "jennifer@example.com" password 'secret';
CREATE ROLE
testdb=# create user "Mikael@example.com" password 'secret';
CREATE ROLE

And let’s grant them select on Table Bonus

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# grant select on bonus to "james@example.com";
GRANT
testdb=# grant select on bonus to "Moise@example.com";
GRANT
testdb=# grant select on bonus to "Mikael@example.com";
GRANT
testdb=# grant select on bonus to "jennifer@example.com";
GRANT

We can verify that by default each user can see all the data (which I don’t want). For example with user james@example.com

testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

And with user jennifer@example.com

testdb=> select current_user;
     current_user
----------------------
 jennifer@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

To allow each user to see only his own data, I first have to create a policy on the table Bonus with an expression that will filter the data.

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# create policy bonus_policy on bonus for all to public using (login=current_user);
CREATE POLICY
testdb=#

After creating the policy, let’s enable RLS on the table Bonus

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# alter table bonus enable row level security;
ALTER TABLE
testdb=#

And now, bingo, we can verify that each user can only see his corresponding data

testdb=> select current_user;
     current_user
----------------------
 jennifer@example.com
(1 row)

testdb=> select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  4 | jennifer@example.com |  3520
(1 row)
testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id |       login       | bonus
----+-------------------+-------
  1 | james@example.com |  2500
(1 row)

testdb=>

Now let’s drop the policy but keep RLS enabled on the table bonus. What happens?

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# drop policy bonus_policy on bonus;
DROP POLICY
testdb=#

Let’s query the table bonus with user james@example.com, for example

testdb=> select current_user;
   current_user
-------------------
 james@example.com
(1 row)

testdb=> select * from bonus;
 id | login | bonus
----+-------+-------
(0 rows)

testdb=>

But if we query the table with user enterprisedb, which is the table owner (and also a superuser)

testdb=# select current_user;
 current_user
--------------
 enterprisedb
(1 row)

testdb=# select * from bonus;
 id |        login         | bonus
----+----------------------+-------
  1 | james@example.com    |  2500
  2 | Moise@example.com    |  1500
  3 | Mikael@example.com   |  7500
  4 | jennifer@example.com |  3520
(4 rows)

So we see that if RLS is enabled on a table and there is no policy defined, a default-deny policy is applied. Only the table owner, superusers and users with the BYPASSRLS attribute are able to see the data in the table.
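
To illustrate the BYPASSRLS case, here is a minimal sketch (the role auditor@example.com is purely hypothetical, and only a superuser can create a role with this attribute):

-- hypothetical role used only to illustrate the BYPASSRLS attribute
create user "auditor@example.com" password 'secret' bypassrls;
grant select on bonus to "auditor@example.com";
-- connected as auditor@example.com, the row level security policies on bonus
-- (and the default-deny behaviour) are bypassed: all rows are visible
select * from bonus;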

 

Cet article Understanding Row Level Security on PostgreSQL est apparu en premier sur Blog dbi services.

Rolling Invalidate Window Exceeded

Tue, 2016-08-30 16:05

Today I was doing a post-mortem analysis of a hard parse storm. One hypothesis was rolling invalidation causing the cursor invalidations, but the figures didn't match. I often reproduce the hypothesis to check the numbers and be sure I interpret them correctly, especially the timestamps in V$SQL_SHARED_CURSOR.REASON. And as it may help others (including my future self), I share the test case.

I create a table with one row (12c online statistics gathering, so num_rows is 1) and then insert one more row.

21:31:26 SQL> create table DEMO as select * from dual;
Table created.
21:31:26 SQL> insert into DEMO select * from dual;
1 row created.
21:31:26 SQL> commit;
Commit complete.

I run a query on the table. I don't care about the result, so let's make it return something that will be useful later: the UTC time as the number of seconds since January 1st, 1970 (aka Epoch).

21:32:52 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585572
1472585572

The execution plan cardinality estimation is 1 row as this is what is in object statistics.

21:32:52 SQL> select * from table(dbms_xplan.display_cursor(null,null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

I gather statistics with all default attributes, so rolling invalidation occurs.

21:32:52 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.

At this time, the cursor has been parsed only once:

21:32:52 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
0 1 1 1 2016-08-30/21:32:51 2016-08-30/21:32:51 30-AUG-16 21:32:51

By default the invalidation window is 5 hours. I don't want to wait that long, so I set it to something shorter, 15 seconds:

21:32:54 SQL> alter system set "_optimizer_invalidation_period"=15;
System altered.

There will not be any invalidation until the next execution. To prove it I wait 20 seconds, run the query again and check the execution plan:

21:33:12 SQL> select (sysdate-date'1970-01-01')*24*3600 from DEMO;
 
(SYSDATE-DATE'1970-01-01')*24*3600
----------------------------------
1472592792
1472592792
 
21:33:12 SQL> select * from table(dbms_xplan.display_cursor('61x2h0y9zv0r6',null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

This is still the old cursor (child number 0) with the old statistics (num_rows=1).

However, from this point on, rolling invalidation occurs: a random timestamp is generated within the rolling window (15 seconds here, 5 hours in a default database).

I don't know how to see this timestamp at that point (comments welcome), so I run the query several times within this 15-second window to see when it occurs:

21:33:16 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585596
1472585596
 
21:33:19 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585599
1472585599
 
21:33:22 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585602
1472585602
 
21:33:25 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585605
1472585605
 
21:33:28 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585608
1472585608
 
21:33:31 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585611
1472585611

After those runs, I check that I have a new execution plan with a new estimation coming from the new statistics (num_rows=2):

21:33:31 SQL> select * from table(dbms_xplan.display_cursor('61x2h0y9zv0r6',null));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 61x2h0y9zv0r6, child number 0
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------
 
SQL_ID 61x2h0y9zv0r6, child number 1
-------------------------------------
select (cast(sys_extract_utc(current_timestamp) as
date)-date'1970-01-01')*24*3600 from DEMO
 
Plan hash value: 4000794843
 
------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2 (100)| |
| 1 | TABLE ACCESS FULL| DEMO | 2 | 2 (0)| 00:00:01 |
------------------------------------------------------------------

Yes, I have a new child cursor, child number 1. A new cursor means that I have a reason in V$SQL_SHARED_CURSOR:

21:33:31 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(2)</reason><size>0x0</size><details>already_processed</details></Ch
ildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invali
date Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1472585604<
/invalidation_window><ksugctm>1472585607</ksugctm></ChildNode>
 
1

Child cursor number 0 has not been shared because of rolling invalidation. The invalidation_window number, 1472585604, is the timestamp set by rolling invalidation: it is set at the first parse call after statistics gathering, and lies within the rolling window that follows. After this point (1472585604 is 21:33:24 in my GMT+2 timezone) the cursor is no longer shared and a new hard parse occurs. I think that ksugctm is the timestamp of when the new cursor is created: 1472585607 is 21:33:27 here in Switzerland. You can see the corresponding timestamps in V$SQL:

21:33:31 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
0 1 5 5 2016-08-30/21:32:51 2016-08-30/21:32:51 30-AUG-16 21:33:24
0 1 2 2 2016-08-30/21:32:51 2016-08-30/21:33:27 30-AUG-16 21:33:30

Ok. The important thing is that 'rolling invalidation' is not an invalidation of the cursor (V$SQL.INVALIDATIONS is still 0), but just non-sharing of the child cursor.
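
As those epoch numbers are not very readable, here is a minimal sketch to decode them directly from the REASON column (the XML fragments are wrapped into a dummy root element, and the decoded timestamps are UTC):

select s.child_number,
       to_char(date '1970-01-01'+x.invalidation_window/86400,'YYYY-MM-DD HH24:MI:SS') invalidation_window_utc,
       to_char(date '1970-01-01'+x.ksugctm/86400,'YYYY-MM-DD HH24:MI:SS') ksugctm_utc
  from v$sql_shared_cursor s,
       xmltable('/r/ChildNode' passing xmltype('<r>'||s.reason||'</r>')
                columns invalidation_window number path 'invalidation_window',
                        ksugctm             number path 'ksugctm') x
 where s.sql_id='61x2h0y9zv0r6' and s.reason is not null;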

If we gather statistics with immediate invalidation, it’s different:

21:33:31 SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.
 
21:33:34 SQL> select (cast(sys_extract_utc(current_timestamp) as date)-date'1970-01-01')*24*3600 from DEMO;
 
(CAST(SYS_EXTRACT_UTC(CURRENT_TIMESTAMP)ASDATE)-DATE'1970-01-01')*24*3600
-------------------------------------------------------------------------
1472585614
1472585614
21:33:34 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='61x2h0y9zv0r6';
 
CHILD_NUMBER REASON
------------ --------------------------------------------------------------------------------
0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Win
dow Exceeded(3)</reason><size>2x4</size><invalidation_window>1472585604</invalid
ation_window><ksugctm>1472585607</ksugctm></ChildNode><ChildNode><ChildNumber>0<
/ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><s
ize>0x0</size><details>already_processed</details></ChildNode>

I have only one child cursor here, a new one, and I'm not sure the reason recorded for it has a meaning.

21:33:34 SQL> select invalidations,loads,parse_calls,executions,first_load_time,last_load_time,last_active_time from v$sql where sql_id='61x2h0y9zv0r6';
 
INVALIDATIONS LOADS PARSE_CALLS EXECUTIONS FIRST_LOAD_TIME LAST_LOAD_TIME LAST_ACTIVE_TIME
------------- ---------- ----------- ---------- ------------------- ------------------- ------------------
1 2 1 1 2016-08-30/21:32:51 2016-08-30/21:33:33 30-AUG-16 21:33:33

This is an invalidation of the cursor. The old child cursors are removed and the proud parent is marked as invalidated 1 time.

 

Cet article Rolling Invalidate Window Exceeded est apparu en premier sur Blog dbi services.

The fastest way to get the Oracle sample schemas

Tue, 2016-08-30 00:52

Do you need the Oracle sample schemas to do a quick test or demonstration? And, as always, you did not install the sample schemas when you set up your environment? Probably the fastest way to get them installed is to download them from GitHub. Installation instructions are there as well. Have fun …

 

Cet article The fastest way to get the Oracle sample schemas est apparu en premier sur Blog dbi services.

Filenames in AWR reports

Mon, 2016-08-29 14:33

If you have read my latest blog posts, you know I've measured IOPS with SLOB to estimate the ACFS overhead on fast storage. This post is about something I learned after wasting one hour on the results.

Here is how I did my tests:

  1. Create a SLOB database in ACFS
  2. Run SLOB PIO tests and tag the AWR report as ‘ACFS’
  3. Move datafile to +DATA
  4. Run SLOB PIO tests and tag the AWR report as ‘ASM’

Of course, I scripted it to run several tests, varying the number of sessions, work unit, etc. while I was doing something more productive.

Once done, I had a set of AWR reports and the first task was to check that they were consistent. But they were not. The datafile in the 'File IO Stats' section did not match the tag I had put in the file name. First I suspected a bug in my script, with bad tagging or a failed datafile move. I had to read the alert.log to see that my tagging was good but that the filename in the AWR reports was wrong. I finally looked at the AWR views to understand why the filename was wrong, and understood the problem:

SQL> desc DBA_HIST_DATAFILE;
Name Null? Type
----------------------------------------- -------- ----------------------------
DBID NOT NULL NUMBER
FILE# NOT NULL NUMBER
CREATION_CHANGE# NOT NULL NUMBER
FILENAME NOT NULL VARCHAR2(513)
TS# NOT NULL NUMBER
TSNAME VARCHAR2(30)
BLOCK_SIZE NUMBER
CON_DBID NUMBER
CON_ID NUMBER

There's no SNAP_ID: AWR does not store the history of file names. We could suppose that it stores only the latest filename, but then my reports would have been correct as they were generated immediately after the snapshots. Or that the first name stays, but I had some reports showing '+DATA'.

Then, I grepped for 'WRH$_DATAFILE' in ORACLE_HOME/rdbms/admin and came upon this:

dbmsawr.sql: -- This routine updates WRH$_DATAFILE rows for the datafile name and
dbmsawr.sql: -- WRH$_DATAFILE with the current information in database.

There is an update_datafile_info procedure in the dbms_workload_repository package, and its comment says something like:
-- This change will be captured at max after some
-- (generally 50) snapshots. So the AWR and AWR report may be wrong with
-- respect to data file name or tablespace name for that duration.

I love working with Oracle. All the information is there if you know where to look.

So if you want to rely on the filenames in an AWR report after a datafile move, you should run this procedure before taking the report. And you should take the report before the next datafile move.

Here is the example:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
6 rows selected.

My file is users01 and this is what is stored in AWR.

I rename it to users02 (thanks to 12c online move)

SQL> alter database move datafile '/u01/DEMO/oradata/DEMO14/users01.dbf' to '/u01/DEMO/oradata/DEMO14/users02.dbf';
Database altered.

but AWR is not aware of the change even after a snapshot:

SQL> exec dbms_workload_repository.create_snapshot;
PL/SQL procedure successfully completed.
 
SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users01.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users01.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users01.dbf

You have to wait for those 50 snapshots or run the update:

SQL> exec dbms_workload_repository.update_datafile_info;
PL/SQL procedure successfully completed.

SQL> select file#,filename from DBA_HIST_DATAFILE where file#=6;
 
FILE# FILENAME
---------- --------------------------------------
6 /u01/DEMO/oradata/DEMO14/users02.dbf
 
SQL> select snap_id,file#,filename from DBA_HIST_FILESTATXS where file#=6 order by snap_id fetch first 10 rows only;
 
SNAP_ID FILE# FILENAME
---------- ---------- --------------------------------------
1244 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1245 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1246 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1247 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1248 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1249 6 /u01/DEMO/oradata/DEMO14/users02.dbf
1250 6 /u01/DEMO/oradata/DEMO14/users02.dbf

But, as you can see, there is no history of the previous names.

Note that if you look at the table behind the view, there’s a SNAP_ID but it’s not part of the primary key. It is used by the purge procedures.
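
A minimal sketch to check this from the dictionary, by listing the columns of the underlying table:

select column_name
  from dba_tab_columns
 where owner='SYS' and table_name='WRH$_DATAFILE'
 order by column_id;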

 

Cet article Filenames in AWR reports est apparu en premier sur Blog dbi services.

Letting GoldenGate automatically maintain the insert and update timestamps on the target

Mon, 2016-08-29 11:18

Today at one of our GoldenGate customers we were faced with the following requirement: for one of the tables on the target we needed two additional columns which should hold the timestamps for when the row was inserted or updated. Just to be clear: it was not about getting the timestamps of the inserts or updates as they happened on the source, but as they happen on the target. In this post we'll look at how GoldenGate can be configured to do the work requested.

All the examples below work with the well-known scott/tiger schema. There is one extract capturing on the source and one replicat re-playing the changes on the target (over sqlnet, no integrated mode here). This is the current status of the streams:

GGSCI (oelogg1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:02    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:01    


GGSCI (oelogg1) 2> view params EXTRSCO

EXTRACT extrsco
USERIDALIAS DB1 DOMAIN admin
GETUPDATEBEFORES
REPORT AT 23:40
DDL INCLUDE OPTYPE TRUNCATE OBJNAME ARBOR.*, &
    INCLUDE OPTYPE ALTER OBJNAME ARBOR.*
EXTTRAIL /u01/app/ogg/product/12.1.2.1.9/dirdat/es
TABLE SCOTT.*;

GGSCI (oelogg1) 3> view params REPLSCO

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.*, TARGET SCOTT.*;

Pretty basic, no unusual stuff here. The table we’ll use for the scope of this post is the “project” table which has the following contents in a fresh scott/tiger installation:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Of course the table looks the same on the target:

SQL> col DESCRIPTION for a50
SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

To prove that the streams are really working, let's add an additional row on the source:

SQL> insert into project values (1005, 'my fun project 1', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

SQL> 

… and then check if the row indeed was replicated to the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE
---------- -------------------------------------------------- ------------ ------------
      1005 my fun project 1				      29-AUG-16    28-SEP-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

Ok, looks fine, so back to the initial requirement. We need two additional columns on the target:

alter table SCOTT.PROJECT add create_dt timestamp with time zone;
alter table SCOTT.PROJECT add update_dt timestamp with time zone;
alter table SCOTT.PROJECT modify create_dt default to_date('01.01.2000','DD.MM.YYYY');
alter table SCOTT.PROJECT modify update_dt default to_date('01.01.2000','DD.MM.YYYY');

In our case we needed to set a default value as both columns are not allowed to contain NULL values. For the moment the content of the table on the target is:

SQL> col CREATE_DT for a20
SQL> col UPDATE_DT for a20
SQL> select * from project;

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT	     UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ -------------------- --------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31
      1004 High capacity optical network		      2008-01-01

In real life, when the table to be extended holds millions of rows, the following will probably be too simple and you'll need to spend some time thinking about how to organize the updates (a batched variant is sketched right after the statements below). For the scope of this post this is fine:

update SCOTT.PROJECT set create_dt = to_date('01.01.2000','DD.MM.YYYY') where create_dt is null;
update SCOTT.PROJECT set update_dt = to_date('01.01.2000','DD.MM.YYYY') where update_dt is null;
commit;
alter table SCOTT.PROJECT modify create_dt not null;
alter table SCOTT.PROJECT modify update_dt not null;
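
For illustration only, a batched backfill that limits the amount of undo and redo per transaction could look like this sketch (the batch size of 10000 is arbitrary):

begin
  loop
    -- backfill at most 10000 rows per transaction
    update scott.project
       set create_dt = to_date('01.01.2000','DD.MM.YYYY'),
           update_dt = to_date('01.01.2000','DD.MM.YYYY')
     where create_dt is null
       and rownum <= 10000;
    exit when sql%rowcount = 0;
    commit;
  end loop;
  commit;
end;
/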

From now on we have identical insert and update timestamps for all of the rows on the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01                01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

A final check on the source to be sure that the default values work:

SQL> insert into project values (1006, 'my fun project 2', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

All fine. Time to do the GoldenGate work. Obviously the first step is to stop the replicat:

GGSCI (oelogg1) 2> stop REPLSCO

Sending STOP request to REPLICAT REPLSCO ...
Request processed.

GGSCI (oelogg1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:09    
REPLICAT    STOPPED     REPLSCO     00:00:00      00:00:01    

As a first step, let's handle the insert case by adding the following COLMAP clause to the replicat configuration:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Start again:

GGSCI (oelogg1) 6> start REPLSCO

Sending START request to MANAGER ...
REPLICAT REPLSCO starting


GGSCI (oelogg1) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXTRSCO     00:00:00      00:00:01    
REPLICAT    RUNNING     REPLSCO     00:00:00      00:00:00    

Looks good from a configuration perspective. Time to test it:

SQL> insert into project values (1007, 'my fun project 3', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

On the target we should now see the exact insert date of the record instead of the default value of the column:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect, this works. Let's go on with the update case: all we need is to add the mapping for the update_dt column and populate it:

REPLICAT REPLSCO
USERIDALIAS DB2 DOMAIN admin
GETUPDATEBEFORES
GETTRUNCATES
APPLYNOOPUPDATES
DISCARDFILE /u01/app/ogg/product/12.1.2.1.9/dirdsc/dscsco.dsc, APPEND, MEGABYTES 10
DDL INCLUDE OPTYPE TRUNCATE OBJNAME SCOTT.*, &
    INCLUDE OPTYPE ALTER OBJNAME SCOTT.* &
    EXCLUDE ALL INSTR 'CONSTRAINT' &
    EXCLUDE ALL INSTR 'TRIGGER' 
ASSUMETARGETDEFS
DBOPTIONS DEFERREFCONST
MAP SCOTT.PROJECT, TARGET SCOTT.PROJECT, COLMAP (usedefaults,
    create_dt = @IF (@STREQ (@GETENV ('GGHEADER', 'OPTYPE'), 'INSERT'), @DATENOW(), @COLSTAT (MISSING)),
    update_dt = @IF (@VALONEOF (@GETENV ('GGHEADER', 'OPTYPE'), 'UPDATE', 'SQL COMPUPDATE', 'PK UPDATE' ), @DATENOW(), @COLSTAT 
(MISSING)));
MAP SCOTT.*, TARGET SCOTT.*;

Then stop and start the replicat to bring the changes into effect. Let's create a new row just to verify that this does not have any effect on the insert case:

SQL> insert into project values (1008, 'my fun project 4', sysdate, sysdate + 30 );

1 row created.

SQL> commit;

Commit complete.

What we should see on the target are two rows with an exact insert date but a default update date:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 my fun project 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

Perfect. What about the update itself?

SQL> update project set DESCRIPTION = upper(description) where PROJECTNO = 1008;

1 row updated.

SQL> commit;

Commit complete.

This should result in one exact update date for my fun project 4:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1008 MY FUN PROJECT 4				      29-AUG-16    28-SEP-16	29-AUG-16 12.58.40.000000 PM +02:00 29-AUG-16 01.04.49.000000 PM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

8 rows selected.

Perfect, this works fine, too. Just to be sure that deletes still work, let's do a final test:

SQL> delete from project where PROJECTNO = 1008;

1 row deleted.

SQL> commit;

Commit complete.

SQL> 

The two additional columns should not prevent GoldenGate from being able to delete the row, let's see:

SQL> r
  1* select * from project

 PROJECTNO DESCRIPTION					      START_DATE   END_DATE	CREATE_DT			    UPDATE_DT
---------- -------------------------------------------------- ------------ ------------ ----------------------------------- -----------------------------------
      1005 my fun project 1				      29-AUG-16    31-AUG-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1006 my fun project 2				      29-AUG-16    28-SEP-16	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1001 Development of Novel Magnetic Suspension System    2006-01-01   2007-08-13	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1002 Research on thermofluid dynamics in Microdroplets  2006-08-22   2007-03-20	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1003 Foundation of Quantum Technology		      2007-02-24   2008-07-31	01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1004 High capacity optical network		      2008-01-01		01-JAN-00 12.00.00.000000 AM +02:00 01-JAN-00 12.00.00.000000 AM +02:00
      1007 my fun project 3				      29-AUG-16    28-SEP-16	29-AUG-16 12.52.56.000000 PM +02:00 01-JAN-00 12.00.00.000000 AM +02:00

The fun project 4 is gone and all works as expected. Hope this helps ….

 

Cet article Letting GoldenGate automatically maintain the insert and update timestamps on the target est apparu en premier sur Blog dbi services.

ODA X6 database classes and shapes

Mon, 2016-08-29 08:47

On the Oracle Database Appliance, like on the Oracle public Cloud, you define the CPU capacity with ‘shapes’. On the latest ODA, the X6, we have a new interface to provision a database. Let’s look at the different shapes available.

ODACLI

You can provision a new database with the command line ODACLI which replaces the OAKCLI you used in ODA X5:

[root@odax6m ~]# odacli create-database
Usage: create-database [options] Options:
* --adminpassword, -m
Password for SYS,SYSTEM and PDB Admin
--cdb, -c
Create Container Database
Default: false
--dbclass, -cl
Database Class OLTP/DSS/IMDB
Default: OLTP
--dbconsole, -co
Enable Database Console
Default: false
--dbhomeid, -dh
Database Home ID (Use Existing DB Home)
* --dbname, -n
Database Name
--dbshape, -s
Database Shape{odb1,odb2,odb3 etc.}
Default: odb1
--dbstorage, -r
Database Storage {ACFS|ASM}
Default: ACFS
--dbtype, -y
Database Type {SI|RAC}
Default: SI
--help, -h
get help
Default: false
--instanceonly, -io
Create Instance Only (For Standby)
Default: false
--json, -j
json output
Default: false
--pdbadmin, -d
Pluggable Database Admin User
Default: pdbadmin
--pdbname, -p
Pluggable Database Name
Default: pdb1
--targetnode, -g
Node Number (for single-instance databases)
Default: 0
--version, -v
Database Version
Default: 12.1.0.2

ODA WebConsole

But the ODA X6 has also a small graphical interface from the web console.

CaptureODACreate

12c multitenant is the default, but you can choose.

Edition

You don't have a choice when you create the database: you install the ODA with Standard Edition or Enterprise Edition and then you cannot change it.

Versions

Two database versions are available: 11.2.0.4 and 12.1.0.2

CaptureODAVersions

You choose ODA to get a stable, certified and supported system, so it makes sense to run only supported versions with the latest PSU. If you have older versions, you must upgrade. Set optimizer_features_enable to the previous version if your application was not tuned for newer versions. Very often, when an ISV does not certify its software, it's because of optimizer regressions. With proper testing and optimizer settings you should be able to upgrade any application without the risk of regression.
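
Setting optimizer_features_enable can be done at instance or session level; a minimal sketch (the value 11.2.0.4 is just an example for an application validated on that version):

-- test on a single session first, then set it instance-wide
alter session set optimizer_features_enable='11.2.0.4';
alter system set optimizer_features_enable='11.2.0.4' scope=both;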

Templates

There are four DBCA templates available

  • Standard Edition or Enterprise Edition
  • Multitenant or non-CDB

The main differences between Enterprise Edition and Standard Edition are:
Options OMS, SPATIAL, CWMLITE and DV are installed in Enterprise Edition but not in Standard Edition
fast_start_mttr_target=300 in Enterprise Edition (the feature is not supported in Standard Edition)

The main differences between multitenant and non-CDB are:
Options JSERVER, IMEDIA, ORACLE_TEXT and APEX are installed in a CDB and not in a non-CDB
maxdatafiles=1024 in a CDB (100 in a non-CDB)

All templates are configured with filesystem_io_options=setall and use_large_pages=only

The following underscore parameters are set for all ODA templates:
*._datafile_write_errors_crash_instance=FALSE
*._disable_interface_checking=TRUE
*._enable_NUMA_support=FALSE
*._file_size_increase_increment=2143289344
*._gc_policy_time=0
*._gc_undo_affinity=FALSE

Note that both 12c and 11g are available in Enterprise Edition as well as Standard Edition (can even be Standard Edition One for 11g).
Of course, CDB is only for 12c.

Shapes

As in the Oracle Public Cloud, the CPU and Memory comes in shapes:

CaptureODACShape

The choice is the number of cores. The cores are hyperthreaded, which means that odb1 will have cpu_count=2, and it is set in the spfile. Note that at install time no resource manager plan is active, so instance caging will not occur except during the automatic maintenance window… My recommendation is to set a plan. In 12.1.0.2 Standard Edition, resource manager is implicitly activated.
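
A minimal sketch to do that (DEFAULT_PLAN is just an example; any active resource manager plan, combined with cpu_count, enables instance caging):

show parameter cpu_count
show parameter resource_manager_plan
-- with a plan active, the instance is caged to the cpu_count set by the shape
alter system set resource_manager_plan='DEFAULT_PLAN' scope=both;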

ODA X6-2 processors are Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz. Here is an example of the LIOPS you can reach when running on all the 40 threads of a X6-2M:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 39.9 545.6 0.00 25.43
DB CPU(s): 38.8 530.9 0.00 24.75
Logical read (blocks): 18,494,690.4 252,862,769.1

This is 18 million logical reads per second on this 2-socket (2s10c20t) appliance. Expect half of it on the X6-2S, which has one socket (1s10c20t).

The core factor for those processors is 0.5, which means that you can run an Enterprise Edition 'odb2' with a single processor license (public price $47,500) and have 4 sessions running on CPU, which means that you can do nearly 3 million logical reads per second, as here:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 13.80
DB CPU(s): 4.0 54.5 0.00 13.78
Logical read (blocks): 2,901,991.5 39,660,331.1

Those shapes are defined as:

CaptureODAShapes

Given the high LIOPS and the available memory, this entry-level appliance can be sufficient for most medium OLTP workloads.

Classes

Three classes are defined to derive the database parameters from the shape.

CaptureODAClasses

The SGA/PGA is calculated from the shape memory and a class factor.
OLTP gives 50% to SGA and 25% to PGA, which means that, for example, an odb4 has sga_target=16G and pga_aggregate_target=8G.
DSS gives 25% to SGA and 50% to PGA.

Note that OLTP is the only class available in Standard Edition. This does not mean that you can run only OLTP. You can change the memory settings later (DSS usually needs more PGA than SGA) and you have very good storage bandwidth and IOPS (NVMe access to SSD). This restriction is more an indication that most data warehouses need features available only in Enterprise Edition, such as parallel query, partitioning or bitmap indexes.
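
For example, to re-balance an odb4 database towards a DSS-like profile (25% SGA / 50% PGA, derived from the class factors above), a minimal sketch would be:

alter system set sga_target=8G scope=spfile;
alter system set pga_aggregate_target=16G scope=spfile;
-- restart the instance so that the new sga_target is picked up from the spfile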

ASM or ACFS?

CaptureODAStorage

The template shapes above define a 100GB database. When you create a new database you have the choice to put it directly on +DATA and +RECO, or create a 100GB ADVM volume and ACFS filesystem that will be mounted under /u02/app/oracle/oradata. If you choose ACFS the FRA and REDO will be created under the /u03/app/oracle mount point which is a common ACFS.

The default is ACFS but you should think about it. For production, the best performance is with ASM. You have SSDs to avoid disk latency. You have NVMe to reduce CPU latency. You don't want to add the ACFS layer on top. The maximum IOPS we observe is 10 times higher with datafiles directly on ASM:

@FranckPachot Don't know? We get 75k IOPS on ACFS and 700k IOPS on pure ASM. /cc @kevinclosson pic.twitter.com/TcLzUsOh0d

— Marco Mischke (@DBAMarco) August 29, 2016

For test databases, where you use snapshot features, especially with multitenant, you may choose ACFS. However, why not create the CDB in ASM and use ACFS for the PDBs you will want to snapshot? No need for that additional layer for the CDB files. Better to isolate the master and clones for a specific environment into its own ACFS.

And anyway, ODA X6-2S and X6-2M are very interesting for Standard Edition, and you cannot use snapshots nor any ACFS features for a database in Standard Edition.

Storage performance is truly amazing. At 100000 IOPS we have 99% of single block reads faster than 256 microseconds and 97% faster than 128 microseconds. At 800000 IOPS here are the figures:

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms 1s
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
db file parallel read 6.9M 4.0 74.1 21.7 .2 .0 .0 .0
db file sequential read 18.4M 83.1 16.3 .7 .0 .0 .0

So what?

It's nice to have an easy GUI to provision a database on ODA. However, there are some limitations:

  • Be careful with the defaults. They may not fit what you want. Do you want your databases on ACFS?
  • Not all operations can be done through the GUI: you can create a database but not delete one.

But there's more: the performance is there. You can run applications that need high performance.

Do you know any other solution which gives you a fully certified system installed in a few hours with databases ready? With very good hardware and managed software costs (NUP, Standard Edition in socket metric, or Enterprise Edition capacity-on-demand by multiples of 1 processor license).
You need high availability? In Standard Edition you cannot use Data Guard, but you can buy Dbvisit standby, which gives you switchover and failover (Recovery Point Objective of a few minutes) to a second ODA or to a cloud service. Of course, you can build or buy custom scripts to manage a manual standby. However, if you go for ODA, you probably appreciate easy and robust software.

 

Cet article ODA X6 database classes and shapes est apparu en premier sur Blog dbi services.

ODA X6 command line and Web Console

Sat, 2016-08-27 14:33

The ODA X6 comes with a new command line (odacli) which replaces oakcli, and with a small web console which can display information about the appliance, the databases and the provisioning jobs. It also has the possibility to create a database, but this is for the next blog post. In this post I'll show which information is displayed once the ODA is installed.

The examples here come from ODA X6 version: 12.1.2.7.0

Appliance

The first screen is about the appliance information, the one that you defined when you installed the ODA:

CaptureODAWEB001

The same information can be displayed from the command line with odacli describe-appliance:

[root@odax6m ~]# odacli describe-appliance
 
Appliance Information
----------------------------------------------------------------
ID: bb8f0eec-0f5c-4319-bade-75770848b923
Platform: OdaliteM
Data Disk Count: 2
CPU Core Count: 20
Created: Aug 26, 2016 2:51:26 PM
 
System Information
----------------------------------------------------------------
Name: odax6m
Domain Name: oracle.democenter.arrowcs.ch
Time Zone: Europe/Zurich
DB Edition: EE
DNS Servers: 172.22.1.9
NTP Servers: 172.22.1.9
 
Disk Group Information
----------------------------------------------------------------
DG Name Redundancy Percentage
------------------------- ------------------------- ------------
Data Normal 75
Reco Normal 25

An important thing to note here is that the choice between Standard Edition and Enterprise Edition is at appliance level: you cannot mix.
There’s also no mention of virtualization because ODA X6 2S and 2M are only bare-metal.

odacli list-networks

[root@odax6m ~]# odacli list-networks
 
ID Name NIC IP Address Subnet Mask Gateway
---------------------------------------- -------------------- ---------- ------------------ ------------------ ------------------
ffcf7d89-8074-4342-9f19-5e72ed695ce7 Private-network priv0 192.168.16.24 255.255.255.240
71a422bc-39d3-483c-b79b-ffe25129dfd2 Public-network btbond1 172.22.1.23 255.255.255.224 172.22.1.2

I’ve no Auto Service Request configured here:
[root@odax6m ~]# odacli describe-asr
Aug 27, 2016 8:56:33 PM com.oracle.oda.dcscli.commands.AsrCommand$getAsr run
SEVERE: No asr found

Databases

The second screen is about the databases:

CaptureODAWEB002

From command line you have information about the ORACLE_HOMEs and databases.

[root@odax6m ~]# odacli list-dbhomes
 
ID Name DB Version Home Location
---------------------------------------- -------------------- ---------- ---------------------------------------------
67419075-d1f9-4c2e-85b1-c74430e35120 OraDB12102_home1 12.1.0.2 /u01/app/oracle/product/12.1.0.2/dbhome_1
cf76a90b-f9e3-44b2-9b43-56111c1785e4 OraDB12102_home2 12.1.0.2 /u01/app/oracle/product/12.1.0.2/dbhome_2
adcbe8bf-f26f-4ab7-98a1-0abcd4412305 OraDB11204_home1 11.2.0.4 /u01/app/oracle/product/11.2.0.4/dbhome_1

[root@odax6m ~]# odacli list-databases
 
ID DB Name DB Version CDB Class Shape Storage Status
---------------------------------------- ---------- ---------- ---------- -------- -------- ---------- ----------
4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9 dbee1 12.1.0.2 false OLTP odb4 ACFS Configured
5564ea51-fc93-46f2-9188-c13c23caba94 odb1s 12.1.0.2 true OLTP odb1s ACFS Configured
26c2213d-5992-4b2b-94b0-2d0f4d0f9c2d dbee11g1 11.2.0.4 false OLTP odb2 ACFS Configured

You can get more detail about one database:

CaptureODAWEB0022

[root@odax6m ~]# odacli describe-dbhome -i 67419075-d1f9-4c2e-85b1-c74430e35120
 
DB Home details
----------------------------------------------------------------
ID: 67419075-d1f9-4c2e-85b1-c74430e35120
Name: OraDB12102_home1
Version: 12.1.0.2
Home Location: /u01/app/oracle/product/12.1.0.2/dbhome_1
Created: Aug 26, 2016 2:51:26 PM

[root@odax6m ~]# odacli describe-database -i 4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9
 
Database details
----------------------------------------------------------------
ID: 4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9
Description: dbee1
DB Name: dbee1
DB Version: 12.1.0.2
DBID: 2933563624
CDB: false
PDB Name:
PDB Admin User Name:
Class: OLTP
Shape: odb4
Storage: ACFS
CharacterSet: DbCharacterSet(characterSet=AL32UTF8, nlsCharacterset=AL16UTF16, dbTerritory=AMERICA, dbLanguage=AMERICAN)
Home ID: 67419075-d1f9-4c2e-85b1-c74430e35120
Console Enabled: false
Created: Aug 26, 2016 2:51:26 PM

Activity

Here is the log of what has been done on the ODA:

CaptureODAWEB003

[root@odax6m ~]# odacli list-jobs
 
ID Description Created Status
---------------------------------------- ------------------------------ ------------------------- ----------
1b99d278-6ab4-4ead-a5f8-f112c74a8f97 Provisioning service creation Aug 26, 2016 2:51:26 PM Success
f0ac9a2c-ba37-412c-8a81-9cc7cb301417 Database service creation with db name: odb1s Aug 26, 2016 4:03:39 PM Success
dec37817-feb7-46e5-b991-b23362268cb1 Database service creation with db name: dbee11g1 Aug 26, 2016 5:09:33 PM Success

And we have more info about the steps executed for one job:

CaptureODAWEB004

Same in command line:

[root@odax6m ~]# odacli describe-job -i 1b99d278-6ab4-4ead-a5f8-f112c74a8f97
 
Job details
----------------------------------------------------------------
ID: 1b99d278-6ab4-4ead-a5f8-f112c74a8f97
Description: Provisioning service creation
Status: Success
Created: 26.08.2016 14:51:26
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ------------------------- ------------------------- ----------
Setting up Network Aug 26, 2016 2:51:27 PM Aug 26, 2016 2:51:27 PM Success
Creating group :asmdba Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :asmoper Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :asmadmin Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :dba Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :dbaoper Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :oinstall Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating user :grid Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating user :oracle Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Setting up ssh equivalance Aug 26, 2016 2:51:39 PM Aug 26, 2016 2:51:39 PM Success
Gi Home creation Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:57:54 PM Success
Creating GI home directories Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:54:49 PM Success
Cloning Gi home Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:57:54 PM Success
GI stack creation Aug 26, 2016 2:57:54 PM Aug 26, 2016 3:08:44 PM Success
Configuring GI Aug 26, 2016 2:57:54 PM Aug 26, 2016 2:58:21 PM Success
Running GI root scripts Aug 26, 2016 2:58:21 PM Aug 26, 2016 3:05:03 PM Success
Running GI config assistants Aug 26, 2016 3:05:03 PM Aug 26, 2016 3:05:34 PM Success
Creating RECO disk group Aug 26, 2016 3:10:02 PM Aug 26, 2016 3:10:16 PM Success
Creating volume reco Aug 26, 2016 3:10:16 PM Aug 26, 2016 3:10:26 PM Success
Creating volume datdbee1 Aug 26, 2016 3:10:26 PM Aug 26, 2016 3:10:35 PM Success
Creating ACFS filesystem for RECO Aug 26, 2016 3:10:35 PM Aug 26, 2016 3:10:42 PM Success
Creating ACFS filesystem for DATA Aug 26, 2016 3:10:42 PM Aug 26, 2016 3:10:49 PM Success
Db Home creation Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:13:40 PM Success
Creating DbHome Directory Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:10:49 PM Success
Extract DB clones Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:12:29 PM Success
Enable DB options Aug 26, 2016 3:12:29 PM Aug 26, 2016 3:12:38 PM Success
Clone Db home Aug 26, 2016 3:12:38 PM Aug 26, 2016 3:13:37 PM Success
Run Root DB scripts Aug 26, 2016 3:13:37 PM Aug 26, 2016 3:13:37 PM Success
Database Service creation Aug 26, 2016 3:13:40 PM Aug 26, 2016 3:19:43 PM Success
Database Creation Aug 26, 2016 3:13:40 PM Aug 26, 2016 3:17:58 PM Success
Running DataPatch Aug 26, 2016 3:18:33 PM Aug 26, 2016 3:19:43 PM Success
create Users tablespace Aug 26, 2016 3:19:43 PM Aug 26, 2016 3:19:46 PM Success

Yes, this is the ODA installation: half an hour to set up the OS, install Grid Infrastructure, set up the storage and create a first database.

Refresh

Note that the screens are not refreshed automatically, even when you navigate through them. Don't forget the 'Refresh' button (circular arrow) on the top-right.

JSON

You can also build your own interface from the JSON format:

[root@odax6m ~]# odacli list-jobs -j
[ {
"jobId" : "1b99d278-6ab4-4ead-a5f8-f112c74a8f97",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472215886601,
"description" : "Provisioning service creation"
}, {
"jobId" : "f0ac9a2c-ba37-412c-8a81-9cc7cb301417",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472220219016,
"description" : "Database service creation with db name: odb1s"
}, {
"jobId" : "dec37817-feb7-46e5-b991-b23362268cb1",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472224173747,
"description" : "Database service creation with db name: dbee11g1"
} ]

So what?

ODA is about easy and fast provisioning, and the GUI that was missing is finally there. Of course, it looks very simple, but that's the goal of the appliance: quickly set up a standardized environment. The ODA X6-2S is cheap and has good performance for small databases. You may find equivalent hardware, but can you build and install stable hardware, an OS and a database in 30 minutes?

 

Cet article ODA X6 command line and Web Console est apparu en premier sur Blog dbi services.
