Yann Neuhaus

dbi services technical blog

MS Cloud Summit Paris 2017

Mon, 2017-01-02 15:10

A new year begins, and 2017 should be a good vintage in the database field, especially considering the announcements Microsoft made at the end of 2016. Among other things, the next vNext of SQL Server, which will be ported to Linux, was announced along with promising new features. As usual, the dbi services team will try to cover the various topics over the course of the year. Don't hesitate to come back from time to time and take a look at any new posts!

But before talking about the next version of SQL Server, let's focus on the current one – SQL Server 2016 – which already offers interesting improvements and new capabilities in several areas, whether performance, high availability, security or many others. Another important piece of news that will interest most of our customers is the recent release of SP1 for SQL Server 2016, which harmonizes the feature surface across the different editions. It will notably be possible to use compression, partitioning or columnstore indexes with a Standard edition, for example. I am no fortune teller, but I don't think I am wrong in saying that this year I will see a few 2016 infrastructures spring up in our customers' ecosystems!

In any case, to start off on the right foot with this new version of SQL Server, a French-speaking event you should absolutely put in your calendar is the MSCloudSummit, which will take place in Paris starting on January 23rd. This event is meant to be much broader than the Journées SQL Server you certainly already know. The MS Cloud Summit means 600 expected attendees and 7 tracks with 72 sessions around the cloud, hybrid and on-premises scenarios. For my part, I will be there with dbi services and will have the pleasure of presenting the new features you will benefit from in terms of high availability and recovery plans with the winning combination of Windows 2016 and SQL Server 2016.

blog 112 - session

I look forward to seeing you there. Until then, I wish you a very happy new year 2017!

 

 

The article MS Cloud Summit Paris 2017 appeared first on Blog dbi services.

IT-Tage 2016 Informatik aktuell: feedback

Fri, 2016-12-23 09:07

Today, to finish the year, I post a brief personal impression of the IT-Tage 2016 in Frankfurt at the Hotel Maritim, where I was also a speaker.

IMG_3808

I presented 2 sessions on SQL Server: “SQL Server Errorlog Entmystifizierung” & “SQL Server 2016: Neue Sicherheitsfunktionen”.
I wasn’t the only one from dbi services who spoke at that conference:

  • David Barbarin with also 2 sessions: “SQL Server – Locks, latches and spinlocks” & “SQL Server 2016 Availability Group Enhancements”
  • Clemens Bleile with 1 session: “SQL Plan Directives: Neuigkeiten in 12.2. Produktions-Ausführungspläne in Testumgebungen reproduzieren”
  • Philippe Schweitzer with 1 session: “Feasibility study for building a software factory based on GIT repository”
  • Daniel Westermann with 1 session: “Breaking the deadlock: Migrating from proprietary databases to PostgreSQL”

You can already download all presentations from this link.

After my presentation day, I had the opportunity to go to a very interesting session by Oliver Hock “Ein Prozess lernt laufen: LEGO-Mindstorms-Steuerung mit BPMN”. With a Lego Mindstorm kit, he showed how to solve a magic cube.

IMG_3823

This session is also on YouTube; have a look at the demo at the end (the last 60 seconds). It was very nice! ;-)

I would like to thank the entire team of Informatik Aktuell, who put together a smooth and interesting event.

I hope that I can go again next year, present new sessions and follow other interesting sessions…

In the evening, you could also enjoy the Christmas market, which is two metro stops from the hotel.

IMG_3810

I wish you a merry Christmas and, as we say in Alsace: “A guetta rutsch ins neja Johr!”

 

The article IT-Tage 2016 Informatik aktuell: feedback appeared first on Blog dbi services.

Migrating your existing availability group infrastructure to 2016

Fri, 2016-12-23 04:57

Do you already have a plan to migrate your old SQL Server 2012 availability group infrastructure to Windows 2016 and SQL Server 2016? In a previous post, I talked about distributed availability groups and cross-cluster migration scenarios, but this time the game is not the same because we want to achieve an in-place upgrade of the existing AG infrastructure. This question will probably be a concern next year and, if we take a closer look at the improvements shipped with the new versions of Windows and SQL Server, we will certainly not be disappointed.

Indeed, cluster rolling upgrade is a new feature of Windows Server 2016 which allows us to migrate the WSFC side of the existing database infrastructure smoothly (and almost transparently). On the other side, upgrading highly available replicas from SQL Server 2012 to SQL Server 2016 is also possible without completely reinstalling the availability group infrastructure. Thus, we may benefit from a temporary, mixed infrastructure on both sides to reduce the outage timeframe of our applications. I can think of some customers where this could be helpful regarding their business and their corresponding SLAs.

So let's demonstrate this kind of scenario with a classic customer availability group infrastructure. Most availability groups implemented in my area consist of two replicas, meaning a WSFC with 2 cluster nodes at the lower level of the global architecture, as shown below:

blog 111 - 00 - initial infra WSFC2012

So the question is how to achieve the migration of the above infrastructure from the 2012 version to the 2016 version (both Windows and SQL Server) with low downtime. Well, one solution would consist in preparing and adding two temporary extra nodes which would run Windows Server 2016 and SQL Server 2012 in a first step. Unfortunately, we cannot directly mix two different versions of SQL Server in an existing availability group yet. We have to execute an extra step to upgrade, one by one, each replica we want to run on SQL Server 2016. Having two extra nodes will allow us to prepare our migration smoothly without impacting the existing highly available infrastructure.

So let's begin by adding the 2 nodes running Windows Server 2016. Basically, we may use either the GUI or PowerShell cmdlets for that.

The initial scenario (2012 version) is as follows:

blog 111 - 0 - initial config cluster

Let's add the two extra cluster nodes which run on Windows Server 2016. According to the Microsoft TechNet procedure, the key point is to perform this action exclusively from a Windows Server 2016 node.

The PowerShell cmdlet used is the same as in the previous version and I executed it for the two additional nodes (WIN20168SQL16 and WIN20169SQL16), which both run on Windows Server 2016. Just remember to exclude the two extra nodes from the quorum vote to avoid impacting the existing configuration.
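A minimal sketch of those cmdlets, run from one of the new Windows Server 2016 nodes (the cluster name DummyCluster is an assumption, the node names are the ones used in this post):

Add-ClusterNode -Cluster DummyCluster -Name WIN20168SQL16 -NoStorage
Add-ClusterNode -Cluster DummyCluster -Name WIN20169SQL16 -NoStorage
# Exclude the two new nodes from the quorum vote to leave the existing configuration untouched
(Get-ClusterNode -Cluster DummyCluster -Name WIN20168SQL16).NodeWeight = 0
(Get-ClusterNode -Cluster DummyCluster -Name WIN20169SQL16).NodeWeight = 0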

blog 111 - 1 - Add Cluster Node 2016

However, we may notice the new cluster functional level property as shown below. The value is equal to 8, meaning that the cluster has switched to a temporary / mixed mode because at the moment we have cluster nodes on both the 2012 and 2016 versions.

blog 111 - 2 - Cluster Functional Level 2016

The transitioned infrastructure now includes 4 nodes. The first 2 nodes run on Windows Server 2012 whereas the last 2 nodes run on Windows Server 2016. An availability group runs on top of the first 2 nodes and two additional replicas are ready to be enrolled into the existing infrastructure.

blog 111 - 21 - transitioned WSFC

So now let's move on to the SQL Server side and add the 2 additional replicas.

The initial AG scenario is as follows:

blog 111 - 3 - Initial AG 2012

After adding the two replicas in asynchronous mode, we get the following picture:

blog 111 - 4 - Add replica 2016 to AG 2012
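For reference, adding the two SQL Server 2016 replicas in asynchronous commit mode might look like the following T-SQL sketch (the endpoint URLs and port are assumptions, adapt them to your environment; the secondary replicas still have to be joined from each new instance and the databases restored there):

:CONNECT WIN20121SQL16\SQL12

USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
ADD REPLICA ON N'WIN20168SQL16\SQL12'
WITH ( ENDPOINT_URL = N'TCP://WIN20168SQL16:5022',
       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
       FAILOVER_MODE = MANUAL );
GO

ALTER AVAILABILITY GROUP [dummygrp]
ADD REPLICA ON N'WIN20169SQL16\SQL12'
WITH ( ENDPOINT_URL = N'TCP://WIN20169SQL16:5022',
       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
       FAILOVER_MODE = MANUAL );
GO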

At this step, we now have to upgrade the newly added replicas to SQL Server 2016. One important thing to keep in mind here is that we absolutely have to prevent a failover to an upgraded replica before all the new secondary replicas are upgraded. Indeed, according to the Microsoft documentation, an upgraded primary replica can no longer ship logs to any secondary replica whose SQL Server instance has not yet been upgraded to the same version.

So in my case, I upgraded first the WIN20169SQL16\SQL12 replica and then the WIN20168SQL16\SQL12 as shown below:

blog 111 - 5- AG config after upgrade node

The new transitioned infrastructure is shown in the picture below:

blog 111 -51- Transitioned infrastructure

 

The next part of the migration includes a short downtime. The previous steps did not imply any outage so far.

Basically the next part of the procedure will include the following steps:

  • Choose the next replica in 2016 version that will be involved as primary and change its replication mode to synchronous in order to prevent losing data
  • Failover the availability group to this replica (at this step old replicas in 2012 version will not be synchronized because the new primary replica may no longer ship logs to them as said previously)

blog 111 - 5- AG health state after upgrade node

  • Change the replication mode of the second replica in 2016 to synchronous to meet the initial configuration
  • Remove old replicas in 2012 from the availability group

blog 111 -52- Transitioned infrastructure

The T-SQL script was as follows in my case:

:CONNECT WIN20121SQL16\SQL12

-- Change temporary replication to synchronous
-- for next SQL Server 2016 primary replica 
USE [master]
GO
ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20168SQL16\SQL12' 
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO

:CONNECT WIN20168SQL16\SQL12

-- Initiate failover to next SQL Server 2016 primary replica  
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
FAILOVER;
GO


:CONNECT WIN20168SQL16\SQL12

-- Change the replication mode of the second SQL Server 2016 replica
-- to synchronous with automatic failover (to meet the initial configuration)
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20169SQL16\SQL12' 
WITH ( AVAILABILITY_MODE = SYNCHRONOUS_COMMIT );

ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20169SQL16\SQL12' 
WITH ( FAILOVER_MODE = AUTOMATIC );


:CONNECT WIN20168SQL16\SQL12

-- Remove old replicas (2012)
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
REMOVE REPLICA ON N'WIN20121SQL16\SQL12';
GO

ALTER AVAILABILITY GROUP [dummygrp]
REMOVE REPLICA ON N'WIN20122SQL16\SQL12';
GO

 

Let's go back to the cluster side; the next step consists in removing the old cluster nodes from the WSFC by using the usual commands such as Remove-ClusterNode.

blog 111 - 6- Remove old cluster nodes

… And we may finally update the cluster functional level to 9 (2016 version). Just be aware that upgrading the cluster functional level to 2016 makes the process irreversible. Reverting back to the initial configuration will simply not be possible, or at least it will require extra steps and a longer downtime.
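The corresponding PowerShell steps might look like this sketch (again, the cluster name DummyCluster is an assumption):

# Remove the two old Windows Server 2012 nodes
Remove-ClusterNode -Cluster DummyCluster -Name WIN20121SQL16
Remove-ClusterNode -Cluster DummyCluster -Name WIN20122SQL16
# Once only Windows Server 2016 nodes remain, raise the functional level (irreversible!)
Update-ClusterFunctionalLevel -Cluster DummyCluster
# Verify the new level (9 = Windows Server 2016)
(Get-Cluster -Name DummyCluster).ClusterFunctionalLevel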

blog 111 - 7 - Update cluster functionalLevel 2016

And let’s get the final view of our upgraded availability group dashboard:

blog 111 - 8 - AG 2016 dashboard

The listener stays the same and the change is almost transparent to applications.

 

Bottom line

In this blog post we had a glimpse of the new capabilities of both Windows 2016 and SQL Server 2016 in terms of rolling upgrades. Of course, reality will probably be a little more complicated when introducing other parameters such as the customer context, the number of availability groups, the performance impact of adding temporary replicas, external dependencies and so on. But these features seem promising and may be very helpful for future migration scenarios. I'm looking forward to experimenting with them at customer sites!

Happy upgrade!

 

 

The article Migrating your existing availability group infrastructure to 2016 appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 8 – Transportable tablespaces

Fri, 2016-12-23 01:47

My colleague Franck posted a comment on one of my last “Can I do it with PostgreSQL” blog posts: “Here is an idea for a future “Can I do it with PostgreSQL?”. My favorite Oracle feature: transportable tablespaces.” When you’ve read my post about how PostgreSQL implements tablespaces, you probably already know the answer: no, you cannot do this in PostgreSQL. Having thought about this for some time, I decided to check the Oracle documentation for what transportable tablespaces are good for. I know the basics and how you can do it in Oracle, but better to check twice :)

According to the documentation the main goal of transportable tablespaces is to transport data. This means you can transport one or more tablespaces (which then become a transportable tablespace set) from one host to another and then “plug” that set into an existing database (as long as the tablespaces are self-contained). One great thing you can do with it, depending on the use case, is that the target database does not need to have the same standard block size as the source database. This means you can transport a tablespace with a 16k block size to a database with a default block size of 8k. This is another thing you cannot do in PostgreSQL: in PostgreSQL the block size is set at configure/compile time. Once you have the compiled binaries you cannot change that afterwards.
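Just to illustrate that last point: a hedged example of how you would have to build PostgreSQL binaries with a 16k block size (the installation prefix is an assumption):

./configure --with-blocksize=16 --prefix=/u01/app/postgres/product/96
make && make install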

Probably the greatest benefit of transportable tablespaces is that they save you time in moving your data around. You just copy the data files that make up your tablespace(s) and then use expdp/impdp for the metadata, that's it. When you go for the multitenant architecture you can use transportable tablespaces to make a non-CDB a pluggable database on the target, too. For a more detailed introduction you can check the documentation (linked above).
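For comparison, a hedged sketch of the Oracle transportable tablespace workflow (the tablespace name TBS_DATA, the directory object DATA_PUMP_DIR and all file names are assumptions):

SQL> alter tablespace TBS_DATA read only;
$ expdp system directory=DATA_PUMP_DIR dumpfile=tts_tbs_data.dmp transport_tablespaces=TBS_DATA
-- copy the data file(s) of TBS_DATA and the dump file to the target host, then:
$ impdp system directory=DATA_PUMP_DIR dumpfile=tts_tbs_data.dmp transport_datafiles='/u02/oradata/TARGET/tbs_data01.dbf'
SQL> alter tablespace TBS_DATA read write;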

Back to PostgreSQL: What options do you have for transporting your data from one database to another?

The tool of choice for most cases probably is pg_dump. The big advantage of pg_dump is that you can use it over the network and directly write everything you want to export into the target database using a pipe:

pg_dump -C -h [SOURCE_HOST] -U [USER] [DATABASE] | psql -h [TARGETHOST] -U [TARGETUSER] [TARGETDATABASE]

You can even do that using parallel processes when you combine pg_dump with pg_restore, although using a pipe is not supported in that case. To demonstrate this, we'll need some sample data (I am using PostgreSQL 9.5.4 as the source):

postgres@pgbox:/home/postgres/ [PG954] psql
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create database test;
CREATE DATABASE
(postgres@[local]:5438) [postgres] > \c test
You are now connected to database "test" as user "postgres".
(postgres@[local]:5438) [test] > create table t1 ( a int, b int );
CREATE TABLE
(postgres@[local]:5438) [test] > create table t2 ( a int, b int );
CREATE TABLE
(postgres@[local]:5438) [test] > insert into t1 values (generate_series(1,1000000), generate_series(1,1000000));
INSERT 0 1000000
(postgres@[local]:5438) [test] > insert into t2 select * from t1;
INSERT 0 1000000

Let's say I want to copy the data to a PostgreSQL 9.6.1 instance on the same host. How can I do that? Quite easily:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -V
pg_dump (PostgreSQL) 9.6.1 dbi services build
postgres@pgbox:/home/postgres/ [PG961] mkdir /var/tmp/exp
postgres@pgbox:/home/postgres/ [PG961] pg_dump -h localhost -p 5438 -C -c -F d -j 2 -f /var/tmp/exp/ test
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -d postgres -F d -C /var/tmp/exp/
postgres@pgbox:/home/postgres/ [PG961] psql -c "\l" postgres
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
(4 rows)

postgres@pgbox:/home/postgres/ [PG961] psql -c "\d" test
        List of relations
 Schema | Name | Type  |  Owner   
--------+------+-------+----------
 public | t1   | table | postgres
 public | t2   | table | postgres
(2 rows)

Not a big deal and possible over the network, too.

Another option would be to use pg_basebackup to create a completely new instance from the source (source and target need to be of the same PostgreSQL release in this case). Again, this is possible over the network.
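A hedged example of such a pg_basebackup call (host, port, user and target directory are assumptions):

pg_basebackup -h sourcehost -p 5438 -U replication_user -D /u02/pgdata/PG_NEW -X stream -P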

You can even use rsync to copy the whole cluster to a new host and then do a second rsync while the source is down for a short time. The second pass will copy only the files that changed since the first rsync and will probably be very fast, but you will need a downtime of the source for it.
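A sketch of that two-pass rsync approach (paths and the target host name are assumptions):

# first pass while the source instance is still running
rsync -av /u02/pgdata/PG1/ targethost:/u02/pgdata/PG1/
# stop the source instance for the short downtime window
pg_ctl -D /u02/pgdata/PG1 stop -m fast
# the second pass only transfers the files that changed since the first one
rsync -av --delete /u02/pgdata/PG1/ targethost:/u02/pgdata/PG1/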

There are other methods for moving your data around in PostgreSQL but the above are the most popular.

Hope this helps …

 

The article Can I do it with PostgreSQL? – 8 – Transportable tablespaces appeared first on Blog dbi services.

SQL Plan Directives in 12cR2. Behavior is different than in 12cR1.

Wed, 2016-12-21 12:04

During the Frankfurter IT Tage I gave a presentation about SQL Plan Directives and how to reproduce a testcase in a test environment for a plan that used SQL Plan Directives. In that context I did a couple of tests with 12cR2 (12.2.0.1) and wanted to blog about the change in behavior in that release.

Here is the testcase I initially ran in 12.1.0.2:

First I created a table with 4 columns: A, B, C and D, and loaded it with 1000 rows:


10:25:27 demo@GEN12102> create table DEMO_TABLE as
10:25:27 2 select mod(rownum,2) a ,mod(rownum,2) b ,mod(rownum,2) c ,mod(rownum,2) d
10:25:27 3 from dual connect by level <=1000;
 
10:25:30 demo@GEN12102> select * from DEMO_TABLE;
A B C D
---------- ---------- ---------- ----------
1 1 1 1
...
0 0 0 0
1 1 1 1
0 0 0 0

I.e. there is a correlation between the columns (all columns have a value 0 or all have a value 1).

Due to the new feature of stats gathering during CTAS and INSERT APPEND into empty tables in 12c the table has stats already:


10:28:29 demo@GEN12102> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics
10:28:29 2 where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:25:27
B 2 .5 10:25:27
C 2 .5 10:25:27
D 2 .5 10:25:27

Let’s run a query:


10:29:40 demo@GEN12102> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:29:40 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0
0 0 0 0
 
10:29:40 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------

There is a misestimation of the calculated cardinality (E-Rows) versus the actual cardinality (A-Rows) by a factor of 8 (63:500).
Oracle detected that and put the cursor on IS_REOPTIMIZABLE=Y:


10:32:49 demo@GEN12102> select sql_id,child_number,is_reoptimizable from v$sql
10:32:49 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 Y

Executing the SQL again uses statistics feedback:


10:33:42 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Note
-----
- statistics feedback used for this statement

The computed cardinality is correct now.

After flushing the SQL Plan Directives to the SYSAUX-tablespace I can see them with the internal_state NEW:


10:34:37 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:34:39 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used
10:34:39 2 from dba_sql_plan_directives where directive_id in(
10:34:39 3 select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
10:34:39 4 ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

Executing a different SQL with the same type of predicate will use the SQL Plan Directive and dynamically gather stats:


10:35:54 demo@GEN12102> select /*+ gather_plan_statistics */ count(*) from DEMO_TABLE
10:35:54 2 where a=0 and b=0 and c=0 and d=0;
 
COUNT(*)
----------
500
 
10:35:54 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

The internal_state of the directive changed to MISSING_STATS:


10:37:18 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:37:20 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>MISSING_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

The SPD is still being used until fresh stats are gathered. Oracle will create extended (multi column) statistics then:


10:38:27 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
10:38:27 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
 
10:38:27 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
10:39:42 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:40:01 demo@GEN12102> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:25:27
B 2 .5 10:25:27
C 2 .5 10:25:27
D 2 .5 10:25:27
 
10:40:03 demo@GEN12102> exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO',no_invalidate=>false);
 
10:40:05 demo@GEN12102> select column_name,num_distinct,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT LAST_ANA
------------ ------------ --------
A 2 10:40:04
B 2 10:40:04
C 2 10:40:04
D 2 10:40:04
SYS_STSPJNMI 2 10:40:04
 
10:40:05 demo@GEN12102> select extension_name,extension from user_stat_extensions where table_name='DEMO_TABLE';
 
EXTENSION_NAME
---------------
EXTENSION
----------------------------------------------------------------------------------------------------
SYS_STSPJNMIY_S
("A","B","C","D")

Running a SQL again still dynamically gathers stats, but will change the internal state to HAS_STATS:


10:40:37 demo@GEN12102> select /*+ gather_plan_statistics */ distinct b,c,d,a from DEMO_TABLE
10:40:37 2 where a=0 and b=0 and c=0 and d=0;
 
B C D A
---------- ---------- ---------- ----------
0 0 0 0
 
10:40:37 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
10:40:56 10:40:56

A next execution will use the extended stats instead of the directives:


10:45:10 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
10:45:10 2 where a=1 and b=1 and c=1 and d=1;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
 
10:45:10 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

Dropping the extended stats will result in a cardinality misestimate and set the directive to PERMANENT:


10:45:58 demo@GEN12102> exec dbms_stats.drop_extended_stats(user,'DEMO_TABLE','("A","B","C","D")');
 
10:46:19 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a from DEMO_TABLE
10:46:19 2 where a=1 and b=1 and c=1 and d=1;
 
A
----------
1
 
10:46:19 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))
 
10:47:10 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:47:10 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>PERMANENT</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

Future executions of such SQL will use the directive again:


10:47:46 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

That’s how it worked in 12.1.0.2. Here’s the behavior of 12.2.0.1:

First of all I have to enable adaptive statistics and switch on the preference to automatically create extended statistics:


10:49:22 demo@GEN12201> alter system set optimizer_adaptive_statistics=true;
 
10:49:22 demo@GEN12201> exec DBMS_STATS.SET_PARAM ('AUTO_STAT_EXTENSIONS','ON');

Creating the table with the appropriate stats is the same as in 12.1.:


10:50:23 demo@GEN12201> create table DEMO_TABLE as
10:50:23 2 select mod(rownum,2) a ,mod(rownum,2) b ,mod(rownum,2) c ,mod(rownum,2) d
10:50:23 3 from dual connect by level <=1000;
 
10:51:02 demo@GEN12201> select * from DEMO_TABLE;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
...
1 1 1 1
0 0 0 0
 
10:51:31 demo@GEN12201> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics
10:51:31 2 where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:50:23
B 2 .5 10:50:23
C 2 .5 10:50:23
D 2 .5 10:50:23

Let’s run the first query:


10:51:42 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:51:42 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
..
0 0 0 0
0 0 0 0
 
10:51:42 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

As in 12.1. we have the misestimate in the computed cardinality (E-Rows) versus actual cardinality (A-Rows).

But here comes the first difference to 12.1.: The cursor is not marked as reoptimizable:


10:53:31 demo@GEN12201> select sql_id,child_number,is_reoptimizable from v$sql
10:53:31 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 N

I have to execute the SQL a second time to make the cursor IS_REOPTIMIZABLE=Y:


10:54:27 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:54:27 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0

The computed cardinality is still wrong:


10:54:27 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

But the cursor is on IS_REOPTIMIZABLE=Y now:


10:54:27 demo@GEN12201> select sql_id,child_number,is_reoptimizable from v$sql
10:54:27 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 Y

However, statistics feedback is NOT USED in 12.2.:


10:56:33 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:56:33 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0
 
10:56:33 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

I.e. the cardinality is still wrong and the Note saying “statistics feedback used for this statement” is not there.
But I can see a new SPD created:


10:58:37 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
10:58:39 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used
10:58:39 2 from dba_sql_plan_directives where directive_id in(
10:58:39 3 select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
10:58:39 4 ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

And it’s being used:


10:59:08 demo@GEN12201> select /*+ gather_plan_statistics */ count(*) from DEMO_TABLE
10:59:08 2 where a=0 and b=0 and c=0 and d=0;
 
COUNT(*)
----------
500
 
10:59:08 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

The new feature in 12cR2 is the created Directive of type DYNAMIC_SAMPLING_RESULT:


10:59:31 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
10:59:31 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>MISSING_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

I.e. the result of the dynamically sampled data is stored in the repository. The SQL_ID 7fp7c6kcgmzux of a stripped Dynamic Sampling query is stored as well, so that the result can be reused by other queries in the future. I.e. Oracle just has to generate the SQL_ID of the Dynamic Sampling query and can use a prior result if it finds it in the repository.

As the internal state of the directive is on MISSING_STATS and the DBMS_STATS-preference AUTO_STAT_EXTENSIONS is set to ON, Oracle will create extended stats when gathering stats next time:


11:02:17 demo@GEN12201> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:50:23
B 2 .5 10:50:23
C 2 .5 10:50:23
D 2 .5 10:50:23
 
11:04:10 demo@GEN12201> exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO',no_invalidate=>false);
 
11:04:11 demo@GEN12201> select column_name,num_distinct,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT LAST_ANA
------------ ------------ --------
A 2 11:04:10
B 2 11:04:10
C 2 11:04:10
D 2 11:04:10
SYS_STSPJNMI 2 11:04:10
 
11:04:11 demo@GEN12201> select extension_name,extension from user_stat_extensions where table_name='DEMO_TABLE';
 
EXTENSION_NAME
---------------
EXTENSION
----------------------------------------------------------------------------------------------------
SYS_STSPJNMIY_S
("A","B","C","D")

Once a query is executed again the internal state changes to HAS_STATS (same as in 12.1.):


11:04:33 demo@GEN12201> select /*+ gather_plan_statistics */ distinct b,c,d,a from DEMO_TABLE
11:04:33 2 where a=0 and b=0 and c=0 and d=0;
 
B C D A
---------- ---------- ---------- ----------
0 0 0 0
 
11:04:33 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
11:04:35 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:04:35 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

The next query uses the stats instead of the SPD:


11:05:23 demo@GEN12201> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
11:05:23 2 where a=1 and b=1 and c=1 and d=1;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
 
11:05:23 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

When dropping the extended stats the estimated cardinality is wrong again:


11:05:49 demo@GEN12201> exec dbms_stats.drop_extended_stats(user,'DEMO_TABLE','("A","B","C","D")');
 
11:05:57 demo@GEN12201> select /*+ gather_plan_statistics */ distinct a from DEMO_TABLE
11:05:57 2 where a=1 and b=1 and c=1 and d=1;
 
A
----------
1
 
11:05:57 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

However, the SPD won't change its state to PERMANENT as in 12.1.:


11:06:38 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:06:38 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

I.e. future queries will have the misestimate, because the SPD remains on HAS_STATS:


11:07:16 demo@GEN12201> select /*+ gather_plan_statistics */ distinct b from DEMO_TABLE
11:07:16 2 where a=1 and b=1 and c=1 and d=1;
 
B
----------
1
 
11:07:16 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

REMARK: Interestingly, the E-Rows value is now 62 instead of 63 as before, but I haven't checked the reason for that yet.


11:07:46 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:07:46 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

So in summary there are several changes in 12.2. compared to 12.1.:

1.) Statistics feedback won’t mark the cursor for reoptimization after the first execution. It needs 2 executions with a cardinality misestimate to mark the cursor IS_REOPTIMIZABLE=Y
2.) If the cursor is marked IS_REOPTIMIZABLE=Y it won’t be parsed again with additional information from cardinality feedback. In 12.2. only a SQL Plan Directive is being created.
3.) When executing a query which can use a SQL Plan Directive the first time, a new SQL Plan Directive of type DYNAMIC_SAMPLING_RESULT will be created. Future queries can use that result.
4.) Dropping automatically created extended stats won’t change the state of the SPD from HAS_STATS to PERMANENT with next execution of a query (and flushing the SPD).

The bottom line is that even if you enable SPDs in 12.2 like in 12.1 (with optimizer_adaptive_statistics = TRUE and DBMS_STATS.SET_PARAM (‘AUTO_STAT_EXTENSIONS’,’ON’)), the behavior in 12.2 is different. I would describe the change (using statistics feedback only on the 2nd execution of a query to create an SPD and not considering the feedback for the next parse) as a more conservative approach towards more stable plans, being less “adaptive” than in 12.1.

 

The article SQL Plan Directives in 12cR2. Behavior is different than in 12cR1. appeared first on Blog dbi services.

Documentum story – Add the Java urandom inside the wlst.sh script to improve the performance of all WLST sessions

Tue, 2016-12-20 04:13

WebLogic Server takes a long time (e.g. over 15 minutes) to start up. Can the performance be improved?
Using the /dev/urandom option in the WebLogic Java Virtual Machine parameters of the startup script can be the solution.
In fact, /dev/random is a blocking device: during times of low entropy (when there are not enough random bits left in it), /dev/random will block any process reading from it (and hence hang) until more random bits have been generated. /dev/urandom, in contrast, does not block, which is why pointing the JVM to it avoids the hang.
So, if you are running some custom WLST scripts to start up your WebLogic infrastructure, you have to know that there are a few things you could try to improve performance.

This change improves the performance of all your WLST sessions. There is no functional impact in modifying the wlst.sh script.
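Before touching anything, you can check how much entropy is currently available on the server; a persistently low value is a hint that processes reading from the blocking /dev/random may hang:

cat /proc/sys/kernel/random/entropy_avail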

1- Log in to the WebLogic Server.
2- Use the Unix (installation owner) account.
3- Go to the following directory:

cd $ORACLE_HOME/oracle_common/common/bin

4- Edit the wlst.sh script with vi editor
5- Add the following option in JVM_ARGS

 JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${UTILS_MEM_ARGS} ${COMMON_JVM_ARGS} ${CONFIG_JVM_ARGS} -Djava.security.egd=file:///dev/./urandom" 

 
 
6- Please note that if you specifically want to reduce the load time of your startup script, you can do it by providing:

java weblogic.WLST -skipWLSModuleScanning <script>.py 

It can be an advantage in some cases.

Other ways to increase the performance of your WLS:

1- This blog deals especially with the urandom parameter.
Be aware that there are other ways to tune your WebLogic Server(s) well.
Just to give you an overview of possible changes:
Set Java parameters in the WebLogic Server start script or directly in JAVA_OPTS:
• urandom (see above)
• Java heap size:
For higher performance throughput, set (if possible) the minimum Java heap size equal to the maximum heap size. For example:

"$JAVA_HOME/bin/java" -server –Xms512m –Xmx512m -classpath $CLASSPATH ... 

 
2- Think about the number of Work Managers you really need:
A default Work Manager might not be adapted to every application's needs.
In that case, you'll have to create a custom Work Manager.

3- Remember to tune the stuck thread detection properly.
WLS already detects when a thread becomes a “stuck” thread, but you have to know that the stuck status does not allow it to finish its current work or accept new work.
So it directly and quickly impacts the performance of the machine.
 
4- Don't forget the basics: operating system tuning and/or network tuning.

 

The article Documentum story – Add the Java urandom inside the wlst.sh script to improve the performance of all WLST sessions appeared first on Blog dbi services.

Oracle 12cR2 – DataGuard and TCP.VALID_NODE_CHECKING

Tue, 2016-12-20 03:58

Security is quite an important topic, especially when running Oracle in the cloud. To make your DataGuard a little bit more secure, you can enable the TCP.VALIDNODE_CHECKING feature. I call it the SQL*Net Firewall.

Enabling the SQL*Net Firewall is quite straightforward. There are only 3 sqlnet.ora parameters involved:

  • TCP.VALIDNODE_CHECKING
  • TCP.INVITED_NODES
  • TCP.EXCLUDED_NODES

The parameter TCP.VALIDNODE_CHECKING turns this feature on or off. If enabled, then the incoming connections are allowed only if they originate from a node that conforms to the list specified by TCP.INVITED_NODES or TCP.EXCLUDED_NODES parameters.

The parameter TCP.INVITED_NODES lists all clients that are allowed access to the database.

The parameter TCP.EXCLUDED_NODES specifies which clients are not allowed to access the database. In fact, they cannot even reach the listener.

Some important notes before you start configuring valid node checking:
  • All host names in TCP.INVITED_NODES or TCP.EXCLUDED_NODES must be resolvable or the listener will not start. In case a host name is not resolvable, the start of the listener fails with the error message: TNS-00584: Valid node checking configuration error. In the listener trace file, you can identify which host name exactly is causing the issue. In my case it is the host dbidg04 which is not resolvable. Enabling listener tracing is very useful, especially if you have a long list of names and IP addresses (a sketch of the listener.ora tracing parameters is shown after this list).
  [20-DEC-2016 08:18:35:894] nlvlloadp: Adding host dbidg04
  [20-DEC-2016 08:18:35:894] snlinGetAddrInfo: entry
  [20-DEC-2016 08:18:35:937] snlinGetAddrInfo: getaddrinfo() failed with error -2
  [20-DEC-2016 08:18:35:938] snlinGetAddrInfo: exit
  [20-DEC-2016 08:18:35:938] nlvlloadp: Error 7001 while resolving host 17712412
  [20-DEC-2016 08:18:35:938] nlvlloadp: exit
  [20-DEC-2016 08:18:35:938] ntvllt: Problem in Loading tcp.invited_nodes
  • Any change of the values in TCP.VALIDNODE_CHECKING, TCP.INVITED_NODES or TCP.EXCLUDED_NODES requires the listener to be stopped and started. Just reloading the listener can be very dangerous in some situations. See also Bug 22194469 : TCP.INVITED_NODES IS DISABLED AFTER RELOADING LISTENER
  • The TCP.INVITED_NODES list takes precedence over the TCP.EXCLUDED_NODES if both lists are used. E.g. if the following entries are present, then the host 192.168.56.203 is allowed to connect.
  TCP.INVITED_NODES  = (127.0.0.1, 192.168.56.201, 192.168.56.202, 192.168.56.203)
  TCP.EXCLUDED_NODES = (192.168.56.203)
  • All entries in TCP.INVITED_NODES or TCP.EXCLUDED_NODES have to be on one line. In case the entries are not on one line, the listener will not start and you will receive the following error: TNS-00583: Valid node checking: unable to parse configuration parameters. E.g. the following list is invalid:
  TCP.INVITED_NODES = (127.0.0.1,
  192.168.56.201,
  192.168.56.202)
  • Adding empty placeholder strings like (comma, space, comma) is possible; however, I would not recommend it.
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202, , ,)
  •  Entries can be added via Net Manager as well. Local -> Profile -> General -> Access Rights. HINT: If you have some special formatting or commenting in your sqlnet.ora, the netmgr might reorganize your file. So better do a copy beforehand. ;-)
  • In case you work with SCAN and the GRID Infrastructure GRID_HOME/network/admin/sqlnet.ora file, then don’t forget to add your NODE VIP’s and SCAN VIP’s to your TCP.INVITED_NODES list
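Listener logging and tracing themselves are configured in the listener.ora. A minimal sketch, assuming the default listener name LISTENER and placeholder directories (adjust them to your environment; note that with ADR enabled, the default, the files end up under the diagnostic destination and the directory parameters may be ignored):

LOGGING_LISTENER = ON
LOG_DIRECTORY_LISTENER = /u01/app/oracle/network/log
LOG_FILE_LISTENER = listener.log
TRACE_LEVEL_LISTENER = USER
TRACE_DIRECTORY_LISTENER = /u01/app/oracle/network/trace
TRACE_FILE_LISTENER = listener.trc

A stop and start of the listener afterwards makes sure the new settings are picked up.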

Knowing all that, we can start configuring our SQL*Net Firewall for our DataGuard environment. In a minimum configuration, we need to add the localhost, all hosts involved in the DataGuard configuration (primary, standby 1, standby 2, observer and so on) and of course the application server. Right now I have my primary (192.168.56.201), my standby (192.168.56.202) and the localhost in the invited nodes list. The application server 192.168.56.203 is missing.

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202)

If I try to connect now from the application server, I do get the following error: TNS-12547: TNS:lost contact. As you can see, not even a tnsping is possible.

oracle@dbidg03:/home/oracle/ [DBIT122] tnsping DBIT122_SITE1

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 20-DEC-2016 09:11:51

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg01)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DBIT122_SITE1_DGMGRL)))
TNS-12547: TNS:lost contact

A sqlplus connection gives you the same error:

oracle@dbidg03:/home/oracle/ [DBIT122] sqlplus system@DBIT122_SITE1

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 20 09:16:00 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:
ERROR:
ORA-12547: TNS:lost contact

The TNS: lost contact can mean anything. Turning on listener logging gives you a much clearer error message saying that an incoming connect from 192.168.56.203 is rejected.

Incoming connection from 192.168.56.203 rejected
20-DEC-2016 09:11:51 * 12546
TNS-12546: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00516: Permission denied

Through listener logging, you can monitor all incoming connections that are rejected. That’s quite cool from my point of view. Now … let’s add the application server (192.168.56.203) to the list and restart the listener (not reload). Don’t forget to do it on the Primary and the Standby.

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202, 192.168.56.203)

Now my application can tnsping and connect.

oracle@dbidg03:/home/oracle/ [DBIT122] tnsping DBIT122_SITE1

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 20-DEC-2016 09:21:01

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg01)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DBIT122_SITE1_DGMGRL)))
OK (0 msec)
oracle@dbidg03:/home/oracle/ [DBIT122]
oracle@dbidg03:/home/oracle/ [DBIT122]
oracle@dbidg03:/home/oracle/ [DBIT122] sqlplus system@DBIT122_SITE1

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 20 09:21:10 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:
Last Successful login time: Mon Dec 19 2016 10:44:52 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>

 

Conclusion

Setting up valid node checking is quite easy, and the listener log gives you clear error messages if there are any rejected hosts. Make sure that you have a DataGuard test environment where you can test this feature including switchover, failover and so on. And, very important, verify your name resolution before you add any hostname to the invited nodes list. The only drawback I see is that the listener has to be stopped and started, meaning that new connections are not possible for a very short period of time.
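A quick name resolution check on the database hosts before touching the sqlnet.ora could look like this (a simple sketch using standard OS tools; getent also honours /etc/hosts while nslookup only queries DNS):

getent hosts dbidg04
nslookup dbidg04

If the lookup returns nothing or fails, the name is not resolvable and the listener will refuse to start once the host is listed in TCP.INVITED_NODES or TCP.EXCLUDED_NODES.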

 

 

 

Cet article Oracle 12cR2 – DataGuard and TCP.VALID_NODE_CHECKING est apparu en premier sur Blog dbi services.

Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG

Tue, 2016-12-20 02:00

Ref: EMC article number 000335987
The last publication date is Sat Feb 20 21:39:14 GMT 2016. Here is the link: https://support.emc.com/kb/335987

After upgrading from 6.7.x to 7.2, the following warning message is logged in JMS log files: com.documentum.fc.common.DfNewInterprocessLockImpl – [DFC_FILE_LOCK_ACQUIRE_WARN] Failed to acquire lock proceeding ahead with no lock java.nio.channels.OverlappingFileLockException at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

In order to avoid this warning, EMC has provided a solution (SR #69856498) that will be described below:

By default ACS and ServerApp dfc.properties are pointing to $DOCUMENTUM_SHARED/config/dfc.properties.

The solution is to add a separate ‘dfc.data.dir’ cache folder location in the ACS and ServerApp dfc.properties.
After a Java Method Server restart, two separate cache folders are created inside $DOCUMENTUM_SHARED/jboss7.1.1/server and the WARNING messages are gone from acs.log.

In fact, this is just a warning that someone else has acquired a lock on the physical file (in this case dfc.keystore). Since ServerApps (Method Server) and ACS invoke DFC simultaneously, both try to acquire a lock on the dfc.keystore file and Java throws OverlappingFileLockException. DFC then warns that it could not lock the file and proceeds without a lock. Ideally this should be just an info message in this case, where the file lock is acquired for read-only access. But the same logic is used by other functionality like registry updates and BOF cache updates, where this failure should be treated as a genuine warning or error. Going forward, engineering will have to correct this code by taking appropriate actions for each functionality. There is no functional impact in using a different data directory folder.

Please proceed as below to solve it:

  • Login to the Content Server
  • Change the current user to dmadmin (the administrator account)
  • Create some folders using:
 mkdir $DOCUMENTUM_SHARED/acs
 mkdir $DOCUMENTUM_SHARED/ServerApps
 mkdir $DOCUMENTUM_SHARED/bpm

 

  • Update all necessary dfc.properties files (with vi editor):

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/acs

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/bpm

===============================================================================================================================
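If you prefer to script these changes instead of editing each file with vi, appending the lines could be done like this (just a sketch; note that the shell expands $DOCUMENTUM_SHARED, so the absolute path ends up in the files):

echo "dfc.data.dir=$DOCUMENTUM_SHARED/acs" >> $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties
echo "dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps" >> $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties
echo "dfc.data.dir=$DOCUMENTUM_SHARED/bpm" >> $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties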

  • Verify that the recently created folders are empty using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

 

  • Restart the JMS using:
sh -c "cd $DOCUMENTUM_SHARED/jboss7.1.1/server;./stopMethodServer.sh"
sh -c "$DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh"

 

Verification
  • Verify that the recently created folders are now populated with default files and folders using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

The folders should not be empty anymore.

  • Disconnect from the Content Server.

 

Using this procedure, you won’t see this WARNING message anymore.
Regards,

Source : EMC article number : 000335987

 

Cet article Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG est apparu en premier sur Blog dbi services.

Oracle 12cR2: AWR views in multitenant

Mon, 2016-12-19 13:42

In a previous post I explained how the AWR views have changed between 12.1.0.1 and 12.1.0.2 and now in 12.2.0.1 they have changed again. This is a good illustration of multitenant object link usage.

What’s new in 12cR2 is the ability to run AWR snapshots at CDB or PDB level. I really think that it makes more sense to read an AWR report at CDB level because it’s about analysing the system (=instance) activity. But with PDBaaS I can understand the need to give a report to analyse PDB sessions, resources and statements.

I’ll start with the conclusion – a map of the AWR views showing which ones read from CDB-level snapshots, PDB snapshots, or both:

[Diagram: map of the AWR views showing which ones read from CDB-level snapshots, PDB snapshots, or both]
I’ll explain AWR reports in a future post. Basically when you run awrrpt.sql from CDB$ROOT you get CDB snapshots and when you run it from PDB you have the choice.

In the diagram above, just follow the arrows to know which view reads from PDB or CDB or both. You see two switches between the root and the PDB: data link for one way and common view for the other way. Note that all are metadata links, so the switches also occur at parse time.

WRM$_

Let’s start from the table where AWR snapshots are stored:


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name='WRM$_SNAPSHOT';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ------------------------------ ----------------------- ------------------
SYS WRM$_SNAPSHOT TABLE METADATA LINK

This is a table. METADATA LINK means that the structure is the same in all containers, but data is different.

I have the following containers:

SQL> select con_id,dbid,name from v$containers;
 
CON_ID DBID NAME
---------- ---------- ------------------------------
1 904475458 CDB$ROOT
2 2066620152 PDB$SEED
3 2623271973 PDB1

From CDB$ROOT I see data for the CDB:

SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

and from PDB I see snapshots taken from PDB:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So remember, CDB$ROOT has 91 snapshots with DBID= 904475458 and PDB1 has 79 snapshots with DBID=2623271973

AWR_ROOT_ and AWR_PDB_

Views on WRM$_SNAPSHOT are referenced in DBA_DEPENDENCIES:


SQL> select owner,name,type from dba_dependencies where referenced_name='WRM$_SNAPSHOT' and type like 'VIEW';
 
OWNER NAME TYPE
----- ------------------------------ -------------------
SYS AWR_ROOT_SNAPSHOT VIEW
SYS AWR_ROOT_SYSSTAT VIEW
SYS AWR_ROOT_ACTIVE_SESS_HISTORY VIEW
SYS AWR_ROOT_ASH_SNAPSHOT VIEW
SYS AWR_PDB_SNAPSHOT VIEW
SYS AWR_PDB_ACTIVE_SESS_HISTORY VIEW
SYS AWR_PDB_ASH_SNAPSHOT VIEW

I’m interested in views that show snapshot information: AWR_ROOT_SNAPSHOT and AWR_PDB_SNAPSHOT


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') order by 3;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM METADATA LINK
PUBLIC AWR_PDB_SNAPSHOT SYNONYM METADATA LINK
SYS AWR_ROOT_SNAPSHOT VIEW DATA LINK
SYS AWR_PDB_SNAPSHOT VIEW METADATA LINK

Besides the synonyms, we have a metadata link view AWR_PDB_SNAPSHOT and a data link view AWR_ROOT_SNAPSHOT. The data link one means that it switches to CDB$ROOT when queried from a PDB. Here is the definition:


SQL> select owner,view_name,container_data,text from dba_views where view_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS AWR_ROOT_SNAPSHOT Y select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0
 
SYS AWR_PDB_SNAPSHOT N select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0

Same definition. The difference is that AWR_PDB_SNAPSHOT reads from the current container, but AWR_ROOT_SNAPSHOT, being a DATA LINK, always reads from CDB$ROOT.

This is what we can see:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

This query when run in PDB1 displays the 91 snapshots from the CDB.

SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

This one shows what is in the current container.

Those are the views used by the AWR report, depending on the AWR location choice. But what about the DBA_HIST_ views that we know and use from previous versions?

DBA_HIST_ and CDB_HIST_

I continue to follow the dependencies:

SQL> select owner,name,type from dba_dependencies where referenced_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') and name like '%SNAPSHOT' order by 3;
 
OWNER NAME TYPE
------ ------------------------------ -------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM
PUBLIC AWR_PDB_SNAPSHOT SYNONYM
SYS DBA_HIST_SNAPSHOT VIEW
SYS CDB_HIST_SNAPSHOT VIEW
 
SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
SYS DBA_HIST_SNAPSHOT VIEW METADATA LINK
SYS CDB_HIST_SNAPSHOT VIEW METADATA LINK
PUBLIC DBA_HIST_SNAPSHOT SYNONYM METADATA LINK
PUBLIC CDB_HIST_SNAPSHOT SYNONYM METADATA LINK

Here are the views I’m looking for. They are metadata link only. Not data link. This means that they do not switch to CDB$ROOT.

But there’s more in the view definition:

SQL> select owner,view_name,container_data,text from dba_views where view_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS DBA_HIST_SNAPSHOT N select "SNAP_ID","DBID","INSTANCE_NUMBER","STARTUP_TIME","BEGIN_INTERVAL_TIME","
END_INTERVAL_TIME","FLUSH_ELAPSED","SNAP_LEVEL","ERROR_COUNT","SNAP_FLAG","SNAP_
TIMEZONE","CON_ID" from AWR_ROOT_SNAPSHOT
 
SYS CDB_HIST_SNAPSHOT Y SELECT k."SNAP_ID",k."DBID",k."INSTANCE_NUMBER",k."STARTUP_TIME",k."BEGIN_INTERV
AL_TIME",k."END_INTERVAL_TIME",k."FLUSH_ELAPSED",k."SNAP_LEVEL",k."ERROR_COUNT",
k."SNAP_FLAG",k."SNAP_TIMEZONE",k."CON_ID", k.CON$NAME, k.CDB$NAME FROM CONTAINE
RS("SYS"."AWR_PDB_SNAPSHOT") k

DBA_HIST_SNAPSHOT is a simple view on AWR_ROOT_SNAPSHOT which, as we have seen above, always shows snapshots from the CDB:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

Then CDB_HIST_SNAPSHOT reads AWR_PDB_SNAPSHOT, which shows current container snapshots. But this view is a COMMON DATA one, with the CONTAINERS() function. This means that from CDB$ROOT, when executed by a common user, data from all open containers will be retrieved:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79
904475458 91

However, from a PDB you cannot see anything else:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So what?

Multitenant adds a new dimension to the dictionary views and we must be aware of that. However, compatibility is still there. The scripts that we used to run to query the DBA_HIST views should still work. Don’t forget to always join on DBID and INSTANCE_NUMBER in addition to SNAP_ID so that your scripts keep working in RAC and across containers.
In 12.2 you can do the same for your application: use metadata links, data links, and common views for your tables. But remember to keep it simple…
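As an illustration of that join pattern, a query like the following (just a sketch, here against DBA_HIST_SYSSTAT) keeps working whatever the number of instances or containers:

SQL> select s.snap_id, s.begin_interval_time, t.value
  from dba_hist_snapshot s
  join dba_hist_sysstat t
    on t.dbid = s.dbid
   and t.instance_number = s.instance_number
   and t.snap_id = s.snap_id
 where t.stat_name = 'user commits'
 order by s.snap_id;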

 

Cet article Oracle 12cR2: AWR views in multitenant est apparu en premier sur Blog dbi services.

RMAN> TRANSPORT TABLESPACE

Sun, 2016-12-18 03:12

In a previous post I explained how to use transportable tablespaces from a standby database. Here I’m showing an alternative where you can transport from a backup instead of a standby database. RMAN has been able to do that since 10gR2.

Transportable Tablespace is a beautiful feature: the performance of a physical copy and the flexibility of a logical export/import. But it has one drawback: the source tablespace must be opened read only while you copy it and export the metadata. This means that you cannot use it against a live production database, for example to move data to a datawarehouse ODS. There’s an alternative to that: restore the tablespace with TSPITR (tablespace point-in-time recovery) into a temporary instance and transport from there.
This is what is automated by RMAN with a simple command: RMAN> TRANSPORT TABLESPACE.

Multitenant

This blog post shows how to do that when you are in 12c multitenant architecture. Even if 12.2 comes with online PDB clone, you may want to transport a single tablespace.

You cannot run TRANSPORT TABLESPACE when connected to a PDB. Let’s test it:

RMAN> connect target sys/oracle@//localhost/PDB1
connected to target database: CDB1:PDB1 (DBID=1975603085)

Here are the datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
9 250 SYSTEM NO /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 UNDOTBS1 NO /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3 20 TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

Let’s run the TRANSPORT TABLESPACE command:

RMAN> transport tablespace USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
 
Creating automatic instance, with SID='jlDa'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=jlDa_pitr_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created
 
Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/17/2016 21:33:14
RMAN-07538: Pluggable Database qualifier not allowed when connected to a Pluggable Database

You got the idea: an auxiliary instance is automatically created but then it failed because an internal command cannot be run from a PDB.

Run from CDB

So let’s run it when connected to CDB$ROOT:

echo set on
 
RMAN> connect target sys/oracle
connected to target database: CDB1 (DBID=894360530)

We can see all pluggable databases and all datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB1A/system01.dbf
3 480 SYSAUX NO /u02/oradata/CDB1A/sysaux01.dbf
4 65 UNDOTBS1 YES /u02/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB1A/pdbseed/system01.dbf
6 350 PDB$SEED:SYSAUX NO /u02/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/oradata/CDB1A/users01.dbf
8 520 PDB$SEED:UNDOTBS1 NO /u02/oradata/CDB1A/pdbseed/undotbs01.dbf
9 250 PDB1:SYSTEM YES /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 PDB1:SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 PDB1:UNDOTBS1 YES /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 240 TEMP 32767 /u02/oradata/CDB1A/temp01.dbf
2 32 PDB$SEED:TEMP 32767 /u02/oradata/CDB1A/pdbseed/temp012016-08-23_14-12-45-799-PM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

We can run the TRANSPORT TABLESPACE command from here, naming the tablespace prefixed with the PDB name PDB1:USERS

transport tablespace … auxiliary destination … tablespace destination

The TRANSPORT TABLESPACE command needs a destination where to put the datafiles and the dump file to transport (TABLESPACE DESTINATION) and also needs an auxiliary destination (AUXILIARY DESTINATION). The latter seems to be mandatory, which is different from PDB PITR where the FRA is used by default.


RMAN> transport tablespace PDB1:USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';

And then you will see everything RMAN does for you. I’ll show most of the output.

UNDO

Restoring a tablespace requires applying redo and then rolling back the transactions that were open at that PIT. This is why RMAN has to restore all tablespaces that may contain UNDO:

RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace PDB1:SYSTEM
Tablespace UNDOTBS1
Tablespace PDB1:UNDOTBS1

I suppose that when the UNDO_TABLESPACE has changed in the meantime, RMAN cannot guess which tablespace covered the transactions at the requested PIT, but I have seen nothing in the TRANSPORT TABLESPACE syntax to specify it. That’s probably for a future post and/or SR.

Auxiliary instance

So RMAN creates an auxiliary instance with some specific parameters to be sure there’s no side effect on the source database (the RMAN target one).

Creating automatic instance, with SID='qnDA'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=qnDA_pitr_PDB1_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created

Restore

The goal is to transport the tablespace, so RMAN checks that they are self-contained:

Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

and starts the restore of controlfile and datafiles (the CDB SYSTEM, SYSAUX, UNDO and the PDB SYSTEM, SYSAUX, UNDO and the tablespaces to transport)

contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# restore the controlfile
restore clone controlfile;
 
# mount the controlfile
sql clone 'alter database mount clone database';
 
# archive current online log
sql 'alter system archive log current';
}
executing Memory Script
 
executing command: SET until clause
 
Starting restore at 17-DEC-16
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=253 device type=DISK
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp tag=TAG20161217T213038
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
output file name=/var/tmp/AUX/CDB1A/controlfile/o1_mf_d5c88zp3_.ctl
Finished restore at 17-DEC-16
 
sql statement: alter database mount clone database
 
sql statement: alter system archive log current
 
contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 9 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 11 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 10 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 3 to new;
set newname for datafile 12 to
"/var/tmp/TTS/users01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 9, 4, 11, 3, 10, 12;
 
switch clone datafile all;
}
executing Memory Script
 
executing command: SET until clause
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
renamed tempfile 1 to /var/tmp/AUX/CDB1A/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_temp_%u_.tmp in control file
 
Starting restore at 17-DEC-16
using channel ORA_AUX_DISK_1
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /var/tmp/AUX/CDB1A/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:35
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00010 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /var/tmp/TTS/users01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:25
Finished restore at 17-DEC-16
 
datafile 1 switched to datafile copy
input datafile copy RECID=11 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_system_d5c8993k_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=12 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_d5c8d8ow_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=13 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_d5c8998b_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=14 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_d5c8d8g6_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=15 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_d5c8996o_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=16 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_d5c8d8o7_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=17 STAMP=930865006 file name=/var/tmp/TTS/users01.dbf

You noticed that SYSTEM, SYSAUX and UNDO are restored in the auxiliary location, but the tablespaces to transport (USERS here) go to their destination. If you transport on the same server, you can avoid any further copying of them.

Recover

The recovery continues automatically to the PIT (which you can also specify with an UNTIL clause or restore point)


contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone 'PDB1' "alter database datafile
9 online";
sql clone "alter database datafile 4 online";
sql clone 'PDB1' "alter database datafile
11 online";
sql clone "alter database datafile 3 online";
sql clone 'PDB1' "alter database datafile
10 online";
sql clone 'PDB1' "alter database datafile
12 online";
# recover and open resetlogs
recover clone database tablespace "PDB1":"USERS", "SYSTEM", "PDB1":"SYSTEM", "UNDOTBS1", "PDB1":"UNDOTBS1", "SYSAUX", "PDB1":"SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
 
executing command: SET until clause
 
sql statement: alter database datafile 1 online
 
sql statement: alter database datafile 9 online
 
sql statement: alter database datafile 4 online
 
sql statement: alter database datafile 11 online
 
sql statement: alter database datafile 3 online
 
sql statement: alter database datafile 10 online
 
sql statement: alter database datafile 12 online
 
Starting recover at 17-DEC-16
using channel ORA_AUX_DISK_1
 
starting media recovery
 
archived log for thread 1 with sequence 30 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc thread=1 sequence=30
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc thread=1 sequence=31
media recovery complete, elapsed time: 00:00:02
Finished recover at 17-DEC-16
 
database opened
 
contents of Memory Script:
{
sql clone 'alter pluggable database PDB1 open';
}
executing Memory Script
 
sql statement: alter pluggable database PDB1 open

Export TTS

The restored tablespaces can be set read only, which was the goal.

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'PDB1' 'alter tablespace
USERS read only';
# create directory for datapump export
sql clone 'PDB1' "create or replace directory
STREAMS_DIROBJ_DPDIR as ''
/var/tmp/TTS''";
}
executing Memory Script
 
sql statement: alter tablespace USERS read only

Now the export of metadata runs (the equivalent of an expdp with TRANSPORT_TABLESPACES):


sql statement: create or replace directory STREAMS_DIROBJ_DPDIR as ''/var/tmp/TTS''
 
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_qnDA_urDb":
 
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_qnDA_urDb" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_qnDA_urDb is:
EXPDP> /var/tmp/TTS/dmpfile.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace USERS:
EXPDP> /var/tmp/TTS/users01.dbf
EXPDP> Job "SYS"."TSPITR_EXP_qnDA_urDb" successfully completed at Sat Dec 17 21:41:06 2016 elapsed 0 00:00:47
Export completed
 
Not performing table import after point-in-time recovery

The last message makes me think that the RMAN code is shared with the one that manages RECOVER TABLE.

Then RMAN lists the commands to run the import (also available in a generated script) and removes the auxiliary instance.

Cleanup

Not everything has been removed:
[oracle@VM117 blogs]$ du -ha /var/tmp/AUX
0 /var/tmp/AUX/CDB1A/controlfile
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_51_d5c8k0oo_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_52_d5c8kcjp_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_53_d5c8kskz_.log
601M /var/tmp/AUX/CDB1A/onlinelog
0 /var/tmp/AUX/CDB1A/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undo_1_d5c8m1nx_.dbf
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9
1.1G /var/tmp/AUX/CDB1A
1.1G /var/tmp/AUX

Import TTS

In the destination you find the tablespace datafiles, the dump of metadata and a script that can be run to import it to the destination:

[oracle@VM117 blogs]$ du -ha /var/tmp/TTS
5.1M /var/tmp/TTS/users01.dbf
132K /var/tmp/TTS/dmpfile.dmp
4.0K /var/tmp/TTS/impscrpt.sql
5.2M /var/tmp/TTS

For this example, I import it on the same server, in a different pluggable database:


SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB2;
Session altered.

and simply run the script provided:

SQL> set echo on
 
SQL> @/var/tmp/TTS/impscrpt.sql
 
SQL> /*
SQL> The following command may be used to import the tablespaces.
SQL> Substitute values for <logon> and <directory>.
SQL>
SQL> impdp <logon> directory=<directory> dumpfile= 'dmpfile.dmp' transport_datafiles= /var/tmp/TTS/users01.dbf
SQL> */
SQL>
SQL> --
SQL> --
SQL> --
SQL> --
SQL> CREATE DIRECTORY STREAMS$DIROBJ$1 AS '/var/tmp/TTS/';
Directory created.
 
SQL> CREATE DIRECTORY STREAMS$DIROBJ$DPDIR AS '/var/tmp/TTS';
Directory created.
 
SQL> /* PL/SQL Script to import the exported tablespaces */
SQL> DECLARE
2 --
3 tbs_files dbms_streams_tablespace_adm.file_set;
4 cvt_files dbms_streams_tablespace_adm.file_set;
5
6 --
7 dump_file dbms_streams_tablespace_adm.file;
8 dp_job_name VARCHAR2(30) := NULL;
9
10 --
11 ts_names dbms_streams_tablespace_adm.tablespace_set;
12 BEGIN
13 --
14 dump_file.file_name := 'dmpfile.dmp';
15 dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
16
17 --
18 tbs_files( 1).file_name := 'users01.dbf';
19 tbs_files( 1).directory_object := 'STREAMS$DIROBJ$1';
20
21 --
22 dbms_streams_tablespace_adm.attach_tablespaces(
23 datapump_job_name => dp_job_name,
24 dump_file => dump_file,
25 tablespace_files => tbs_files,
26 converted_files => cvt_files,
27 tablespace_names => ts_names);
28
29 --
30 IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
31 FOR i IN ts_names.first .. ts_names.last LOOP
32 dbms_output.put_line('imported tablespace '|| ts_names(i));
33 END LOOP;
34 END IF;
35 END;
36 /
PL/SQL procedure successfully completed.
 
SQL>
SQL> --
SQL> DROP DIRECTORY STREAMS$DIROBJ$1;
Directory dropped.
 
SQL> DROP DIRECTORY STREAMS$DIROBJ$DPDIR;
Directory dropped.
 
SQL> --------------------------------------------------------------
SQL> -- End of sample PL/SQL script
SQL> --------------------------------------------------------------

Of course, you don’t have to use the script; you can run the import with IMPDP instead:

SQL> alter session set container=pdb2;
Session altered.
SQL> create directory tts as '/var/tmp/TTS';
Directory created.
SQL> host impdp pdbadmin/oracle@//localhost/PDB2 directory=TTS dumpfile='dmpfile.dmp' transport_datafiles=/var/tmp/TTS/users01.dbf

You may also use RMAN CONVERT to transport to a platform with a different endianness.
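For example, a conversion of the transported datafile could look like this (a sketch only; the target platform name is just an example taken from V$TRANSPORTABLE_PLATFORM):

RMAN> convert datafile '/var/tmp/TTS/users01.dbf'
      to platform 'AIX-Based Systems (64-bit)'
      format '/var/tmp/TTS/users01_aix.dbf';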

Local Undo

Note that if you run it on the current 12.2.0.1.0 cloud-first DBaaS, you will get an error when RMAN opens the PDB in the auxiliary instance because there’s a bug with local undo. Here is the alert.log part:

PDB1(3):Opening pdb with no Resource Manager plan active
PDB1(3):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE
PDB1(3):Force tablespace UNDO_1 to be encrypted with AES128
2016-12-17T18:05:14.759732+00:00
PDB1(3):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/fqkn_pitr_pdb1_cdb1/fqkn/trace/fqkn_ora_26146.trc
PDB1(3):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE...
PDB1(3):Automatic creation of undo tablespace failed with error 604 60
ORA-604 signalled during: alter pluggable database PDB1 open...

I did this demo with LOCAL UNDO OFF.

So what?

You can use Transportable Tablespaces from a database where you cannot put the tablespace read-only. The additional cost of it is to recover it from a backup, along with SYSTEM, SYSAUX and UNDO. But it is fully automated with only one RMAN command.

 

Cet article RMAN> TRANSPORT TABLESPACE est apparu en premier sur Blog dbi services.

Oracle 12cR2 : Partitioning improvements – multi-column list partitioning & read-only partitions

Fri, 2016-12-16 10:10

In my last blog post I presented auto-list partitioning, a new partitioning functionality coming with 12cR2.
In this one I will introduce two others: multi-column list partitioning and read-only partitions.

Multi-column list partitioning

[Image: “Oracle Partitioning in Oracle Database 12c Release 2″ – Hermann Bär]

With the first release of 12c it wasn’t possible to create list-partitioned tables based on a multi-column partition key:
ORA-14304: List partitioning method expects a single partitioning column
But now you can easily implement this functionality:
SQL> CREATE TABLE cars(
car_make VARCHAR2(30),
car_model VARCHAR2(30)
)
PARTITION BY LIST (car_make, car_model)
(PARTITION P1 VALUES ('Ford','Focus'),
PARTITION P_DEFAULT VALUES (DEFAULT));

Table created.

Check partitions :
SQL> SELECT partition_name, high_value FROM dba_tab_partitions WHERE table_name = 'CARS';


PARTITION_NAME HIGH_VALUE
-------------------- ------------------------------
P1 ( 'Ford', 'Focus' )
P_DEFAULT DEFAULT


SQL> SELECT partitioning_type, partition_count, partitioning_key_count FROM dba_part_tables WHERE table_name = 'CARS';


PARTITION PARTITION_COUNT PARTITIONING_KEY_COUNT
--------- --------------- ----------------------
LIST 2 2

This functionality allows you to use up to 16 columns for the partition key and, as you can see in this example, it’s also possible to define a DEFAULT partition.
Multi-column list partitioning on subpartitions is also permitted.
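As a quick sketch of what that could look like (range partitions with multi-column list subpartitions; table and column names are made up for the example):

SQL> CREATE TABLE car_sales(
sales_date DATE,
car_make VARCHAR2(30),
car_model VARCHAR2(30)
)
PARTITION BY RANGE (sales_date)
SUBPARTITION BY LIST (car_make, car_model)
SUBPARTITION TEMPLATE
(SUBPARTITION SP_FORD_FOCUS VALUES ('Ford','Focus'),
SUBPARTITION SP_OTHERS VALUES (DEFAULT))
(PARTITION P_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','dd-MON-yyyy')),
PARTITION P_2017 VALUES LESS THAN (TO_DATE('01-JAN-2018','dd-MON-yyyy')));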

Read-only partitions

[Image: “Oracle Partitioning in Oracle Database 12c Release 2″ – Hermann Bär]

We all know that Oracle is able to define a tablespace or a table as read-only. But did you know that with 12cR2 it’s now possible to define this attribute at the partition level?
CREATE TABLE sales(
sales_product VARCHAR2(30),
sales_date DATE
) READ WRITE
PARTITION BY RANGE (sales_date)
(
PARTITION P_2013 VALUES LESS THAN (TO_DATE('01-JAN-2014','dd-MON-yyyy')) READ ONLY,
PARTITION P_2014 VALUES LESS THAN (TO_DATE('01-JAN-2015','dd-MON-yyyy')) READ ONLY,
PARTITION P_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','dd-MON-yyyy')) READ ONLY,
PARTITION P_2016 VALUES LESS THAN (TO_DATE('01-JAN-2017','dd-MON-yyyy')));

Data insertion :
SQL> INSERT INTO sales VALUES ('SHIRT', '02-MAY-2013');
INSERT INTO sales VALUES ('SHIRT', '02-MAY-2013')
*
ERROR at line 1:
ORA-14466: Data in a read-only partition or subpartition cannot be modified.

SQL> INSERT INTO sales VALUES ('SHOES', '29-MAR-2016');


1 row created.

An easy-to-implement and reliable way to protect data inside a table from changes.
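The attribute can also be changed later on with a simple ALTER TABLE, for example once a period is closed (a short sketch on the sales table created above):

SQL> ALTER TABLE sales MODIFY PARTITION P_2016 READ ONLY;
SQL> ALTER TABLE sales MODIFY PARTITION P_2013 READ WRITE;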

 

Cet article Oracle 12cR2 : Partitioning improvements – multi-column list partitioning & read-only partitions est apparu en premier sur Blog dbi services.

Dataguard Oracle 12.2 : Support for Multiple Automatic Failover Targets

Thu, 2016-12-15 15:41

In a previous blog, we talked about Fast-Start Failover with the MaxProtection mode.
With Oracle 12.2 Oracle Data Guard now supports multiple failover targets in a fast-start failover configuration.
Previous functionality allowed for only a single fast-start failover target. If the failover target was unable to meet the requirements for fast-start failover at the time of primary failure, then an automatic failover would not occur.
Designating multiple failover targets significantly improves the likelihood that there will always be a standby suitable for automatic failover when needed.
Multiple failover targets increase high availability by making an automatic failover more likely to occur if there is a primary outage.
Indeed, FastStartFailoverTarget can now have a single site as its value:

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE1';
Property "faststartfailovertarget" updated
DGMGRL>


DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

It can also have two sites as values (the priority follows the order):

DGMGRL> disable Fast_start Failover;
Disabled.
DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE1,ORCL_SITE2';
Property "faststartfailovertarget" updated
DGMGRL>


DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

It can also have the keyword ANY as value (the priority is then defined by Oracle):

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'ORCL_SITE1' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'ORCL_SITE2' set property FastStartFailoverTarget='ANY';
Property "faststartfailovertarget" updated

With the value set to ANY, we can see that there is a failover target but also candidate targets. If a failover occurs, ORCL_SITE1 will be chosen first, and if it is not available, ORCL_SITE2 will be the target.

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE1
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

From the observer we can confirm that the target is ORCL_SITE1:

[W000 12/15 14:40:25.12] This observer is no longer registered with the configuration.
[W000 12/15 14:40:25.20] FSFO target standby is ORCL_SITE1
[W000 12/15 14:40:27.66] Observer trace level is set to USER
[W000 12/15 14:40:27.66] Try to connect to the primary.
[W000 12/15 14:40:27.66] Try to connect to the primary ORCL_SITE.
[W000 12/15 14:40:27.67] The standby ORCL_SITE1 is ready to be a FSFO target
[W000 12/15 14:40:29.67] Connection to the primary restored!

Now let’s simulate the case where ORCL_SITE1 is no longer available. For this, let’s simply shut down the ORCL_SITE1 database; the failover target should then be changed to ORCL_SITE2.

SQL> shutdown abort
ORACLE instance shut down.
SQL>


DGMGRL> show fast_start failover;
Fast-Start Failover: ENABLED
Threshold: 30 seconds
Target: ORCL_SITE2
Candidate Targets: ORCL_SITE1,ORCL_SITE2
Observer: standserver1.localdomain
Lag Limit: 30 seconds (not in use)
Shutdown Primary: TRUE
Auto-reinstate: TRUE
Observer Reconnect: (none)
Observer Override: FALSE

From the observer we can see the following messages that confirm that the failover target was switched to ORCL_SITE2:

[W000 12/15 14:54:52.00] Permission granted to the primary database for target switch.
[W000 12/15 14:54:54.03] Standby database has changed to ORCL_SITE2.
[W000 12/15 14:54:54.04] Try to connect to the primary.
[W000 12/15 14:54:54.04] Try to connect to the primary ORCL_SITE.
[W000 12/15 14:54:55.48] The primary database returned to SYNC/NOT LAGGING state with the standby database ORCL_SITE2.
[W000 12/15 14:54:55.48] Connection to the primary restored!
[W000 12/15 14:54:58.48] Disconnecting from database ORCL_SITE.
[W000 12/15 14:55:04.52] The standby ORCL_SITE2 is ready to be a FSFO target

Conclusion
We have seen that we can now have multiple targets for Fast-Start Failover and how to implement it.

 

Cet article Dataguard Oracle 12.2 : Support for Multiple Automatic Failover Targets est apparu en premier sur Blog dbi services.

Oracle 12cR2 – Howto setup Active DataGuard with Temporary Tablespace Groups

Thu, 2016-12-15 06:01

Temporary Tablespace Groups have existed for quite a while now (since 10gR2), but for whatever reason they are not used very often. Personally, I think they are quite cool: very easy to set up and, especially in big environments with a lot of parallel processing, very useful. But this blog will not be about Temporary Tablespace Groups themselves. They are already explained in the 12.2 Admin Guide.

https://docs.oracle.com/database/122/ADMIN/managing-tablespaces.htm#ADMIN-GUID-55A6AE10-6875-4B73-9A5C-CB4965AD5AFE

For my Active DataGuard environment, I would like to use Temporary Tablespace Groups in combination with Temporary Undo, which is enabled by default as soon as you activate the ADG feature. And of course, I would like to do it online, without shutting down either the primary or the ADG standby.
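If you want to double check the temporary undo setting, a quick look at the corresponding parameter is enough (keep in mind that on an Active DataGuard standby temporary undo is used regardless of the value shown here):

SQL> show parameter temp_undo_enabled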

Is this possible? We will see. I start first with creating a Temporary Tablespace Group called DBITEMP
with 3 bigfile temp tablespaces DBITEMP1/DBITEMP2/DBITEMP3.

-- On Primary

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP1 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP2 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> CREATE BIGFILE TEMPORARY TABLESPACE DBITEMP3 TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T TABLESPACE GROUP DBITEMP;

Tablespace created.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE DBITEMP;

Database altered.

SQL> alter system archive log current;

System altered.

SQL> select TABLESPACE_NAME, CONTENTS, BIGFILE from dba_tablespaces where CONTENTS = 'TEMPORARY' and TABLESPACE_NAME like '%DBI%';

TABLESPACE_NAME                CONTENTS              BIG
------------------------------ --------------------- ---
DBITEMP1                       TEMPORARY             YES
DBITEMP2                       TEMPORARY             YES
DBITEMP3                       TEMPORARY             YES

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2
DBITEMP                        DBITEMP3

SQL> select NAME, BYTES from v$tempfile where name like '%dbitemp%';

NAME                                                                         BYTES
-------------------------------------------------------------------- -------------
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp          33554432
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp          33554432
/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp          33554432

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:28 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:29 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp

SQL> !ls -l /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp
-rw-r----- 1 oracle oinstall 33562624 Dec 14 12:28 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp

SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM database_properties
  2  WHERE property_name='DEFAULT_TEMP_TABLESPACE';

PROPERTY_NAME            PROPERTY_VALUE
------------------------ ----------------------
DEFAULT_TEMP_TABLESPACE  DBITEMP


DGMGRL> show database 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    DBIT122

Database Status:
SUCCESS

DGMGRL> show database 'DBIT122_SITE2';

Database - DBIT122_SITE2

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 0 Byte/s
  Real Time Query:    ON
  Instance(s):
    DBIT122

Database Status:
SUCCESS

On the primary database, it looks good. Everything was set up correctly, and the Temporary Tablespace Group is ready to be used. Now let’s check what we have on the Standby … almost nothing.

My temporary tablespaces and the group are there, however, the tempfiles are missing. So … not really useful at that stage.

SQL> select TABLESPACE_NAME, CONTENTS, BIGFILE from dba_tablespaces where CONTENTS = 'TEMPORARY' and TABLESPACE_NAME like '%DBI%';

TABLESPACE_NAME                CONTENTS              BIG
------------------------------ --------------------- ---
DBITEMP1                       TEMPORARY             YES
DBITEMP2                       TEMPORARY             YES
DBITEMP3                       TEMPORARY             YES

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2
DBITEMP                        DBITEMP3


SQL> SELECT PROPERTY_NAME, PROPERTY_VALUE FROM database_properties
  2  WHERE property_name='DEFAULT_TEMP_TABLESPACE';

PROPERTY_NAME            PROPERTY_VALUE
------------------------ ----------------------
DEFAULT_TEMP_TABLESPACE  DBITEMP

SQL> select NAME, BYTES from v$tempfile where name like '%dbitemp%';

no rows selected

Ok. Let’s take a look at the alert.log to see if Oracle tells us something. And yes, indeed, a big WARNING message is posted into the alert.log saying that the temporary tablespaces DBITEMP1/2/3 are empty.

*********************************************************************
WARNING: The following temporary tablespaces contain no files.
         This condition can occur when a backup controlfile has
         been restored.  It may be necessary to add files to these
         tablespaces.  That can be done using the SQL statement:

         ALTER TABLESPACE <tablespace_name> ADD TEMPFILE

         Alternatively, if these temporary tablespaces are no longer
         needed, then they can be dropped.
           Empty temporary tablespace: DBITEMP1
           Empty temporary tablespace: DBITEMP2
           Empty temporary tablespace: DBITEMP3
*********************************************************************

Ok. With 12.2 it is like it always was. You have to do it manually.

SQL> ALTER TABLESPACE DBITEMP1 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

SQL> ALTER TABLESPACE DBITEMP2 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

SQL> ALTER TABLESPACE DBITEMP3 ADD TEMPFILE SIZE 32M AUTOEXTEND ON MAXSIZE 12T;

Tablespace altered.

Now I have everything in place. What about dropping a temporary tablespace out of the tablespace group? First, I take it out of the temporary tablespace group. This change is immediately reflected on the Standby.

-- Primary

SQL> alter tablespace DBITEMP3 TABLESPACE GROUP '';

Tablespace altered.

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2

-- Standby 

SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
DBITEMP                        DBITEMP1
DBITEMP                        DBITEMP2

And now you can drop it.

-- Primary

SQL> drop tablespace DBITEMP3 including contents and datafiles;

Tablespace dropped.

alert.log
...
drop tablespace DBITEMP3 including contents and datafiles
2016-12-14T16:35:25.567597+01:00
Deleted Oracle managed file /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp3_d52c7954_.tmp
Completed: drop tablespace DBITEMP3 including contents and datafiles
...

-- Standby

alert.log
...
Deleted Oracle managed file /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp3_d52dv3dv_.tmp
Recovery deleting tempfile #4:'/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp3_d52dv3dv_.tmp'
Recovery dropped temporary tablespace 'DBITEMP3'
...

Ok. Dropping works fine and this change is also immediately reflected. What about resizing a temporary file? Is this change also immediately reflected on the standby? Due to OMF, we have different names for the tempfiles, so should we use an “alter database tempfile <NAME> resize” or an “alter database tempfile <NUMBER> resize” to make it work? With regular datafiles it doesn’t matter, you can use both ways and Oracle handles it correctly.

- Primary

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp2_d52dtx7f_.tmp       33554432

		 
SQL> alter database tempfile '/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp' resize 512M;

Database altered.

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp      536870912
         3 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp       33554432


-- Standby

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp2_d52dtx7f_.tmp       33554432

Ok. Resizing by name is not reflected on the standby. But what about resizing the tempfile by using the file number instead of the name?

-- Primary
 
SQL> alter database tempfile 3 resize 768M;

Database altered.

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp1_d52c66g5_.tmp      536870912
         3 /u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbitemp2_d52c6yqv_.tmp      805306368

		 
-- Standby

SQL> select file#, name, bytes from v$tempfile where name like '%dbitemp%';

     FILE# NAME                                                                      BYTES
---------- -------------------------------------------------------------------- ----------
         2 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp       33554432
         3 /u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp2_d52dtx7f_.tmp       33554432

As you can see, it doesn't matter. Resize operations on tempfiles are not replicated to the standby.
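
If you want the standby tempfile to have the same size, the resize has to be executed manually on the standby as well. A minimal sketch, assuming tempfile modifications are permitted on the open read-only standby (using the OMF file name from the standby output above):

-- Standby

SQL> alter database tempfile '/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbitemp1_d52dthwg_.tmp' resize 512M;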

Conclusion

Temporary Tablespace Groups can be used with Active DataGuard, and of course it can all be done online. However, when you initially create them, or when you resize them, some manual work has to be done on the standby. And yes, Temporary Tablespace Groups can be used in combination with the Temporary UNDO feature on the Active DataGuard standby.

 

 

Cet article Oracle 12cR2 – Howto setup Active DataGuard with Temporary Tablespace Groups est apparu en premier sur Blog dbi services.

Dataguard Oracle 12.2 : Fast-Start Failover with Maximum Protection

Wed, 2016-12-14 17:04

With Oracle 12.1, the one requirement to configure Fast-Start Failover is to ensure that the broker configuration is operating in either Maximum Availability mode or Maximum Performance mode.
With 12.2, Fast-Start Failover can now also be configured with Maximum Protection mode.
Below is our broker configuration:

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxPerformance
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 48 seconds ago)

Let’s configure the maximum protection mode.
We first have to update some database properties.

DGMGRL> edit database 'ORCL_SITE1' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'ORCL_SITE' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'ORCL_SITE2' set property LogXptMode='SYNC';
Property "logxptmode" updated

Then we have to enable Maximum Availability before enabling Maximum Protection:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> edit configuration set protection mode as maxprotection;
Succeeded.
DGMGRL>

And now let’s enable Fast-Start failover

DGMGRL> enable fast_start failover;
Error: ORA-16693: requirements not met for enabling fast-start failover
Failed.
DGMGRL>

Oh, what happened?
Remember that before enabling Fast-Start Failover we have to enable Flashback Database on the databases and also set the FastStartFailoverTarget database property.
Let’s enable flashback for databases
For the Primary

SQL> alter database flashback on;
Database altered.

For Standby databases

DGMGRL> edit database 'ORCL_SITE1' set state='APPLY-OFF';
Succeeded.
SQL> alter database flashback on;
Database altered.
SQL>
DGMGRL> edit database 'ORCL_SITE1' set state='APPLY-ON';
Succeeded.
DGMGRL>


DGMGRL> edit database 'ORCL_SITE2' set state='APPLY-OFF';
Succeeded.
DGMGRL>
SQL> alter database flashback on;
Database altered.
DGMGRL> edit database 'ORCL_SITE2' set state='APPLY-ON';
Succeeded.
DGMGRL>

Let's set the FastStartFailoverTarget property for the primary database:

DGMGRL> edit database 'ORCL_SITE' set property FastStartFailoverTarget='ORCL_SITE2';
Property "faststartfailovertarget" updated

And now we can enable Fast-Start Failover with Maximum Protection:

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL>

Let's check the configuration after enabling it. The observer must be started, otherwise you will get a warning about the observer:

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE - Primary database
ORCL_SITE2 - (*) Physical standby database
ORCL_SITE1 - Physical standby database
Fast-Start Failover: ENABLED
Configuration Status:
SUCCESS (status updated 1 second ago)
DGMGRL>
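
If no observer is running yet, a minimal way to start one is to launch DGMGRL on a separate host (ideally a third site) and issue START OBSERVER; the connect string and credentials below are just placeholders for this configuration:

$ dgmgrl
DGMGRL> connect sys@ORCL_SITE
Password:
DGMGRL> start observer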

Just note that at least two standby databases must be available, otherwise the protection mode will be downgraded to MaxPerformance after a failover.

 

Cet article Dataguard Oracle 12.2 : Fast-Start Failover with Maximum Protection est apparu en premier sur Blog dbi services.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Wed, 2016-12-14 16:00

Here it is. The multitenant book is out for sale…

CaptureBooks
One year ago, at DOAG2015, Vit Spinka came to me with this idea: with Anton Els they planned to write a book on multitenant and asked me to be a co-author. I was already quite busy at that time and my short-term plan was to prepare and pass the OCM 12c exam. But this book idea was something great and it had to be started quickly. At that time, we expected 12cR2 to be out in June 2016 and the book to be ready for Oracle Open World. So no time to waste: propose the idea to Oracle Press, find reviewers and start as soon as possible.

For the reviewers, I was very happy that Deiby Gomez accepted to do the technical review, and Mike Donovan volunteered to do the English review. I think he didn't imagine how hard it can be to take the writing of non-native English speakers, with very limited vocabulary, and turn it into something that makes sense to read. It's an amazing chance to have the language review done by someone with deep technical knowledge: it ensures that the improved style does not change the meaning. Having that language review is also a good way to uniformise the style of what is written by three different authors. I bet you cannot guess who has written what. In addition to that, Oracle Press asked Arup Nanda to do an additional review, which was great because Arup has experience with book writing.

So we worked on the 12.2 beta, tested everything (there are lots of code listings in the book), filed bugs and clarified everything. We had good interaction with support engineers and product managers. The result is a book on multitenant which covers all the administration tasks you can do on a 12c database.

It was an amazing adventure from the get-go. You know people for their skills, blogs, presentations and discussions at events. And then you start to work with them on a common thing – the book – and remotely – we're all in different timezones. How can you be sure that you can work together? Actually, it was easy and went smoothly. We listed the chapters and each of us marked which chapters he preferred. And that was it: in one or two e-mail exchanges the distribution of tasks was done, with everybody happy. We had a very short schedule: one chapter had to be delivered every 2 or 3 weeks. I was happy with what I wrote and equally happy with what I read from Vit and Anton. The reviews from Deiby, Mike and Arup all added precision and clarity. Incredible team work without the need for long discussions. Besides the hard work and the delightful result, working with this team was an amazing human adventure.

Oracle Database 12c Release 2 Multitenant (Oracle Press)

Master the Powerful Multitenant Features of Oracle Database 12c
• Build high-performance multitenant Oracle databases
• Create single-tenant, multitenant, and application containers
• Establish network connections and manage services
• Handle security using authentication, authorization, and encryption
• Back up and restore your mission-critical data
• Work with point-in-time recovery and Oracle Flashback
• Move data and replicate and clone databases
• Work with Oracle’s Resource Manager and Data Guard

 

Cet article Oracle Database 12c Release 2 Multitenant (Oracle Press) est apparu en premier sur Blog dbi services.

Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations?

Wed, 2016-12-14 01:33

For security reasons, you may want your DataGuard operations to be done with a different UNIX user and a different Oracle user which is not as highly privileged as SYSDBA. This is exactly where the SYSDG administrative privilege for Oracle Data Guard operations comes into play.

The SYSDG privilege is quite powerful and allows you to work with the Broker (DGMGRL) command line interface and besides that, it enables the following operations:

  • STARTUP
  • SHUTDOWN
  • ALTER DATABASE
  • ALTER SESSION
  • ALTER SYSTEM
  • CREATE RESTORE POINT (including GUARANTEED restore points)
  • CREATE SESSION
  • DROP RESTORE POINT (including GUARANTEED restore points)
  • FLASHBACK DATABASE
  • SELECT ANY DICTIONARY
  • SELECT
    • X$ tables (that is, the fixed tables)
    • V$ and GV$ views (that is, the dynamic performance views
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • DELETE
    • APPQOSSYS.WLM_CLASSIFIER_PLAN
  • EXECUTE
    • SYS.DBMS_DRS

In addition, the SYSDG privilege enables you to connect to the database even if it is not open.

Ok. Let’s give it a try. I want to give the user scott all the privileges he needs to do the DataGuard operational tasks. So … I create a UNIX user scott and a database user scott with the SYSDG privilege.

[root@dbidg02 ~]# useradd scott
[root@dbidg02 ~]# usermod -a -G sysdg scott
[root@dbidg02 ~]# cat /etc/group |grep sysdg
sysdg:x:54324:oracle,scott

SQL> create user scott identified by tiger;

User created.

SQL> grant sysdg to scott;

Grant succeeded.

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS where USERNAME = 'SCOTT';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
SCOTT                  FALSE FALSE FALSE TRUE  FALSE
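
A quick connection test shows what scott can now do with the SYSDG privilege; the password and the queries are just examples:

[scott@dbidg01 ~]$ sqlplus scott/tiger@DBIT122_SITE1 as sysdg

SQL> select database_role, open_mode from v$database;
SQL> select name, value from v$dataguard_stats where name = 'apply lag';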

So far so good. Everything works. Scott can do switchovers, convert the physical standby to a snapshot standby, create restore points and much more. But what happens when an error pops up? You need to take a look at the most important log files in a DataGuard environment, which are the alert log and the broker log file.

If you do a “show database verbose”, you will find at the end of the output the locations of the log files, which is quite useful from my point of view. This is new with Oracle 12cR2.

DGMGRL> show database verbose 'DBIT122_SITE1';

Database - DBIT122_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Active Apply Rate:  152.00 KByte/s
  Maximum Apply Rate: 152.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    DBIT122
  ...
  ...
The Broker shows you the log file locations:

    Alert log               : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
    Data Guard Broker log   : /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log

But unfortunately, the scott user can’t read those files, because there are no read permissions for others and
scott is not part of the oinstall group.

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 ~]$ tail -40f /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log
tail: cannot open ‘/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log’ for reading: Permission denied
tail: no files remaining

[scott@dbidg01 trace]$ ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 37787 Dec 13 10:36 drcDBIT122.log
[scott@dbidg01 trace]$ ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 221096 Dec 13 12:04 alert_DBIT122.log

So what possibilities do we have to overcome this issue?

1. We can add user scott to the oinstall group, but then we haven't gained much in terms of security
2. We can set the parameter "_trace_files_public"=true, but when this one is enabled, all Oracle trace files become world readable, not just the alert and broker log
3. We can configure XFS access control lists, so that user scott gets only the permissions he needs

For security reasons, I decided to go for the last one.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(sysbkp),54324(sysdg),54325(syskm),54326(oper)
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r----- 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r----- 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] setfacl -m u:scott:r drcDBIT122.log


oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l drcDBIT122.log
-rw-r-----+ 1 oracle oinstall 56145 Dec 13 13:47 drcDBIT122.log

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl alert_DBIT122.log
# file: alert_DBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] getfacl drcDBIT122.log
# file: drcDBIT122.log
# owner: oracle
# group: oinstall
user::rw-
user:scott:r--
group::r--
mask::r--
other::---

Cool. Now the scott user is really able to do a lot of DataGuard operational tasks, including some debugging.

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/alert_DBIT122.log | grep 'MAXIMUM AVAILABILITY mode' | tail -1
Primary database is in MAXIMUM AVAILABILITY mode

[scott@dbidg01 ~]$ cat /u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/drcDBIT122.log |grep "Protection Mode" | tail -1
      Protection Mode:            Maximum Availability
Conclusion

Using XFS ACLs is quite cool if you want to give a specific user permissions on a file, but you don't want to add him to a group or make all files world readable. But be careful that you configure the same ACLs on all other standby nodes as well, and make sure that you use a backup solution which supports ACLs.

For example, using 'cp' or 'cp -p' makes a huge difference. In one case you lose your ACL in the copy, in the other case you preserve it. The (+) sign at the end of the file permissions shows the difference.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp alert_DBIT122.log alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] cp -p alert_DBIT122.log alert_DBIT122.log.b
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.a
-rw-r----- 1 oracle oinstall 312894 Dec 13 14:25 alert_DBIT122.log.a
oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit122_site1/DBIT122/trace/ [DBIT122] ls -l alert_DBIT122.log.b
-rw-r-----+ 1 oracle oinstall 312894 Dec 13 13:52 alert_DBIT122.log.b
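
To propagate the same ACLs to the other standby node, you can simply run the same setfacl commands there, for example via ssh; the diag path on dbidg02 is an assumption and has to be adapted to your environment:

oracle@dbidg01:/home/oracle/ [DBIT122] ssh dbidg02 "setfacl -m u:scott:r /u01/app/oracle/diag/rdbms/dbit122_site2/DBIT122/trace/alert_DBIT122.log"
oracle@dbidg01:/home/oracle/ [DBIT122] ssh dbidg02 "setfacl -m u:scott:r /u01/app/oracle/diag/rdbms/dbit122_site2/DBIT122/trace/drcDBIT122.log"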
 

Cet article Oracle 12cR2 – Is the SYSDG Administrative Privilege enough for doing Oracle Data Guard Operations? est apparu en premier sur Blog dbi services.

Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER

Wed, 2016-12-14 01:18

In a DataGuard environment, by default, the password of the SYS user is used to authenticate redo transport sessions when a password file is used. But for security reasons you might not want to use such a highly privileged user just for the redo transmission. To overcome this issue, Oracle has implemented the REDO_TRANSPORT_USER initialization parameter.

The REDO_TRANSPORT_USER specifies the name of the user whose password verifier is used when a remote login password file is used for redo transport authentication.

But take care, the password must be the same at both databases to create a redo transport session, and the value of this parameter is case sensitive and must exactly match the value of the USERNAME column in the V$PWFILE_USERS view.

Besides that, this user must have the SYSDBA or SYSOPER privilege. However, we don’t want to grant the SYSDBA privilege. For administrative ease, Oracle recommends that the REDO_TRANSPORT_USER parameter be set to the same value on the redo source database and at each redo transport destination.

Ok. Let's give it a try. I am creating a user called 'DBIDG' which will be used for redo transmission between my primary and standby.

SQL> create user DBIDG identified by manager;

User created.

SQL> grant connect to DBIDG;

Grant succeeded.

SQL> grant sysoper to DBIDG;

Grant succeeded.

Once done, I check v$pwfile_users to see if my new user 'DBIDG' exists.

-- On Primary

SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE


-- On Standby
SQL> col username format a22
SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

no rows selected

Ok. Like in previous versions of Oracle, I have to copy the password file myself to the destination host to make it work.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] scp -p orapwDBIT122 oracle@dbidg02:$PWD

SQL> select USERNAME, SYSDBA, SYSOPER, SYSBACKUP, SYSDG, SYSKM from V$PWFILE_USERS
  2  where USERNAME = 'DBIDG';

USERNAME               SYSDB SYSOP SYSBA SYSDG SYSKM
---------------------- ----- ----- ----- ----- -----
DBIDG                  FALSE TRUE  FALSE FALSE FALSE

 

When connecting with the 'DBIDG' user, you can hardly do anything; you cannot even select from the dba_tablespaces view, for example. From a security perspective, this user is much less of a concern.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] sqlplus dbidg/Manager1@DBIT122_SITE1 as sysoper

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 13 11:08:00 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> desc dba_tablespaces
ERROR:
ORA-04043: object "SYS"."DBA_TABLESPACES" does not exist

Nevertheless, the 'DBIDG' user is completely sufficient for my use case. Now that I have my 'DBIDG' redo transport user in both password files (primary and standby), I can activate the redo_transport_user parameter on both sites and check if everything works by doing a switchover and a switchback.

-- On Primary and Standby

SQL> alter system set redo_transport_user='DBIDG';

System altered.
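
A quick sanity check on both sites should now show the new value:

SQL> show parameter redo_transport_user

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
redo_transport_user                  string      DBIDG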


DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 33 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE1"
Switchover succeeded, new primary is "DBIT122_SITE2"

DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE2 - Primary database
    DBIT122_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 71 seconds ago)


DGMGRL> SWITCHOVER TO 'DBIT122_SITE1' WAIT 5;
Stopping services and waiting up to 5 seconds for sessions to drain...
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE1"
Connecting ...
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
ORACLE instance started.
Database mounted.
Connected to "DBIT122_SITE2"
Switchover succeeded, new primary is "DBIT122_SITE1"

Looks very good so far. But what happens if I have to change the password of the ‘DBIDG’ user?

-- On Primary

SQL> alter user dbidg identified by Manager1;

User altered.

-- On Primary
oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122


-- On Standby
oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] ls -l orapwDBIT122
-rw-r----- 1 oracle oinstall 4096 Dec 13 10:30 orapwDBIT122

oracle@dbidg02:/u01/app/oracle/admin/DBIT122/pfile/ [DBIT122] md5sum orapwDBIT122
3b7b2787943a07641b8af9f9e5284389  orapwDBIT122

That's cool. The password files on both sites have been updated successfully. They have the same time stamps and even the MD5 checksums are exactly the same. This is because of the new "Automatic Password Propagation to Standby" feature of 12cR2.

Conclusion

REDO_TRANSPORT_USER and "Automatic Password Propagation to Standby" are nice little features from Oracle. REDO_TRANSPORT_USER has existed for quite a while now, at least since 11gR2; "Automatic Password Propagation to Standby", however, is new with 12cR2.

 

Cet article Oracle 12cR2 – DataGuard and the REDO_TRANSPORT_USER est apparu en premier sur Blog dbi services.

Upgrade to Enterprise Manager 13.2

Tue, 2016-12-13 08:11

I will describe how to upgrade Enterprise Manager 13.1.0.0 to the new 13.2.0.0 version.

At the beginning we have to ensure that we applied the latest PSU on the repository database.

It is mandatory to apply the following patch before upgrading to Enterprise Manager Cloud Control 13.2.0.0:

DATABASE PATCH SET UPDATE 12.1.0.2.160419

To install the patch you have to check the following point:

- you have to use Oracle Interim Patch Installer version 12.2.0.1.8

Once you have downloaded the patch and unzipped it, you can check for potential conflicts:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq 
CheckConflictAgainstOHWithDetail 
-phBaseDir /oracle/u01/app/oracle/software/22899531/22806133
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed. 
OPatch succeeded.

 

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] opatch prereq 
CheckConflictAgainstOHWithDetail -phBaseDir 
/oracle/u01/app/oracle/software/22899531/23006522
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

And finally, you check for the system space available:

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] $ORACLE_HOME/OPatch/opatch prereq 
CheckSystemSpace -phBaseFile /oracle/u01/app/oracle/software/patch_list_dbhome.txt
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
PREREQ session
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.
OPatch succeeded.

Then, once the prerequisites are ok, you stop the repository database and run the classical opatch apply command from the directory where you unzipped the PSU.
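
Roughly, the sequence looks like this (a sketch, to be adapted to your environment; datapatch then loads the SQL changes of the bundle patch into the repository database):

oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] cd /oracle/u01/app/oracle/software/22899531/22806133
oracle@vmCC13c:/oracle/u01/app/oracle/software/22899531/22806133/ [EMREP13C] opatch apply
oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] sqlplus / as sysdba
SQL> startup
SQL> exit
oracle@vmCC13c:/u01/app/oracle/ [EMREP13C] $ORACLE_HOME/OPatch/datapatch -verbose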

You finally check the Oracle inventory:

oracle@vmtestoraCC13c:/home/oracle/ [EMREP13C] opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.8
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.8
OUI version       : 12.1.0.2.0
--------------------------------------------------------------------------------
Local Machine Information:
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Database 12c                                      12.1.0.2.0
There are 1 products installed in this Oracle Home. 
Interim patches (1) : 
Patch  22806133     : applied on Tue Nov 22 11:19:55 CET 2016
Unique Patch ID:  19983161
Patch description:  "DATABASE BUNDLE PATCH: 12.1.0.2.160419 (22806133)"

Secondly you have to disable the optimizer_adaptive_features parameter in the repository database:

SQL> alter system set optimizer_adaptive_features=false scope=both;

System altered.

Then we have to ensure that the tables in the Management Repository do not have any snapshots created:

SQL> select master , log_table from all_mview_logs where log_owner='SYSMAN';
no rows selected

We verify if any login triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGON%' AND status='ENABLED';

We verify if any logoff triggers are set:

SQL> SELECT COUNT (trigger_name) FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
SQL> SELECT trigger_name FROM sys.dba_triggers
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';

If we find a trigger enabled, we disable it:

SQL> SELECT trigger_name, owner from sys.dba_triggers 
WHERE TRIGGERING_EVENT LIKE 'LOGOFF%' AND status='ENABLED';
 
TRIGGER_NAME         OWNER
GSMLOGOFF            GSMADMIN_INTERNAL
 
SQL> alter trigger gsmadmin_internal.gsmlogoff disable;

Then you have to copy the EMKEY to the repository database:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl config emkey 
-copy_to_repos -sysman_pwd dbi05manager
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
The EMKey has been copied to the Management Repository. 
This operation will cause the EMKey to become unsecure.
After the required operation has been completed, 
secure the EMKey by running "emctl config emkey -remove_from_repos".

Concerning the OMS shutdown, we follow this procedure:

We stop the JVMD and ADP engines explicitly:

oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms jvmd stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework
 
oracle@vmCC13c:/u03/app/oracle/oms13c/bin/ [oms13c] emctl extended oms adp stop -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
No verb found to be registered with emctl extensibles framework

Then we shut down completely the OMS:

oracle@v333:/home/oracle/ [oms13c] emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping Oracle Management Server...
WebTier Successfully Stopped
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down
Stopping BI Publisher Server...
BI Publisher Server Successfully Stopped
AdminServer Successfully Stopped
BI Publisher Server is Down

It is mandatory to stop the management agent in order to avoid errors during the migration phase:

oracle@vmtestoraCC13c:/home/oracle/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Stopping agent ... stopped.

WATCH OUT: before running the installer, unset the PERLLIB and PERL5LIB variables (if, for example, you have such environment variables defined for your oracle user), otherwise you will get the following error:

/opt/oracle/Middleware13c/perl/lib/5.10.0/x86_64-linux-thread-multi/auto/Cwd/Cwd.so: 
undefined symbol: Perl_Gthr_key_ptr
ERROR: Unable to continue with the installation 
because some one-off patches could not be applied successfully.
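
Unsetting the two variables in the current shell before launching the installer is enough:

oracle@v333:/opt/software/ [oms13c] unset PERLLIB
oracle@v333:/opt/software/ [oms13c] unset PERL5LIB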

Then, once the OMS and the management agent are stopped, you can run the installer:

oracle@v333:/opt/software/ [oms13c] ./em13200_linux64.bin
Checking monitor: must be configured to display at least 256 colors.   
Actual 16777216    Passed
Checking swap space: must be greater than 512 MB.   Actual 7999 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    
Passed (64-bit not required)
Preparing to launch the Oracle Universal Installer 
from /tmp/OraInstall2016-11-18_10-07-45AM
====Prereq Config Location main===
/tmp/OraInstall2016-11-18_10-07-45AM/stage/prereq
EMGCInstaller args -scratchPath
EMGCInstaller args /tmp/OraInstall2016-11-18_10-07-45AM
EMGCInstaller args -sourceType
EMGCInstaller args network
EMGCInstaller args -timestamp
EMGCInstaller args 2016-11-18_10-07-45AM
EMGCInstaller args -paramFile
EMGCInstaller args /tmp/sfx_f8wrWz/Disk1/install/linux64/oraparam.ini
EMGCInstaller args -nocleanUpOnExit
DiskLoc inside SourceLoc/opt/software
EMFileLoc:/tmp/OraInstall2016-11-18_10-07-45AM/oui/em/
ScratchPathValue :/tmp/OraInstall2016-11-18_10-07-45AM

The first installer screen appears:

em1

I chose not to receive security alerts, then I select Next:

em2

I select Next and skip the software updates

em3

We check that the prerequisites are ok

em4

We choose to upgrade an existing Enterprise Manager system, we enter the old Middleware home, and we select Next

em5

We enter the new Middleware home, and we select Next

em6

We enter the repository connection details and the sys and sysman passwords, we confirm that we have a valid repository backup, we disable the DDMP jobs and we select Next

em7

We select Yes to fix the issues

em8

We review the plugins and we select Next

em9

We have the possibility to add plugins we want to deploy while upgrading to EM 13.2

em10

We enter the weblogic username and password and we select Next

em11

I chose not to configure a shared location for Oracle BI Publisher

em12

We choose the default ports and we select Next

em13

We select Upgrade:

em14

The upgrade is running fine :-)

At the end of the upgrade, you have to run the root.sh script from the new OMS home, connected as root:

[root@vmCC13c oms13cr2]# . root.sh
/etc exist
/u03/app/oracle/oms13cr2

Finally you have successfully upgraded to EM 13.2:

em15

Once the OMS is upgraded, you have to upgrade the management agents with the classical procedure. From the EM console you select Upgrade Agent as follows:

em16

em17

You add the management agent, and you select Submit

em18

You can follow the upgrade phase; finally you have to run the root.sh script on the management agent and clean up the old agent environment with the EM console as follows:

You select the Post Agent Upgrade Tasks, you select the agent previously migrated:

em19

And you select Submit

The old agent home will be removed; you will just have to adapt your /etc/oratab file to point to the new ORACLE_HOME of your 13.2 agent.
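
A hypothetical /etc/oratab entry could look like the following; the agent home path depends on your installation and has to be adapted:

agent13c:/u03/app/oracle/agent13c/agent_13.2.0.0.0:N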

The last phase consists of deleting the old OMS home. As we were on version 13.1, we only have to check that nothing is running from the old environment, and then delete the old home.

oracle@vmCC13c:/u03/app/oracle/oms13cr2/ [oms13c] ps -ef | grep oms | grep -v 13cr2
oracle   28463 27736  0 15:52 pts/5    00:00:00 grep --color=auto oms
 
oracle@vmCC13c:/u03/app/oracle/ [oms13c] ls
agent13c  CC13c_setupinfo.txt  gc_inst  gc_inst1  oms13c  oms13cr2  swlib
oracle@vmCC13c:/u03/app/oracle/ [oms13c] rm -rf oms13c

The new features are listed at the following URL and mainly concern virtualization, Middleware and Cloud management, and incident management (for example, Always-On Monitoring installed on a different host).

http://docs.oracle.com/cd/E73210_01/EMCON/GUID-503991BC-D1CD-46EC-8373-8423B2D43437.htm#EMCON-GUID-503991BC-D1CD-46EC-8373-8423B2D43437

Finally, the upgrade to EM 13.2 was finished in a couple of hours and did not present any errors.

 

Cet article Upgrade to Enterprise Manager 13.2 est apparu en premier sur Blog dbi services.

Getting started with Docker – 2 – building your own base image

Mon, 2016-12-12 13:30

In the last post we quickly looked at how you can install everything you need to run docker containers on a CentOS 7 host and brought up the official PostgreSQL image. However, there are good reasons not to rely on an existing image: you want to deliver a pre-configured base image which includes everything your application requires, and you want to have control over that image. When this image needs to be updated you can update it once and then re-deploy it to wherever it is needed.

For CentOS there is a script on GitHub which you can use to build your base image. For other distributions you can check the docker documentation. This script basically makes use of the "--installroot" switch of yum which allows installing packages to a location other than the default one.
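
In essence, it boils down to something like this (a simplified sketch; the target directory is just an example and the script does more, e.g. cleanup and sane defaults):

sudo yum -c /etc/yum.conf --installroot=/tmp/mytarget --releasever=7 --setopt=tsflags=nodocs install -y @Core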

Using this script we can build a CentOS 7 base image. There are three parameters you can use:

  -p "<packages>"  The list of packages to install in the container.
                   The default is blank.
  -g "<groups>"    The groups of packages to install in the container.
                   The default is "Core".
  -y <yumconf>     The path to the yum config to install packages from. The
                   default is /etc/yum.conf for Centos/RHEL and /etc/dnf/dnf.conf for Fedora

We’ll use the “-p” switch to install all the packages which are required to build PostgreSQL from source (this depends on the compile options, of course) and some additional packages which are useful for daily work (such as screen):

[docker@centos7 ~]$ sudo ./mkimage-yum.sh -p "gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel tcl tcl-devel openssh-clients bzip2 net-tools wget screen ksh unzip" centospg

Once done the new image is available locally:

[docker@centos7 ~]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centospg            7.2.1511            184509483d52        40 seconds ago      510.6 MB
postgres            latest              78e3985acac0        2 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB

If you wonder how that image made it into docker, take a look at the last lines of the script:

tar --numeric-owner -c -C "$target" . | docker import - $name:$version

What happened is that a tar file of the temporary chroot directory was created:

[docker@centos7 ~]$ ls -latr /tmp/mkimage-yum.sh.dKHtsq/
total 24
drwxrwxrwt.  2 root root    6 Aug 12  2015 tmp
dr-xr-xr-x.  2 root root    6 Aug 12  2015 sys
drwxr-xr-x.  2 root root    6 Aug 12  2015 srv
drwxr-xr-x.  2 root root    6 Aug 12  2015 run
dr-xr-x---.  2 root root    6 Aug 12  2015 root
dr-xr-xr-x.  2 root root    6 Aug 12  2015 proc
drwxr-xr-x.  2 root root    6 Aug 12  2015 opt
drwxr-xr-x.  2 root root    6 Aug 12  2015 mnt
drwxr-xr-x.  2 root root    6 Aug 12  2015 media
drwxr-xr-x.  2 root root    6 Aug 12  2015 home
drwxr-xr-x.  2 root root 4096 Aug 12  2015 dev
dr-xr-xr-x.  2 root root    6 Aug 12  2015 boot
lrwxrwxrwx.  1 root root    7 Dec 10 15:02 bin -> usr/bin
lrwxrwxrwx.  1 root root    8 Dec 10 15:02 sbin -> usr/sbin
lrwxrwxrwx.  1 root root    9 Dec 10 15:02 lib64 -> usr/lib64
lrwxrwxrwx.  1 root root    7 Dec 10 15:02 lib -> usr/lib
dr-xr-xr-x. 17 root root 4096 Dec 10 15:02 .
drwxr-xr-x. 13 root root 4096 Dec 10 15:02 usr
drwxr-xr-x. 17 root root 4096 Dec 10 15:02 var
drwxr-xr-x. 19 root root 4096 Dec 10 15:02 etc
drwxrwxrwt. 11 root root 4096 Dec 12 11:43 ..

… which then was imported into docker with the “docker import” command.

To test if it really works we can start it and execute bash:

[docker@centos7 ~]$ docker run -it --rm  centospg:7.2.1511 bash
[root@cf690e9d9476 /]$ cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Fine, so far so good. Now we can really begin: we have our own CentOS base image on which we want our PostgreSQL image to be based. How then can we create a PostgreSQL image on top of our base image?

We’ll execute the commands necessary first and provide explanations afterwards. So, the first step:

[docker@centos7 ~]$ sudo yum install -y git
[docker@centos7 ~]$ mkdir centospg
[docker@centos7 ~]$ cd centospg
[docker@centos7 centospg]$ git init
Initialized empty Git repository in /home/docker/centospglatest/.git/
[docker@centos7 centospg]$ git config --global user.email "daniel@abc.def"
[docker@centos7 centospg]$ git config --global user.name "Daniel"

Why do we need git? It is not required to use git at all, but you probably want to have the files which are used to build your containers managed by git, so that you can combine all the advantages of git with the advantages of docker. This will make more and more sense as we step through the commands.

What we need to create now is a so-called Dockerfile. This file lists the instructions that Docker will execute to build your image. Let's go:

[docker@centos7 centospg]$ touch Dockerfile
[docker@centos7 centospg]$ git add Dockerfile 
[docker@centos7 centospg]$ git commit -m "initial" Dockerfile 
[master (root-commit) ce3727a] initial
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 Dockerfile

Our very basic Dockerfile will look like this:

# use our CentOS base images as source
FROM centospg:7.2.1511
# set the PostgreSQL we will download
ENV PG_VERSION 9.6.1
# include the PostgreSQL binaries in the PATH
ENV PATH /u01/app/postgres/product/96/db_01/bin:$PATH
# add a postgres group and postgres user
RUN groupadd postgres
RUN useradd -g postgres -m postgres
# prepare the directories
RUN mkdir -p /u01/app/postgres
RUN chown postgres:postgres /u01/app/postgres
# allow sudo for the postgres user
RUN echo 'postgres ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# download, configure, compile and install PostgreSQL from source
USER postgres
RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN cd /var/tmp; tar -axf /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN rm -f /var/tmp/postgresql-${PG_VERSION}.tar.bz2
RUN cd /var/tmp/postgres*; ./configure --prefix=/u01/app/postgres/product/96/db_01
RUN cd /var/tmp/postgres*; make
RUN cd /var/tmp/postgres*; make install
# cleanup
RUN rm -rf /var/tmp/postgres*

Using this Dockerfile we can build our PostgreSQL image:

[docker@centos7 centospg]$ docker build -t centospg:PG9.6.1 .

The output of this is quite long, here a snippet:

Sending build context to Docker daemon 45.06 kB
Step 1 : FROM centospg:7.2.1511
 ---> 184509483d52
Step 2 : ENV PG_VERSION 9.6.1
 ---> Running in 054900c7ebe1
 ---> 866815b9f092
Removing intermediate container 054900c7ebe1
Step 3 : ENV PATH /u01/app/postgres/product/96/db_01/bin:$PATH
 ---> Running in 46bcf7667a06
 ---> 94c9adb0402b
Removing intermediate container 46bcf7667a06
Step 4 : RUN groupadd postgres
 ---> Running in 24a7d9b7a1ea
 ---> eb4ff8268e2e
Removing intermediate container 24a7d9b7a1ea
Step 5 : RUN useradd -g postgres -m postgres
 ---> Running in 3e09b556fed8
 ---> acff1dcf2d4c
Removing intermediate container 3e09b556fed8
Step 6 : RUN mkdir -p /u01/app/postgres
 ---> Running in 999a62d075c0
 ---> fa4bdfa74d31
Removing intermediate container 999a62d075c0
Step 7 : RUN chown postgres:postgres /u01/app/postgres
 ---> Running in 37773e484260
 ---> 668c491b534b
Removing intermediate container 37773e484260
Step 8 : RUN echo 'postgres ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
 ---> Running in bb9cbfd20623
 ---> 589959efbda5
Removing intermediate container bb9cbfd20623
Step 9 : USER postgres
 ---> Running in f70b8c70c3fc
 ---> 32d3d3d603d2
Removing intermediate container f70b8c70c3fc
Step 10 : RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2
 ---> Running in c5cc11840a15
--2016-12-12 12:43:05--  https://ftp.postgresql.org/pub/source/v9.6.1/postgresql-9.6.1.tar.bz2
Resolving ftp.postgresql.org (ftp.postgresql.org)... 174.143.35.246, 217.196.149.55, 87.238.57.227, ...
Connecting to ftp.postgresql.org (ftp.postgresql.org)|174.143.35.246|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19260568 (18M) [application/x-bzip]
Saving to: '/var/tmp/postgresql-9.6.1.tar.bz2'

     0K .......... .......... .......... .......... ..........  0%  180K 1m44s
    50K .......... .......... .......... .......... ..........  0%  368K 77s
...
 ---> 645cf59717f4
Removing intermediate container c5cc11840a15
Step 11 : RUN cd /var/tmp; tar -axf /var/tmp/postgresql-${PG_VERSION}.tar.bz2
 ---> Running in 6a47968ddeb5
...
# PostgreSQL configure, make, make install
...
PostgreSQL installation complete.
 ---> 7f6b11b357d7
Removing intermediate container 041441816c4d
Step 16 : RUN rm -rf /var/tmp/postgres*
 ---> Running in 480cc2157b9a
 ---> a7b0bf1d1c35
Removing intermediate container 480cc2157b9a
Successfully built a7b0bf1d1c35

Once all of that completed we have a new Docker image:

[docker@centos7 centospg]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centospg            PG9.6.1             a7b0bf1d1c35        45 seconds ago      706.7 MB
centospg            7.2.1511            184509483d52        47 hours ago        510.6 MB
postgres            latest              78e3985acac0        4 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB

Using this image we can now bring up a container which complies with all the standards we built into the base image:

[docker@centos7 centospg]$ docker run -it centospg:PG9.6.1 bash
[postgres@7ac7780b0b1b /]$ which initdb
/u01/app/postgres/product/96/db_01/bin/initdb
[postgres@7ac7780b0b1b /]$ initdb --version
initdb (PostgreSQL) 9.6.1

Finally we commit our Dockerfile:

[docker@centos7 centospg]$ git commit -m "First working version" Dockerfile 
[master f0ba897] First working version
 1 file changed, 25 insertions(+)

Something to start with, isn’t it?

Note: The Docker best practices tell you to add only a few instructions to a Dockerfile because every instruction creates a new layer. In general you should run only one service per image. This makes it easier to scale your containers.
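
For illustration, the download/compile/install steps from the Dockerfile above could be combined into a single RUN instruction to reduce the number of layers (a sketch with the same paths and behavior as above):

RUN wget https://ftp.postgresql.org/pub/source/v${PG_VERSION}/postgresql-${PG_VERSION}.tar.bz2 -O /var/tmp/postgresql-${PG_VERSION}.tar.bz2 \
 && cd /var/tmp \
 && tar -axf postgresql-${PG_VERSION}.tar.bz2 \
 && cd postgresql-${PG_VERSION} \
 && ./configure --prefix=/u01/app/postgres/product/96/db_01 \
 && make \
 && make install \
 && cd / \
 && rm -rf /var/tmp/postgres*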

 

Cet article Getting started with Docker – 2 – building your own base image est apparu en premier sur Blog dbi services.

Getting started with Docker – 1 – overview and installation

Sun, 2016-12-11 04:30

Everybody is talking about Docker nowadays. What is it about? Do you remember Solaris Zones or Containers? It is more or less the same idea, although the development of Docker during the last years made Linux containers the de-facto standard for deploying applications in a standardized and isolated way. Docker is built on a classical client/server model: there is the docker server (or daemon) which serves the requests of docker clients, and the client is the one you'll use to tell the server what you want to do. Server and client come from the same installation; with the 1.12 release used here the daemon runs as its own dockerd binary (as you will see in the systemctl output below), while the docker binary acts as the client. In contrast to Solaris Zones, Docker containers are stateless by default. That means: when you shut down a docker container you'll lose everything that was done from the moment the container started until it got destroyed (although there are ways to avoid that). This is important to remember.

When you start a docker container on a host the host’s resources are shared with the container (Although you can limit that). It is not like when you fire up a virtual machine (which brings up an instance of a whole operating system) but more like a process that shares resources with the host it is running on. This might be as simple as running a “wget” command but it might be as complicated as bringing up a whole infrastructure that serves your service desk. Docker containers should be lightweight.

So what makes docker unique then? It is the concept of a layered filesystem. We'll come to that soon. Let's start by installing everything we need to run a docker daemon. As always, we'll start with a CentOS 7 minimal installation:

[root@centos7 ~]$ cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core) 
[root@centos7 ~]$ 

The easiest way to get docker installed is to add the official docker yum repository (for CentOS in this case):

[root@centos7 ~]$ echo "[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg" > /etc/yum.repos.d/docker.repo

Working directly as root is never a good idea, so let's create a user for that and let this user do everything via sudo (not a good practice, I know :) ):

[root@centos7 ~]$ groupadd docker
[root@centos7 ~]$ useradd -g docker docker
[root@centos7 ~]$ passwd docker
[root@centos7 ~]$ echo "docker ALL=(ALL)   NOPASSWD: ALL" >> /etc/sudoers
[root@centos7 ~]$ su - docker
[docker@centos7 ~]$ sudo ls

Ready to install:

[docker@centos7 ~]$ sudo yum install docker-engine

This will install the docker engine and these additional packages:

======================================================================================================================================
 Package                                Arch                   Version                               Repository                  Size
======================================================================================================================================
Installing:
 docker-engine                          x86_64                 1.12.3-1.el7.centos                   dockerrepo                  19 M
Installing for dependencies:
 audit-libs-python                      x86_64                 2.4.1-5.el7                           base                        69 k
 checkpolicy                            x86_64                 2.1.12-6.el7                          base                       247 k
 docker-engine-selinux                  noarch                 1.12.3-1.el7.centos                   dockerrepo                  28 k
 libcgroup                              x86_64                 0.41-8.el7                            base                        64 k
 libseccomp                             x86_64                 2.2.1-1.el7                           base                        49 k
 libsemanage-python                     x86_64                 2.1.10-18.el7                         base                        94 k
 libtool-ltdl                           x86_64                 2.4.2-21.el7_2                        updates                     49 k
 policycoreutils-python                 x86_64                 2.2.5-20.el7                          base                       435 k
 python-IPy                             noarch                 0.75-6.el7                            base                        32 k
 setools-libs                           x86_64                 3.3.7-46.el7                          base                       485 k

Transaction Summary
======================================================================================================================================

Enable the service:

[docker@centos7 ~]$ sudo systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Start the service:

[docker@centos7 ~]$ sudo systemctl start docker
[docker@centos7 ~]$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2016-12-10 12:26:46 CET; 6s ago
     Docs: https://docs.docker.com
 Main PID: 2957 (dockerd)
   Memory: 12.9M
   CGroup: /system.slice/docker.service
           ├─2957 /usr/bin/dockerd
           └─2960 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --...

Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481380483+01:00" level=info msg="Graph migration to co...conds"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481751429+01:00" level=warning msg="mountpoint for pid...found"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.481751451+01:00" level=info msg="Loading containers: start."
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.574330143+01:00" level=info msg="Firewalld running: false"
Dec 10 12:26:45 centos7.local dockerd[2957]: time="2016-12-10T12:26:45.822997195+01:00" level=info msg="Default bridge (docke...dress"
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.201798804+01:00" level=info msg="Loading containers: done."
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.201984648+01:00" level=info msg="Daemon has completed ...ation"
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.202003760+01:00" level=info msg="Docker daemon" commit...1.12.3
Dec 10 12:26:46 centos7.local dockerd[2957]: time="2016-12-10T12:26:46.207416263+01:00" level=info msg="API listen on /var/ru....sock"
Dec 10 12:26:46 centos7.local systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.

And we're done. Let's check if docker is working as expected:

[docker@centos7 ~]$ sudo docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:

https://hub.docker.com

For more examples and ideas, visit:

https://docs.docker.com/engine/userguide/

What happened here is that we already executed our first docker image: "hello-world". The "--rm" flag tells docker to automatically remove the container once it exits. As the image was not available on our host it was automatically downloaded from the docker hub:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete 
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
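
Because of the "--rm" flag the container does not stay around after it exits; listing all containers (running and stopped ones) should therefore not show it anymore:

[docker@centos7 ~]$ sudo docker ps -a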

You can browse the docker hub for many, many other images using your favorite browser or you can use the command line:

[docker@centos7 ~]$ docker search postgres
NAME                      DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
postgres                  The PostgreSQL object-relational database ...   2939                 [OK]       
kiasaki/alpine-postgres   PostgreSQL docker image based on Alpine Linux   28                   [OK]
abevoelker/postgres       Postgres 9.3 + WAL-E + PL/V8 and PL/Python...   10                   [OK]
onjin/alpine-postgres     PostgreSQL / v9.1 - v9.6 / size <  50MB      9                    [OK]
macadmins/postgres        Postgres that accepts remote connections b...   8                    [OK]
jamesbrink/postgres       Highly configurable PostgreSQL container.       5                    [OK]
eeacms/postgres           Docker image for PostgreSQL (RelStorage re...   4                    [OK]
cptactionhank/postgres                                                    4                    [OK]
azukiapp/postgres         Docker image to run PostgreSQL by Azuki - ...   2                    [OK]
kampka/postgres           A postgresql image build on top of an arch...   2                    [OK]
clkao/postgres-plv8       Docker image for running PLV8 1.4 on Postg...   2                    [OK]
2020ip/postgres           Docker image for PostgreSQL with PLV8           1                    [OK]
steenzout/postgres        Steenzout's docker image packaging for Pos...   1                    [OK]
blacklabelops/postgres    Postgres Image for Atlassian Applications       1                    [OK]
buker/postgres            postgres                                        0                    [OK]
kobotoolbox/postgres      Postgres image for KoBo Toolbox.                0                    [OK]
vrtsystems/postgres       PostgreSQL image with added init hooks, bu...   0                    [OK]
timbira/postgres          Postgres  containers                            0                    [OK]
coreroller/postgres       official postgres:9.4 image but it adds 2 ...   0                    [OK]
livingdocs/postgres       Postgres v9.3 with the plv8 extension inst...   0                    [OK]
1maa/postgres             PostgreSQL base image                           0                    [OK]
opencog/postgres          This is a configured postgres database for...   0                    [OK]
khipu/postgres            postgres with custom uids                       0                    [OK]
travix/postgres           A container to run the PostgreSQL database.     0                    [OK]
beorc/postgres            Ubuntu-based PostgreSQL server                  0                    [OK]
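
The list can get quite long. Recent versions of docker let you filter and limit the search output directly on the command line (a small sketch, assuming your docker version already ships these flags):

[docker@centos7 ~]$ docker search --filter is-official=true postgres   # only the official image
[docker@centos7 ~]$ docker search --filter stars=10 postgres           # only images with at least 10 stars
[docker@centos7 ~]$ docker search --limit 5 postgres                   # at most 5 results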

The first entry in the list above, “postgres”, is the official PostgreSQL image. How do I run it?

[docker@centos7 ~]$ docker run -it postgres
Unable to find image 'postgres:latest' locally
latest: Pulling from library/postgres
386a066cd84a: Pull complete 
e6dd80b38d38: Pull complete 
9cd706823821: Pull complete 
40c17ac202a9: Pull complete 
7380b383ba3d: Pull complete 
538e418b46ce: Pull complete 
c3b9d41b7758: Pull complete 
dd4f9522dd30: Pull complete 
920e548f9635: Pull complete 
628af7ef2ee5: Pull complete 
004275e6f5b5: Pull complete 
Digest: sha256:e761829c4b5ec27a0798a867e5929049f4cbf243a364c81cad07e4b7ac2df3f1
Status: Downloaded newer image for postgres:latest
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

****************************************************
WARNING: No password has been set for the database.
         This will allow anyone with access to the
         Postgres port to access your database. In
         Docker's default configuration, this is
         effectively any other container on the same
         system.

         Use "-e POSTGRES_PASSWORD=password" to set
         it in "docker run".
****************************************************
waiting for server to start....LOG:  database system was shut down at 2016-12-10 11:42:01 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
 done
server started
ALTER ROLE


/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

LOG:  database system was shut down at 2016-12-10 11:42:04 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

And we are ready. As with the “hello-world” image docker had to download the image as it was not available locally. Once that was done the image was started and a new PostgreSQL instance was created automatically. Here you can see what the layered filesystem is about:

386a066cd84a: Pull complete 
e6dd80b38d38: Pull complete 
9cd706823821: Pull complete 
40c17ac202a9: Pull complete 
7380b383ba3d: Pull complete 
538e418b46ce: Pull complete 
c3b9d41b7758: Pull complete 
dd4f9522dd30: Pull complete 
920e548f9635: Pull complete 
628af7ef2ee5: Pull complete 
004275e6f5b5: Pull complete 

Each of these lines represents a layered/stacked filesystem on top of the previous one. This is an important concept because when you change something only the layer that contains the change needs to be rebuilt, not the layers below. In other words you could build an image based on a CentOS 7 image and then deploy your changes on top of that. You deliver that image and some time later you need to make some modifications: the only thing you need to deliver are the modifications you made because the layers below did not change.
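
You can look at the layers of an image at any time with “docker history” (a small sketch; run it once the postgres image has been pulled, the exact layers and sizes depend on the image version):

[docker@centos7 ~]$ docker history postgres              # one line per layer, newest on top
[docker@centos7 ~]$ docker history --no-trunc postgres   # the same, but with the full build commands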

You will notice that you cannot type any command once the container has started. As soon as you hit “CTRL-C” the container will shut down (this is because of the “-it” switch, which stands for “interactive” and “pseudo terminal”):

^CLOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down

Everything that happened inside the container is now gone. The correct way to launch it is:

[docker@centos7 ~]$ docker run --name my-first-postgres -e POSTGRES_PASSWORD=postgres -d postgres
d51abc52108d3040817474fa8c85ab15020c12cb753515543c2d064143277155

The “-d” switch tells docker to detach, so we get our shell back. The magic string docker returns is the container id:

[docker@centos7 ~]$ docker ps --no-trunc
CONTAINER ID                                                       IMAGE               COMMAND                            CREATED             STATUS              PORTS               NAMES
d51abc52108d3040817474fa8c85ab15020c12cb753515543c2d064143277155   postgres            "/docker-entrypoint.sh postgres"   3 minutes ago       Up 3 minutes        5432/tcp            my-first-postgres
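
The container id (or the name we gave it with “--name”) is what you use to manage the detached container. A few commands that are useful at this point (a small sketch; all of them accept either the id or the name):

[docker@centos7 ~]$ docker logs my-first-postgres      # show what the instance wrote to stdout/stderr
[docker@centos7 ~]$ docker logs -f my-first-postgres   # same, but keep following the output (CTRL-C to stop)
[docker@centos7 ~]$ docker stop my-first-postgres      # stop the container (and the instance inside it)
[docker@centos7 ~]$ docker start my-first-postgres     # start it again, the data is still there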

When you want to know what images you have available locally you can ask docker for that:

[docker@centos7 ~]$ docker images 
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
postgres            latest              78e3985acac0        2 days ago          264.7 MB
hello-world         latest              c54a2cc56cbb        5 months ago        1.848 kB
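
When you no longer need an image or a stopped container you can clean up (a small sketch; removing an image only works once no container based on it exists anymore, and the container id below is just a placeholder):

[docker@centos7 ~]$ docker ps -a                  # list all containers, including stopped ones
[docker@centos7 ~]$ docker rm [container_id]      # remove a stopped container (by id or name)
[docker@centos7 ~]$ docker rmi hello-world        # remove the image itself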

How do you now connect to the PostgreSQL instance running in that container?

[docker@centos7 ~]$ docker run -it --rm --link my-first-postgres:postgres postgres psql -h postgres -U postgres
Password for user postgres: 
psql (9.6.1)
Type "help" for help.

postgres=# \l+
                                                                   List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   |  Size   | Tablespace |                Description                 
-----------+----------+----------+------------+------------+-----------------------+---------+------------+--------------------------------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |                       | 7063 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +| 6953 kB | pg_default | unmodifiable empty database
           |          |          |            |            | postgres=CTc/postgres |         |            | 
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +| 6953 kB | pg_default | default template for new databases
           |          |          |            |            | postgres=CTc/postgres |         |            | 
(3 rows)
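
Using “--link” we connected from a second container. If you have a psql client installed on the host you can also publish the container's port to the host with “-p” and connect directly (a small sketch; this assumes a psql client on the host, that port 5432 is still free there, and the container name is just an example):

[docker@centos7 ~]$ docker run --name my-second-postgres -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres
[docker@centos7 ~]$ psql -h localhost -p 5432 -U postgres   # connect with the psql client of the host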

Or, to get a bash shell (this again starts a new container from the same image):

[docker@centos7 ~]$ docker run -it --rm --link my-first-postgres:postgres postgres bash
root@f8c3b3738336:/$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
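
Note that this started yet another container from the postgres image. If you just want a shell inside the already running “my-first-postgres” container, “docker exec” is the way to go (a small sketch):

[docker@centos7 ~]$ docker exec -it my-first-postgres bash              # a bash inside the running container
[docker@centos7 ~]$ docker exec -it my-first-postgres psql -U postgres  # or psql directly, without a second container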

Ok, this PostgreSQL image is based on Debian 8. Let's say this is not what I like because I want my PostgreSQL image based on CentOS. This is the topic for the next post: we'll build our own CentOS image and go deeper into what the stacked filesystem is about. Once we have that available we'll use it to build a PostgreSQL image on top of it.

 

Cet article Getting started with Docker – 1 – overview and installation est apparu en premier sur Blog dbi services.
