Yann Neuhaus

dbi services technical blog

What is in a transportable tablespace dumpfile?

Mon, 2017-05-08 15:20

On 31st of May in Düsseldorf, at DOAG Datenbank, I'll talk about transportable tablespaces and pluggable databases. Both methods transport data physically; the difference is in the transport of the metadata, which can be more flexible when transported logically, as with TTS, but faster when transported physically, as with PDBs. I have a lot of demos showing transportable tablespaces with RMAN and the different cloning features available in 12cR2. If I have time, I'll show what is inside the dumpfile when Data Pump is used to export the metadata. Here is the idea.

expdp transport_tablespaces

Here is how we export metadata with Data Pump for transportable tablespaces.


expdp system/oracle@//localhost/PDB1 directory=VAR_TMP dumpfile=expdat.dmp transport_tablespaces=USERS exclude=table_statistics,index_statistics;
 
...
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/oradata/tmp/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
/u01/oradata/CDB1/PDB1/users01.dbf

The metadata is exported into expdat.dmp and the data resides in the original datafile. The dumpfile is a binary file, but there is a way to extract the metadata as DDL using impdp.

impdp sqlfile

Here I run impdp with sqlfile to generate all the DDL into this file. Nothing is imported and the datafiles are not read, which is why I've just put something bogus for transport_datafiles:


impdp system/oracle@//localhost/PDB1 directory=VAR_TMP transport_datafiles=blahblahblah sqlfile=sqlfile.sql ;

No error. Only the dumpfile has been read, and here is an extract of the DDL in sqlfile.sql concerning the PK_DEPT and PK_EMP indexes:


-- new object type path: TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
ALTER TABLE "SCOTT"."DEPT" ADD CONSTRAINT "PK_DEPT" PRIMARY KEY ("DEPTNO")
USING INDEX (CREATE UNIQUE INDEX "SCOTT"."PK_DEPT" ON "SCOTT"."DEPT" ("DEPTNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(SEG_FILE 26 SEG_BLOCK 138 OBJNO_REUSE 73197
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) ENABLE;
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX (CREATE UNIQUE INDEX "SCOTT"."PK_EMP" ON "SCOTT"."EMP" ("EMPNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(SEG_FILE 26 SEG_BLOCK 154 OBJNO_REUSE 73205
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) ENABLE;
-- new object type path: TRANSPORTABLE_EXPORT/INDEX_STATISTICS
-- new object type path: TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE;
-- new object type path: TRANSPORTABLE_EXPORT/TABLE_STATISTICS
-- new object type path: TRANSPORTABLE_EXPORT/STATISTICS/MARKER
-- new object type path: TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

This looks like the DDL used to re-create the same table except that we can see two storage attributes that are not documented:

  • SEG_FILE and SEG_BLOCK
  • OBJNO_REUSE
SEG_FILE and SEG_BLOCK

When you create an empty table, you just provide the tablespace name and Oracle allocates the first extent, with the segment header. You don't choose the data placement within the tablespace. But here we are in a different case: the extents already exist in the datafiles that we transport, and the DDL must simply map to them. This is why, in this case, the segment header file number and block number are specified. The remaining extent allocation information is stored within the datafiles (Locally Managed Tablespace); only the segment header must be known by the dictionary.

As an example, when I look at the database where the export comes from, I can see that the attributes for PK_EMP (SEG_FILE 26 SEG_BLOCK 154) are the relative file number and header block number of the PK_EMP segment:


10:49:10 SQL> select owner,segment_name,header_file,header_block,blocks,extents,tablespace_name,relative_fno from dba_segments where owner='SCOTT';
 
OWNER SEGMENT_NAME HEADER_FILE HEADER_BLOCK BLOCKS EXTENTS TABLESPACE_NAME RELATIVE_FNO
----- ------------ ----------- ------------ ------ ------- --------------- ------------
SCOTT DEPT 31 130 8 1 USERS 26
SCOTT EMP 31 146 8 1 USERS 26
SCOTT SALGRADE 31 162 8 1 USERS 26
SCOTT PK_DEPT 31 138 8 1 USERS 26
SCOTT PK_EMP 31 154 8 1 USERS 26

This file identifier is a relative file number within the tablespace, which means that there is no need to change it when a tablespace is transported.

You will see exactly the same information in the database where you import the tablespace (except for HEADER_FILE which is the absolute file number).
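If you want to see how this relative file number maps to the absolute file number and to the datafile itself, a quick query on DBA_DATA_FILES does it (a small sketch for the USERS tablespace of this demo):

select tablespace_name, file_id, relative_fno, file_name
from dba_data_files
where tablespace_name = 'USERS';

FILE_ID is the absolute file number (what DBA_SEGMENTS shows as HEADER_FILE), while RELATIVE_FNO is the tablespace-relative number recorded in the dumpfile.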

OBJNO_REUSE

Each segment has a DATA_OBJECT_ID, which is referenced in each block and in the ROWIDs. This must not change when we transport a tablespace, because the goal is that nothing has to be modified in the datafiles. For this reason, the data object id is exported with the metadata, as we can see for this PK_EMP example (OBJNO_REUSE 73205), and set to the same value in the target dictionary. Here are the data object IDs for the objects exported here:


10:49:20 SQL> select owner,object_name,object_type,object_id,data_object_id from dba_objects where owner='SCOTT';
 
OWNER OBJECT_NAME OBJECT_TYPE OBJECT_ID DATA_OBJECT_ID
----- ----------- ----------- --------- --------------
SCOTT DEPT TABLE 73196 73196
SCOTT PK_DEPT INDEX 73197 73197
SCOTT EMP TABLE 73198 73206
SCOTT PK_EMP INDEX 73199 73205
SCOTT BONUS TABLE 73200 73200
SCOTT SALGRADE TABLE 73201 73201

The OBJECT_ID will be different in the target, assigned in the same way as when we create an object, but this one is not referenced anywhere within the datafiles.
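If you want to verify this after plugging the tablespace into the target, the same kind of query there should show new OBJECT_IDs but the same DATA_OBJECT_IDs (a sketch, reusing the objects of this demo):

select owner,object_name,object_type,object_id,data_object_id
from dba_objects
where owner='SCOTT' and object_name in ('DEPT','EMP','PK_DEPT','PK_EMP');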

So what?

Usually, the metadata precedes the data. With transportable tablespaces, it is the opposite: data is there and metadata is re-created to map the data. This metadata is what is stored into the dumpfile exported to transport tablespaces.
From what you have seen, you can now understand that the RELATIVE_FNO and the DATA_OBJECT_ID are not unique within a database, but only within a tablespace. You can also understand that the Transportable Tablespace import duration does not depend on the size of the data, but is proportional to the number of objects (metadata). This is where Pluggable Databases are more efficient: the metadata is transported physically and the import duration does not depend on the number of objects, especially when it does not involve an upgrade to a new version and object recompilation.

 


Oracle 12cR2 : Optimizer Statistics Advisor

Mon, 2017-05-08 10:47

The Optimizer Statistics Advisor is a new Advisor in Oracle 12.2.
The goal of this Advisor is to check the way you gather statistics on your database and, depending on what is found, it will make some recommendations on how to improve the statistics gathering strategy in order to provide more efficient statistics to the CBO.
This Advisor is also able to generate remediation scripts to apply the statistics gathering “best practices”.
The recommendations are based on 23 predefined rules:

SQL> select rule_id, name, rule_type, description from v$stats_advisor_rules;


RULE_ID NAME RULE_TYPE DESCRIPTION
---------- ----------------------------------- --------- -------------------------------------------------------------------------------------
0 SYSTEM
1 UseAutoJob SYSTEM Use Auto Job for Statistics Collection
2 CompleteAutoJob SYSTEM Auto Statistics Gather Job should complete successfully
3 MaintainStatsHistory SYSTEM Maintain Statistics History
4 UseConcurrent SYSTEM Use Concurrent preference for Statistics Collection
5 UseDefaultPreference SYSTEM Use Default Preference for Stats Collection
6 TurnOnSQLPlanDirective SYSTEM SQL Plan Directives should not be disabled
7 AvoidSetProcedures OPERATION Avoid Set Statistics Procedures
8 UseDefaultParams OPERATION Use Default Parameters in Statistics Collection Procedures
9 UseGatherSchemaStats OPERATION Use gather_schema_stats procedure
10 AvoidInefficientStatsOprSeq OPERATION Avoid inefficient statistics operation sequences
11 AvoidUnnecessaryStatsCollection OBJECT Avoid unnecessary statistics collection
12 AvoidStaleStats OBJECT Avoid objects with stale or no statistics
13 GatherStatsAfterBulkDML OBJECT Do not gather statistics right before bulk DML
14 LockVolatileTable OBJECT Statistics for objects with volatile data should be locked
15 UnlockNonVolatileTable OBJECT Statistics for objects with non-volatile should not be locked
16 MaintainStatsConsistency OBJECT Statistics of dependent objects should be consistent
17 AvoidDropRecreate OBJECT Avoid drop and recreate object seqauences
18 UseIncremental OBJECT Statistics should be maintained incrementally when it is beneficial
19 NotUseIncremental OBJECT Statistics should not be maintained incrementally when it is not beneficial
20 AvoidOutOfRange OBJECT Avoid Out of Range Histogram endpoints
21 UseAutoDegree OBJECT Use Auto Degree for statistics collection
22 UseDefaultObjectPreference OBJECT Use Default Object Preference for statistics collection
23 AvoidAnalyzeTable OBJECT Avoid using analyze table commands for statistics collection


24 rows selected.


SQL>

You can have a look at this blog post if you want a little more information about these rules.
If you want to exclude some rules or some database objects from the Advisor's recommendations, you can define multiple filters (I will do that below).

Well, let's see how to use the Advisor. The first step is to create a task which will run it:

DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.CREATE_ADVISOR_TASK(tname);
END;
/

The task is created:

SQL> select task_name, advisor_name, created, status from dba_advisor_tasks where advisor_name = 'Statistics Advisor';


TASK_NAME ADVISOR_NAME CREATED STATUS
------------------------------ ------------------------------ ------------------- -----------
STAT_ADVISOR_1 Statistics Advisor 04.05.2017-11:19:25 INITIAL


SQL>

Now, I want to define some filters.
The first one will disable the Advisor for all objects, the 2nd will enable it only on a specific table, and the 3rd and 4th will exclude two rules:

DECLARE
filter1 CLOB;
filter2 CLOB;
filter3 CLOB;
filter4 CLOB;
BEGIN
filter1 := DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => NULL,
ownname => NULL,
tabname => NULL,
action => 'DISABLE' );


filter2 := DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => NULL,
ownname => 'JOC',
tabname => 'T2',
action => 'ENABLE' );


filter3 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => 'AvoidDropRecreate',
action => 'DISABLE' );


filter4 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => 'UseGatherSchemaStats',
action => 'DISABLE' );
END;
/

All is ready, let’s run the task…

DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname);
END;
/

…and generate the report:

SQL> select dbms_stats.report_advisor_task('stat_advisor_1',null,'text','all','all') as report from dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : STAT_ADVISOR_1
Execution Name : EXEC_2172
Created : 05-04-17 11:34:51
Last Modified : 05-04-17 11:35:10
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_2172 of task STAT_ADVISOR_1, the Statistics Advisor has no
findings.

-------------------------------------------------------------------------------
SQL>

Cool! Nothing to report regarding statistics gathering on my table JOC.T2 (see filter2 above).
But how does the Advisor react when I run it after having deleted the statistics on this table?
SQL> exec dbms_stats.delete_table_stats(ownname=>'JOC',tabname=>'T2');


PL/SQL procedure successfully completed.


SQL> DECLARE
2 tname VARCHAR2(32767);
3 ret VARCHAR2(32767);
4 BEGIN
5 tname := 'stat_advisor_1';
6 ret := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname);
7 END;
8 /


PL/SQL procedure successfully completed.


SQL> select dbms_stats.report_advisor_task('stat_advisor_1',null,'text','all','all') as report from dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : STAT_ADVISOR_1
Execution Name : EXEC_2182
Created : 05-04-17 11:34:51
Last Modified : 05-04-17 11:44:22
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_2182 of task STAT_ADVISOR_1, the Statistics Advisor has 1
finding(s). The findings are related to the following rules: AVOIDSTALESTATS.
Please refer to the finding section for detailed information.

-------------------------------------------------------------------------------
FINDINGS
-------------------------------------------------------------------------------
Rule Name: AvoidStaleStats
Rule Description: Avoid objects with stale or no statistics
Finding: There are 1 object(s) with no statistics.
Schema:
JOC
Objects:
T2


Recommendation: Gather Statistics on those objects with no statistics.
Example:
-- Gathering statistics for tables with stale or no statistics in schema, SH:
exec dbms_stats.gather_schema_stats('SH', options => 'GATHER AUTO')
Rationale: Stale statistics or no statistics will result in bad plans.
-------------------------------------------------------------------------------

It seems to work well. The Advisor detected that there are no stats on the table, and a rule was triggered.
And what about the remediation scripts? First, we have to generate them:

VARIABLE script CLOB
DECLARE
tname VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
:script := DBMS_STATS.SCRIPT_ADVISOR_TASK(tname);
END;
/


PL/SQL procedure successfully completed.

And then display them :

set linesize 3000
set long 500000
set pagesize 0
set longchunksize 100000
set serveroutput on


DECLARE
v_len NUMBER(10);
v_offset NUMBER(10) :=1;
v_amount NUMBER(10) :=10000;
BEGIN
v_len := DBMS_LOB.getlength(:script);
WHILE (v_offset < v_len)
LOOP
DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(:script,v_amount,v_offset));
v_offset := v_offset + v_amount;
END LOOP;
END;
/
-- Script generated for the recommendations from execution EXEC_2182
-- in the statistics advisor task STAT_ADVISOR_1
-- Script version 12.2
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USECONCURRENT
-- Rule Description: Use Concurrent preference for Statistics Collection
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USEDEFAULTPREFERENCE
-- Rule Description: Use Default Preference for Stats Collection
-- Set global preferenes to default values.
-- Scripts for rule USEDEFAULTOBJECTPREFERENCE
-- Rule Description: Use Default Object Preference for statistics collection
-- Setting object-level preferences to default values
-- setting CASCADE to default value for object level preference
-- setting ESTIMATE_PERCENT to default value for object level preference
-- setting METHOD_OPT to default value for object level preference
-- setting GRANULARITY to default value for object level preference
-- setting NO_INVALIDATE to default value for object level preference
-- Scripts for rule USEINCREMENTAL
-- Rule Description: Statistics should be maintained incrementally when it is beneficial
-- Turn on the incremental option for those objects for which using incremental is helpful.
-- Scripts for rule UNLOCKNONVOLATILETABLE
-- Rule Description: Statistics for objects with non-volatile should not be locked
-- Unlock statistics for objects that are not volatile.
-- Scripts for rule LOCKVOLATILETABLE
-- Rule Description: Statistics for objects with volatile data should be locked
-- Lock statistics for volatile objects.
-- Scripts for rule NOTUSEINCREMENTAL
-- Rule Description: Statistics should not be maintained incrementally when it is not beneficial
-- Turn off incremental option for those objects for which using incremental is not helpful.
-- Scripts for rule USEAUTODEGREE
-- Rule Description: Use Auto Degree for statistics collection
-- Turn on auto degree for those objects for which using auto degree is helpful.
-- Scripts for rule AVOIDSTALESTATS
-- Rule Description: Avoid objects with stale or no statistics
-- Gather statistics for those objcts that are missing or have no statistics.
-- Scripts for rule MAINTAINSTATSCONSISTENCY
-- Rule Description: Statistics of dependent objects should be consistent
-- Gather statistics for those objcts that are missing or have no statistics.
declare
obj_filter_list dbms_stats.ObjectTab;
obj_filter dbms_stats.ObjectElem;
obj_cnt number := 0;
begin
obj_filter_list := dbms_stats.ObjectTab();
obj_filter.ownname := 'JOC';
obj_filter.objtype := 'TABLE';
obj_filter.objname := 'T2';
obj_filter_list.extend();
obj_cnt := obj_cnt + 1;
obj_filter_list(obj_cnt) := obj_filter;
dbms_stats.gather_database_stats(
obj_filter_list=>obj_filter_list);
end;
/

PL/SQL procedure successfully completed.
SQL>

It was a very simple demo but, as you can see above, the Advisor provides a small script to fix what is wrong or missing concerning the statistics of the table.
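Note that, instead of generating the script and running it yourself, DBMS_STATS also provides an IMPLEMENT_ADVISOR_TASK function to apply the recommendations directly, and a DROP_ADVISOR_TASK procedure to clean up the task once you are done. A quick sketch (to be checked against your version):

DECLARE
  ret CLOB;
BEGIN
  -- apply the recommendations of the last execution of the task
  ret := DBMS_STATS.IMPLEMENT_ADVISOR_TASK('stat_advisor_1');
END;
/

-- drop the task when it is no longer needed
exec DBMS_STATS.DROP_ADVISOR_TASK('stat_advisor_1')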

Conclusion:
Once you have upgraded your database to Oracle 12.2, don't hesitate to set up the new Statistics Advisor. It is easy to deploy and can be fully personalized depending on what you want to check (which objects? which rules?). Moreover, it has been developed by the same team that develops and maintains the CBO. Therefore, they know which statistics the Optimizer needs!

 


PostgreSQL on a Pure Storage All Flash Array – Populating PostgreSQL

Mon, 2017-05-08 07:51

In the last post we explained how we did the setup for connecting a PostgreSQL server to the Pure Storage array. In this post we'll actually give the array something to work with. What we use for the tests is pgbench, which is included with PostgreSQL and can be used to load a database and then perform a TPC-B like performance test against the loaded data. Btw: pgbench is a great tool when you want to run the same tests against different configurations of PostgreSQL, or the same configuration of PostgreSQL against different physical or virtual hardware configurations.
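For reference, a benchmark run against the loaded database will later look something like this (the number of clients, threads and the duration below are just placeholders):

postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -c 10 -j 2 -T 600 purestorage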

To begin with, let's create a dedicated database for loading the data into:

postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage" postgres
CREATE DATABASE
Time: 552.693 ms

We'll use pgbench in initialization ("-i") mode with a scale factor ("-s") of 10000 for populating the database. This will create the pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers tables and will result in 1000000000 rows in total in the pgbench_accounts table.

postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage

The total run time was around 2.5 hours (including the index creation and the vacuum at the end), and this is the size of the database:

pgpurestorage/postgres MASTER (postgres@5432) # \l+
                                                                      List of databases
┌─────────────┬──────────┬──────────┬─────────────┬─────────────┬───────────────────────┬─────────┬────────────┬────────────────────────
│    Name     │  Owner   │ Encoding │   Collate   │    Ctype    │   Access privileges   │  Size   │ Tablespace │                Descript
├─────────────┼──────────┼──────────┼─────────────┼─────────────┼───────────────────────┼─────────┼────────────┼────────────────────────
│ postgres    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 7629 kB │ pg_default │ default administrative 
│ purestorage │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                        
│ template0   │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ unmodifiable empty data
│             │          │          │             │             │ postgres=CTc/postgres │         │            │                        
│ template1   │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ default template for ne
│             │          │          │             │             │ postgres=CTc/postgres │         │            │                        
└─────────────┴──────────┴──────────┴─────────────┴─────────────┴───────────────────────┴─────────┴────────────┴────────────────────────
(4 rows)

The biggest table is the pgbench_accounts table:

pgpurestorage/purestorage MASTER (postgres@5432) # \d+
                           List of relations
┌────────┬──────────────────┬───────┬──────────┬─────────┬─────────────┐
│ Schema │       Name       │ Type  │  Owner   │  Size   │ Description │
├────────┼──────────────────┼───────┼──────────┼─────────┼─────────────┤
│ public │ pgbench_accounts │ table │ postgres │ 125 GB  │             │
│ public │ pgbench_branches │ table │ postgres │ 392 kB  │             │
│ public │ pgbench_history  │ table │ postgres │ 0 bytes │             │
│ public │ pgbench_tellers  │ table │ postgres │ 4360 kB │             │
└────────┴──────────────────┴───────┴──────────┴─────────┴─────────────┘
(4 rows)

.. which contains 1000000000 rows:

pgpurestorage/purestorage MASTER (postgres@5432) # select count(*) from pgbench_accounts;
┌────────────┐
│   count    │
├────────────┤
│ 1000000000 │
└────────────┘
(1 row)

Time: 219538.051 ms (03:39.538)

Here are the screenshots from the Pure Storage management console for the duration of the run (which started some minutes before 20:00). Orange shows write operations, blue shows reads:

[Screenshots: purestorage-pg-load-1, purestorage-pg-load-2]

As you can see we are far away from saturating the storage. So, let's do the same load again, but this time with two sessions into two databases in parallel:

# session one 
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "drop database if exists purestorage" postgres
DROP DATABASE
Time: 983.297 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage" postgres
CREATE DATABASE
Time: 514.140 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage

# session two
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "drop database if exists purestorage2" postgres
DROP DATABASE
Time: 0.110 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage2" postgres
CREATE DATABASE
Time: 274.576 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage2

These are the screenshots from the Pure Storage console:

[Screenshots: purestorage-pg-load-2-1, purestorage-pg-load-2-2]

It took more than twice as long to load the 2000000000 rows, but the limit is not the storage. The average bandwidth went up from around 65 MB/sec to around 90 MB/sec. What is interesting to see is that we need only around 45GB of real storage:

[Screenshot: purestorage-pg-load-2-reduction]

This means we have an almost 9 times compression/de-duplication ratio on the storage layer. Remember that each database has a size of 146GB ((146 * 2 = 292) / 9 = 32, plus the WAL files):

                                                                     List of databases
┌──────────────┬──────────┬──────────┬─────────────┬─────────────┬───────────────────────┬─────────┬────────────┬──────────────────────────────────────
│     Name     │  Owner   │ Encoding │   Collate   │    Ctype    │   Access privileges   │  Size   │ Tablespace │                Description           
├──────────────┼──────────┼──────────┼─────────────┼─────────────┼───────────────────────┼─────────┼────────────┼──────────────────────────────────────
│ postgres     │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 7629 kB │ pg_default │ default administrative connection dat
│ purestorage  │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                                      
│ purestorage2 │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                                      
│ template0    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ unmodifiable empty database          
│              │          │          │             │             │ postgres=CTc/postgres │         │            │                                      
│ template1    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ default template for new databases   
│              │          │          │             │             │ postgres=CTc/postgres │         │            │                                      
└──────────────┴──────────┴──────────┴─────────────┴─────────────┴───────────────────────┴─────────┴────────────┴──────────────────────────────────────

What happens when the file system gets full on the host but there is still storage available in the back-end? Let's copy one of the test databases; that should almost fill the file system:

pgpurestorage/postgres MASTER (postgres@5432) # \! df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cl_pgpurestorage-root   26G  2.2G   24G   9% /
devtmpfs                           3.9G     0  3.9G   0% /dev
tmpfs                              3.9G   16K  3.9G   1% /dev/shm
tmpfs                              3.9G   41M  3.8G   2% /run
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  183M  832M  19% /boot
tmpfs                              781M     0  781M   0% /run/user/1000
/dev/mapper/vgpure-lvpure          450G  302G  149G  68% /u02/pgdata
pgpurestorage/postgres MASTER (postgres@5432) # create database test template = purestorage;

CREATE DATABASE
Time: 3312062.975 ms (55:12.063)
pgpurestorage/postgres MASTER (postgres@5432) # 
pgpurestorage/postgres MASTER (postgres@5432) # \! df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cl_pgpurestorage-root   26G  2.2G   24G   9% /
devtmpfs                           3.9G     0  3.9G   0% /dev
tmpfs                              3.9G   16K  3.9G   1% /dev/shm
tmpfs                              3.9G   41M  3.8G   2% /run
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  183M  832M  19% /boot
tmpfs                              781M     0  781M   0% /run/user/1000
/dev/mapper/vgpure-lvpure          450G  449G  1.7G 100% /u02/pgdata
pgpurestorage/postgres MASTER (postgres@5432) # 

Almost full, so let's create another database:

pgpurestorage/postgres MASTER (postgres@5432) # create database test2 template = purestorage;
ERROR:  could not write to file "base/16562/16534": No space left on device
Time: 34449.626 ms (00:34.450)

So, the behavior is as expected.
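To get the file system back into a usable state, dropping the copy again is enough and frees the space:

pgpurestorage/postgres MASTER (postgres@5432) # drop database test;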

In the next post we'll run some standard pgbench benchmarks against the data set, although the numbers will probably not be very good as we are running in an ESX test environment and this seems to be the bottleneck in our case. We'll have to analyze the sar statistics for that, but there was no time yet.
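For the sar part, something like this will do once we find the time (assuming the default sysstat file locations on CentOS; the day suffix of the file is just an example):

postgres@pgpurestorage:/home/postgres/ [pgpure] sar -d -p -f /var/log/sa/sa08
postgres@pgpurestorage:/home/postgres/ [pgpure] sar -u -f /var/log/sa/sa08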

What I can say about the storage system so far: I like the simplicity of the management console. There is not much more you can do than creating hosts, volumes and connections between them, and that's it. Reduction happens automatically and you have the option to replicate to another storage system. Snapshots are available as well, but not yet tested.

 


Can you open PDB$SEED read write?

Sun, 2017-05-07 10:27

If you are in multitenant, you have probably already felt the desire to open PDB$SEED in READ WRITE mode.

  • Can you open PDB$SEED read write yourself? Yes and No.
  • Should you open PDB$SEED read write yourself? Yes and No.
  • How to run upgrade scripts that need to write to PDB$SEED? catcon.pl


In 12.1 you have no reason to open the seed read write yourself. In 12.2 there is one reason, when you are in LOCAL UNDO mode: you may want to customize the UNDO tablespace.

12c in shared undo

I am in 12.1 or in 12.2 in shared undo mode:
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';
 
no rows selected

When the CDB is opened, the PDB$SEED is opened in read only mode.
SYS@CDB$ROOT SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB01 READ WRITE NO

I try to open the PDB$SEED in read write mode (FORCE is a shortcut to avoid having to close it first):
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open force;
Error starting at line : 1 in command -
alter pluggable database pdb$seed open force
Error report -
ORA-65017: seed pluggable database may not be dropped or altered
65017. 00000 - "seed pluggable database may not be dropped or altered"
*Cause: User attempted to drop or alter the Seed pluggable database which is not allowed.
*Action: Specify a legal pluggable database name.
SYS@CDB$ROOT SQL>

Obviously, this is impossible and it is clearly documented: PDB$SEED is not a legal pluggable database name for this operation.

Oracle Script

There is an exception to that: internal Oracle scripts need to run statements in the PDB$SEED. They run with “_oracle_script”=true, and then this operation is possible:

SYS@CDB$ROOT SQL> alter session set "_oracle_script"=true;
Session altered.
 
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open read write force;
Pluggable database PDB$SEED altered.

catcon.pl

Of course, when upgrading, there are phases where you need the seed opened read-write. But you don't do that yourself. The scripts to run in each container are called through catcon.pl which, by default, opens the seed read-write and ensures that the initial open mode is restored at the end, even in case of error.

-m mode in which PDB$SEED should be opened; one of the following values
may be specified:
- UNCHANGED - leave PDB$SEED in whatever mode it is already open
- READ WRITE (default)
- READ ONLY
- UPGRADE
- DOWNGRADE

I have the following “/tmp/show_open_mode.sql” script:

column name format a10
select name,open_mode,current_timestamp-open_time from v$containers;

I call it with catcon to run in PDB$SEED:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -c 'PDB$SEED' -n 1 -d /tmp -l /tmp -b tmp show_open_mode.sql

Here is the output in /tmp/tmp0.log:

CATCON_STATEMENT
--------------------------------------
catconExec(): @/tmp/show_open_mode.sql
SQL> SQL> column name format a10
SQL> select name,open_mode,current_timestamp-open_time from v$containers;
NAME OPEN_MODE CURRENT_TIMESTAMP-OPEN_TIME
---------- ---------- ---------------------------------------------------------------------------
PDB$SEED READ WRITE +000000000 00:00:00.471398
SQL>
END_RUNNING
------------------------------------------------------------------------------------------------------------------------
==== @/tmp/show_open_mode.sql Container:PDB$SEED Id:2 17-05-07 05:02:06 Proc:0 ====
SQL>
END_RUNNING
------------------------------------------------------------------------------------------------------------------------
==== @/tmp/show_open_mode.sql Container:PDB$SEED Id:2 17-05-07 05:02:06 Proc:0 ====

The PDB$SEED was opened READ WRITE to run the statements.

We can see that in alert.log:

alter pluggable database pdb$seed close immediate instances=all
ALTER SYSTEM: Flushing buffer cache inst=0 container=2 local
Pluggable database PDB$SEED closed
Completed: alter pluggable database pdb$seed close immediate instances=all
alter pluggable database pdb$seed OPEN READ WRITE
Database Characterset for PDB$SEED is WE8MSWIN1252
Opening pdb PDB$SEED (2) with no Resource Manager plan active
Pluggable database PDB$SEED opened read write
Completed: alter pluggable database pdb$seed OPEN READ WRITE
alter pluggable database pdb$seed close immediate instances=all
ALTER SYSTEM: Flushing buffer cache inst=0 container=2 local
Pluggable database PDB$SEED closed
Completed: alter pluggable database pdb$seed close immediate instances=all
alter pluggable database pdb$seed OPEN READ ONLY instances=all
Database Characterset for PDB$SEED is WE8MSWIN1252
Opening pdb PDB$SEED (2) with no Resource Manager plan active
Pluggable database PDB$SEED opened read only
Completed: alter pluggable database pdb$seed OPEN READ ONLY instances=all

When the pre-upgrade and post-upgrade scripts are run from DBUA you can see the following in the logs:
exec_DB_script: opened Reader and Writer
exec_DB_script: executed connect / AS SYSDBA
exec_DB_script: executed alter session set "_oracle_script"=TRUE
/
exec_DB_script: executed alter pluggable database pdb$seed close immediate instances=all
/
exec_DB_script: executed alter pluggable database pdb$seed OPEN READ WRITE
/

This is displayed because DBUA runs catcon.pl in debug mode and you can do the same by adding ‘-g’ to the catcon.pl arguments.

12cR2 in local undo

In 12.2 there is a case where you can make a change to the PDB$SEED to customize the UNDO tablespace template. Here I am changing to LOCAL UNDO:


SYS@CDB$ROOT SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS@CDB$ROOT SQL> startup upgrade;
ORACLE instance started.
Total System Global Area 1107296256 bytes
Fixed Size 8791864 bytes
Variable Size 939526344 bytes
Database Buffers 150994944 bytes
Redo Buffers 7983104 bytes
Database mounted.
Database opened.
SYS@CDB$ROOT SQL> alter database local undo on;
Database altered.
SYS@CDB$ROOT SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
 
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';
 
PROPERTY_NAME PROPERTY_VALUE DESCRIPTION
------------- -------------- -----------
LOCAL_UNDO_ENABLED TRUE true if local undo is enabled

PDB$SEED is read only:

SYS@CDB$ROOT SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB01 READ WRITE NO

and _oracle_script is not set:

SYS@CDB$ROOT SQL> show parameter script
 
NAME TYPE VALUE
---- ---- -----
 

Now I get no error and I can open the seed in read write mode:

SYS@CDB$ROOT SQL> alter pluggable database PDB$SEED open force;
Pluggable database PDB$SEED altered.
 
SYS@CDB$ROOT SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ WRITE NO
3 PDB01 READ WRITE NO

Customize UNDO seed

Once you open it read write, an undo tablespace is created. If you want to customize it, you can create another one and drop the previous one. This requires changing the undo_tablespace parameter:


SYS@CDB$ROOT SQL> show parameter undo
NAME TYPE VALUE
----------------- ------- ------
undo_tablespace string UNDO_1
 
SYS@CDB$ROOT SQL> create undo tablespace UNDO;
Tablespace UNDO created.
 
SYS@CDB$ROOT SQL> alter system set undo_tablespace=UNDO;
System SET altered.
 
SYS@CDB$ROOT SQL> drop tablespace UNDO_1 including contents and datafiles;
Tablespace UNDO_1 dropped.
 
SYS@CDB$ROOT SQL> shutdown immediate
Pluggable Database closed

You can leave it like this and just close and re-open it read only. If you want to keep the same undo tablespace name as before, you need to play with create and drop, and change undo_tablespace again, as sketched below.
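A minimal sketch of that create/drop sequence, assuming PDB$SEED is still open read write and you want the tablespace to be named UNDO_1 again:

SYS@CDB$ROOT SQL> create undo tablespace UNDO_1;
SYS@CDB$ROOT SQL> alter system set undo_tablespace=UNDO_1;
SYS@CDB$ROOT SQL> drop tablespace UNDO including contents and datafiles;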

So what?

Don't forget that you should not modify or drop PDB$SEED. If you want a customized template for your PDB creations, then you should create your own PDB template to clone. You can clone remotely, so this is possible in single-tenant as well. Being able to open the PDB$SEED read write is only possible for the exception of customizing the UNDO tablespace in PDB$SEED when you move to local undo mode. Even this is not required: an UNDO tablespace will be created anyway when you open a PDB that has no undo_tablespace.
When running pre-upgrade and post-upgrade scripts, don't worry: catcon.pl is there to run scripts in the containers and handles all of that for you.

 


Testing new PostgreSQL features before alpha/beta/rc releases

Sun, 2017-05-07 04:32

A long time ago I blogged about how you can use the PostgreSQL development snapshots to test new PostgreSQL features before the alpha/beta/rc releases are officially available. Another way to do this is to use git to get the latest sources and build PostgreSQL from there. Everything that was committed will be available to test. Btw: a great way to stay up to date is to subscribe to the mailing list just referenced. You'll get a mail for each commit that happened; maybe one of them catches your attention?

To start you’ll obviously need git. For distributions using yum this is just a matter of:

postgres@pgbox:/home/postgres/ [pg960final] sudo yum install git

For systems using apt use:

postgres@pgbox:/home/postgres/ [pg960final] sudo apt-get install git

Depending on how you want to configure PostgreSQL you’ll need some development packages as well. For yum based systems this is a good starting point:

postgres@pgbox:/home/postgres/ [pg960final] sudo yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel tcl tcl-devel openssh-clients bzip2 net-tools wget screen ksh unzip

For apt based systems you might want to start with this:

postgres@pgbox:/home/postgres/ [pg960final] sudo apt-get install libldap2-dev libpython-dev libreadline-dev libssl-dev bison flex libghc-zlib-dev libcrypto++-dev libxml2-dev libxslt1-dev tcl tclcl-dev bzip2 wget screen ksh libpam0g-dev libperl-dev make unzip libpam0g-dev tcl-dev python

Not all of those packages are required; they just reflect what we usually install before building PostgreSQL from source. Of course you should adjust this and remove the packages that are not required for what you plan to do.

How do you then get the latest PostgreSQL sources? Quite easy, it is documented in the PostgreSQL wiki:

postgres@pgbox:/home/postgres/ [pg960final] mkdir IwantToTest
postgres@pgbox:/home/postgres/ [pg960final] cd IwantToTest/
postgres@pgbox:/home/postgres/IwantToTest/ [pg960final] git clone git://git.postgresql.org/git/postgresql.git

The result should look similar to this:

Cloning into 'postgresql'...
remote: Counting objects: 629074, done.
remote: Compressing objects: 100% (95148/95148), done.
remote: Total 629074 (delta 534080), reused 626282 (delta 531478)
Receiving objects: 100% (629074/629074), 184.31 MiB | 26.40 MiB/s, done.
Resolving deltas: 100% (534080/534080), done.

From now on you have the complete PostgreSQL sources locally available.

postgres@pgbox:/home/postgres/IwantToTest/ [pg960final] cd postgresql/; ls
aclocal.m4  config  configure  configure.in  contrib  COPYRIGHT  doc  GNUmakefile.in  HISTORY  Makefile  README  README.git  src
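When you come back later and want to pick up whatever has been committed in the meantime, a simple git pull inside the repository is enough:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git pull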

Ready to test? Yes, but what? One possible way to start is to ask git what was committed recently:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log
commit 0de791ed760614991e7cb8a78fddd6874ea6919d
Author: Peter Eisentraut peter_e@gmx.net
Date:   Wed May 3 21:25:01 2017 -0400

    Fix cursor_to_xml in tableforest false mode
    
    It only produced <row> elements but no wrapping <table> element.
    
    By contrast, cursor_to_xmlschema produced a schema that is now correct
    but did not previously match the XML data produced by cursor_to_xml.
    
    In passing, also fix a minor misunderstanding about moving cursors in
    the tests related to this.
    
    Reported-by: filip@jirsak.org
    Based-on-patch-by: Thomas Munro thomas.munro@enterprisedb.com
...

Usually you can find a link to the discussion in the commit message, so you can read through the history of a specific commit. Another way is to read the development documentation or the upcoming release notes once they are available.
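If one of the commits catches your attention, git show with the commit hash from the log output above displays the full change:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git show 0de791ed760614991e7cb8a78fddd6874ea6919d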

All you need to do then is to build PostgreSQL:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] ./configure
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] make all
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] sudo make install
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] cd contrib
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] make all
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] sudo make install
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/initdb -D /var/tmp/test
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/pg_ctl -D /var/tmp/test start
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/psql postgres
psql (10devel)
Type "help" for help.

pgbox/postgres MASTER (postgres@5432) # 

Happy testing …

 


PostgreSQL on a Pure Storage All Flash Array – Preparations

Thu, 2017-05-04 09:07

Yesterday we got a Pure Storage All Flash Array for testing. As the name implies, this is all about flash storage. What makes Pure Storage different from other vendors is that you don't just buy a storage box and then pay the usual maintenance costs; instead you pay for a storage subscription which should keep your storage up to date all the time. The promise is that all the components of the array get replaced by the then-current versions over time without forcing you to re-buy. Check the link above for more details on the available subscriptions. This is the first post and it describes the setup we did for connecting a VMware-based PostgreSQL machine to the Pure Storage box. The PostgreSQL server will run as a virtual machine in VMware ESX and connect over iSCSI to the storage system.

As usual we used CentOS 7 for the PostgreSQL server:

[root@pgpurestorage ~]$ cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core) 
[root@pgpurestorage ~]$ uname -a
Linux pgpurestorage.it.dbi-services.com 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

We have 4 vCPUs:

[root@pgpurestorage ~]$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
Stepping:              1
CPU MHz:               2399.583
BogoMIPS:              4799.99
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              35840K
NUMA node0 CPU(s):     0-3

… and 8GB of memory:

[root@pgpurestorage ~]$ cat /proc/meminfo | head -5
MemTotal:        7994324 kB
MemFree:         7508232 kB
MemAvailable:    7528048 kB
Buffers:            1812 kB
Cached:           233648 kB

[Screenshot: purestorage-setup-vm]

Because you'll get the “virtual-guest” tuned profile by default when you install CentOS in a virtualized environment, we created our own profile and switched to it:

root@:/home/postgres/ [] tuned-adm active
Current active profile: virtual-guest
root@:/home/postgres/ [] tuned-adm profile dbi-postgres
root@:/home/postgres/ [] tuned-adm active
Current active profile: dbi-postgres
root@:/home/postgres/ [] cat /usr/lib/tuned/dbi-postgres/tuned.conf | egrep -v "^#|^$"
[main]
summary=dbi services tuned profile for PostgreSQL servers
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[disk]
readahead=>4096
[sysctl]
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
vm.overcommit_memory=2
vm.swappiness=0
vm.dirty_ratio=2
vm.dirty_background_ratio=1
vm.nr_hugepages=1024

To gather statistics we created a cronjob:

root@:/home/postgres/ []  crontab -l
* * * * * /usr/lib64/sa/sa1 -S XALL 60 1

PostgreSQL was installed from source with what was committed to the source tree as of today with the following options:

[postgres@pgpurestorage postgresql]$ PGHOME=/u01/app/postgres/product/10/db_0
[postgres@pgpurestorage postgresql]$ SEGSIZE=2
[postgres@pgpurestorage postgresql]$ BLOCKSIZE=8
[postgres@pgpurestorage postgresql]$ WALSEGSIZE=64
[postgres@pgpurestorage postgresql]$ ./configure --prefix=${PGHOME} \
>             --exec-prefix=${PGHOME} \
>             --bindir=${PGHOME}/bin \
>             --libdir=${PGHOME}/lib \
>             --sysconfdir=${PGHOME}/etc \
>             --includedir=${PGHOME}/include \
>             --datarootdir=${PGHOME}/share \
>             --datadir=${PGHOME}/share \
>             --with-pgport=5432 \
>             --with-perl \
>             --with-python \
>             --with-tcl \
>             --with-openssl \
>             --with-pam \
>             --with-ldap \
>             --with-libxml \
>             --with-libxslt \
>             --with-segsize=${SEGSIZE} \
>             --with-blocksize=${BLOCKSIZE} \
>             --with-wal-segsize=${WALSEGSIZE}  \
>             --with-extra-version=" dbi services build"

To be able to connect to the Pure Storage box you'll need the iSCSI IQN:

root@:/home/postgres/ [] cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:185a3499ac9

Knowing the IQN (InitiatorName), we can log on to the Pure Storage console to add our host, create a volume and attach the volume to the host:

Thu May 04 11:44:10 2017
Welcome pureuser. This is Purity Version 4.8.8 on FlashArray dbipure01

http://www.purestorage.com/

pureuser@dbipure01> purehost create --iqn iqn.1994-05.com.redhat:185a3499ac9 pgpurestorage
Name           WWN  IQN                               
pgpurestorage  -    iqn.1994-05.com.redhat:185a3499ac9
pureuser@dbipure01> purevol create --size 500G volpgtest
Name       Size  Source  Created                   Serial                  
volpgtest  500G  -       2017-05-04 11:46:58 CEST  BA56B4A72DE94A4400011012
pureuser@dbipure01> purehost connect --vol volpgtest pgpurestorage
Name           Vol        LUN
pgpurestorage  volpgtest  1  

The Pure Storage system has two controllers (10.10.1.93 and 10.10.1.94) so we should be able to ping them:

root@:/home/postgres/ [] ping 10.10.1.93
PING 10.10.1.93 (10.10.1.93) 56(84) bytes of data.
64 bytes from 10.10.1.93: icmp_seq=1 ttl=63 time=2.53 ms
64 bytes from 10.10.1.93: icmp_seq=2 ttl=63 time=0.816 ms
64 bytes from 10.10.1.93: icmp_seq=3 ttl=63 time=0.831 ms
...
root@:/u02/pgdata/pgpure/ [] ping 10.10.1.94
PING 10.10.1.94 (10.10.1.94) 56(84) bytes of data.
64 bytes from 10.10.1.94: icmp_seq=1 ttl=63 time=0.980 ms
64 bytes from 10.10.1.94: icmp_seq=2 ttl=63 time=0.848 ms
...

OK for the connectivity, so a discovery should work as well:

root@:/home/postgres/ [] iscsiadm -m discovery -t st -p 10.10.1.93
10.10.1.93:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21
10.10.1.94:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21
root@:/home/postgres/ [] iscsiadm -m node
10.10.1.93:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21
10.10.1.94:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21

Fine as well, so log in:

root@:/home/postgres/ [] iscsiadm -m node --login 
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21, portal: 10.10.1.93,3260] (multiple)
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21, portal: 10.10.1.94,3260] (multiple)
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21, portal: 10.10.1.93,3260] successful.
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21, portal: 10.10.1.94,3260] successful.
root@:/home/postgres/ [] iscsiadm -m session -o show 
tcp: [13] 10.10.1.93:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21 (non-flash)
tcp: [14] 10.10.1.94:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21 (non-flash)

The new device is available (sdb) from now on:

root@:/home/postgres/ [] ls -la /dev/sd*
brw-rw----. 1 root disk 8,  0 May  4 13:23 /dev/sda
brw-rw----. 1 root disk 8,  1 May  4 13:23 /dev/sda1
brw-rw----. 1 root disk 8,  2 May  4 13:23 /dev/sda2
brw-rw----. 1 root disk 8, 16 May  4 13:23 /dev/sdb
brw-rw----. 1 root disk 8, 32 May  4 13:23 /dev/sdc

LVM setup:

root@:/home/postgres/ [] pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
root@:/home/postgres/ [] vgcreate vgpure /dev/sdb
  Volume group "vgpure" successfully created
root@:/home/postgres/ [] lvcreate -L 450G -n lvpure vgpure
  Logical volume "lvpure" created.
root@:/home/postgres/ [] mkdir -p /u02/pgdata
root@:/home/postgres/ [] mkfs.xfs /dev/mapper/vgpure-lvpure 
meta-data=/dev/mapper/vgpure-lvpure isize=512    agcount=4, agsize=29491200 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=117964800, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=57600, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@:/home/postgres/ [] echo "/dev/mapper/vgpure-lvpure  /u02/pgdata  xfs defaults,noatime 0 0" >> /etc/fstab
root@:/home/postgres/ [] mount -a
root@:/home/postgres/ [] df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cl_pgpurestorage-root   26G  2.0G   25G   8% /
devtmpfs                           3.9G     0  3.9G   0% /dev
tmpfs                              3.9G     0  3.9G   0% /dev/shm
tmpfs                              3.9G  8.5M  3.9G   1% /run
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  183M  832M  19% /boot
tmpfs                              781M     0  781M   0% /run/user/1000
/dev/mapper/vgpure-lvpure          450G   33M  450G   1% /u02/pgdata
root@:/home/postgres/ [] chown postgres:postgres /u02/pgdata

Initialized the PostgreSQL cluster:

postgres@pgpurestorage:/home/postgres/ [pg10] initdb -D /u02/pgdata/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u02/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/ -l logfile start

What we changed from the default configuration is:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat postgresql.auto.conf 
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
listen_addresses = '*'
logging_collector = 'on'
log_truncate_on_rotation = 'on'
log_filename = 'postgresql-%a.log'
log_rotation_age = '8d'
log_line_prefix = '%m - %l - %p - %h - %u@%d '
log_directory = 'pg_log'
log_min_messages = 'WARNING'
log_autovacuum_min_duration = '360s'
log_min_error_statement = 'error'
log_min_duration_statement = '5min'
log_checkpoints = 'on'
log_statement = 'ddl'
log_lock_waits = 'on'
log_temp_files = '1'
log_timezone = 'Europe/Zurich'
client_min_messages = 'WARNING'
wal_level = 'replica'
hot_standby_feedback = 'on'
max_wal_senders = '10'
cluster_name = 'pgpure'
max_replication_slots = '10'
shared_buffers=2048MB
work_mem=128MB
effective_cache_size=6144MB
maintenance_work_mem=512MB
max_wal_size=10GB

Calculating the minimum required amount of huge pages for the PostgreSQL instance:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] head -1 $PGDATA/postmaster.pid
3662
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] grep ^VmPeak /proc/3662//status
VmPeak:	 2415832 kB
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] echo "2415832/2048" | bc
1179
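
The same calculation can be scripted. Here is a minimal sketch (it assumes the instance is up, that $PGDATA points to the data directory and that the huge page size is 2048 kB as shown above):

PGPID=$(head -1 $PGDATA/postmaster.pid)
VMPEAK=$(grep ^VmPeak /proc/${PGPID}/status | awk '{print $2}')
# VmPeak is reported in kB and huge pages are 2048 kB each (integer division, so add a margin on top)
echo "minimum huge pages: $(( VMPEAK / 2048 ))"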

Set it slightly higher:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo sed -i 's/vm.nr_hugepages=1024/vm.nr_hugepages=1200/g' /usr/lib/tuned/dbi-postgres/tuned.conf
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo tuned-adm profile dbi-postgres
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat /proc/meminfo | grep Huge
AnonHugePages:      6144 kB
HugePages_Total:    1200
HugePages_Free:     1200
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

To disable transparent huge pages we created a file called “disable-thp.service” (from here):

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat /etc/systemd/system/disable-thp.service
# Disable transparent huge pages
# put this file under:
#   /etc/systemd/system/disable-thp.service
# Then:
#   sudo systemctl daemon-reload
#   sudo systemctl start disable-thp
#   sudo systemctl enable disable-thp
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target

Then reload the systemd daemon and start and enable the service:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo systemctl daemon-reload
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo systemctl start disable-thp
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo systemctl enable disable-thp

To verify:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat /sys/kernel/mm/transparent_hugepage/defrag 
always madvise [never]

To be sure that PostgreSQL really uses the huge pages, set huge_pages to 'on', as this will prevent PostgreSQL from starting when the required pages cannot be allocated:

pgpurestorage/postgres MASTER (postgres@5432) # alter system set huge_pages='on';
ALTER SYSTEM
Time: 2.417 ms

… and then restart the instance. When all is fine PostgreSQL will come up.
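
A quick way to double check after the restart is to look at the huge pages counters while the instance is running (a sketch; the log file location is just an example):

pg_ctl -D /u02/pgdata -l /u02/pgdata/logfile restart
grep Huge /proc/meminfo
# HugePages_Rsvd/HugePages_Free should now reflect the shared memory allocated from the pool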

Finally, to close this setup post, here are some screenshots of the Pure Storage Management Web Console. The first one shows the “Storage” tab where you can see that the volume “volpgtest” is mapped to my host “pgpurestorage”.
purestorage-setup-storage-1

The name you give the server is not important. The important information is the mapping of the “Host Port” which you can see here (this is the iSCSI IQN):

purestorage-setup-storage-2

Once your server is connected you can see it in the connection map of the server in the console:
purestorage-connection-map

System health:
purestorage-system-health

Last, but not least, here is the dashboard:
purestorage-dashboard

Not much traffic right now but we’ll be changing that in the next post.

 

Cet article PostgreSQL on a Pure Storage All Flash Array – Preparations est apparu en premier sur Blog dbi services.

12c nologging and Data Guard

Wed, 2017-05-03 14:23

The title sounds weird because Data Guard synchronisation is based on the redo stream, so it makes no sense to do nologging operations on the primary. And this is the reason why we set FORCE LOGGING on a Data Guard configuration. However, to lower the downtime of a migration done with Data Pump, you may want to import with minimal logging and then re-synchronize the standby. This post is about the re-synchronisation in 12.1

Nologging Data Pump

When you want to lower the downtime for a migration, you can disable force logging (alter database no force logging), and run impdp with the following: transform=disable_archive_logging:y
Don’t forget to re-enable force_logging at the end and to re-synchronize the standby.
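
As a minimal sketch, the sequence on the primary could look like this (the directory, dump file and schema names are assumptions, not taken from a real migration):

SQL> alter database no force logging;

impdp system directory=DATA_PUMP_DIR dumpfile=mig.dmp schemas=APP transform=disable_archive_logging:y

SQL> alter database force logging;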

nonlogged (aka unrecoverable)

So, you have nonlogged blocks, we also call that unrecoverable because it cannot be recovered with the redo stream. If you are in 12.2 then everything is easy with recover database nonlogged block; and I explained that in a previous post: https://blog.dbi-services.com/12cr2-recover-nonlogged-blocks-after-nologging-in-data-guard/
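
For reference, the 12.2 case is basically a one-liner on the standby, run while the apply is stopped (just a sketch, see the post linked above for the details):

DGMGRL> edit database orclb set state=apply-off;
RMAN> recover database nonlogged block;
DGMGRL> edit database orclb set state=apply-on;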

If you are in 12.1, it is only half as easy. You can see where you have nonlogged blocks:
RMAN> select file#,reason,count(*) from v$nonlogged_block group by file#,reason;
FILE# REASON COUNT(*)
---------- ------- ----------
5 UNKNOWN 158
6 UNKNOWN 159
7 UNKNOWN 336
8 UNKNOWN 94
9 UNKNOWN 16
10 UNKNOWN 14

and this is the right way to query them. If you use RMAN ‘report unrecoverable’ it will not display the datafiles that had nologging operations on the primary.

In 12.1 you can RESTORE FROM SERVICE to recover from the primary rather than from a backup. It is straightforward. I’m just writing this blog post in case you see the following when you try to do this because the message can be misinterpreted:


RMAN> restore database from service 'MYDB_SITE1_dgmgrl';
 
Starting restore at 03-MAY-2017 13:22:12
using channel ORA_DISK_1
 
skipping datafile 1; already restored to SCN 3849354
skipping datafile 2; already restored to SCN 3849356
skipping datafile 3; already restored to SCN 3849358
skipping datafile 4; already restored to SCN 3849360
skipping datafile 5; already restored to SCN 3849365
skipping datafile 6; already restored to SCN 3849372
skipping datafile 7; already restored to SCN 3849382
skipping datafile 8; already restored to SCN 3849389
skipping datafile 9; already restored to SCN 3849395
skipping datafile 10; already restored to SCN 3849398
restore not done; all files read only, offline, or already restored
Finished restore at 03-MAY-2017 13:22:12

RMAN is clever enough: the datafiles are ok according to their headers, so it skipped the restore. But you know that they are not ok, because some blocks are marked as corrupt due to nologging operations. Then what to do? There is a FORCE option in the restore command, but you probably don't need it. If you get the previous message, it means that the datafiles are synchronized, which means that the APPLY is running. And anyway, in order to restore you need to stop the APPLY.


DGMGRL> edit database orclb set state=apply-off;

Of course, once you stopped the apply, you run your RESTORE DATABASE FORCE. But you probably don’t need it. Now, the datafiles are stale and RMAN will not skip them even without the FORCE keyword.


RMAN> restore database from service 'MYDB_SITE1_dgmgrl';
 
Starting restore at 03-MAY-2017 13:22:37
using channel ORA_DISK_1
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service MYDB_SITE1_dgmgrl
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /media/raid-db/MYDB/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service MYDB_SITE1_dgmgrl
...
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 03-MAY-2017 13:25:30
RMAN> exit

Don’t forget to re-enable the Data Guard Apply at the end.

So what?

When you see all datafiles skipped, that probably means that you didn't stop the APPLY. With APPLY stopped (and you probably stopped it before the import, as you plan to restore the standby later), you probably don't need the FORCE keyword. However, I'll always recommend using FORCE in this case, because without it RMAN skips the files based on their headers only, without looking at the nonlogged blocks. Imagine that you put a tablespace in read-only after the non-logged import but before stopping the apply: that one would be skipped.

 

Cet article 12c nologging and Data Guard est apparu en premier sur Blog dbi services.

Oracle OGG JAGENT 12.2 installation for GoldenGate monitoring using the Cloud Control plugin

Wed, 2017-05-03 06:34

This post is mainly for self documentation as I have to search my old notes every time I want to do this (I am faster by searching on our blog :) ): If you want to monitor GoldenGate with Cloud Control using the GoldenGate plugin, you'll have to install the JAGENT on the host where GoldenGate is running, and the documentation is not quite clear on what you have to do exactly (at least when you want to use the silent installation). This is what you need to do for 12.2:

Download "Oracle GoldenGate Monitor 12.2.1.2.0" from here. You need to have Java 8 installed (Oracle version, not openjdk) on the server you want to install the JAGENT on. You can download it from here.

Once you have Java 8 installed create a response file like this:

oracle@oelogg1:/var/tmp/ [rdbms12102] cat oggmon.rsp 
#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0
[GENERIC]
#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/ogg/product/agent/12.2.1.2.0
INSTALL_TYPE=GoldenGate Monitor Agent Installation

Then you are ready to install:

oracle@oelogg1:/var/tmp/ [rdbms12102] /usr/java/jdk1.8.0_131/bin/java -jar fmw_12.2.1.2.0_ogg.jar -silent -responseFile /var/tmp/oggmon.rsp  

The result when everything went fine:

Launcher log file is /tmp/OraInstall2017-05-03_11-00-32AM/launcher2017-05-03_11-00-32AM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz.   Actual 2793.552 MHz    Passed
Checking swap space: must be greater than 512 MB.   Actual 3967 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 8719 MB    Passed

Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2017-05-03_11-00-32AM
Log: /tmp/OraInstall2017-05-03_11-00-32AM/install2017-05-03_11-00-32AM.log
Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.
Reading response file..
Skipping Software Updates
Starting check : CertifiedVersions
Expected result: One of oracle-6,oracle-7,redhat-7,redhat-6,SuSE-11,SuSE-12
Actual Result: oracle-7.3
Check complete. The overall result of this check is: Passed
CertifiedVersions Check: Success.

Starting check : CheckJDKVersion
Expected result: 1.8.0_101
Actual Result: 1.8.0_131
Check complete. The overall result of this check is: Passed
CheckJDKVersion Check: Success.

Validations are enabled for this session.
Verifying data
Copying Files
Percent Complete : 10
Percent Complete : 20
Percent Complete : 30
Percent Complete : 40
Percent Complete : 50
Percent Complete : 60
Percent Complete : 70
Percent Complete : 80
Percent Complete : 90
Percent Complete : 100

The installation of Oracle Fusion Middleware 12c GoldenGate Monitor & Veridata 12.2.1.2.0 completed successfully.
Logs successfully copied to /u01/app/oraInventory/logs.

Once installed you can proceed by creating the Agent instance:

oracle@oelogg1:/u01/app/ogg/product/agent/12.2.1.2.0/oggmon/ogg_agent/ [rdbms12102] pwd
/u01/app/ogg/product/agent/12.2.1.2.0/oggmon/ogg_agent
oracle@oelogg1:/u01/app/ogg/product/agent/12.2.1.2.0/oggmon/ogg_agent/ [rdbms12102] ./createMonitorAgentInstance.sh
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/ogg/product/12.2.0.1.1/
Please enter absolute path of OGG Agent instance : /u01/app/ogg/product/12.2.0.1.1/agent_inst 
Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20170503113738.sh) : 
Successfully created OGG Agent instance.

Create the wallet for the Agent:

oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/bin/ [rdbms12102] pwd
/u01/app/ogg/product/12.2.0.1.1/agent_inst/bin
oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/bin/ [rdbms12102] export JAVA_HOME=/usr/java/jdk1.8.0_131
oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/bin/ [rdbms12102] ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent: 
Please confirm password for Java Agent: 
May 03, 2017 11:46:06 AM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
May 03, 2017 11:46:06 AM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.

Adjust the Agent configuration file for your environment:

oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/cfg/ [rdbms12102] pwd
/u01/app/ogg/product/12.2.0.1.1/agent_inst/cfg
oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/cfg/ [rdbms12102] cat Config.properties | egrep -v "^#|^$"
jagent.host=localhost
jagent.jmx.port=5555
interval.regular=60
interval.quick=30
monitor.host=localhost
monitor.jmx.port=5502
monitor.jmx.username=oggmsjmxusr
jagent.username=oggmajmxusr
reg.retry.interval=10
instance.query.initial.interval=5
incremental.registration.quiet.interval=5
maximum.message.retrieval=500
jagent.rmi.port=5559
agent.type.enabled=OEM
status.polling.interval=5
message.polling.interval=5
reg.retry.times=-1
jagent.backward.compatibility=false
jagent.ssl=false
jagent.keystore.file=jagentKeyStore
jagent.truststore.file=jagentKeyStore
jagent.restful.ws.timeout=15000
jagent.ggsci.timeout=30

Make sure you change “agent.type.enabled” to “OEM”.
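
If you prefer to script this change, something like the following should do it (a sketch, using the Config.properties location from this setup):

sed -i 's/^agent.type.enabled=.*/agent.type.enabled=OEM/' /u01/app/ogg/product/12.2.0.1.1/agent_inst/cfg/Config.properties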

Enable monitoring in the GLOBALS file:

oracle@oelogg1:/var/tmp/ [OGG122] ggi

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Dec 12 2015 02:56:48
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (oelogg1) 1> view params ./GLOBALS

ENABLEMONITORING

Restart ggsci:

oracle@oelogg1:/var/tmp/ [OGG122] ggi

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Dec 12 2015 02:56:48
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.



GGSCI (oelogg1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
JAGENT      STOPPED                                           


GGSCI (oelogg1) 2> start jagent

Sending START request to MANAGER ...
GGCMD JAGENT starting


GGSCI (oelogg1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
JAGENT      RUNNING                              

… and you’re done. Hope this helps.

 

Cet article Oracle OGG JAGENT 12.2 installation for GoldenGate monitoring using the Cloud Control plugin est apparu en premier sur Blog dbi services.

12cR2 partial PDB backup

Mon, 2017-05-01 01:21

I had a recent question about the following mention from the 12cR2 Multitenant book, about Partial PDB backups:
CapturePArtialPDBBackup.
Here is an example in 12.2 with local undo to illustrate the answer, which may help to understand what is a partial PDB backup.

Of course, since 12cR1 you can back up a PDB individually, without the CDB$ROOT, in the same way you can back up only a subset of the tablespaces of a CDB. It can be part of your backup strategy, but it is not to be considered as a backup that you can restore elsewhere later. A PDB is not self-consistent without the CDB$ROOT, except if it has been closed and unplugged. In 12.1 you cannot restore a partial PDB backup if you don't have the CDB$ROOT at the same point in time, because the recovery phase needs to roll back the ongoing transactions, and this requires the UNDO tablespace to be recovered at the same point in time.

However, in 12.2 with LOCAL UNDO, the partial PDB backup contains the local UNDO tablespace and then it can be sufficient to do a PDB Point In Time Recovery within the same CDB. And, in this case only, it is not required to have a backup of the root.
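
You can quickly check whether your CDB actually runs in local undo mode before relying on this:

SQL> select property_value from database_properties where property_name='LOCAL_UNDO_ENABLED';

It should return TRUE when local undo is enabled.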

Let’s test it. I explicitly delete all backups


Recovery Manager: Release 12.2.0.1.0 - Production on Sun Apr 30 22:11:38 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
RMAN>
echo set on
 
RMAN> connect target /
connected to target database: CDB1 (DBID=914521258)
 
RMAN> delete noprompt backup;
 
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
specification does not match any backup in the repository
 
 
RMAN> list backup;
specification does not match any backup in the repository

No backup

I have only one PDB here:


RMAN> report schema;
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 820 SYSTEM YES /u01/oradata/CDB1A/system01.dbf
3 630 SYSAUX NO /u01/oradata/CDB1A/sysaux01.dbf
4 70 UNDOTBS1 YES /u01/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u01/oradata/CDB1A/pdbseed/system01.dbf
6 330 PDB$SEED:SYSAUX NO /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u01/oradata/CDB1A/users01.dbf
8 100 PDB$SEED:UNDOTBS1 NO /u01/oradata/CDB1A/pdbseed/undotbs01.dbf
103 250 PDB1:SYSTEM YES /u01/oradata/CDB1A/PDB1/system01.dbf
104 350 PDB1:SYSAUX NO /u01/oradata/CDB1A/PDB1/sysaux01.dbf
105 100 PDB1:UNDOTBS1 YES /u01/oradata/CDB1A/PDB1/undotbs01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 131 TEMP 32767 /u01/oradata/CDB1A/temp01.dbf
2 64 PDB$SEED:TEMP 32767 /u01/oradata/CDB1A/pdbseed/temp012017-04-08_22-24-09-441-PM.dbf
4 64 PDB1:TEMP 32767 /u01/oradata/CDB1A/PDB1/temp012017-04-08_22-24-09-441-PM.dbf

all datafiles need backup:


RMAN> report need backup;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1 0 /u01/oradata/CDB1A/system01.dbf
3 0 /u01/oradata/CDB1A/sysaux01.dbf
4 0 /u01/oradata/CDB1A/undotbs01.dbf
5 0 /u01/oradata/CDB1A/pdbseed/system01.dbf
6 0 /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
7 0 /u01/oradata/CDB1A/users01.dbf
8 0 /u01/oradata/CDB1A/pdbseed/undotbs01.dbf
103 0 /u01/oradata/CDB1A/PDB1/system01.dbf
104 0 /u01/oradata/CDB1A/PDB1/sysaux01.dbf
105 0 /u01/oradata/CDB1A/PDB1/undotbs01.dbf

Partial backup not including the root

I backup only the pluggable database PDB1


RMAN> backup pluggable database PDB1;
Starting backup at 30-APR-17
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00104 name=/u01/oradata/CDB1A/PDB1/sysaux01.dbf
channel ORA_DISK_1: starting piece 1 at 30-APR-17
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00103 name=/u01/oradata/CDB1A/PDB1/system01.dbf
channel ORA_DISK_2: starting piece 1 at 30-APR-17
channel ORA_DISK_3: starting full datafile backup set
channel ORA_DISK_3: specifying datafile(s) in backup set
input datafile file number=00105 name=/u01/oradata/CDB1A/PDB1/undotbs01.dbf
channel ORA_DISK_3: starting piece 1 at 30-APR-17
channel ORA_DISK_1: finished piece 1 at 30-APR-17
piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk827s_.bkp tag=TAG20170430T221146 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_3: finished piece 1 at 30-APR-17
piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk83go_.bkp tag=TAG20170430T221146 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 30-APR-17
piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk830z_.bkp tag=TAG20170430T221146 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 30-APR-17
 
Starting Control File and SPFILE Autobackup at 30-APR-17
piece handle=/u01/fast_recovery_area/CDB1A/autobackup/2017_04_30/o1_mf_s_942703909_djdk85m1_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 30-APR-17

Here is the proof that only PDB1 has a backup, the CDB$ROOT has no backup:


RMAN> report need backup;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of files with less than 1 redundant backups
File #bkps Name
---- ----- -----------------------------------------------------
1 0 /u01/oradata/CDB1A/system01.dbf
3 0 /u01/oradata/CDB1A/sysaux01.dbf
4 0 /u01/oradata/CDB1A/undotbs01.dbf
5 0 /u01/oradata/CDB1A/pdbseed/system01.dbf
6 0 /u01/oradata/CDB1A/pdbseed/sysaux01.dbf
7 0 /u01/oradata/CDB1A/users01.dbf
8 0 /u01/oradata/CDB1A/pdbseed/undotbs01.dbf

Restore the PDB

I will do PDB Point In Time Recovery, using a restore point


RMAN> create restore point RP;
Statement processed
 
RMAN> alter pluggable database PDB1 close;
Statement processed
 

Here is the restore


RMAN> restore pluggable database PDB1 until restore point RP;
Starting restore at 30-APR-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=15 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=149 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=268 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=398 device type=DISK
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00104 to /u01/oradata/CDB1A/PDB1/sysaux01.dbf
channel ORA_DISK_1: reading from backup piece /u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk827s_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00105 to /u01/oradata/CDB1A/PDB1/undotbs01.dbf
channel ORA_DISK_2: reading from backup piece /u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk83go_.bkp
channel ORA_DISK_3: starting datafile backup set restore
channel ORA_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_DISK_3: restoring datafile 00103 to /u01/oradata/CDB1A/PDB1/system01.dbf
channel ORA_DISK_3: reading from backup piece /u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk830z_.bkp
channel ORA_DISK_2: piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk83go_.bkp tag=TAG20170430T221146
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:03
channel ORA_DISK_3: piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk830z_.bkp tag=TAG20170430T221146
channel ORA_DISK_3: restored backup piece 1
channel ORA_DISK_3: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: piece handle=/u01/fast_recovery_area/CDB1A/4E68DF57035A648FE053684EA8C01C78/backupset/2017_04_30/o1_mf_nnndf_TAG20170430T221146_djdk827s_.bkp tag=TAG20170430T221146
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 30-APR-17

and the recover


RMAN> recover pluggable database PDB1 until restore point RP;
Starting recover at 30-APR-17
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
 
starting media recovery
media recovery complete, elapsed time: 00:00:00
 
Finished recover at 30-APR-17

Finally, I open the PDB with RESETLOGS:


RMAN> alter pluggable database PDB1 open resetlogs;
Statement processed

Thanks to LOCAL UNDO there is no need to restore the CDB$ROOT into an auxiliary instance, as was the case for PDBPITR in 12.1, and so we can do PDBPITR without a backup of the root.

So what?

In theory, and as demonstrated above, including CDB$ROOT in a partial PDB backup is not mandatory in 12cR2 in local undo mode. However, keep in mind that this is for academic purposes only, not for real-life production. For short-term point in time recovery, you will not use backups but flashback. For long-term restores, you may have different reasons to restore the PDB elsewhere with its CDB$ROOT at the same point in time: some common objects (users, roles, directories, etc.) may have changed. And anyway, your backup strategy should be at CDB level.
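
For illustration, the short-term case mentioned above could rely on a restore point and a flashback of the PDB instead of a restore (only a sketch, not run here, and subject to the restore point type and flashback configuration):

SQL> alter pluggable database PDB1 close;
SQL> flashback pluggable database PDB1 to restore point RP;
SQL> alter pluggable database PDB1 open resetlogs;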

 

Cet article 12cR2 partial PDB backup est apparu en premier sur Blog dbi services.

A new OGG challenge: Migrate an Oracle 8.1.7.4 instance to 10.2.0.5

Thu, 2017-04-27 08:52

Yes, these beasts still exist in production:

[/u01/appl/ora817]$ sqlplus /nolog

SQL*Plus: Release 8.1.7.0.0 - Production on Thu Apr 27 09:54:20 2017

(c) Copyright 2000 Oracle Corporation.  All rights reserved.

SQL> connect / as sysdba 
Connected.
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
PL/SQL Release 8.1.7.4.0 - Production
CORE    8.1.7.0.0       Production
TNS for Solaris: Version 8.1.7.4.0 - Production
NLSRTL Version 3.4.1.0.0 - Production

First question: Is there a version of GoldenGate which is able to extract from Oracle 8.1.7.4 on Solaris 8? Yes there is: p9271762_1040_SOLARIS64.zip. When you want to download that you’ll have to create a service request with Oracle Support because these old releases are password protected and you have to agree that you do not get any support for this.

The schema check script referenced here: Oracle GoldenGate database Complete Database Profile check script for Oracle DB (All Schemas) Classic Extract (Doc ID 1298562.1) has quite a few issues with 8.1.7.4 as it references columns in the dictionary that do not exist in this version of Oracle (e.g. the checks for compression on table and tablespace level), but you can still use the output and do some manual work.

The documentation for GoldenGate 10.4 is still online.

This will be an interesting project :)

 

Cet article A new OGG challenge: Migrate an Oracle 8.1.7.4 instance to 10.2.0.5 est apparu en premier sur Blog dbi services.

Migrating from oracle to postgresql with ora2pg

Wed, 2017-04-26 12:59

PostgreSQL is one of the most used RDBMS.
In this blog we are going to talk about migrating from Oracle to PostgreSQL using ora2pg. A previous blog about this topic can be found here.
After installing the ora2pg tool, we will see how to configure it and run it to migrate our data. We are using an Oracle 12.1 database and a PostgreSQL 9.6.2. The server is running on OEL 7.2.
The Oracle database and the PostgreSQL server are running on the same host, but different servers can be used.

To install ora2pg we need the following:
-DBD-Oracle-1.75_2.tar.gz : Oracle database driver for the DBI module
-DBD-Pg-3.6.0.tar.gz : PostgreSQL database driver for the DBI module
-DBI-1.636.tar.gz : Database independent interface for Perl
-ora2pg-18.1.tar.gz : ora2pg archive

The DBI modules can be found here
And the ora2pg can be downloaded from here

Install DBI module
[root@serveroracle postgres]#tar xvzf DBI-1.636.tar.gz
[root@serveroracle postgres]#cd DBI-1.636
[root@serveroracle postgres]#perl Makefile.PL
[root@serveroracle postgres]#make
[root@serveroracle postgres]#make install

Install DBD-Oracle
[root@serveroracle postgres]# tar xvzf DBD-Oracle-1.75_2.tar.gz
[root@serveroracle postgres]# cd DBD-Oracle-1.75_2/
[root@serveroracle DBD-Oracle-1.75_2]# export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[root@serveroracle DBD-Oracle-1.75_2]# export LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/dbhome_1/lib
[root@serveroracle DBD-Oracle-1.75_2]# perl Makefile.PL
[root@serveroracle DBD-Oracle-1.75_2]# make
[root@serveroracle DBD-Oracle-1.75_2]# make install

Install DBD-Pg
[root@serveroracle postgres]# tar xvzf DBD-Pg-3.6.0.tar.gz
[root@serveroracle postgres]# cd DBD-Pg-3.6.0
[root@serveroracle DBD-Pg-3.6.0]# perl Makefile.PL
Configuring DBD::Pg 3.6.0
Path to pg_config? /u01/app/postgres/product/96/db_2/bin/pg_config
[root@serveroracle DBD-Pg-3.6.0]# make
[root@serveroracle DBD-Pg-3.6.0]# make install

When we first ran the command perl Makefile.PL, we got the following errors:
[root@serveroracle DBD-Pg-3.6.0]# perl Makefile.PL
Configuring DBD::Pg 3.6.0
PostgreSQL version: 90602 (default port: 5432)
POSTGRES_HOME: /u01/app/postgres/product/96/db_2
POSTGRES_INCLUDE: /u01/app/postgres/product/96/db_2/include
POSTGRES_LIB: /u01/app/postgres/product/96/db_2/lib
OS: linux
Warning: prerequisite version 0 not found.
Could not eval '
package ExtUtils::MakeMaker::_version;
no strict;
BEGIN { eval {
# Ensure any version() routine which might have leaked
# into this package has been deleted. Interferes with
# version->import()
undef *version;
require version;
"version"->import;
} }
local $VERSION;
$VERSION=undef;
do {
use version; our $VERSION = qv('3.6.0');
};
$VERSION;
' in Pg.pm: Can't locate version.pm in @INC (@INC contains: t/lib /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at (eval 11) line 16, line 19.
BEGIN failed--compilation aborted at (eval 11) line 16, line 19.
WARNING: Setting VERSION via file 'Pg.pm' failed
at /usr/share/perl5/vendor_perl/ExtUtils/MakeMaker.pm line 619.
Using DBI 1.636 (for perl 5.016003 on x86_64-linux-thread-multi) installed in /usr/local/lib64/perl5/auto/DBI/
Writing Makefile for DBD::Pg

We corrected the errors by installing the missing Perl module with the command below.
[root@serveroracle DBD-Pg-3.6.0]# yum -y install 'perl(version)'
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package perl-version.x86_64 3:0.99.07-2.el7 will be installed
--> Finished Dependency Resolution

Install ora2pg

[postgres@serveroracle ~]$ tar xvzf ora2pg-18.1.tar.gz
[root@serveroracle postgres]# cd ora2pg-18.1/
[root@serveroracle postgres]# perl Makefile.PL
[root@serveroracle postgres]# make
[root@serveroracle postgres]# make install


[root@serveroracle postgres]# ora2pg -version
Ora2Pg v18.1
[root@serveroracle postgres]#

Now that ora2pg is installed, we can proceed with the migration. The first step is to configure the ora2pg.conf file. We can make a copy of the default template and then modify it. In our case the configuration file is located in the /etc/ora2pg directory.

[root@serveroracle ora2pg]# cp ora2pg.conf.dist ora2pg.conf
[root@serveroracle ora2pg]# vi ora2pg.conf

In our configuration file, the following changes were made. We are exporting only the HR schema:

ORACLE_DSN dbi:Oracle:host=serveroracle.localdomain;sid=ORCL
ORACLE_USER system
ORACLE_PWD root
SCHEMA HR
TYPE TABLE PACKAGE COPY VIEW GRANT SEQUENCE TRIGGER FUNCTION PROCEDURE TABLESPACE TYPE PARTITION
OUTPUT HR_output.sql

ora2pg works by exporting and importing schemas. We can only export one schema at a time if we use the SCHEMA option. If we want to export all schemas we can just comment out the SCHEMA option; in this case all non-Oracle users will be extracted.
In the documentation we also have the option SYSUSERS
# Allow to add a comma separated list of system user to exclude from
# from Oracle extraction. Oracle have many of them following the modules
# installed. By default it will suppress all object owned by the following
# system users:
# CTXSYS,DBSNMP,EXFSYS,LBACSYS,MDSYS,MGMT_VIEW,OLAPSYS,ORDDATA,OWBSYS,
# ORDPLUGINS,ORDSYS,OUTLN,SI_INFORMTN_SCHEMA,SYS,SYSMAN,SYSTEM,WK_TEST,
# WKSYS,WKPROXY,WMSYS,XDB,APEX_PUBLIC_USER,DIP,FLOWS_020100,FLOWS_030000,
# FLOWS_040100,FLOWS_FILES,MDDATA,ORACLE_OCM,SPATIAL_CSW_ADMIN_USR,
# SPATIAL_WFS_ADMIN_USR,XS$NULL,PERFSTAT,SQLTXPLAIN,DMSYS,TSMSYS,WKSYS,
# APEX_040200,DVSYS,OJVMSYS,GSMADMIN_INTERNAL,APPQOSSYS
# Other list of users set to this directive will be added to this list.
#SYSUSERS OE,HR

Once the configuration is done, we can run the ora2pg command. Note that you can see all options by running ora2pg -help
[root@serveroracle ora2pg]# ora2pg
[========================>] 7/7 tables (100.0%) end of scanning.
[> ] 0/7 tables (0.0%) end of scanning.
[========================>] 7/7 tables (100.0%) end of table export.
[========================>] 0/0 packages (100.0%) end of output.
[========================>] 25/25 rows (100.0%) Table COUNTRIES (25 recs/sec)
[==> ] 25/215 total rows (11.6%) - (0 sec., avg: 25 recs/sec).
[========================>] 27/27 rows (100.0%) Table DEPARTMENTS (27 recs/sec)
[=====> ] 52/215 total rows (24.2%) - (1 sec., avg: 52 recs/sec).
[========================>] 107/107 rows (100.0%) Table EMPLOYEES (107 recs/sec)
[=================> ] 159/215 total rows (74.0%) - (1 sec., avg: 159 recs/sec).
[========================>] 19/19 rows (100.0%) Table JOBS (19 recs/sec)
[===================> ] 178/215 total rows (82.8%) - (1 sec., avg: 178 recs/sec).
[========================>] 10/10 rows (100.0%) Table JOB_HISTORY (10 recs/sec)
[====================> ] 188/215 total rows (87.4%) - (2 sec., avg: 94 recs/sec).
[========================>] 23/23 rows (100.0%) Table LOCATIONS (23 recs/sec)
[=======================> ] 211/215 total rows (98.1%) - (2 sec., avg: 105 recs/sec).
[========================>] 4/4 rows (100.0%) Table REGIONS (4 recs/sec)
[========================>] 215/215 total rows (100.0%) - (3 sec., avg: 71 recs/sec).
[========================>] 215/215 rows (100.0%) on total estimated data (3 sec., avg: 71 recs/sec)
[========================>] 1/1 views (100.0%) end of output.
[========================>] 3/3 sequences (100.0%) end of output.
[========================>] 1/1 triggers (100.0%) end of output.
[========================>] 0/0 functions (100.0%) end of output.
[========================>] 2/2 procedures (100.0%) end of output.
[========================>] 0/0 types (100.0%) end of output.
[========================>] 0/0 partitions (100.0%) end of output.
[root@serveroracle ora2pg]#

Once finished, a file HR_output.sql is generated. This file can be used to load the data into the PostgreSQL database. We can also load the data directly without using a file; we just have to specify the connection info for PostgreSQL in the configuration file.
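
For a direct load, the PostgreSQL connection is declared with a few directives in ora2pg.conf; the values below are assumptions matching this environment (database orclpg on localhost):

PG_DSN      dbi:Pg:dbname=orclpg;host=localhost;port=5432
PG_USER     hr
PG_PWD      root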
Let's now prepare the PostgreSQL server to receive our data. First we create the user HR:

[postgres@serveroracle ~]$ psql
psql (9.6.2 dbi services build)
Type "help" for help.
postgres=# \c orclpg
You are now connected to database "orclpg" as user "postgres".
orclpg=# create user HR WITH PASSWORD 'root';

And then we can execute the file. The first time we ran it, we had some constraint errors:

orclpg=# \i HR_output.sql
ERROR: insert or update on table "countries" violates foreign key constraint "countr_reg_fk"
DETAIL: Key (region_id)=(2) is not present in table "regions".
STATEMENT: COPY countries (country_id,country_name,region_id) FROM STDIN;
psql:HR_output.sql:224: ERROR: insert or update on table "countries" violates foreign key constraint "countr_reg_fk"
DETAIL: Key (region_id)=(2) is not present in table "regions".
orclpg=#

To correct this, we set the DROP_FKEY option to 1 in the configuration file:

DROP_FKEY 1

With this option, all foreign keys are dropped before the data import and recreated at the end of the import. After that, the load was successful:

orclpg=# \i HR_output.sql
SET
CREATE SCHEMA
ALTER SCHEMA
SET
CREATE TABLE
COMMENT
SET
SET
SET
SET
SET
BEGIN
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
SET
COPY 107
SET
COPY 19
SET
COPY 10
SET
COPY 23
SET
COPY 4
ALTER TABLE
ALTER TABLE
ALTER TABLE


COMMIT

We can verify that tables were created and that data were inserted.


orclpg=# \d
List of relations
Schema | Name | Type | Owner
--------+-------------+-------+----------
hr | countries | table | postgres
hr | departments | table | postgres
hr | employees | table | postgres
hr | job_history | table | postgres
hr | jobs | table | postgres
hr | locations | table | postgres
hr | regions | table | postgres
(7 rows)

orclpg=# select count(*) from countries;
count
-------
25
(1 row)
orclpg=#

Conclusion
As we can see, ora2pg is a free and easy tool to migrate data from Oracle to PostgreSQL. In a coming blog post we will talk about other tools that can be used to move data from Oracle to PostgreSQL.

 

Cet article Migrating from oracle to postgresql with ora2pg est apparu en premier sur Blog dbi services.

Save the date: Swiss PGDay 2017

Wed, 2017-04-26 05:52

The 30th of June is not too far away and if you want to learn more about the “World’s most advanced open source database” you should register soon. The sessions are not yet officially announced but you can expect that to happen next week. What we can say already is that we will be there and that you can expect these sessions from dbi:

  • How to migrate data from MongoDB to Postgres with ToroDB
  • Elking your PostgreSQL database infrastructure
  • PostgreSQL upgrade best practices

Hope to see you in June.
PGDay2017CH

 

Cet article Save the date: Swiss PGDay 2017 est apparu en premier sur Blog dbi services.

Oracle 12.2 tables, indexes new features

Wed, 2017-04-26 04:26

The Oracle 12.2.0.1 version has some interesting new features concerning tables and indexes.

The first new feature is about the online table move.

In version 12.2, Oracle now offers the possibility to move non-partitioned tables online, without blocking any DML operations.

To perform this operation, we must use the ONLINE keyword and/or the UPDATE INDEXES clause. If you remember, in previous Oracle versions we encountered the classical ORA-01502 error.

In version 12.1 we had the following behaviour. We create a table with a constraint and insert some values:

SQL> create table emp (name varchar2(10), salary number );
Table created.
SQL> alter table emp add constraint emp_pk primary key (name);
Table altered.
SQL> insert into emp values ('Bill', 100000);
1 row created.
SQL> insert into emp values ('Larry', 10000000);
1 row created.
SQL> commit;
Commit complete.

SQL> select * from emp;
NAME           SALARY
 ---------- ----------
 Bill           100000
 Larry         10000000

Then, if we move the table to another tablespace, the index becomes unusable, and if we try to insert some data we receive the ORA-01502 error:

SQL> alter table emp move tablespace PSI;
Table altered.
 
SQL> select index_name, status from user_indexes;
 
INDEX_NAME        STATUS
EMP_PK           UNUSABLE
 
SQL> insert into emp values ('Pierre', 99999);
insert into emp values ('Pierre', 99999)
*
ERROR at line 1:
ORA-01502: index 'PSI.EMP_PK' or partition of such index is in unusable state

Now in version 12.2, we no longer have this problem:

SQL> create table emp (name varchar2(10), salary number);
Table created.
 
SQL> alter table emp add constraint emp_pk primary key (name);
Table altered.
 
SQL> insert into emp values ('Bill', 100000);
1 row created.
 
SQL> insert into emp values ('Larry', 999999); 
1 row created.
 
SQL> commit;
Commit complete.
 
SQL> select * from emp;
 
NAME       SALARY
---------- ----------
Bill         100000
Larry        999999
 
 
SQL> select index_name, status from user_indexes;
 
INDEX_NAME        STATUS
EMP_PK            VALID
 
SQL> alter table emp move tablespace PSI2 update indexes;
Table altered.
 
SQL> select index_name, status from user_indexes;
INDEX_NAME        STATUS
EMP_PK            VALID
 
SQL> insert into emp values ('Pierre', 99999);
1 row created.

This way, we can also move the indexes to different tablespaces:
SQL> alter table emp move online tablespace PSI
  2  update indexes
  3  (emp_pk tablespace psi_ix1,
  4  emp_ix2 tablespace psi_ix1);
 
Table altered.
 
SQL> select index_name, status, tablespace_name from user_indexes;
 
INDEX_NAME    STATUS TABLESPACE_NAME
EMP_PK        VALID  PSI_IX1
EMP_IX2       VALID  PSI_IX1

Another interesting new feature is the conversion of a table to a partitioned table. Before Oracle 12.2, the methods used to convert a non-partitioned table to a partitioned table were either offline or based on dbms_redefinition. Now in 12.2 we can perform the operation online:

SQL> create table emp (name varchar2(10), emp_id number, salary number);

Table created.

SQL> insert into emp values ('Larry', 1, 1000000);

1 row created.

SQL> insert into emp values ('Bill', 100, 999999);

1 row created.

SQL> insert into emp values ('Steve', 1000, 1000000);

1 row created.

SQL> alter table emp modify
     partition by range (emp_id) interval (100)
     (partition p1 values less than (50),
      partition p2 values less than (500),
      partition p3 values less than (5000)
     ) online;

Table altered.

SQL> select table_name, partition_name, high_value from user_tab_partitions;

TABLE_NAME   PARTITION_NAME   HIGH_VALUE
EMP          P1               50
EMP          P2               500
EMP          P3               5000

As you can see, this is really an easy way to convert a non-partitioned table to a partitioned table.
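
A quick sanity check is to query one partition directly; given the inserts above, the row with emp_id 100 should land in partition P2 (values less than 500):

SQL> select name, emp_id from emp partition (p2);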

The next new feature I will talk about concerns advanced index compression. For example, we now have the possibility to create a tablespace with such a clause:

SQL> create tablespace psi_ix_compress
  2  default index compress advanced high
  3  datafile '/u01/oradata/db1/db1pdb1/psi_ix_comp01.dbf' size 10M;
 
Tablespace created.

Every new index created in this tablespace should use advanced high compression. But at first sight it does not seem to work:

SQL> create index psi_ix2 on emp(salary) tablespace psi_ix_compress;
 
Index created.
 
SQL> select index_name, compression from user_indexes;
 
INDEX_NAME             COMPRESSION
EMP_PK                  DISABLED
EMP_IX2                 DISABLED
PSI_IX2                 DISABLED

But if you look more closely, there is a parameter you also have to modify:

SQL> show parameter db_index_compression_inheritance
 
NAME                                  TYPE VALUE
db_index_compression_inheritance     string NONE

SQL> alter system set db_index_compression_inheritance = 'TABLESPACE';
 
System altered.

And finally, it works fine:

SQL> create index psi_ix1_comp on emp (name, emp_id,salary) tablespace psi_ix_compress;
Index created.
 
SQL> select index_name, compression from user_indexes where index_name like '%COMP';
INDEX_NAME       COMPRESSION
PSI_IX1_COMP     ADVANCED HIGH

We also have the possibility to specify COMPRESS ADVANCED HIGH or LOW directly in the CREATE INDEX or ALTER INDEX ... REBUILD statement:

 

SQL> create index psi_ix1 on emp (salary) compress advanced low;
 
Index created.
 
SQL> select index_name, compression from user_indexes;
 
INDEX_NAME         COMPRESSION
EMP_PK              DISABLED
PSI_IX1             ADVANCED LOW
PSI_NEW             DISABLED

And we can use ALTER INDEX ... REBUILD to change it to advanced high compression:

SQL> alter index psi_ix1 rebuild compress advanced high;
Index altered.
 
SQL> select index_name, compression from user_indexes;
 
INDEX_NAME          COMPRESSION
EMP_PK              DISABLED
PSI_IX1             ADVANCED HIGH
PSI_NEW             DISABLED

Enjoy using these Oracle 12.2.0.1 new features!

 

Cet article Oracle 12.2 tables, indexes new features est apparu en premier sur Blog dbi services.

Listener and Virtual IP

Tue, 2017-04-25 15:24

When you configure a standby database, you want the application to transparently connect to the primary database, wherever it is. That’s the role of Transparent Application Failover, but this requires configuration on the client side. If you can’t configure TAF, you can use a virtual IP address. But then the question is how to configure the listener.ora to handle connections to this VIP.

Don’t worry, if you configured everything as recommended, with the hostname declared in /etc/hosts, and listener.ora referencing this host name, then you can simply ignore the VIP for your configuration. The reason is that when the host specified in the listener.ora resolves to the same IP address as the hostname of the server, then Oracle listener binds the port on all interfaces, and this includes the VIP.

However, if you mentioned an IP address in the listener.ora, or if you mentioned a host that resolves to a different IP than the hostname, then it listens only to this interface.

Why not just listen to the VIP? There are two reasons for that. First, you will need to listen to the host IP anyway for the dynamic registration of instances. You don’t want the standby database to contact the listener on the primary server. The second reason is that you cannot start the listener if the IP is not up. Then, if you want to explicitly listen to the VIP you will need two listeners, some security rules to allow only local registration and to manage the start of the listener, monitoring, etc.

The simplest configuration is to have one listener configured on the server hostname, then it listens on all interfaces and clients can connect with the VIP (for the application) or with the server IP (for Data Guard broker, backups, monitoring, administration).
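
For reference, a minimal listener.ora following this recommendation could look like the following (the listener name and port are just examples, the host is the one used later in this post):

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = VM104)(PORT = 1521))
  )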

The behaviour is described in How The Listener Binds On TCP Protocol Addresses (Doc ID 421305.1)

Examples

I have two network interfaces on my system, the loopback (lo) and Ethernet (enp0s3). This interface has the IP 192.168.78.104 and I have added a virtual IP 192.168.66.102 with:

ip a add 192.168.66.102/24 dev enp0s3

Here is the list of interfaces:

[oracle@VM104 tmp]$ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:cc:00:4e:68 brd ff:ff:ff:ff:ff:ff
inet 192.168.78.104/24 brd 192.168.78.255 scope global enp0s3
inet 192.168.66.102/24 scope global enp0s3
inet6 fe80::a00:ccff:fe00:4e68/64 scope link
valid_lft forever preferred_lft forever

Here is the content of my /etc/hosts where I have two names that resolve to my server IP address 192.168.78.104

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.78.104 VM104 myhost

One of these names is my server hostname:


[oracle@VM104 tmp]$ hostname
VM104

I’ll try different configuration of my listener.ora

(HOST=127.0.0.1)

I mentioned the IP address of the loopback interface

Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=6666)))

The listener listens to this address only:

[oracle@VM104 tmp]$ ss -elpunt | grep -E "^Net|tnslsnr"
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 127.0.0.1:6666 *:* users:(("tnslsnr",4685,8))

With this configuration, I’m able to connect only through the mentioned address, 127.0.0.1

Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.78.104)(PORT=6666)))
TNS-12541: TNS:no listener
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.66.102)(PORT=6666)))
TNS-12541: TNS:no listener
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=6666)))
OK (10 msec)

(HOST=localhost)

I mentioned the loopback interface by a host name

Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=6666)))

This is actually the same as above: the host mentioned has been resolved at listener startup.

(HOST=192.168.78.104)

I mentioned the IP address of the host interface

Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.78.104)(PORT=6666)))

The listener listens to this address only:

[oracle@VM104 tmp]$ ss -elpunt | grep -E "^Net|tnslsnr"
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 192.168.78.104:6666 *:* users:(("tnslsnr",4735,8))

With this configuration, I’m able to connect only through the mentioned address, not the virtual IP, not other interfaces:

Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.78.104)(PORT=6666)))
OK (0 msec)
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.66.102)(PORT=6666)))
TNS-12541: TNS:no listener
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=6666)))
TNS-12541: TNS:no listener

(HOST=VM104)

I mentioned the host name which resolves to the IP address of the host interface – this is the default when creating with DBCA, and the recommended configuration.

Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=6666)))

The listener socket do not mention the IP address:

[oracle@VM104 tmp]$ ss -elpunt | grep -E "^Net|tnslsnr"
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 :::6666 :::* users:(("tnslsnr",4760,8))

We see something different here, as there is no specific IP address: the local address is :::6666, which means the listener listens on all interfaces.

With this configuration, I’m able to connect through any IP address, including the virtual IP

Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.78.104)(PORT=6666)))
OK (0 msec)
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.66.102)(PORT=6666)))
OK (10 msec)
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=6666)))
OK (10 msec)

(HOST=myhost)

I mentioned another host name which resolves to the IP address of the host interface (see the /etc/hosts above). It is not the hostname returned by $(hostname), but it resolves to the same IP.

Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=6666)))

The listener has resolved the address through /etc/hosts and then, because the IP matches the resolution of $(hostname), has used the $(hostname). We are then in the same situation as above where we can connect through any interface:

[oracle@VM104 tmp]$ ss -elpunt | grep -E "^Net|tnslsnr"
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 :::6666 :::* users:(("tnslsnr",4760,8))

(HOST=0.0.0.0)

Finally, when you want to listen on all interfaces, why not set the host to 0.0.0.0?

Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=6666)))

We are again in the same situation here and the listener has replaced it with the hostname. This may be convenient when you want to use the same listener.ora for different hosts. However, as it finally shows the hostname, it is better to avoid confusion and put the hostname in the listener.ora.

(HOST=VM104)(IP=FIRST)

This is the way to bypass the ‘listen on all interfaces’ rule, even when you resolve to the hostname.

Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.78.104)(PORT=6666)))

Because of (IP=FIRST) the listener listens to the first IP address returned by gethostbyname()

Conclusion

It is easy to know if the listener listens on one specific IP address, or on all interfaces. You get the hostname and the listener endpoints

hostname
lsnrctl status

If the ‘HOST=’ matches the hostname, then it listens to all interfaces. If the ‘HOST=’ mentions an IP address, then it listens on this IP only. If it mentions a name which is not the hostname, then maybe someone has changed the hostname after the listener was started?

The other way is to look at the socket information with:

netstat -elpunt
ss -elpunt

If you think that it is a security problem to listen to all interfaces, then you should understand that the listener is not a firewall. It is just a convenient way to route connections by service name to the right instance. But remember that you can even connect to the database without the listener (read https://amitzil.wordpress.com/2015/10/19/bypassing-the-listener/), just connecting to the dispatcher:

Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SID=CDB1))(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=30229)))
OK (0 msec)

And this one listens to all interfaces:


[oracle@VM104 tmp]$ ss -elpunt | grep -E "(^Net|ora_d)"
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 :::30229 :::* users:(("ora_d000_cdb1",1362,8)) uid:54321 ino:20635 sk:ffff880117900800
tcp LISTEN 0 128 :::32316 :::* users:(("ora_d000_cdb2",1729,8)) uid:54321 ino:20958 sk:ffff880117900040

Security is done by firewall rules. Listener is there only to help, so keep it simple.

 

Cet article Listener and Virtual IP est apparu en premier sur Blog dbi services.

Does the wal segment size matter in PostgreSQL?

Tue, 2017-04-25 02:39

In PostgreSQL you configure the size of the wal (write ahead log) segments when you compile from source. If you use an installer or the packages provided by your OS distribution, the size of the wal segments is usually 16MB. Although 16MB seems very low you don't need to worry about that in most of the cases, it just works fine. However there are cases where you might want to adjust this, e.g. when you have an application that generates thousands of transactions in a very short time and therefore forces PostgreSQL to generate huge amounts of wal segments. In this post we'll look at a specific case: usually you want to archive the wal segments to be able to do point in time recovery in case your server crashes for some reason. Does the size of the wal segments matter for archiving?
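
For reference, here is how the segment size is chosen at build time in 9.6; it is a configure option, not a postgresql.conf parameter (the prefix is just an example, 64 means 64MB segments):

./configure --prefix=/u01/app/postgres/product/96/db_2 --with-wal-segsize=64
make && make install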

Archiving of wal segments in PostgreSQL is done by specifying an archive_command. Whatever you put there will be executed by PostgreSQL once a new wal segment is created. Usually you’ll find something like this in archive_command (from the documentation):

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'  # Unix
archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"'  # Windows

Or something like this:

archive_command = 'rsync -a %p postgres@[SOME_OTHER_HOST]:/path/to/wal_archive/%f'

Or:

archive_command ='scp %p postgres@[SOME_OTHER_HOST]:/path/to/wal_archive/%f'

Let's test how the size of the wal segments impacts the three ways of archiving outlined above. To begin with, let's create 100 files of 16MB each (the same as the default wal segment size in PostgreSQL) and 25 files of 64MB each:

rm -rf /var/tmp/test16mb
mkdir /var/tmp/test16mb
for i in {1..100}; do
   dd if=/dev/zero of=/var/tmp/test16mb/${i} bs=1M count=16
done
ls -la /var/tmp/test16mb
rm -rf /var/tmp/test64mb
mkdir /var/tmp/test64mb
for i in {1..25}; do
   dd if=/dev/zero of=/var/tmp/test64mb/${i} bs=1M count=64
done
ls -la /var/tmp/test64mb
du -sh /var/tmp/test16mb
du -sh /var/tmp/test64mb

This will give us a total size of 1.6GB for each of the wal sizes (16MB and 64MB). Let’s start by testing the “cp” way:

echo 3 > /proc/sys/vm/drop_caches
mkdir -p /var/tmp/target
rm -rf /var/tmp/target/*
time for i in `ls /var/tmp/test16mb`; do
    cp /var/tmp/test16mb/${i} /var/tmp/target/
done

My result (on a local VM on my notebook):

real	0m17.444s
user	0m0.275s
sys	0m8.569s

The same test for the 64MB files:

echo 3 > /proc/sys/vm/drop_caches
mkdir -p /var/tmp/target
rm -rf /var/tmp/target/*
time for i in `ls /var/tmp/test64mb`; do
    cp /var/tmp/test64mb/${i} /var/tmp/target/
done

It is almost 3 times as fast to copy the large files as to copy the smaller ones:

real	0m5.365s
user	0m0.065s
sys	0m1.835s

Of course, on production systems you would not copy the files locally but rather to e.g. an NFS mount, and then the numbers will change.

What are the numbers for scp? For the smaller files:

echo 3 > /proc/sys/vm/drop_caches
mkdir -p /var/tmp/target
rm -rf /var/tmp/target/*
time for i in `ls /var/tmp/test16mb`; do
    scp /var/tmp/test16mb/${i} root@localhost:/var/tmp/target/
done

The result:

real	2m51.708s
user	0m14.136s
sys	0m35.292s

Quite a huge overhead. What is the result with the 64MB files?

echo 3 > /proc/sys/vm/drop_caches
mkdir -p /var/tmp/target
rm -rf /var/tmp/target/*
time for i in `ls /var/tmp/test64mb`; do
    scp /var/tmp/test64mb/${i} root@localhost:/var/tmp/target/
done

Approximately twice as fast:

real	1m23.326s
user	0m10.353s
sys	0m30.814s

And finally rsync, for the smaller files:

echo 3 > /proc/sys/vm/drop_caches
mkdir -p /var/tmp/target
rm -rf /var/tmp/target/*
time for i in `ls /var/tmp/test16mb`; do
    rsync -a /var/tmp/test16mb/${i} root@localhost:/var/tmp/target/${i}
done

The result:

real	0m51.624s
user	0m4.488s
sys	0m10.247s

For the larger ones:

echo 3 > /proc/sys/vm/drop_caches
mkdir -p /var/tmp/target
rm -rf /var/tmp/target/*
time for i in `ls /var/tmp/test64mb`; do
    rsync -a /var/tmp/test64mb/${i} root@localhost:/var/tmp/target/${i}
done

The result:

real	0m34.342s
user	0m3.623s
sys	0m9.685s

Conclusion: When you have applications with high transaction rates, it can make sense to increase the default wal segment size, as archiving will usually be much faster when you use bigger segments. Of course you’ll need to test this on your specific hardware and for your specific workload. In a follow-up post we’ll look at how bigger segments affect the performance of PostgreSQL.

 

Cet article Does the wal segment size matter in PostgreSQL? est apparu en premier sur Blog dbi services.

12cR2 RMAN> REPAIR

Sun, 2017-04-23 15:39

Do you know the RMAN Recovery advisor? It detects the problems, and then you:

RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure;

You need to have a failure detected. You can run a Health Check if it was not detected automatically (see https://blog.dbi-services.com/oracle-12c-rman-list-failure-does-not-show-any-failure-even-if-there-is-one/). In 12.2 you can also run the repair directly, by specifying what you want to repair.
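
If LIST FAILURE shows nothing although you suspect a problem, a quick way to trigger the checks is to validate the database. A minimal sketch, scripted from the shell:

rman target / <<'EOF'
validate database;
list failure;
EOF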

Syntax

There is no online help in RMAN, but you can list which keywords are expected by supplying a wrong one:
RMAN> repair xxx;
 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "failure"
RMAN-01008: the bad identifier was: xxx
RMAN-01007: at line 1 column 8 file: standard input

This is 12.1.0.2 where the only option is REPAIR FAILURE. In 12.2 we have a lot more:


RMAN> repair xxx
 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00558: error encountered while parsing input commands
RMAN-01009: syntax error: found "identifier": expecting one of: "database, database root, datafile, failure, pluggable, tablespace, ("
RMAN-01008: the bad identifier was: xxx
RMAN-01007: at line 1 column 8 file: standard input

When you know what is broken, you can repair it without having to know what to restore and what to recover. You can repair:

  • database: the whole database
  • database root: the CDB$ROOT container, which means all its tablespaces
  • pluggable database: all the tablespaces of the PDB
  • datafile: a specific datafile

Repair pluggable database

I corrupt one datafile from PDB01:


RMAN> host "> /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_system_d8k2t4wj_.dbf";
host command complete

And I repair the pluggable database:


RMAN> repair pluggable database PDB01;
 
Starting restore at 23-APR-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=203 device type=DISK
Executing: alter database datafile 21 offline
Executing: alter database datafile 22 offline
Executing: alter database datafile 23 offline
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00021 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_system_d8k2t4wj_.dbf
channel ORA_DISK_1: restoring datafile 00022 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_sysaux_d8k2t4wn_.dbf
channel ORA_DISK_1: restoring datafile 00023 to /u01/oradata/CDB2/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/datafile/o1_mf_users_d8kbmy6w_.dbf
channel ORA_DISK_1: reading from backup piece /u90/fast_recovery_area/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/backupset/2017_04_23/o1_mf_nnndf_B_dht2d4ow_.bkp
channel ORA_DISK_1: piece handle=/u90/fast_recovery_area/CDB2_SITE1/46EA7EF707457B4FE0531416A8C027F2/backupset/2017_04_23/o1_mf_nnndf_B_dht2d4ow_.bkp tag=B
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 23-APR-17
 
Starting recover at 23-APR-17
using channel ORA_DISK_1
 
starting media recovery
media recovery complete, elapsed time: 00:00:00
 
Executing: alter database datafile 21 online
Executing: alter database datafile 22 online
Executing: alter database datafile 23 online
Finished recover at 23-APR-17

The good thing is that it automatically restores and recovers the datafiles with only one command.
But we see here that all datafiles have been restored. As I knew that only one datafile was corrupted, it would have been faster to use REPAIR DATAFILE for it alone.
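
For example, repairing only the corrupted SYSTEM datafile (datafile 21 here) could look like this, scripted from the shell as a minimal sketch:

rman target / <<'EOF'
repair datafile 21;
EOF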

However, doing the same and calling the recovery advisor is not better: it advises to:

1 Restore and recover datafile 21; Restore and recover datafile 23; Recover datafile 22

When dealing with recovery, you need to understand how it works, what was the scope of the failure, and how to repair it. The advisors or automatic actions can help but do not alleviate the need to understand.

 

Cet article 12cR2 RMAN> REPAIR est apparu en premier sur Blog dbi services.

Data Pump LOGTIME, DUMPFILE, PARFILE, DATA_PUMP_DIR in 12c

Sat, 2017-04-22 16:28

Data Pump is a powerful way to save data or metadata, move it, migrate, etc. Here is an example showing a few new features in 12cR1 and 12cR2.

New parameters

Here is the result of a diff between the 12.1 and 12.2 ‘impdp help=y’ output:
CaptureDataPump122

But for this post, I’ll show the parameters that existed in 12.1 but have been enhanced in 12.2.

LOGTIME

This is a 12.1 feature. The parameter LOGTIME=ALL displays the system timestamp in front of the messages, both on screen and in the logfile. The default is NONE, and you can also set it to STATUS for the screen only or LOGFILE for the logfile only.


[oracle@vmreforanf12c01 tmp]$ expdp system/manager@PDB01 parfile=impdp.par logfile=impdp.log
 
Export: Release 12.2.0.1.0 - Production on Sat Apr 22 22:20:22 2017
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
22-APR-17 22:20:29.671: Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@PDB01 parfile=impdp.par logfile=impdp.log
22-APR-17 22:20:35.505: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
22-APR-17 22:20:36.032: Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
22-APR-17 22:20:36.407: Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
22-APR-17 22:20:43.586: Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
22-APR-17 22:20:44.126: Processing object type SCHEMA_EXPORT/USER
22-APR-17 22:20:44.199: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
22-APR-17 22:20:44.243: Processing object type SCHEMA_EXPORT/ROLE_GRANT
22-APR-17 22:20:44.296: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
22-APR-17 22:20:44.760: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
22-APR-17 22:20:53.706: Processing object type SCHEMA_EXPORT/TABLE/TABLE
22-APR-17 22:20:59.699: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
22-APR-17 22:21:00.712: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
22-APR-17 22:21:03.494: . . exported "SCOTT"."DEMO" 8.789 KB 14 rows
22-APR-17 22:21:03.651: . . exported "SCOTT"."EMP" 8.781 KB 14 rows
22-APR-17 22:21:03.652: . . exported "SCOTT"."DEPT" 6.031 KB 4 rows
22-APR-17 22:21:03.654: . . exported "SCOTT"."SALGRADE" 5.960 KB 5 rows
22-APR-17 22:21:03.656: . . exported "SCOTT"."BONUS" 0 KB 0 rows
22-APR-17 22:21:04.532: Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
22-APR-17 22:21:04.558: ******************************************************************************
22-APR-17 22:21:04.559: Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
22-APR-17 22:21:04.569: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/SCOTT_20170422.01.dmp
22-APR-17 22:21:04.622: Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Apr 22 22:21:04 2017 elapsed 0 00:00:41

You will always appreciate finding timestamps in the log file. But remember that your import/export is processed by multiple workers and it is difficult to estimate duration between the different lines. I explained this in https://blog.dbi-services.com/datapump-processing-object-type-misleading-messages/

DUMPFILE

You can see that my DUMPFILE also contains the timestamp in the file name. This is possible in 12.2 with the %T substitution variable. Here is my PARFILE, where DUMPFILE mentions %T (in addition to %U in case there are multiple files):

[oracle@vmreforanf12c01 tmp]$ cat impdp.par
schemas=SCOTT
logtime=all
dumpfile=SCOTT_%T.%U.dmp
reuse_dumpfiles=yes
filesize=1M

PARFILE parameters

I don’t usually use a PARFILE and prefer to pass all parameters on the command line, even if this requires escaping a lot of quotes, because I like to ship the log file with the DUMPFILE, and before 12.2 the LOGFILE mentioned only the parameters passed on the command line. In 12.2 the PARFILE parameters are also written to the LOGFILE (but not to the screen):


;;;
Export: Release 12.2.0.1.0 - Production on Sat Apr 22 22:20:22 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
22-APR-17 22:20:24.899: ;;; **************************************************************************
22-APR-17 22:20:24.901: ;;; Parfile values:
22-APR-17 22:20:24.903: ;;; parfile: filesize=1M
22-APR-17 22:20:24.905: ;;; parfile: reuse_dumpfiles=Y
22-APR-17 22:20:24.907: ;;; parfile: dumpfile=SCOTT_%T.%U.dmp
22-APR-17 22:20:24.909: ;;; parfile: logtime=all
22-APR-17 22:20:24.911: ;;; parfile: schemas=SCOTT
22-APR-17 22:20:24.913: ;;; **************************************************************************
22-APR-17 22:20:29.654: Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@PDB01 parfile=impdp.par logfile=impdp.log
22-APR-17 22:20:35.469: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
22-APR-17 22:20:36.032: Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
22-APR-17 22:20:36.407: Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
22-APR-17 22:20:43.535: Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
22-APR-17 22:20:44.126: Processing object type SCHEMA_EXPORT/USER
22-APR-17 22:20:44.199: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
22-APR-17 22:20:44.243: Processing object type SCHEMA_EXPORT/ROLE_GRANT
22-APR-17 22:20:44.296: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
22-APR-17 22:20:44.760: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
22-APR-17 22:20:53.620: Processing object type SCHEMA_EXPORT/TABLE/TABLE
22-APR-17 22:20:59.699: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
22-APR-17 22:21:00.712: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
22-APR-17 22:21:03.494: . . exported "SCOTT"."DEMO" 8.789 KB 14 rows
22-APR-17 22:21:03.651: . . exported "SCOTT"."EMP" 8.781 KB 14 rows
22-APR-17 22:21:03.652: . . exported "SCOTT"."DEPT" 6.031 KB 4 rows
22-APR-17 22:21:03.654: . . exported "SCOTT"."SALGRADE" 5.960 KB 5 rows
22-APR-17 22:21:03.656: . . exported "SCOTT"."BONUS" 0 KB 0 rows
22-APR-17 22:21:04.532: Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
22-APR-17 22:21:04.558: ******************************************************************************
22-APR-17 22:21:04.559: Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
22-APR-17 22:21:04.569: /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/SCOTT_20170422.01.dmp
22-APR-17 22:21:04.621: Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Apr 22 22:21:04 2017 elapsed 0 00:00:41

Now the LOGFILE shows all export information. Only the password is hidden.

DATA_PUMP_DIR

In 12.1 multitenant, you cannot use the default DATA_PUMP_DIR. It is there, but you just cannot use it, either implicitly or explicitly. With my PARFILE above, when DIRECTORY is not mentioned, I get the following error:

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DATA_PUMP_DIR is invalid

This means that there is no default possible and we need to mention DIRECTORY.
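
A minimal workaround sketch for 12.1: create your own directory object and pass it explicitly (the directory name MY_DP_DIR and the path /u90/datapump are made up for this example):

sqlplus system/manager@PDB01 <<'EOF'
create directory MY_DP_DIR as '/u90/datapump';
EOF
expdp system/manager@PDB01 parfile=impdp.par directory=MY_DP_DIR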

But in 12.2 it worked, going to /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2/ which is the default DATA_PUMP_DIR:

SYSTEM@PDB01 SQL> select * from dba_directories;
 
OWNER DIRECTORY_NAME DIRECTORY_PATH ORIGIN_CON_ID
----- -------------- -------------- -------------
SYS TSPITR_DIROBJ_DPDIR /u90/tmp_data_restore 3
SYS PREUPGRADE_DIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin 1
SYS XMLDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/xml 1
SYS ORA_DBMS_FCP_LOGDIR /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs 1
SYS ORA_DBMS_FCP_ADMINDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin 1
SYS ORACLE_OCM_CONFIG_DIR /u01/app/oracle/product/12.2.0/dbhome_1/ccr/state 1
SYS ORACLE_OCM_CONFIG_DIR2 /u01/app/oracle/product/12.2.0/dbhome_1/ccr/state 1
SYS XSDDIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/xml/schema 1
SYS DATA_PUMP_DIR /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/log/46EA7EF707457B4FE0531416A8C027F2 1
SYS OPATCH_INST_DIR /u01/app/oracle/product/12.2.0/dbhome_1/OPatch 1
SYS OPATCH_SCRIPT_DIR /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch 1
SYS OPATCH_LOG_DIR /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch 1
SYS ORACLE_BASE / 1
SYS ORACLE_HOME / 1

Of course, don’t leave it under ORACLE_HOME, which is on a filesystem for binaries where you don’t want to put variable-size files. But it is good to have a default.

 

Cet article Data Pump LOGTIME, DUMPFILE, PARFILE, DATA_PUMP_DIR in 12c est apparu en premier sur Blog dbi services.

SecureFiles on multi-datafiles tablespaces

Fri, 2017-04-21 14:47

When we have a tablespace with multiple datafiles, we are used to seeing the datafiles filled evenly, the extents being allocated in a round-robin fashion. In the old days, we used that to maximize performance, distributing the tables over all disks. Today, we use LVM striping, maximum Inter-Policy, or ASM even distribution. And we may even use bigfile tablespaces, so that we don’t care about having multiple datafiles.

But recently, during the test phase of a migration, I came upon something like this:
SecureFile003

To reproduce the case, I’ve created a tablespace with 8 datafiles:

SQL> create tablespace MYTABS datafile
2 '/tmp/MYTABS01.dbf' size 1M autoextend on maxsize 100M,
3 '/tmp/MYTABS02.dbf' size 1M autoextend on maxsize 100M,
4 '/tmp/MYTABS03.dbf' size 1M autoextend on maxsize 100M,
5 '/tmp/MYTABS04.dbf' size 1M autoextend on maxsize 100M,
6 '/tmp/MYTABS05.dbf' size 1M autoextend on maxsize 100M,
7 '/tmp/MYTABS06.dbf' size 1M autoextend on maxsize 100M,
8 '/tmp/MYTABS07.dbf' size 1M autoextend on maxsize 100M,
9 '/tmp/MYTABS08.dbf' size 1M autoextend on maxsize 100M
10 /
 
Tablespace created.

SecureFiles

This was an 11g to 12c migration, with Data Pump, and a good occasion to convert all LOBs to SecureFiles with the transform=lob_storage:securefile parameter. And this tablespace is the one where the LOB segments are stored. I reproduced it with:

SQL> create table MYTABLE ( x clob ) tablespace USERS
2 LOB(x) store as securefile MYLOBSEG (tablespace MYTABS disable storage in row);
 
Table created.
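
As a side note, the conversion mentioned above is requested at import time; the call looked roughly like this (a sketch; the connect string, directory and dumpfile names are placeholders):

impdp system@MYPDB directory=DP_DIR dumpfile=expdat.dmp logfile=impdp.log transform=lob_storage:securefile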

Then I inserted about 80MB:

SQL> insert into MYTABLE select lpad('x',100000) from xmltable('1 to 8000') ;
8000 rows created.
 
SQL> select file_id,file_name,blocks,user_blocks from dba_data_files where tablespace_name='MYTABS' order by file_id;
 
FILE_ID FILE_NAME BLOCKS USER_BLOCKS
---------- ------------------------------ ---------- -----------
61 /tmp/MYTABS01.dbf 128 120
62 /tmp/MYTABS02.dbf 128 120
63 /tmp/MYTABS03.dbf 128 120
64 /tmp/MYTABS04.dbf 9344 9336
65 /tmp/MYTABS05.dbf 128 120
66 /tmp/MYTABS06.dbf 128 120
67 /tmp/MYTABS07.dbf 128 120
68 /tmp/MYTABS08.dbf 128 120
 
8 rows selected.
 
SQL> select file_id,segment_type,sum(blocks) from dba_extents where tablespace_name='MYTABS' group by file_id,segment_type order by file_id,segment_type;
 
FILE_ID SEGMENT_TYPE SUM(BLOCKS)
---------- ------------------ -----------
64 LOBSEGMENT 9216
65 LOBINDEX 8
65 LOBSEGMENT 24

And I continued to load rows, and observed the datafiles being filled to their maxsize one after the other, in neither numeric nor alphabetical order.

SQL> select file_id,file_name,blocks,user_blocks from dba_data_files where tablespace_name='MYTABS' order by file_id;
 
FILE_ID FILE_NAME BLOCKS USER_BLOCKS
---------- ------------------------------ ---------- -----------
61 /tmp/MYTABS01.dbf 8320 8312
62 /tmp/MYTABS02.dbf 12416 12408
63 /tmp/MYTABS03.dbf 12416 12408
64 /tmp/MYTABS04.dbf 12416 12408
65 /tmp/MYTABS05.dbf 128 120
66 /tmp/MYTABS06.dbf 128 120
67 /tmp/MYTABS07.dbf 128 120
68 /tmp/MYTABS08.dbf 128 120
 
8 rows selected.
 
SQL> select file_id,segment_type,sum(blocks) from dba_extents where tablespace_name='MYTABS' group by file_id,segment_type order by file_id,segment_type;
 
FILE_ID SEGMENT_TYPE SUM(BLOCKS)
---------- ------------------ -----------
61 LOBSEGMENT 8192
62 LOBSEGMENT 12288
63 LOBSEGMENT 12288
64 LOBSEGMENT 12288
65 LOBINDEX 8
65 LOBSEGMENT 24
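
To follow the distribution from the OS while the load is running, a simple loop on the datafile sizes is enough (a minimal sketch, assuming the datafiles stay under /tmp as above):

while true; do ls -lh /tmp/MYTABS0*.dbf; echo '---'; sleep 10; done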

Here are some screenshots during this load

SecureFile001

SecureFile002

SecureFile003

BasicFiles

This occurs only with SecureFiles. With the same load into a LOB stored as BasicFile, I got the following distribution:
SecureFile004

Note that the inserts into BasicFile were about two times slower than the same inserts into SecureFiles.

So what?

I don’t see any bad consequences of that, and it may even be an improvement when concurrent sessions are loading the LOBs. The datafile picked looks random, and my guess is that it depends on a process ID, in order to distribute the concurrent load. And if you want to distribute data over multiple disks, then you should do it at a lower level. However, it is 30 years of behavior that changes, and it’s better to be aware of it: datafiles may reach their maxsize even when the tablespace is not full. In order to be sure that it is the expected behavior, I opened an SR with an easy to reproduce testcase. The status has been ‘review update’ for 9 days (SR 3-14677784041) and I’ll update this post if I get an answer.

 

Cet article SecureFiles on multi-datafiles tablespaces est apparu en premier sur Blog dbi services.

SQL Server 2017 AlwaysOn AGs and new read-scale architectures

Fri, 2017-04-21 06:40

As you probably know, Microsoft announced the official name for SQL Server vNext during the last Microsoft Data Amp event on April 19 2017. It officially becomes SQL Server 2017.

In my first blog post, I wrote about SQL Server on Linux and the introduction of the availability group features. At that time the SQL Server release was CTP 1.3. As a reminder, with previous CTP releases, listeners were unusable because they did not persist when switchover events occurred and they didn’t provide any transparent redirection capabilities. Today we are on CTP 2.0, and this latest CTP release comes with an important improvement concerning AGs: the support of listeners.

In this blog post I don’t want to write about creating an AG listener on a Linux environment. The process is basically the same as creating a listener on Windows and it has been well documented by Microsoft for a while. But several things shipped with the latest CTP 2.0 have drawn my attention and allow extending some scenarios with AGs.

First of all, from the Microsoft documentation we may notice a “Create for read-scale only” section. In a nutshell, we are now able to create a cluster-less availability group. Indeed, in this context we want to prioritize scale-out scenarios over HA, meaning the cluster layer is not mandatory here. That’s the point. Linux or Windows operating system in this case? Well, we may have a long debate here, but let’s say we will use a Linux operating system for this scenario.

You also probably noticed that the CLUSTER_TYPE parameter now includes a new EXTERNAL value. So we may create an availability group by using one of the following values:

  • WSFC = A Windows Server Failover Cluster will manage the availability group
  • EXTERNAL = An external entity will manage the availability group (pacemaker on Linux so far)
  • NONE = No cluster entity will manage the availability group

In my opinion, introducing the EXTERNAL value does make sense compared to the previous CTP releases. Indeed, we were only able to specify the NONE value, either to use an external entity to manage AGs or to use nothing for read-scale scenarios, which made it rather meaningless.

At the same time, the FAILOVER_MODE parameter also includes a new EXTERNAL value, which must be specified when using an external entity to manage AG failover. Before going further in this blog post, let’s set the scene: a pretty basic environment which includes 3 highly available replicas on Linux involved in a read-scale scenario, meaning no extra layer of HA management and asynchronous commit mode as well.

 blog 121 - ag linux read-scale scenario

As a reminder, implementing a listener with corresponding read-only routes is very useful for the following reasons:

  • Applications are transparently redirected to the corresponding read-only replica when the read-intent parameter is specified
  • Since SQL Server 2016, applications may be redirected in a round-robin fashion, so there’s no need to implement an extra component (ok.. the round-robin algorithm is pretty basic, but that’s not so bad actually)
  • Applications do not need to know the underlying infrastructure. They just connect to the AG listener and that’s it.

But in such a scenario, where no cluster layer is installed, we cannot benefit from a floating virtual IP, which is part of the automatic redirection to the primary replica in case of a failover event. And as you already know, connections must reach the primary in order to benefit from transparent redirection / round-robin capabilities. So the remaining question is: how to achieve redirection without a floating IP address in this case?

Firstly, let’s say that creating an AG listener on Linux doesn’t imply creating a corresponding virtual IP and network name on the cluster side, especially in this case where the AG doesn’t rely on the cluster layer. However, creating an AG listener that relies on the primary replica IP address, to benefit from transparent / round-robin redirection, remains a viable option. This is only the first part of the solution, because we also have to address switchover events: in this case the primary replica may change and the current listener configuration becomes invalid (it refers to the previous primary’s IP address). At this stage, I would like to thank again @MihaelaBlendea from Microsoft who put me on the right track.

This is not an official / supported solution but it seems to work well according to my tests. Update 21.04.2017 : Mihaela has confirmed this is a supported solution from Microsoft.

The solution consists in including all the replica IP addresses of the topology in the listener definition, and using a DNS record to point to the current primary replica after a manual failover event. Therefore, applications only have to know the DNS record to connect to the underlying SQL Server infrastructure.

Here the definition of my availability group including the listener:

CREATE AVAILABILITY GROUP [AdvGrpDRLinux]
WITH
(
    DB_FAILOVER = ON, --> Trigger the failover of the entire AG if one DB fails 
    CLUSTER_TYPE = NONE 
)
FOR REPLICA ON
N'LINUX07'
WITH
(
    ENDPOINT_URL = N'tcp://192.168.40.23:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC, --> use direct seeding
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
),
N'LINUX08'
WITH
( 
    ENDPOINT_URL = N'tcp://192.168.40.24:5022', 
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL, 
    SEEDING_MODE = AUTOMATIC, --> use direct seeding
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
),
N'LINUX09'
WITH
( 
    ENDPOINT_URL = N'tcp://192.168.40.25:5022', 
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL, 
    SEEDING_MODE = AUTOMATIC, --> use direct seeding
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
);
GO

ALTER AVAILABILITY GROUP [AdvGrpDRLinux] GRANT CREATE ANY DATABASE;
GO


ALTER AVAILABILITY GROUP [AdvGrpDRLinux]   
ADD LISTENER 'lst-advgrplinux' 
( 
	WITH IP ( ('192.168.40.23', '255.255.255.0'), --> LINUX07 IP Address
			  ('192.168.40.24', '255.255.255.0'), --> LINUX08 IP Address
			  ('192.168.40.25', '255.255.255.0')  --> LINUX09 IP Address
	        ) 
		, PORT = 1433 
);   
GO

Notable parameters are:

  • CLUSTER_TYPE = NONE
  • AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
  • FAILOVER_MODE = MANUAL
  • Direct seeding is enabled.

Let’s have a look at the AG configuration by using AG DMVs:

blog 121 - ag linuxag config

Then here the listener configuration:

blog 121 - ag linuxag listener config

And finally the configuration of my read-only routes and the priority list for redirection:

blog 121 - ag linuxag ro config

You may notice that I use round-robin capabilities for each replica.
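
The routing configuration itself is only visible in the screenshot, so here is a sketch of what it may look like, run from the shell against the current primary (the port 1433 in the routing URLs and the exact lists are assumptions; repeat the READ_ONLY_ROUTING_URL part for each secondary and the READ_ONLY_ROUTING_LIST part for each potential primary):

sqlcmd -S LINUX08 -U sa -P 'XXXXX' <<'EOF'
-- routing URL used when LINUX07 acts as a readable secondary
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX07' WITH
( SECONDARY_ROLE ( READ_ONLY_ROUTING_URL = N'tcp://192.168.40.23:1433' ) );
GO
-- round-robin list used when LINUX08 is the primary
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX08' WITH
( PRIMARY_ROLE ( READ_ONLY_ROUTING_LIST = (('LINUX07','LINUX09')) ) );
GO
EOF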

I also created a DNS A record with the address of the current primary replica (lst-advgrplinux – 192.168.40.23). The DNS record will be used by applications to connect to the AdvGrpDRLinux AG.

Let’s test the new configuration by using the SQLCMD tool with the -K READONLY option. Redirection and the round-robin feature come into play. The first test is conclusive.

blog 121 - ag linux first test sqlcmd
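
For reference, the kind of call used for this test looks like this (a sketch; the database name and password are placeholders, and routing only kicks in when connecting to a database that is part of the AG):

sqlcmd -S lst-advgrplinux,1433 -U sa -P 'XXXXX' -d mydb -K ReadOnly -Q "SELECT @@SERVERNAME;"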

Go ahead and let’s perform a manual failover. In this case, the primary replica is still available, so I just switched momentarily to synchronous mode to avoid having to resume the replication of the databases from the secondary replicas afterwards. Then I performed a switchover to the LINUX08 replica. According to the Microsoft documentation, in order to guarantee no data loss I also changed REQUIRED_COPIES_TO_COMMIT temporarily to 1. Finally, after performing the manual failover successfully, I switched back to asynchronous mode (REQUIRED_COPIES_TO_COMMIT must be reverted to 0 in this case).

USE [master]
GO

-- switch momentary to synchronous mode
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX07' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX08' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX09' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO

ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
SET (REQUIRED_COPIES_TO_COMMIT = 1)

-- demote old primary replica LINUX07
ALTER AVAILABILITY GROUP [AdvGrpDRLinux] SET (ROLE = SECONDARY); 

-- switch to new primary replica LINUX08
:CONNECT LINUX08 -U sa -PXXXXX
ALTER AVAILABILITY GROUP [AdvGrpDRLinux] FAILOVER;
GO

-- revert back to asynchronous mode
:CONNECT LINUX08 -U sa -PXXXXX
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
SET (REQUIRED_COPIES_TO_COMMIT = 0)

ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX07' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)
GO
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX08' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)
GO
ALTER AVAILABILITY GROUP [AdvGrpDRLinux]
MODIFY REPLICA ON N'LINUX09' WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT)
GO
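
The DNS update itself can be scripted; a minimal sketch with nsupdate (the DNS server address, the zone name and the TTL are assumptions, and dynamic updates must be allowed on the zone):

nsupdate <<'EOF'
server 192.168.40.1
zone lab.local
update delete lst-advgrplinux.lab.local. A
update add lst-advgrplinux.lab.local. 60 A 192.168.40.24
send
EOF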

After updating the DNS record to point to the new primary replica – LINUX08 with IP address equal to 192.168.40.24, transparent redirection and round-robin capabilities continued to work correctly.

blog 121 - ag linux second test sqlcmd

See you soon for other interesting new scenarios with availability groups on Linux!

 

Cet article SQL Server 2017 AlwaysOn AGs and new read-scale architectures est apparu en premier sur Blog dbi services.

OGG: Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2

Tue, 2017-04-18 06:20

Another GoldenGate 12.2 one: a few days ago I had this in the GoldenGate error log:

2017-04-12 14:56:08  WARNING OGG-02901  Oracle GoldenGate Capture for Oracle, extimch.prm:  Replication of UDT and ANYDATA from redo logs is not supported with the Oracle compatible parameter setting. Using fetch instead.
2017-04-12 14:56:08  ERROR   OGG-02912  Oracle GoldenGate Capture for Oracle, extimch.prm:  Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2 or later.

It seemed pretty obvious that I was missing a patch.

So I headed over to MOS and searched for the patch mentioned:

ogg_17030189

Hm, only two hits, and I was neither on Exadata nor on the 12.1.0.1 database release. After digging around a bit more in various MOS notes there was one (Doc ID 2091679.1) which finally mentioned a workaround. When you install GoldenGate 12.2 you get a script by default in the GoldenGate Home which is called “prvtlmpg.plb”. Looking at the script:

oracle@xxxxx:/u01/app/ogg/ch_src/product/12.2.0.1.160823/ [xxxxx] ls prvtlmpg.plb
prvtlmpg.plb
oracle@xxxxx:/u01/app/ogg/ch_src/product/12.2.0.1.160823/ [xxxxx] strings prvtlmpg.plb
WHENEVER SQLERROR EXIT
set verify off 
set feedback off
set echo off
set serveroutput on
column quotedMiningUser new_value quotedMiningUser noprint
column quotedCurrentSchema new_value quotedCurrentSchema noprint
variable status number
prompt
prompt Oracle GoldenGate Workaround prvtlmpg
prompt
prompt This script provides a temporary workaround for bug 17030189.
prompt It is strongly recommended that you apply the official Oracle 
prompt Patch for bug 17030189 from My Oracle Support instead of using
prompt this workaround.
prompt
prompt This script must be executed in the mining database of Integrated
prompt Capture. You will be prompted for the username of the mining user.
prompt Use a double quoted identifier if the username is case sensitive
prompt or contains special characters. In a CDB environment, this script
prompt must be executed from the CDB$ROOT container and the mining user
prompt must be a common user.
prompt
prompt ===========================  WARNING  ==========================
prompt You MUST stop all Integrated Captures that belong to this mining
prompt user before proceeding!
prompt ================================================================

Really? You get a script to work around a known issue by default? Let’s try:

SQL> @prvtlmpg.plb

Oracle GoldenGate Workaround prvtlmpg

This script provides a temporary workaround for bug 17030189.
It is strongly recommended that you apply the official Oracle
Patch for bug 17030189 from My Oracle Support instead of using
this workaround.

This script must be executed in the mining database of Integrated
Capture. You will be prompted for the username of the mining user.
Use a double quoted identifier if the username is case sensitive
or contains special characters. In a CDB environment, this script
must be executed from the CDB$ROOT container and the mining user
must be a common user.

===========================  WARNING  ==========================
You MUST stop all Integrated Captures that belong to this mining
user before proceeding!
================================================================

Enter Integrated Capture mining user: GGADMIN

Installing workaround...                                                                                         
No errors.                                                                                                       
No errors.
No errors.                                                                                                       
Installation completed.                                                                                          
SQL>                                                                                                             

And finally the extract started fine. Interesting… There seems to be a patch for the 11.2.0.4.7 DBPSU in development, but nothing else for the moment. Even the latest PSU for 11.2.0.4 does not seem to include the patch.
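
If you want to check first whether the fix is already present in your database home, opatch can tell you (a quick sketch):

$ORACLE_HOME/OPatch/opatch lsinventory | grep -i 17030189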

 

Cet article OGG: Patch 17030189 is required on your Oracle mining database for trail format RELEASE 12.2 est apparu en premier sur Blog dbi services.
