Yann Neuhaus

dbi services technical blog

Multitenant internals – Summary

Mon, 2016-12-05 02:02

Today at the UKOUG TECH16 conference I’m presenting the internals of the new multitenant architecture: 12c Multitenant: Not a Revolution, Just an Evolution. My goal is to show how it works, and that metadata links and object links are not blind magic.
Here are the links to the blog posts I’ve published about multitenant internals.

The dictionary separation, METADATA LINK and OBJECT LINK (now called DATA LINK):
http://blog.dbi-services.com/multitenant-dictionary-what-is-stored-only-in-cdbroot/
http://blog.dbi-services.com/oracle-12c-cdb-metadata-a-object-links-internals/
http://blog.dbi-services.com/oracle-multitenant-dictionary-metadata-links/
http://blog.dbi-services.com/oracle-multitenant-dictionary-object-links/
http://blog.dbi-services.com/multitenant-internals-how-object-links-are-parsedexecuted/
http://blog.dbi-services.com/multitenant-internals-object-links-on-fixed-tables/
An example with the AWR views:
http://blog.dbi-services.com/12c-multitenant-internals-awr-tables-and-views/
How the upgrades should work:
http://blog.dbi-services.com/oracle-multitenant-dictionary-upgrade/
What about shared pool rowcache and library cache:
http://blog.dbi-services.com/oracle-multitenant-dictionary-rowcache/
http://blog.dbi-services.com/12c-multitenant-cursor-sharing-in-cdb/
And how to see when session switches to CDB$ROOT:
http://blog.dbi-services.com/oracle-12cr2-multitenant-containers-in-sql_trace/

If you are in Birmingham, I’m speaking on Monday and Wednesday.


 


UKOUG Super Sunday

Sun, 2016-12-04 16:30


Today at the UKOUG Super Sunday in Birmingham, I had the opportunity to attend several interesting sessions.

The first presentation was about Oracle RAC internals and its new features in version 12.2.0.1 on Oracle Cloud. The main new features concern the cache fusion, the undo header hash table, the leaf nodes, and the hang manager.

In 12c Release 2, in a RAC environment, cache fusion automatically chooses an optimal path: it collects and maintains statistics on the private network and uses this information to decide whether the network or the disk is the optimal path to serve blocks. Under high load, flash storage can provide better access times to data than the private network.

In order to reduce remote lookups, each instance maintains a hash table of recent transactions (active and committed). The undo header hash table thus improves scalability by eliminating remote lookups.

Flex Cluster and leaf nodes were implemented in 12cR1. With 12cR2, it is now possible to run read-only workload on instances running on leaf nodes.

Hang Manager has been introduced with 12cR2. It identifies the sessions holding the resources that other sessions are waiting on, and it can detect hangs across layers.

The second session was about the use of strace, perf and gdb. This was a very entertaining presentation with no slides, only technical demos, on how to use strace, perf or gdb without being an expert. The speaker showed us the different analyses we can perform with strace, gdb or perf when running a SQL query against a table in a file system tablespace or an ASM tablespace.

Using those tools allowed us to understand the mechanism of physical reads and asynchronous I/O, and showed us how asynchronous I/O and direct path reads differ between ASM and file system storage.

It also showed us that using strace and gdb is very simple, but not recommended in a production environment.

The last session was about dba_feature_usage_statistics, and the speaker described the components behind the scenes.

As its name indicates, this view displays information about database feature usage statistics. It gives an overview of each option and pack that has been used in the database and is currently in use, and it also shows when a feature was first used and when it was used for the last time.
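As an illustration (my own example, not from the session), a typical query against this view looks like this:

select name, version, detected_usages, currently_used,
       first_usage_date, last_usage_date
from   dba_feature_usage_statistics
where  detected_usages > 0
order  by name;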

It is not very easy to find information in the Oracle documentation about how this view is populated, but the speaker gave us important details about wri$_dbu_usage_sample, wri$_dbu_feature_usage and wri$_dbu_feature_metadata, the underlying tables behind the dba_feature_usage_statistics view.

He also showed us a method to manually refresh the dba_feature_usage_statistics view.
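I did not note the exact statements shown, but a commonly cited way to force a refresh (it relies on an internal, undocumented package, so treat it as an assumption and test it outside production) is:

-- force a new feature usage sample instead of waiting for the next scheduled one
-- (internal, undocumented package - use at your own risk)
exec dbms_feature_usage_internal.exec_db_usage_sampling(sysdate);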

Tomorrow another day of interesting sessions is waiting for us !

 

 


Can I do it with PostgreSQL? – 6 – Server programming

Sat, 2016-12-03 05:43

Today we’ll continue this series with another topic: What does PostgreSQL provide when it comes to server programming, that is: Writing functions and triggers to support your application? In Oracle you can either use PL/SQL or Java, in MariaDB you can use stored procedures written in SQL, MS SQL Server provides Transact SQL and with DB2 you can write stored procedures in a host language or SQL.

We’ll use the same sample data as in the last post:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1

So, what can you do? To begin with you can create functions containing pure SQL commands. These are called “query language functions”. You can for example do things like this (although this function is not very useful as you can do the same by just selecting the whole table):

CREATE FUNCTION select_all_from_t1() RETURNS SETOF t1 AS '
  SELECT * 
    FROM t1;
' LANGUAGE SQL;

There are two important points here: the “LANGUAGE SQL” part, which means that the function is written in pure SQL, and the keyword “SETOF”, which means that we want to return a whole set of rows of t1. Once the function is created you can use it in SQL:

(postgres@[local]:5439) [ddl] > select select_all_from_t1();
 select_all_from_t1 
--------------------
 (1,1)
 (2,2)
 (3,3)
...

When you want to do something where it does not make sense to return anything you can do it by using the “VOID” keyword:

CREATE FUNCTION update_t1() RETURNS VOID AS '
  UPDATE t1
     SET a = 5
   WHERE a < 10
' LANGUAGE SQL;

When you execute this you do not get a result:

(postgres@[local]:5439) [ddl] > select update_t1();
 update_t1 
-----------
 NULL
(1 row)
(postgres@[local]:5439) [ddl] > select count(*) from t1 where a = 5;
 count 
-------
     9
(1 row)

What about parameters? You can do this as well:

CREATE FUNCTION do_the_math(anumber1 numeric, anumber2 numeric ) RETURNS numeric AS '
  SELECT do_the_math.anumber1 * do_the_math.anumber2;
' LANGUAGE SQL;

Execute it:

(postgres@[local]:5439) [ddl] > select do_the_math(1.1,1.2);
 do_the_math 
-------------
        1.32

Another great feature is that you can have a variable/dynamic number of input parameters when you specify the input parameter as an array:

CREATE FUNCTION dynamic_input(VARIADIC arr numeric[]) RETURNS int AS $$
    SELECT array_length($1,1);
$$ LANGUAGE SQL;

(postgres@[local]:5439) [ddl] > select dynamic_input( 1,2,3,4 );
 dynamic_input 
---------------
             4

So much for the SQL functions. What can you do when you need more than SQL? Then you can use the so-called “procedural language functions”. One of these, available by default, is PL/pgSQL:

(postgres@[local]:5439) [ddl] > \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

By using PL/pgSQL you can add control structures around your SQL, very much as you can in PL/SQL (except that you cannot create packages).

CREATE FUNCTION f1(int,int) RETURNS text AS $$
DECLARE
    t_row t1%ROWTYPE;
    result text;
BEGIN
    SELECT * 
      INTO t_row
      FROM t1
     WHERE a = 99;
    IF t_row.b > 0
    THEN
        result := 'aaaaaa';
    ELSE
        result := 'bbbbbb';
    END IF;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
(postgres@[local]:5439) [ddl] > select f1(1,1);
   f1   
--------
 aaaaaa

You can also use anonymous blocks:

(postgres@[local]:5439) [ddl] > DO $$
BEGIN
  FOR i IN 1..10
  LOOP
    raise notice 'blubb';
  END LOOP;
END$$ LANGUAGE plpgsql;
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
NOTICE:  blubb
DO

Of course there is more than IF-THEN-ELSE; all the control structures are documented in the PostgreSQL documentation.
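As a small sketch of one of them, here is a FOR loop over a query result (using the t1 table created above):

DO $$
DECLARE
    r record;
BEGIN
    -- loop over a query result set and print each row
    FOR r IN SELECT a, b FROM t1 WHERE b <= 3 ORDER BY b LOOP
        RAISE NOTICE 'a=% b=%', r.a, r.b;
    END LOOP;
END $$ LANGUAGE plpgsql;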

So by now we know two options for writing functions in PostgreSQL. Is there more we can do? Of course: do you prefer to write your functions in Perl?

(postgres@[local]:5439) [ddl] > create extension plperl;
CREATE EXTENSION
(postgres@[local]:5439) [ddl] > \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plperl  | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language


CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    my ($x, $y) = @_;
    if (not defined $x) {
        return undef if not defined $y;
        return $y;
    }
    return $x if not defined $y;
    return $x if $x > $y;
    return $y;
$$ LANGUAGE plperl;

(postgres@[local]:5439) [ddl] > select perl_max(1,2);
 perl_max 
----------
        2

Do you prefer Python?

(postgres@[local]:5439) [ddl] > create extension plpythonu;
CREATE EXTENSION
Time: 327.434 ms
(postgres@[local]:5439) [ddl] > \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description                
-----------+---------+------------+------------------------------------------
 plperl    | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpythonu | 1.0     | pg_catalog | PL/PythonU untrusted procedural language

CREATE FUNCTION pymax (a integer, b integer)
  RETURNS integer
AS $$
  if a > b:
    return a
  return b
$$ LANGUAGE plpythonu;

(postgres@[local]:5439) [ddl] > select pymax(1,1);
 pymax 
-------
     1

… or perhaps Tcl?

(postgres@[local]:5439) [ddl] > create extension pltclu;
CREATE EXTENSION
Time: 382.982 ms
(postgres@[local]:5439) [ddl] > \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description                
-----------+---------+------------+------------------------------------------
 plperl    | 1.0     | pg_catalog | PL/Perl procedural language
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 plpythonu | 1.0     | pg_catalog | PL/PythonU untrusted procedural language
 pltclu    | 1.0     | pg_catalog | PL/TclU untrusted procedural language
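
No Tcl function is shown here, but a minimal sketch following the same pattern as the Perl and Python examples above (in PL/Tcl the arguments arrive as $1 and $2) could look like this:

CREATE FUNCTION tcl_max (integer, integer) RETURNS integer AS $$
    # in PL/Tcl the arguments are available as $1 and $2
    if {$1 > $2} {return $1}
    return $2
$$ LANGUAGE pltclu;

Once created, it can be called just like the Perl and Python variants, e.g. select tcl_max(1,2);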

And these are only the default extensions. There is much more you can do:

  • Java
  • PHP
  • R
  • Ruby
  • Scheme
  • Unix shell

You see: PostgreSQL gives you the maximum flexibility :)

 


Histograms on character strings between 11.2.0.3 and 11.2.0.4

Wed, 2016-11-30 15:33

In short, when you have statistics gathered in 11.2.0.3 but the query is now running with the 11.2.0.4 optimizer, you may get wrong cardinality estimates from histograms, leading to sub-optimal plans.

I had a table with a flag that has two values ‘Y’ and ‘N’ with even distribution between them. It’s a good case for frequency histograms. I had frequency histograms and expected exact cardinality estimation for a WHERE FLAG=’Y’ predicate. But that was not the case: a very low estimate leading to a very bad execution plan. Because the cardinality estimation was far from what we have in histograms and far from what we would have without histograms, I checked a 10053 trace and this is what I found:
Using prorated density: 0.004557 of col #97 as selectivity of out-of-range/non-existent value pred

This is the linear decay applied to out-of-range predicates (I’ll show that next Wednesday at UKOUG TECH16), but I don’t expect an out-of-range condition when I provide one of the two values that are in the frequency histogram.

Here is my testcase


SQL> create table DEMO ( flag char);
Table created.
 
SQL> select count(*) from DEMO where flag='Y';
 
COUNT(*)
----------
0
 
SQL> insert into DEMO select 'Y' from xmltable('1 to 100000');
100000 rows created.
 
SQL> insert into DEMO select 'N' from xmltable('1 to 1000');
1000 rows created.
 
SQL> select flag,to_char(ascii(flag),'XX'),count(*) from DEMO group by flag;
 
FLAG TO_CHAR(ASCII(FLAG),'XX') COUNT(*)
---- ------------------------- --------
Y 59 100000
N 4E 1000

100000 rows with ‘Y’ and 1000 rows with ‘N’.

11.2.0.3

I gather statistics in 11.2.0.3


SQL> alter session set optimizer_features_enable='11.2.0.3';
Session altered.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And run a query looking for rows where flag is ‘Y’


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

The estimate is perfect thanks to the histograms.

11.2.0.4

Now the CBO is running in 11.2.0.4, but statistics have not been gathered since 11.2.0.3:


SQL> alter session set optimizer_features_enable='11.2.0.4';
Session altered.
 
SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 500 |
-------------------------------------------

Bad estimation here. Where do those 500 come from?
It’s not from the histogram, which knows that 100K rows have the value ‘Y’.
Without histograms the estimation would be based on 2 distinct values among 101000 rows, so that would be 50500. The 500 actually comes from the density: 101000 rows multiplied by the NewDensity of 0.004950 shown in the trace below is roughly 500.
CBO trace says:

Access path analysis for DEMO
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for DEMO[DEMO] SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
 
kkecdn: Single Table Predicate:"DEMO"."FLAG"='Y'
Column (#1):
NewDensity:0.004950, OldDensity:0.000005 BktCnt:101000.000000, PopBktCnt:101000.000000, PopValCnt:2, NDV:2
Column (#1): FLAG(CHARACTER)
AvgLen: 2 NDV: 2 Nulls: 0 Density: 0.004950
Histogram: Freq #Bkts: 2 UncompBkts: 101000 EndPtVals: 2 ActualVal: no
Using density: 0.004950 of col #1 as selectivity of pred having unreasonably low value
Table: DEMO Alias: DEMO
Card: Original: 101000.000000 Rounded: 500 Computed: 500.000000 Non Adjusted: 500.000000

Predicate having unreasonably low value…

Time to look at the histograms.

ENDPOINT_VALUE

You can get the histogram entries. For the CHAR datatype (my case here) the endpoint value is built from the ASCII representation of the value, converted to a decimal number (or at least from its first few characters).

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 405650737316592000000000000000000000 4E20202020203A7BB119D5F6000000 N :{��
101000 462766002760475000000000000000000000 59202020202034D998FF0B5AE00000 Y 4٘�
Z�

Looks good. I can see the ‘N’ and ‘Y’ values here. But obviously the CBO sees that as something different from ‘Y’.

Let’s gather statistics again (I’m now with 11.2.0.4 CBO):

SQL> exec dbms_stats.gather_table_stats(user,'DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

And look at the difference:

SQL> select endpoint_number,endpoint_value
2 ,to_char(endpoint_value,rpad('FM',65,'X'))
3 ,utl_raw.cast_to_varchar2(hextoraw(to_char(endpoint_value,rpad('FM',65,'X'))))
4 from user_histograms h where table_name='DEMO';
ENDPOINT_NUMBER ENDPOINT_VALUE TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X')) UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW(TO_CHAR(ENDPOINT_VALUE,RPAD('FM',65,'X'))))
--------------- ------------------------------------ ----------------------------------------- ------------------------------------------------------------------------------
1000 404999154965717000000000000000000000 4E0000000000181F436C7BBB200000 NCl{�
101000 462114420409600000000000000000000000 590000000000127D2B51B120000000 Y}+Q�

In 11.2.0.3 the characters (‘Y’ is ASCII 0x59) were padded with spaces (ASCII 0x20), but in 11.2.0.4 they are padded with nulls (ASCII 0x00).
This is the reason why the value was considered different: the ENDPOINT_VALUE for ‘Y’ calculated by the 11.2.0.4 version of the CBO is different from the one calculated by the 11.2.0.3 dbms_stats.

Now, the estimation is good again:


SQL> explain plan for select count(*) from DEMO where flag='Y';
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'basic +rows'));
PLAN_TABLE_OUTPUT
Plan hash value: 2180342005
 
-------------------------------------------
| Id | Operation | Name | Rows |
-------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | TABLE ACCESS FULL| DEMO | 100K|
-------------------------------------------

char_value

The ENDPOINT_VALUE calculation has been described by Jonathan Lewis in Cost-Based Oracle Fundamentals and the script to encode a character string is downloadable from http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip. I use it (as an inline function because I’m running that in 12c) to show the values calculated from ‘Y’ and ‘N’ as well as the values calculated from the same strings right-padded with spaces:


SQL> with function char_value(i_vc varchar2) return number
-- function coming from Jonathan Lewis char_value.sql http://www.jlcomp.demon.co.uk/cbo_book/book_cbo_scripts.zip
is
m_vc varchar2(15) := substr(rpad(i_vc,15,chr(0)),1,15);
m_n number := 0;
begin
for i in 1..15 loop
m_n := m_n + power(256,15-i) * ascii(substr(m_vc,i,1));
end loop;
m_n := round(m_n, -21);
return m_n;
end;
select flag,char_value(flag),cast(flag as char(32)),char_value(cast(flag as char(32))),count(*) from DEMO.DEMO group by flag
/
 
FLAG CHAR_VALUE(FLAG) CAST(FLAGASCHAR(32)) CHAR_VALUE(CAST(FLAGASCHAR(32))) COUNT(*)
---- ------------------------------------ -------------------------------- ------------------------------------ --------
Y 462114420409600000000000000000000000 Y 462766002760475000000000000000000000 100000
N 404999154965717000000000000000000000 N 405650737316592000000000000000000000 1000

Comparing the calculated values with the ENDPOINT_VALUE I had above, we can see that the function has not changed, but the input string used to be padded with spaces, which is not the case anymore.

So what?

Be careful when upgrading from <= 11.2.0.3 to a higher version: you should re-gather the statistics (but then read http://blog.dbi-services.com/a-migration-pitfall-with-all-column-size-auto/ as there’s another pitfall). Don't set optimizer_features_enable to a lower version, as its scope is very wide and many things may behave differently. It is always better to take the time to identify the specific features you want to disable. And never give up: the weirdest situations can be troubleshooted. Oracle Database offers many ways to understand what happens, from the debugging tools provided with the software to the excellent literature about it.

 


Switchover and Failover with Dbvisit 8

Wed, 2016-11-30 12:19

In this blog we will talk about how to do a switchover and how to do a failover. We suppose that dbvisit is already installed and that a standby database is already configured. Our instance is named DBPRIMA.

Switchover
A SWITCHOVER changes the database roles: the primary becomes the standby and the standby becomes the primary. This can be useful for many reasons.
Before performing a switchover, we first have to send any archived logs that have not yet been shipped from the primary server and apply them on the standby. Both steps use the magic command dbvctl; the output below shows the apply step on the standby server.

[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 9201)
dbvctl started on dbvisit2: Tue Nov 29 14:46:15 2016
=============================================================
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 30 (1_30_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:46:17 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$


Note that these commands can be scheduled on both servers, for example using crontab on a Linux system.
On the Primary Server:

00,10,20,30,40,50 * * * * cd /u01/app/dbvisit/standby; ./dbvctl -d DBPRIMA >>/dev/null 2>&1

On the Standby Server:

00,10,20,30,40,50 * * * * cd /u01/app/dbvisit/standby; ./dbvctl -d DBPRIMA >>/dev/null 2>&1

After sending and applying the archived logs, we can check the synchronization status:

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -i
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4420)
dbvctl started on dbvisit1: Tue Nov 29 14:26:11 2016
=============================================================
Dbvisit Standby log gap report for DBPRIMA thread 1 at 201611291426:
-------------------------------------------------------------
Destination database on dbvisit2 is at sequence: 29.
Source database on dbvisit1 is at log sequence: 30.
Source database on dbvisit1 is at archived log sequence: 29.
Dbvisit Standby last transfer log sequence: 29.
Dbvisit Standby last transfer at: 2016-11-29 14:24:16.
Archive log gap for DBPRIMA: 0.
Transfer log gap for DBPRIMA: 0.
Standby database time lag (DAYS-HH:MI:SS): +0:01:37.
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:26:12 2016
=============================================================
[oracle@dbvisit1 ~]$

If all is OK, we can now go for the SWITCHOVER. We can do it either from the graphical console or from the command line; here we use the command line method. The command should be run on the primary server only.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o switchover
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 5081)
dbvctl started on dbvisit1: Tue Nov 29 14:47:32 2016
=============================================================
>>> Starting Switchover between dbvisit1 and dbvisit2
Running pre-checks ... done
Pre processing ... done
Processing primary ... done
Processing standby ... done
Converting standby ... done
Converting primary ... done
Completing ... done
Synchronizing ... done
Post processing ... done
>>> Graceful switchover completed.
Primary Database Server: dbvisit2
Standby Database Server: dbvisit1
>>> Dbvisit Standby can be run as per normal:
dbvctl -d DBPRIMA
PID:5081
TRACE:5081_dbvctl_switchover_DBPRIMA_201611291447.trc
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:50:23 2016
=============================================================
[oracle@dbvisit1 ~]$

Failover
A FAILOVER happens when the primary database is no longer working. In this case the standby has to be activated and becomes the new primary. Unlike a SWITCHOVER, the FAILOVER process is NOT reversible.
A good practice before activating the standby database is to run a quick test to ensure that it is in a consistent state (datafile headers and controlfile are in sync) and ready to be activated. This can be done by opening the standby database read-only.

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o read
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2542)
dbvctl started on dbvisit2: Wed Nov 30 09:40:50 2016
=============================================================
Open Standby Database DBPRIMA in READ ONLY mode...
Standby Database DBPRIMA opened in READ ONLY mode.
Log files cannot be applied to Database while in READ ONLY mode.
Database tempfile(s) may need to be added to this database.
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:40:55 2016
=============================================================
[oracle@dbvisit2 ~]$

As we don’t get any errors and we know the standby can be opened read-only, let’s restart it in recovery mode (mount state).

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o restart
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2667)
dbvctl started on dbvisit2: Wed Nov 30 09:45:45 2016
=============================================================
Stopping instance DBPRIMA...
Standby Instance DBPRIMA shutdown successfully on dbvisit2.
Starting instance DBPRIMA...
Standby Instance DBPRIMA started
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:45:57 2016
=============================================================
[oracle@dbvisit2 ~]$

And proceed with the FAILOVER. Dbvisit will ask for confirmation; we can use the --force option to avoid this.

[oracle@dbvisit2 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -o activate
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 2796)
dbvctl started on dbvisit2: Wed Nov 30 09:47:12 2016
=============================================================
Activating means this database will become a Primary Database.
It will no longer be a Standby Database for DBPRIMA on dbvisit1.
Activation cannot be reversed.
=>Activate Standby Database on dbvisit2? [No]: yes
Are you sure? [No]: yes
>>> Activating now...
File /u01/app/dbvisit/standby/conf/dbv_DBPRIMA.env copied to
/u01/app/dbvisit/standby/conf/dbv_DBPRIMA.env.201611300947.
>>> Activation complete. Please ensure a backup of this Database is made
>>> Old archives from before the activation should be removed to avoid mix-up between new
and old archive logs
>>> If the Dbvisit Standby process is to be reversed, then database on dbvisit1 will need
to be rebuilt as a Standby Database
=============================================================
dbvctl ended on dbvisit2: Wed Nov 30 09:48:07 2016
=============================================================
[oracle@dbvisit2 ~]$

In this article we have seen how easy it is to do a switchover or a failover with Dbvisit 8.

 


Creating standby database with Dbvisit 8

Wed, 2016-11-30 12:13

The new version of Dbvisit standby is now released. In this blog we are going to see how to install dbvisit 8 and how to create a standby database.
For the installation, after downloading the software, just uncompress it and run the install-dbvisit command located in the installer directory. The installation process is divided into 2 parts:
1- Core Components (Dbvisit Standby CLI, Dbvnet, Dbvagent) – on both servers
2- Dbvserver console installation – note that the console now only needs to be installed on one server.

We will not show all the output; more information about the installation can be found in my previous blog about upgrading Dbvisit 7 to Dbvisit 8.

[oracle@dbvisit1 installer]$ pwd
/home/oracle/dbvisit/dbvisit/dbvisit/installer
[oracle@dbvisit1 installer]$ ls
install-dbvisit
[oracle@dbvisit1 installer]$
[oracle@dbvisit1 installer]$ ./install-dbvisit
-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------
It is recommended to make a backup of our current Dbvisit software
location (Dbvisit Base location) for rollback purposes.
Installer Directory /home/oracle/dbvisit/dbvisit/dbvisit
>>> Please specify the Dbvisit installation directory (Dbvisit Base).
The various Dbvisit products and components - such as Dbvisit Standby,
Dbvisit Dbvnet will be installed in the appropriate subdirectories of
this path.
Enter a custom value or press ENTER to accept default [/usr/dbvisit]:
> /u01/app/dbvisit
DBVISIT_BASE = /u01/app/dbvisit
-----------------------------------------------------------
Component Installer Version Installed Version
-----------------------------------------------------------
standby 8.0.04.18184 not installed
dbvnet 2.0.04.18184 not installed
dbvagent 2.0.04.18184 not installed
dbvserver 2.0.04.18184 not installed
-----------------------------------------------------------
What action would you like to perform?
1 - Install component(s)
2 - Uninstall component(s)
3 - Terminate
Your choice: 1
Which component do you want to install?
1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
2 - Dbvisit Standby Core (Command Line Interface)
3 - Dbvnet (Dbvisit Network Communication)
4 - Dbvagent (Dbvisit Agent)
5 - Dbvserver (Dbvisit Central Console)
6 - Exit Installer
Your choice: 1

And then follow the instructions.
At the end of the process we can start the different components: dbvagent and dbvnet.


[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvagent/dbvagent -d start


[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvnet/dbvnet -d start


[oracle@dbvisit1 installer]$ ps -ef|egrep 'dbvagent|dbvnet' | grep -v grep
oracle 4064 1 0 10:46 ? 00:00:00 /u01/app/dbvisit/dbvagent/dbvagent -d start
oracle 4140 1 0 10:47 ? 00:00:00 /u01/app/dbvisit/dbvnet/dbvnet -d start
[oracle@dbvisit1 installer]$

After the core components installation, we can proceed with the Dbvisit Standby central console installation (dbvserver). Just launch the install-dbvisit command again and follow the instructions.

[oracle@dbvisit1 installer]$ ./install-dbvisit
-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------
It is recommended to make a backup of our current Dbvisit software
location (Dbvisit Base location) for rollback purposes.
Installer Directory /home/oracle/dbvisit/dbvisit/dbvisit
.....
.....
What action would you like to perform?
1 - Install component(s)
2 - Uninstall component(s)
3 - Terminate
Your choice: 1
Which component do you want to install?
1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
2 - Dbvisit Standby Core (Command Line Interface)
3 - Dbvnet (Dbvisit Network Communication)
4 - Dbvagent (Dbvisit Agent)
5 - Dbvserver (Dbvisit Central Console)
6 - Exit Installer
Your choice: 5

At the end of the installation, we can start dbvserver:

[oracle@dbvisit1 installer]$ /u01/app/dbvisit/dbvserver/dbvserver -d start


[root@dbvisit1 ~]# netstat -taupen | grep dbvser
tcp 0 0 0.0.0.0:4433 0.0.0.0:* LISTEN 1000 37848 5348/dbvserver
[root@dbvisit1 ~]#

We should now be able to connect to the dbvserver console on port 4433 with the admin user; the default password is admin.
For example, using MANAGE USERS, we can change the default admin password.
Now that Dbvisit Standby is installed on both servers, the first thing to do is to test the connectivity. For this we will use dbvctl, which is the main command of Dbvisit 8. The Dbvisit documentation says: “The first command you need to be familiar with is the ‘dbvctl -h’ command which will display the syntax and usage options for the Dbvisit Standby command line interface” – and it’s really true. Here the command dbvctl -f system_readiness will be used.

[oracle@dbvisit1 installer]$ /u01/app/dbvisit/standby/dbvctl -f system_readiness
Please supply the following information to complete the test.
Default values are in [].
Enter Dbvisit Standby location on local server: [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Enter the name of the remote server: []: dbvisit2
Your input: dbvisit2
Is this correct? <Yes/No> [Yes]:
Enter Dbvisit Standby location on remote server: [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Enter the name of a file to transfer relative to local install directory
/u01/app/dbvisit: [standby/doc/README.txt]:
Your input: standby/doc/README.txt
Is this correct? <Yes/No> [Yes]:
Choose copy method:
1) /u01/app/dbvisit/dbvnet/dbvnet
2) /usr/bin/scp
Please enter choice [1] : 1
Is this correct? <Yes/No> [Yes]:
Enter port for method /u01/app/dbvisit/dbvnet/dbvnet: [7890]:
Your input: 7890
Is this correct? <Yes/No> [Yes]:
-------------------------------------------------------------
Testing the network connection between local server and remote server dbvisit2.
-------------------------------------------------------------
Settings
========
Remote server =dbvisit2
Dbvisit Standby location on local server =/u01/app/dbvisit
Dbvisit Standby location on remote server =/u01/app/dbvisit
Test file to copy =/u01/app/dbvisit/standby/doc/README.txt
Transfer method =/u01/app/dbvisit/dbvnet/dbvnet
port =7890
-------------------------------------------------------------
Checking network connection by copying file to remote server dbvisit2...
-------------------------------------------------------------
Trace file /u01/app/dbvisit/standby/trace/5589_dbvctl_system_readiness_201611291139.trc
File copied successfully. Network connection between local and dbvisit2
correctly configured.
[oracle@dbvisit1 installer]$

If everything is fine with the installation, we can now create a standby database using dbvctl. The configuration we use is the following:
Primary Server : dbvisit1
Standby Server: dbvisit2
Database Instance : DBPRIMA (we suppose that the database is in archivelog mode and in force logging mode; see the quick check below)
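As a quick sanity check (standard Oracle SQL run as SYSDBA on the primary, not part of the Dbvisit setup itself), these two prerequisites can be verified and, if needed, enabled like this:

-- check the current settings
select log_mode, force_logging from v$database;

-- enable force logging if it is not already enabled
alter database force logging;

-- enabling ARCHIVELOG mode requires a restart in mount state:
-- shutdown immediate;
-- startup mount;
-- alter database archivelog;
-- alter database open;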
First let’s create the Dbvisit Standby Database Configuration (DDC) file. The name of the DDC is generally the name of the database instance. The command dbvctl -o setup is used.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -o setup
========================================================
Dbvisit Standby Database Technology (8.0.04.18184)
=========================================================
=>dbvctl only needs to be run on the primary server.
Is this the primary server? <Yes/No> [Yes]:
The following Dbvisit Database configuration (DDC) file(s) found on this
server:
DDC
===
1) Create New DDC
2) Cancel
Please enter choice [] : 1
Is this correct? <Yes/No> [Yes]:
END USER LICENSE AGREEMENT
PLEASE READ THIS END USER LICENSE AGREEMENT (AGREEMENT) CAREFULLY BEFORE
DOWNLOADING THE LICENSED PRODUCT. BY CLICKING I AGREE BELOW, YOU ARE
REPRESENTING THAT YOU HAVE THE RIGHT AND AUTHORITY TO LEGALLY BIND YOURSELF
…….
…….
of it on future occasions. Headings are for reference purposes only and
have no substantive effect.
Continue ? <Yes/No> [No]: yes
=========================================================
Dbvisit Standby setup begins.
=========================================================
The following Oracle instance(s) have been found on this server:
SID ORACLE_HOME
=== ===========
1) DBPRIMA /u01/app/oracle/product/12.1.0.2/dbhome_1
2) Enter own ORACLE_SID and ORACLE_HOME
Please enter choice [] : 1
Is this correct? <Yes/No> [Yes]:
=>ORACLE_SID will be: DBPRIMA
=>ORACLE_HOME will be: /u01/app/oracle/product/12.1.0.2/dbhome_1
------------------------------------------------------------------------------
Enter the primary server name.
Note: The hostname is case sensitive on UNIX based systems.
=>SOURCE is [dbvisit1]:
Your input: dbvisit1
Is this correct? <Yes/No> [Yes]:
Choice is dbvisit1
------------------------------------------------------------------------------
Please enter a filesystem directory that Dbvisit Standby use to store (archive) log files. This directory
is not the same as the database recovery area or archive destinations and should not be located in these
areas.
....
Please ensure that this directory exists on
=>ARCHSOURCE is [/u01/app/oracle/dbvisit_arch/DBPRIMA]: /u01/app/archivedbvisit/DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/archivedbvisit/DBPRIMA
------------------------------------------------------------------------------
Do you want to use SSH to connect to the standby server? Note that if you are using SSH, passwordless SSH
authentication between the hosts must already be configured. By default Dbvnet will be used.
=>USE_SSH is [N]:
Your input: N
Is this correct? <Yes/No> [Yes]:
Choice is N
------------------------------------------------------------------------------
Enter the standby database hostname.
If the standby database will be Oracle RAC enabled:
Enter the the Virtual Hostname (linked to a Virtual IP) for standby database. This virtual hostname is
....
For non-RAC configurations specify the standby database server name here.
=>DESTINATION is []: dbvisit2
Your input: dbvisit2
Is this correct? <Yes/No> [Yes]:
Choice is dbvisit2
------------------------------------------------------------------------------
Specify the DBVNET or SSH port number on the standby server. The default value supplied is the dbvnet port
7890. If you specified the use of SSH, please specify the SSH port here.
=>NETPORT is [7890]:
Your input: 7890
Is this correct? <Yes/No> [Yes]:
Choice is 7890
------------------------------------------------------------------------------
Enter Dbvisit Standby installation directory on the standby server
=>DBVISIT_BASE_DR is [/u01/app/dbvisit]:
Your input: /u01/app/dbvisit
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/dbvisit
------------------------------------------------------------------------------
Enter ORACLE_HOME directory on the standby server
=>ORACLE_HOME_DR is [/u01/app/oracle/product/12.1.0.2/dbhome_1]:
Your input: /u01/app/oracle/product/12.1.0.2/dbhome_1
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/oracle/product/12.1.0.2/dbhome_1
------------------------------------------------------------------------------
Enter DB_UNIQUE_NAME on the standby server
=>DB_UNIQUE_NAME_DR is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Please enter the directory where Dbvisit Standby will transfer the (archive) log files to on standby
server.
...
Please ensure that this directory exists on the standby server
=>ARCHDEST is [/u01/app/archivedbvisit/DBPRIMA]:
Your input: /u01/app/archivedbvisit/DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is /u01/app/archivedbvisit/DBPRIMA
------------------------------------------------------------------------------
Enter ORACLE_SID on the standby server
=>ORACLE_SID_DR is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Enter ASM instance name on the standby server, if your standby is using ASM. If you are not using ASM on
the standby leave the value blank.
=>ORACLE_SID_ASM_DR is []:
Is this correct? <Yes/No> [Yes]:
Choice is null
------------------------------------------------------------------------------
Please specify the name of the Dbvisit Database configuration (DDC) file.
The DDC file is a plain text file that contains all the Dbvisit Standby settings.
=>ENV_FILE is [DBPRIMA]:
Your input: DBPRIMA
Is this correct? <Yes/No> [Yes]:
Choice is DBPRIMA
------------------------------------------------------------------------------
Below are the list of configuration variables provided during the setup process:
Configuration Variable Value Provided
====================== ==============
ORACLE_SID DBPRIMA
ORACLE_HOME /u01/app/oracle/product/12.1.0.2/dbhome_1
SOURCE dbvisit1
ARCHSOURCE /u01/app/archivedbvisit/DBPRIMA
RAC_DR N
USE_SSH N
DESTINATION dbvisit2
NETPORT 7890
DBVISIT_BASE_DR /u01/app/dbvisit
ORACLE_HOME_DR /u01/app/oracle/product/12.1.0.2/dbhome_1
DB_UNIQUE_NAME_DR DBPRIMA
ARCHDEST /u01/app/archivedbvisit/DBPRIMA
ORACLE_SID_DR DBPRIMA
ENV_FILE DBPRIMA
Are these variables correct? <Yes/No> [Yes]:
>>> Dbvisit Database configuration (DDC) file DBPRIMA created.
>>> Dbvisit Database repository (DDR) DBPRIMA created.
Repository Version 8.0
Software Version 8.0
Repository Status VALID
PID:2330
TRACE:dbvisit_install.log
[oracle@dbvisit1 ~]$

The DDC file we created will be used to create the standby database. But before that we must enter the license key, otherwise we will get an error. The following command is used: dbvctl -d DDC -l License_Key

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -l 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 3835)
dbvctl started on dbvisit1: Tue Nov 29 14:12:08 2016
=============================================================
=>Update with license key: 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq? <Yes/No> [Yes]:
>>> Dbvisit Standby License
License Key : 4jo6z-8aaai-u09b6-ijjq5-m1u6k-1uwpp-cmjfq
customer_number : 1
dbname :
expiry_date : 2016-12-29
product_id : 8
sequence : 1
status : VALID
updated : YES
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:12:17 2016
=============================================================
[oracle@dbvisit1 ~]$

And then we can proceed with the standby database creation using the command dbvctl -d DDC --csd

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA --csd
>>> Running pre-checks please wait... done
What would you like to do:
1 - Create standby database (and optionally save settings in template)
2 - Help
3 - Terminate processing
Please enter your choice [1]: 1
-------------------------------------------------------------------------------
Answer Yes to transfer the datafiles directly to the standby server (not
recommended for large database or slow networks)
Note tablespaces will be placed in backup mode for direct copy.
Answer No (recommended) to create an RMAN backup of primary datafiles in a
temporary location and transfer the backup to the standby server
=>Do you want to copy database files directly to the standby server? [N]: y
Your input: Y
Is this correct? <Yes/No> [Yes]: yes
The following oracle database parameters will be set in the standby database pfile or spfile:
-------------------------------------------------------------------------------
SID NAME VALUE
* audit_file_dest /u01/app/oracle/admin/DBPRIMA/adump
* compatible 12.1.0.2.0
* control_files /u01/app/oracle/oradata/DBPRIMA/control01.ctl,/u01/app/oracle/fast_recovery_area/DBPRIMA/control02.ctl
* db_block_size 8192
* db_domain
* db_name DBPRIMA
* db_recovery_file_dest /u01/app/oracle/fast_recovery_area
* db_recovery_file_dest_size 10485760000
* db_unique_name DBPRIMA
* diagnostic_dest /u01/app/oracle
* dispatchers (PROTOCOL=TCP) (SERVICE=DBPRIMAXDB)
* log_archive_format %t_%s_%r.dbf
* open_cursors 300
* pga_aggregate_target 304087040
* processes 300
* remote_login_passwordfile EXCLUSIVE
* sga_target 912261120
* spfile OS default
* undo_tablespace UNDOTBS1
-------------------------------------------------------------------------------
What would you like to do:
1 - Proceed with creating the standby database
2 - Edit oracle database parameters for the standby database pfile/spfile
3 - Terminate processing
Please enter your choice [1]: 1
......
=>Create standby database template for DBPRIMA using provided answers? [Y]:
Your input: 1
Is this correct? <Yes/No> [Yes]:
-------------------------------------------------------------------------------
=>Continue with creating a standby database? (If No processing will terminate,
the saved template will be available for future use) [Y]:
Is this correct? <Yes/No> [Yes]:
>>> dbvctl will now run a pre-flight check for standby database creation. An attempt will
be made to create a standby (s)pfile using oracle standby database parameters, followed
by trying to start the standby instance. If this step fails, then please double-check
the following items before re-running dbvctl again:
1) Review the standby database parameters you have supplied and provide valid values
unless a template is used.
2) Recreate the template to provide valid values for standby database parameters if a
template is used.
>>> Running pre-flight check for standby creation, please wait... done
>>> Total database size for DBPRIMA is 1.57GB
>>> Creating standby control file... done
>>> Transferring datafiles from dbvisit1 to dbvisit2...
Transferring /u01/app/oracle/oradata/DBPRIMA/system01.dbf...
Transferring /u01/app/oracle/oradata/DBPRIMA/sysaux01.dbf... done
Transferring /u01/app/oracle/oradata/DBPRIMA/undotbs01.dbf... done
Transferring /u01/app/oracle/oradata/DBPRIMA/users01.dbf... done
>>> Restoring standby control files... done
>>> Starting standby database DBPRIMA on dbvisit2 mount... done
>>> Restoring datafiles on dbvisit2...
>>> Renaming standby redo logs and tempfiles on dbvisit2... done
>>> Performing checkpoint and archiving logs... done
>>> Finishing standby database creation... done
>>> Standby database created.
To complete creating standby database please run dbvctl on the primary server first,
then on the standby server, to ensure the standby database is in sync with the primary
database.
PID:3915
TRACE:3915_dbvctl_csd_DBPRIMA_201611291413.trc

Let’s now run dbvctl -d DDC on the primary server to send all archived logs and on the standby server to apply all archived logs.

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4354)
dbvctl started on dbvisit1: Tue Nov 29 14:24:11 2016
=============================================================
>>> Obtaining information from standby database (RUN_INSPECT=Y)... done
>>> Sending heartbeat message... skipped
Performing a log switch...
>>> Transferring Log file(s) from DBPRIMA on dbvisit1 to dbvisit2 for thread 1:
thread 1 sequence 29 (o1_mf_1_29_d3v083x6_.arc)
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:24:16 2016
=============================================================
[oracle@dbvisit1 ~]$


[oracle@dbvisit2 DBPRIMA]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 8665)
dbvctl started on dbvisit2: Tue Nov 29 14:25:07 2016
=============================================================
>>> Sending heartbeat message... skipped
>>> Applying Log file(s) from dbvisit1 to DBPRIMA on dbvisit2:
thread 1 sequence 29 (1_29_926978008.arc)
=============================================================
dbvctl ended on dbvisit2: Tue Nov 29 14:25:09 2016
=============================================================
[oracle@dbvisit2 DBPRIMA]$

We can check the sync status with dbvctl -d DDC -i

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -d DBPRIMA -i
=============================================================
Dbvisit Standby Database Technology (8.0.04.18184) (pid 4420)
dbvctl started on dbvisit1: Tue Nov 29 14:26:11 2016
=============================================================
Dbvisit Standby log gap report for DBPRIMA thread 1 at 201611291426:
-------------------------------------------------------------
Destination database on dbvisit2 is at sequence: 29.
Source database on dbvisit1 is at log sequence: 30.
Source database on dbvisit1 is at archived log sequence: 29.
Dbvisit Standby last transfer log sequence: 29.
Dbvisit Standby last transfer at: 2016-11-29 14:24:16.
Archive log gap for DBPRIMA: 0.
Transfer log gap for DBPRIMA: 0.
Standby database time lag (DAYS-HH:MI:SS): +0:01:37.
=============================================================
dbvctl ended on dbvisit1: Tue Nov 29 14:26:12 2016
=============================================================
[oracle@dbvisit1 ~]$

Now it’s time to import our configuration into the graphical console. For this let’s use MANAGE HOSTS.

And then add hosts using the option NEW

Fill in the information and click on CREATE NEW HOST.
We repeat this until all the hosts are added.

We can now import our standby configuration using the MANAGE CONFIGURATIONS tab.

Then we use the IMPORT option.

Now we can manage our configuration either from the command line using dbvctl or from the graphical console. In a future blog we will talk about how to do a SWITCHOVER and a FAILOVER.

 


Can I do it with PostgreSQL? – 5 – Generating DDL commands

Wed, 2016-11-30 11:17

From time to time it is very useful to be able to generate the DDL commands for existing objects (tables, indexes, a whole schema, …). In Oracle you can either use the dbms_metadata PL/SQL package for this or use expdp/impdp to generate the statements out of a dump file. What options do you have in PostgreSQL? Note: we’ll not look at any third-party tools you could use for that, only plain PostgreSQL.
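Just as a reminder of how that looks on the Oracle side (a minimal sketch; the rest of this post is about PostgreSQL):

-- Oracle: generate the DDL for a table with dbms_metadata
select dbms_metadata.get_ddl('TABLE','T1') from dual;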

As always we’ll need some objects to test with, so here we go:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1
CREATE FUNCTION add(integer, integer) RETURNS integer
    AS 'select $1 + $2;'
    LANGUAGE SQL
    IMMUTABLE
    RETURNS NULL ON NULL INPUT;

PostgreSQL comes with a set of system administration and information functions which can be used to query all sorts of things. Some of them return the definitions of your objects.

You can get the definition of a view:

(postgres@[local]:5439) [ddl] > select pg_get_viewdef('v1'::regclass, true);
 pg_get_viewdef 
----------------
  SELECT t1.a  +
    FROM t1;
(1 row)

You can get the definition of a constraint:

(postgres@[local]:5439) [ddl] > SELECT conname
                                     , pg_get_constraintdef(r.oid, true) as definition
                                  FROM pg_constraint r
                                 WHERE r.conrelid = 't1'::regclass;
 conname |     definition      
---------+---------------------
 con1    | CHECK (a < 2000000)

You can get the definition of a function:

(postgres@[local]:5439) [ddl] > SELECT proname
     , pg_get_functiondef(a.oid)
  FROM pg_proc a
 WHERE a.proname = 'add';
 proname |                   pg_get_functiondef                    
---------+---------------------------------------------------------
 add     | CREATE OR REPLACE FUNCTION public.add(integer, integer)+
         |  RETURNS integer                                       +
         |  LANGUAGE sql                                          +
         |  IMMUTABLE STRICT                                      +
         | AS $function$select $1 + $2;$function$                 +
         | 
--OR
(postgres@[local]:5439) [ddl] > SELECT pg_get_functiondef(to_regproc('add'));
                   pg_get_functiondef                    
---------------------------------------------------------
 CREATE OR REPLACE FUNCTION public.add(integer, integer)+
  RETURNS integer                                       +
  LANGUAGE sql                                          +
  IMMUTABLE STRICT                                      +
 AS $function$select $1 + $2;$function$                 +

You can get the definition of an index:

(postgres@[local]:5439) [ddl] > select pg_get_indexdef('i1'::regclass);
            pg_get_indexdef            
---------------------------------------
 CREATE INDEX i1 ON t1 USING btree (a)
(1 row)

But surprisingly you cannot get the DDL for a table: there is just no function available to do this. How can you do that without concatenating the definitions you can get out of the PostgreSQL catalog? The only option I am aware of is pg_dump:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -s -t t1 ddl | egrep -v "^--|^$"
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
CREATE TABLE t1 (
    a integer,
    b integer,
    CONSTRAINT con1 CHECK ((a < 2000000))
);
ALTER TABLE t1 OWNER TO postgres;
CREATE INDEX i1 ON t1 USING btree (a);
CREATE UNIQUE INDEX i2 ON t1 USING btree (b);

Using the “-s” (schema only) and “-t” (tables) options you get the DDL for the complete table. Not as handy as in Oracle where you can do this in sqlplus but it works and produces a result you can work with.

Of course you can always create the DDLs on your own by querying the catalog, e.g. pg_attribute, which holds all the column definitions for the tables (a small sketch follows the table description below):

    Table "pg_catalog.pg_attribute"
    Column     |   Type    | Modifiers 
---------------+-----------+-----------
 attrelid      | oid       | not null
 attname       | name      | not null
 atttypid      | oid       | not null
 attstattarget | integer   | not null
 attlen        | smallint  | not null
 attnum        | smallint  | not null
 attndims      | integer   | not null
 attcacheoff   | integer   | not null
 atttypmod     | integer   | not null
 attbyval      | boolean   | not null
 attstorage    | "char"    | not null
 attalign      | "char"    | not null
 attnotnull    | boolean   | not null
 atthasdef     | boolean   | not null
 attisdropped  | boolean   | not null
 attislocal    | boolean   | not null
 attinhcount   | integer   | not null
 attcollation  | oid       | not null
 attacl        | aclitem[] | 
 attoptions    | text[]    | 
 attfdwoptions | text[]    | 
Indexes:
    "pg_attribute_relid_attnam_index" UNIQUE, btree (attrelid, attname)
    "pg_attribute_relid_attnum_index" UNIQUE, btree (attrelid, attnum)

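As a small sketch (it only covers the column names and data types of the t1 table from above, not constraints or indexes), a query like this can be a starting point:

-- list the columns of t1 with their data types, straight from the catalog
SELECT a.attnum,
       a.attname,
       format_type(a.atttypid, a.atttypmod) AS data_type,
       a.attnotnull
  FROM pg_attribute a
 WHERE a.attrelid = 't1'::regclass
   AND a.attnum > 0
   AND NOT a.attisdropped
 ORDER BY a.attnum;

-- ... or even assemble a minimal CREATE TABLE statement from it
SELECT 'CREATE TABLE t1 ('
       || string_agg(a.attname || ' ' || format_type(a.atttypid, a.atttypmod)
                    , ', ' ORDER BY a.attnum)
       || ');'
  FROM pg_attribute a
 WHERE a.attrelid = 't1'::regclass
   AND a.attnum > 0
   AND NOT a.attisdropped;
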
One nasty way, which is even documented on the PostgreSQL wiki, is this:

(postgres@[local]:5439) [ddl] > create extension plperlu;
CREATE EXTENSION
Time: 90.074 ms
(postgres@[local]:5439) [ddl] > \dx
                      List of installed extensions
  Name   | Version |   Schema   |              Description               
---------+---------+------------+----------------------------------------
 plperlu | 1.0     | pg_catalog | PL/PerlU untrusted procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [ddl] > CREATE OR REPLACE FUNCTION system(text) RETURNS text 
AS 'my $cmd=shift; return `cd /tmp;$cmd`;' LANGUAGE plperlu;
CREATE FUNCTION

(postgres@[local]:5439) [ddl] > select system('pg_dump -s -t t1 ddl | egrep -v "^--|^$"');
                    system                     
-----------------------------------------------
 SET statement_timeout = 0;                   +
 SET lock_timeout = 0;                        +
 SET idle_in_transaction_session_timeout = 0; +
 SET client_encoding = 'UTF8';                +
 SET standard_conforming_strings = on;        +
 SET check_function_bodies = false;           +
 SET client_min_messages = warning;           +
 SET row_security = off;                      +
 SET search_path = public, pg_catalog;        +
 SET default_tablespace = '';                 +
 SET default_with_oids = false;               +
 CREATE TABLE t1 (                            +
     a integer,                               +
     b integer,                               +
     CONSTRAINT con1 CHECK ((a < 2000000))    +
 );                                           +
 ALTER TABLE t1 OWNER TO postgres;            +
 CREATE INDEX i1 ON t1 USING btree (a);       +
 CREATE UNIQUE INDEX i2 ON t1 USING btree (b);+
 

This can be a workaround. Hope this helps…

 

Cet article Can I do it with PostgreSQL? – 5 – Generating DDL commands est apparu en premier sur Blog dbi services.

Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE

Wed, 2016-11-30 07:03

I have set up a DataGuard environment and followed the instructions from Oracle to create the Standby Redo Logs. The Standby Redo Logs have to be the same size as the Online Redo Logs, otherwise the RFS process won’t attach them, and you should have at least one more Standby Redo Log group than Online Redo Log groups per thread.

For my single instance, this should be quite straightforward, and so I issued the following commands on the primary and the standby.

alter database add standby logfile group 4 size 1073741824;
alter database add standby logfile group 5 size 1073741824;
alter database add standby logfile group 6 size 1073741824;
alter database add standby logfile group 7 size 1073741824;

After setting everything up, I started the new cool Broker command “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” and surprisingly found that the validation complains about insufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       3                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       3                       Insufficient SRLs

After looking everything up on Primary and Standby, the number of Log Groups and the sizes looked ok. I do have 3 Online Redo Log Groups with 1G each, and I have 4 Standby Redo Log Groups with 1G each.

-- Standby

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1          0 UNUSED           1073741824
         1          3          0 UNUSED           1073741824
         1          2          0 UNUSED           1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5        552 ACTIVE     1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

-- Primary

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1        550 INACTIVE         1073741824
         1          2        551 INACTIVE         1073741824
         1          3        552 CURRENT          1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

 

The only strange thing is that Standby Redo Log Group 7 shows up with Thread 0 instead of Thread 1.
I did not even know that a thread 0 exists. Thread numbers normally start with 1, and in case of RAC you might see Thread 2, 3 or more. But if you want to, you can perfectly create a thread 0 without any issue. For what reason, I don’t know.

SQL> alter database add standby logfile thread 0 group 8 size 1073741824;

Database altered.

Ok. Let’s correct the Thread 0 issue, and then let’s see what “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” shows.

-- On Standby
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-OFF';
Succeeded.

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4        553 ACTIVE     1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         1          7          0 UNASSIGNED 1073741824
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-ON';
Succeeded.
		 
-- On Primary

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

And here we go. Now I have sufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       4                       Sufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       4                       Sufficient SRLs

 

Conclusion

Even on a single instance, use the thread number in your create Standby Redo Log statement.

alter database add standby logfile thread 1 group 4 size 1073741824;
alter database add standby logfile thread 1 group 5 size 1073741824;
alter database add standby logfile thread 1 group 6 size 1073741824;
alter database add standby logfile thread 1 group 7 size 1073741824;
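As a quick sanity check after adding them, a query like the following (just a sketch) shows at a glance whether a group ended up in thread 0:

select thread#, count(*) as srl_groups
from v$standby_log
group by thread#;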

Cheers,
William

 

Cet article Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE est apparu en premier sur Blog dbi services.

Encryption in Oracle Public Cloud

Tue, 2016-11-29 12:04

Oracle Transparent Data Encryption is available without extra option on the Oracle Public Cloud: Standard Edition as well as Enterprise Edition (EE, EE-HP, EE-EP, ECS). More than that, the DBaaS enforces TDE for any user tablespace, even when it is not specified in the CREATE TABLESPACE statement. If you are not familiar with TDE key management (wallets) then you have probably encountered ORA-28374: typed master key not found in wallet.
Rather than another tutorial on TDE I’ll try to explain it from the errors you may encounter when simply creating a tablespace.

I have created a new pluggable database PDB2 from the command line:

SQL> create pluggable database PDB2 admin user admin identified by "admin";
Pluggable database PDB2 created.
 
SQL> alter pluggable database PDB2 open read write;
Pluggable database PDB2 altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------- --------- ----------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
7 PDB2 READ WRITE NO

I go to the PDB2 container and try to create a tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace mytablespace;
 
Error starting at line 1 in command -
create tablespace mytablespace
Error report -
ORA-28374: typed master key not found in wallet
28374. 0000 - "typed master key not found in wallet"
*Cause: You attempted to access encrypted tablespace or redo logs with
a typed master key not existing in the wallet.
*Action: Copy the correct Oracle Wallet from the instance where the tablespace
was created.

So, this message is related with TDE wallet.

encrypt_new_tablespaces

I didn’t specify any encryption clause in the CREATE TABLESPACE command but it is activated by default by the following parameter:

SQL> show parameter encrypt_new_tablespaces
 
NAME TYPE VALUE
----------------------- ------ ----------
encrypt_new_tablespaces string CLOUD_ONLY

The values can be DDL (the old behavior where encryption must be defined in the CREATE TABLESPACE statement), ALWAYS (AES128 encryption by default), or CLOUD_ONLY, which behaves like ALWAYS when the instance is in the Cloud and like DDL when it is on-premises. The default is CLOUD_ONLY.
This parameter has been introduced in 12.2 and has been backported to 11.2.0.4 and 12.1.0.2 with bug 21281607 that is applied on any Oracle Public Cloud DBaaS instance.

So, one solution to create our tablespace would be to set encrypt_new_tablespaces to DDL, but as it is recommended to encrypt all user tablespaces, let’s continue with it.
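For reference, switching back to the pre-cloud behaviour would be a simple parameter change (shown here only as a sketch, we keep the default in this post):

alter system set encrypt_new_tablespaces=DDL scope=both;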

ORA-28374: typed master key not found in wallet

So the error message means that I don’t have a master key in the wallet for my newly created PDB, because in multitenant each PDB has its own master key (but there’s only one wallet for the whole CDB).
The wallet is opened:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

But empty (I’m still in the PDB2 container)

SQL> select * from v$encryption_keys order by creation_time;
no rows selected

SET KEY

So the idea is to set a key:

SQL> administer key management set key identified by "Ach1z0#d";

but:

Error starting at line 1 in command -
administer key management set key identified by "Ach1z0#d"
Error report -
ORA-28417: password-based keystore is not open
28417. 0000 - "password-based keystore is not open"
*Cause: Password-based keystore was not opened.
*Action: Close the auto login keystore, if required, and open a
password-based keystore.

Ok. An error because the wallet is not opened. Let’s try to open it:

SQL> administer key management set keystore open identified by "Ach1z0#d";
 
Error starting at line 1 in command -
administer key management set keystore open identified by "Ach1z0#d"
Error report -
ORA-28354: Encryption wallet, auto login wallet, or HSM is already open
28354. 0000 - "Encryption wallet, auto login wallet, or HSM is already open"
*Cause: Encryption wallet, auto login wallet, or HSM was already opened.
*Action: None.

Actually, the wallet is opened. We have seen that the opened wallet is AUTOLOGIN:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

On DBaaS an AUTOLOGIN wallet is used so that the database can be restarted automatically without manual intervention. Without an AUTOLOGIN wallet you would have to provide the password each time.

But the AUTOLOGIN wallet can only be used to access the tablespaces. When administering the wallet, we need to provide the password manually:

We need to close the AUTOLOGIN one:

SQL> administer key management set keystore close;
Key MANAGEMENT succeeded.

Now that it is closed, we can try to open it with the password:

SQL> administer key management set keystore open identified by "Ach1z0#d";
 
Error starting at line : 1 in command -
administer key management set keystore open identified by "Ach1z0#d"
Error report -
ORA-28417: password-based keystore is not open
28417. 0000 - "password-based keystore is not open"
*Cause: Password-based keystore was not opened.
*Action: Close the auto login keystore, if required, and open a
password-based keystore.

Oh… it is opened AUTOLOGIN once again:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

CDB$ROOT

You need to open the wallet with the password from CDB$ROOT:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> administer key management set keystore close;
Key MANAGEMENT succeeded.
 
SQL> administer key management set keystore open identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

So here is the right way to start: in CDB$ROOT close the AUTOLOGIN wallet and open it with the password.

PDB

Now ready to go further in the PDB2.


SQL> alter session set container=PDB2;
Session altered.

The wallet is now closed for the PDB:

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- -------------- ------- ------------ ------------- ---------------- ------
FILE CLOSED UNKNOWN SINGLE UNDEFINED 7

Let’s open it manually:

SQL> administer key management set keystore open identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

We have no encryption key:

SQL> select * from v$encryption_keys order by creation_time;
no rows selected

Let’s do what we want to do from the get-go: create an encryption key for our PDB:

SQL> administer key management set key identified by "Ach1z0#d";
 
Error starting at line 1 in command -
administer key management set key identified by "Ach1z0#d"
Error report -
ORA-46631: keystore needs to be backed up
46631. 00000 - "keystore needs to be backed up"
*Cause: The keystore was not backed up. For this operation to proceed, the
keystore must be backed up.
*Action: Backup the keystore and try again.

Oh yes. Any change must be backed up. That’s easy:


SQL> administer key management set key identified by "Ach1z0#d" with backup;
Key MANAGEMENT succeeded.

Here we are. The key is there:


SQL> select * from v$encryption_keys order by creation_time;
 
KEY_ID TAG CREATION_TIME ACTIVATION_TIME CREATOR CREATOR_ID USER USER_ID KEY_USE KEYSTORE_TYPE ORIGIN BACKED_UP CREATOR_DBNAME CREATOR_DBID CREATOR_INSTANCE_NAME CREATOR_INSTANCE_NUMBER CREATOR_INSTANCE_SERIAL CREATOR_PDBNAME CREATOR_PDBID CREATOR_PDBUID CREATOR_PDBGUID ACTIVATING_DBNAME ACTIVATING_DBID ACTIVATING_INSTANCE_NAME ACTIVATING_INSTANCE_NUMBER ACTIVATING_INSTANCE_SERIAL ACTIVATING_PDBNAME ACTIVATING_PDBID ACTIVATING_PDBUID ACTIVATING_PDBGUID CON_ID
----------------------------------------------------- ---- --------------------------------------- --------------------------------------- -------- ----------- ----- -------- ----------- ------------------ ------- ---------- --------------- ------------- ---------------------- ------------------------ ------------------------ ---------------- -------------- --------------- --------------------------------- ------------------ ---------------- ------------------------- --------------------------- --------------------------- ------------------- ----------------- ------------------ --------------------------------- ------
AXP3BIrVW0+Evwfx7okZtcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.41.20.629496000 PM +00:00 28-NOV-16 08.41.20.629498000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL NO CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 7

All is perfect but the wallet is still opened with the password:

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------- ------------ ------------- --------------- -------
FILE OPEN PASSWORD SINGLE NO 7

In order to get back to the initial state, it is sufficient to close it (from the CDB$ROOT):


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> administer key management set keystore close;
 
Error starting at line 1 in command -
administer key management set keystore close
Error report -
ORA-28389: cannot close auto login wallet
28389. 00000 - "cannot close auto login wallet"
*Cause: Auto login wallet could not be closed because it was opened with
another wallet or HSM requiring a password.
*Action: Close the wallet or HSM with a password.

Ok. The ‘close’ command needs the password as the wallet was not opened with the AUTOLOGIN one.


SQL> administer key management set keystore close identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

It is immediately automatically re-opened with the AUTOLOGIN one:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- --------------------------------------- ------- ------------ ------------- ---------------- ------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ OPEN AUTOLOGIN SINGLE NO 1

and from the CDB$ROOT I can see all of them:

SQL> select * from v$encryption_keys order by creation_time;
 
KEY_ID TAG CREATION_TIME ACTIVATION_TIME CREATOR CREATOR_ID USER USER_ID KEY_USE KEYSTORE_TYPE ORIGIN BACKED_UP CREATOR_DBNAME CREATOR_DBID CREATOR_INSTANCE_NAME CREATOR_INSTANCE_NUMBER CREATOR_INSTANCE_SERIAL CREATOR_PDBNAME CREATOR_PDBID CREATOR_PDBUID CREATOR_PDBGUID ACTIVATING_DBNAME ACTIVATING_DBID ACTIVATING_INSTANCE_NAME ACTIVATING_INSTANCE_NUMBER ACTIVATING_INSTANCE_SERIAL ACTIVATING_PDBNAME ACTIVATING_PDBID ACTIVATING_PDBUID ACTIVATING_PDBGUID CON_ID
----------------------------------------------------- ---- --------------------------------------- --------------------------------------- -------- ----------- ----- -------- ----------- ------------------ ------- ---------- --------------- ------------- ---------------------- ------------------------ ------------------------ ---------------- -------------- --------------- --------------------------------- ------------------ ---------------- ------------------------- --------------------------- --------------------------- ------------------- ----------------- ------------------ --------------------------------- ------
ATxUk1G7gU/0v3Ygk1MbZj8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 27-NOV-16 09.02.18.050676000 PM +00:00 27-NOV-16 09.02.18.130705000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 CDB$ROOT 1 1 3D94C45E41CA19A9E05391E5E50AB8D8 CDB1 902797638 CDB1 1 4294967295 CDB$ROOT 1 1 3D94C45E41CA19A9E05391E5E50AB8D8 1
AWSs1Gr0WE86vyfWc123xccAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 27-NOV-16 09.02.18.089346000 PM +00:00 27-NOV-16 09.02.18.722365000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 PDB1 3 2687567370 424FA3D9C61927FFE053DA116A0A85F7 CDB1 902797638 CDB1 1 4294967295 PDB1 3 2687567370 424FA3D9C61927FFE053DA116A0A85F7 3
AfwqzZP/Rk+5v5WqiNK5nl0AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.36.43.980717000 PM +00:00 28-NOV-16 08.36.43.980720000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 PDB2 5 2602763579 42636D1380072BE7E053DA116A0A8E2D CDB1 902797638 CDB1 1 4294967295 PDB2 5 2602763579 42636D1380072BE7E053DA116A0A8E2D 5
AXP3BIrVW0+Evwfx7okZtcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.41.20.629496000 PM +00:00 28-NOV-16 08.41.20.629498000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL NO CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 7

As you can see, I made two attempts with PDB2 while writing this blog post. The keys from the previous attempts are all still in the wallet.

I check that the AUTOLOGIN wallet is opened in PDB2:


SQL> alter session set container=PDB2;
Session altered.
SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- -------------- ------- ------------ ------------- ---------------- ------
FILE OPEN AUTOLOGIN SINGLE NO 7

And finally I can create my tablespace:


SQL> create tablespace mytablespace;
Tablespace MYTABLESPACE created.

Easy, isn’t it?

If you create your PDB with the DBaaS monitor interface, all of this is done automatically with the ‘create PDB’ button (a manual SQL recap follows the list):

  • Close the AUTOLOGIN wallet (from CDB$ROOT)
  • Open the wallet with password
  • Create the pluggable database and open it
  • Open the wallet from the PDB, with password
  • Set the masterkey for the PDB
  • Close the wallet to get it opened with AUTOLOGIN
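Done manually in SQL, the same sequence boils down to something like this (just a sketch, reusing the wallet password and the PDB name from the examples above):

-- from CDB$ROOT: switch from the AUTOLOGIN wallet to the password-based one
alter session set container=CDB$ROOT;
administer key management set keystore close;
administer key management set keystore open identified by "Ach1z0#d";
-- create and open the new PDB
create pluggable database PDB2 admin user admin identified by "admin";
alter pluggable database PDB2 open read write;
-- in the PDB: open the wallet with the password and set its master key
alter session set container=PDB2;
administer key management set keystore open identified by "Ach1z0#d";
administer key management set key identified by "Ach1z0#d" with backup;
-- back in CDB$ROOT: close the wallet so that it is re-opened with AUTOLOGIN
alter session set container=CDB$ROOT;
administer key management set keystore close identified by "Ach1z0#d";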
 

Cet article Encryption in Oracle Public Cloud est apparu en premier sur Blog dbi services.

Documentum story – Authentication failed for Installation Owner with the correct password

Tue, 2016-11-29 01:00

When installing a new Remote Content Server (High Availability), everything was going according to plan until we tried to login to DA using this new CS: the login using the Installation Owner (dmadmin) failed… Same result from dqMan or any other third party tool, and only the iapi or idql sessions on the Content Server itself were still working because of the local trust. When something strange is happening regarding the authentication of the Installation Owner, I think the first thing to do is always to verify whether dm_check_password is able to recognize your username/password:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
Enter user name: dmadmin
Enter user password:
Enter user extra #1 (not used):
Enter user extra #2 (not used):
$DOCUMENTUM/dba/dm_check_password: Result = (245) = (DM_CHKPASS_BAD_LOGIN)

 

As you can see above, this was actually not working so apparently the CS is thinking that the password isn’t the correct one… This might happen for several reasons:

 

1. Wrong permissions on the dm_check_password script

This script is one of the few scripts that need specific permissions to work… This is either done by the installer if you provide the root password during the installation, or you can run the $DOCUMENTUM/dba/dm_root_task script manually using the root account (using sudo/dzdo or asking your UNIX admin team, for example). These are the permissions needed for this script:

[dmadmin@content_server_01 ~]$ ls -l $DOCUMENTUM/dba/dm_check_password
-rwsr-s---. 1 root dmadmin 14328 Oct 10 12:57 $DOCUMENTUM/dba/dm_check_password

 

If the permissions aren’t the right ones or if you think that this file has been corrupted somehow, then you can re-execute dm_root_task as root. It will ask you if you want to overwrite the current files and it will, in the end, set the permissions properly.

 

2. Expired password/account

If the OS password/account you are testing (the Installation Owner in my case) is expired, there are several possible behaviors. If the account is expired, dm_check_password will return a DM_CHKPASS_ACCOUNT_EXPIRED error. If it is the password that is expired, then on some OSes dm_check_password will fail with the bad login error shown at the beginning of this blog. This can be checked pretty easily on most OSes. On a RedHat for example, it would be something like:

[dmadmin@content_server_01 ~]$ chage -l dmadmin
Last password change                                    : Oct 10, 2016
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 4294967295
Number of days of warning before password expires       : 7

 

In our case, we are setting the password/account to never expire to avoid such issues, so that’s not the problem here. By the way, an expired password will also prevent you from using the crontab, for example…

 

To change these parameters, you will need root permissions. This is how it is done, for example (press enter to keep the proposed value if it is fine for you). The value between brackets is the current/proposed value and you can put your desired value after the colon:

[root@content_server_01 ~]$ chage dmadmin
Changing the aging information for dmadmin
Enter the new value, or press ENTER for the default

        Minimum Password Age [0]: 0
        Maximum Password Age [4294967295]: 4294967295
        Last Password Change (YYYY-MM-DD) [2016-10-10]:
        Password Expiration Warning [7]:
        Password Inactive [-1]: -1
        Account Expiration Date (YYYY-MM-DD) [-1]: -1

 

3. Wrong OS password

Of course if the OS password isn’t the correct one, then dm_check_password will return a BAD_LOGIN… Makes sense, doesn’t it? What I wanted to explain in this section is that in our case we always use sudo/dzdo to switch to the Installation Owner and therefore we never really use the Installation Owner’s password at the OS level (only in DA, dqMan, and so on). To check if the password is correct at the OS level, you can of course start an ssh session with dmadmin directly or run any other command that requires the password to be entered, like a su or a sudo on itself in our case:

[dmadmin@content_server_01 ~]$ su - dmadmin
Password:
[dmadmin@content_server_01 ~]$

 

As you can see, the OS isn’t complaining and therefore this is working properly: the OS password is the correct one.

 

4. Wrong mount options

Finally, the last thing that can prevent you from logging in to your docbases remotely is wrong options on your mount points… For this paragraph, I will assume that $DOCUMENTUM has been installed on a mount point /app. So let’s check the current mount options, as root of course:

[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev,nosuid)
[root@content_server_01 ~]$
[root@content_server_01 ~]$ cat /etc/fstab | grep "/app"
/dev/mapper/VG01-LV00 /app                ext4        nodev,nosuid        1 2

 

That seems alright but actually it isn’t… For Documentum to work properly, the nosuid option shouldn’t be present on the mount point where it has been installed! Therefore we need to change this. First of all, you need to update the file /etc/fstab so that this change survives a reboot of the Linux host. Just remove “,nosuid” to get something like this:

[root@content_server_01 ~]$ cat /etc/fstab | grep "/app"
/dev/mapper/VG01-LV00 /app                ext4        nodev               1 2

 

Now, the configuration inside the file /etc/fstab isn’t applied or reloaded if /app is already mounted. Therefore you need to remount it with the right options, and this is how you can do it:

[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev,nosuid)
[root@content_server_01 ~]$
[root@content_server_01 ~]$ mount -o remount,nodev /app
[root@content_server_01 ~]$
[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev)

 

Now that the mount options are correct, we can run dm_check_password again:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
Enter user name: dmadmin
Enter user password:
Enter user extra #1 (not used):
Enter user extra #2 (not used):
$DOCUMENTUM/dba/dm_check_password: Result = (0) = (DM_EXT_APP_SUCCESS)

 

As you can see, this is now working… That’s a miracle ;).

 

Cet article Documentum story – Authentication failed for Installation Owner with the correct password est apparu en premier sur Blog dbi services.

Oracle 12c – Finding the DBID – The last resort

Mon, 2016-11-28 07:46

The DBID is a very important part of an Oracle database. It is an internal, uniquely generated number that differentiates databases. Oracle creates this number automatically as soon as you create the database.

During normal operation, it is quite easy to find your DBID. Whenever you start your RMAN session, it displays the DBID.

oracle@oel001:/home/oracle/ [OCM121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Nov 28 10:32:47 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OCM121 (DBID=3827054096)

RMAN>

Or you can just simply select it from your v$database view.

SQL> select DBID from v$database;

DBID
----------
3827054096

But what happens in a restore/recovery scenario where you have lost your database? In the NOMOUNT state, it is not possible to retrieve the DBID.

SQL> select DBID from v$database;
select DBID from v$database
                 *
ERROR at line 1:
ORA-01507: database not mounted

You can take a look into the alert.log or any other trace file in your DIAG destination, but you will not find a DBID there.

So, if the only things you have left are your RMAN catalog and your datafile copies plus archivelogs in the FRA, then you need the DBID before you can restore/recover your database correctly.

There are three possibilities to get your DBID:

  • You could check your RMAN backup log files, if you have set it up correctly
  • You could connect to your RMAN catalog and query the “DB” table from the catalog owner. Be careful, there might be more than one entry for your DB name, and then it might become difficult to get the correct one.  In my example, I have only one entry
    SQL> select * from db;
    
        DB_KEY      DB_ID REG_DB_UNIQUE_NAME             CURR_DBINC_KEY S
    ---------- ---------- ------------------------------ -------------- -
             1 3827054096 OCM121                                      2 N
  • And as a last resort, you can startup nomount (either with a backup pfile or with the RMAN dummy instance), and afterwards dump out the header of your datafile copies in the FRA

Dumping out the first block is usually enough, and besides that, you are not limited to the SYSTEM datafile. You can use any of your datafile copies in your FRA (like SYSAUX, USERS and so on) to dump out the block, as shown in the following example:

-- Dump the first block from the SYSTEM datafile
SQL> alter session set tracefile_identifier = dbid_system;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/SYSTEM.457.926419155' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_6459_DBID_SYSTEM.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

-- Dump the first block from the SYSAUX datafile		
SQL> alter session set tracefile_identifier = dbid_sysaux;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/SYSAUX.354.926417851' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_7035_DBID_SYSAUX.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

-- Dump the first block from the USERS datafile
SQL> alter session set tracefile_identifier = dbid_users;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/USERS.533.926419511' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_7064_DBID_USERS.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

As soon as you have your DBID, it is straightforward to do the rest. Connect to your target and RMAN catalog, set the DBID and then run your restore and recovery scripts.

rman target sys/manager catalog rman/rman@rman
set dbid=3827054096
run {
restore spfile from autobackup;
}

run {
restore controlfile ....
}

run {
restore database ....
recover database ....
}

Conclusion

Don’t forget to save your DBID with your RMAN backup jobs somewhere. Recovering a database at 3 o’clock in the morning with a missing DBID might become a nightmare.
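A simple way to do that (just a sketch; the log file path is only an example, adapt it to your backup script) is to append the DBID to your backup log from the backup script itself:

# append the DBID and DB name to the backup log before starting the RMAN backup
sqlplus -s "/ as sysdba" <<'EOF' >> /u00/app/oracle/backup/OCM121_backup.log
set pagesize 0 feedback off
select 'DBID: '||dbid||'  DB_NAME: '||name from v$database;
EOF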
Cheers,
William

 

 

Cet article Oracle 12c – Finding the DBID – The last resort est apparu en premier sur Blog dbi services.

Documentum story – Download failed with ‘Exceeded stated content-length of 63000 bytes’

Mon, 2016-11-28 02:00

At one of our customers, we were in the middle of a migration of some docbases from 6.7 to 7.2. A few days after the migration, we started seeing failures/errors during simple downloads of documents from D2 4.5. The migration had been done using the EMC EMA migration tool by some EMC colleagues. The strange thing here is that these download failures only applied to a few documents, far from the majority, and only when opening the document using “View Native Content”. In addition to that, it appeared that the issue only affected migrated documents and wasn’t happening for new ones.

 

This is an example of the error message we were able to see in the D2 4.5 log files:

2016-07-04 12:00:20 [ERROR] [[ACTIVE] ExecuteThread: '326' for queue: 'weblogic.kernel.Default (self-tuning)'] - c.e.d.d.s.D2HttpServlet[                    ] : Download failed
java.net.ProtocolException: Exceeded stated content-length of: '63000' bytes
        at weblogic.servlet.internal.ServletOutputStreamImpl.checkCL(ServletOutputStreamImpl.java:217) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:162) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.d2fs.dctm.servlets.ServletUtil.download(ServletUtil.java:375) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.ServletUtil.download(ServletUtil.java:280) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.download.Download.processRequest(Download.java:132) [D2FS4DCTM-WEB-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.execute(D2HttpServlet.java:242) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGetAndPost(D2HttpServlet.java:498) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGet(D2HttpServlet.java:115) [D2FS4DCTM-API-4.5.0.jar:na]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:731) [weblogic.server.merged.jar:12.1.3.0.0]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:844) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:280) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:254) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:136) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:346) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.x3.portal.server.filters.HttpHeaderFilter.doFilter(HttpHeaderFilter.java:77) [_wl_cls_gen.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:66) [guice-servlet-3.0.jar:na]
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108) [shiro-web-1.1.0.jar:na]
        at com.company.d2.auth.NonSSOAuthenticationFilter.executeChain(NonSSOAuthenticationFilter.java:21) [_wl_cls_gen.jar:na]
        at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:81) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:359) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:275) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:344) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:272) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:81) [shiro-web-1.1.0.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:168) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.planetj.servlet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:270) [pjl-comp-filter-1.7.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) [guice-servlet-3.0.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:34) [_wl_cls_gen.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.planetj.servlet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:270) [pjl-comp-filter-1.7.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3436) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3402) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) [com.oracle.css.weblogic.security.wls_7.1.0.0.jar:CSS 7.1 0.0]
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:57) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2285) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2201) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1572) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:255) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:263) [weblogic.server.merged.jar:12.1.3.0.0]

 

So the error in this case is “Exceeded stated content-length of: ‘63000’ bytes”. Hmm, what does that mean? Well, it is not really clear (who said not clear at all?)… So we checked several documents for which the download failed (via iAPI dumps) and the only common points we were able to find for these documents were the following ones:

  • They all had a r_full_content_size of: 0
  • They all had a r_content_size bigger than: 63 000

 

The issue only appeared for objects which were assigned a r_full_content_size of 0 during the migration. We tried to set this property to 0 on a document for which the download was working, in order to try to reproduce the issue, but that didn’t change anything: the download was still working properly.

 

So here is some background regarding this parameter: the expected behavior for this parameter is that it has a real value (obviously). If the file in question is smaller than 2 GB, then the r_full_content_size will have the same value as the r_content_size which is of course the size of the file in bytes. If the file is bigger than 2 GB, then the r_content_size field is too small to show the real size in bytes and therefore the real size is only displayed in the r_full_content_size field… The r_full_content_size is the one read by D2 when using the “View Native Content” action while other actions like “View” or “Edit” behave like older wdk clients and are therefore reading the r_content_size attribute… So yes there is a difference in behavior between the few actions that are doing a download and that’s the reason why we only had this issue with the “View Native Content” action!

 

Unfortunately, and as you probably already understood if you read the previous paragraph, r_content_size and r_full_content_size aren’t of the same type (Integer vs. Double) and therefore you can’t simply execute one single DQL statement to set the value of r_full_content_size equal to the value of r_content_size: you will get a DM_QUERY_E_UP_BAD_ATTR_TYPES error. So you will have to do things a little bit more step by step.
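For illustration, this is the single statement one would naturally try first, and it is exactly the one that fails with the DM_QUERY_E_UP_BAD_ATTR_TYPES error because of the type mismatch:

> UPDATE dm_document objects SET r_full_content_size=r_content_size WHERE r_full_content_size='0' AND r_content_size>'63000';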

 

The first thing to do is obviously to gather a list of all documents that need to be updated with their r_object_id and value of their r_content_size (r_full_content_size don’t really matter since you will gather only affected documents so this value is always 0):

> SELECT r_object_id, r_content_size FROM dm_document WHERE r_full_content_size='0' AND r_content_size>'63000';
r_object_id         r_content_size
------------------- -------------------
090f446780034513    289326
090f446780034534    225602
090f446780034536    212700
090f446780034540    336916
090f446780034559    269019
090f446780034572    196252
090f446780034574    261094
090f44678003459a    232887
...                 ...

 

Then a first solution would be to manually go through the list and execute one DQL query for each document, setting r_full_content_size='289326' (or whatever the r_content_size is). For the first document listed above, that would therefore be the following DQL:

> UPDATE dm_document objects SET r_full_content_size='289326' WHERE r_object_id='090f446780034513';

 

Note 1: In case you want, at some point, to restore a previous version of a document for example, then you will most probably need to use “FROM dm_document(ALL)” instead of “FROM dm_document”. But then take care that non-current versions are immutable and therefore can’t be updated using a simple DQL: you will need to remove the immutable flag, update the object and set the flag back, so that’s a little bit more tricky ;)

Note 2: In case you have a few documents bigger than 2 GB, the r_content_size will not reflect the real value and therefore setting the r_full_content_size to that value isn’t correct… I wasn’t able to test that since our customer didn’t have any document bigger than 2 GB but you should most probably be able to use instead the full_content_size that is stored on the dmr_content for this object… Just like the dm_document, the dmr_content object has two fields that you should be able to use to find the correct size: content_size (that should reflect r_content_size) and full_content_size (that should reflect r_full_content_size). If that isn’t helping then a last solution would be to export and re-import all documents bigger than 2 GB…

 

Ok, so updating all objects one by one is possible but really tedious, so a second solution – and probably a better one – is to use a script to prepare the list of DQL queries to be executed. Once you have the r_object_id and r_content_size of all affected documents, you can just put that in a CSV file (copy/paste in Excel and save as CSV for example) and write a small script (bash for example, see the sketch below) that generates one DQL query per document. That’s really simple: even with thousands of affected documents, it will just take you a few minutes to write the script and a second or two to generate all the DQL queries. Then you can put all these commands in a single file that can be executed against the docbase on the Content Server. That’s a better solution, but actually the simplest solution you can ever find will always be to use dqMan (or any similar tool). Indeed dqMan has a dedicated feature that allows you to execute a “template DQL” on any list of objects returned by a specific command. Therefore you don’t need any bash scripting if you are using dqMan and that does the job in a few seconds.
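Here is a minimal sketch of such a script; the file names are only examples and the CSV is assumed to contain one “r_object_id,r_content_size” pair per line, without a header and with unix line endings:

#!/bin/bash
# generate one DQL UPDATE statement per affected document
input_csv="affected_documents.csv"
output_dql="fix_r_full_content_size.dql"

> "${output_dql}"
while IFS=',' read -r object_id content_size; do
  echo "UPDATE dm_document objects SET r_full_content_size='${content_size}' WHERE r_object_id='${object_id}';" >> "${output_dql}"
done < "${input_csv}"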

 

A last solution would be to go directly to the database and execute SQL queries to set r_full_content_size equal to r_content_size BUT I would NOT recommend you to do that unless you have a very good knowledge of the Documentum Content Model and if you absolutely know what you are doing and what you are messing with ;).

 

See you soon!

 

Cet article Documentum story – Download failed with ‘Exceeded stated content-length of 63000 bytes’ est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 4 – External tables

Sun, 2016-11-27 12:59

In the last posts of this series we talked about restore points, how you could do things that would require the dual table in Oracle and how you can make use of tablespaces in PostgreSQL. In this post we’ll look at what my colleague Clemens thinks is one of the greatest features in Oracle. Can you do external tables in PostgreSQL?

The easy answer is: yes, of course you can. And you can do it in various ways. To start with, we’ll need a sample file we can load data from. For the test here we’ll use this one. Note that this file uses Windows line feeds which you’ll need to convert to unix style if you are working on Linux like me. You can use VI to do this.

Once you extracted the file the content looks like this:

postgres@pgbox:/home/postgres/ [PG961] head -2 FL_insurance_sample.csv
policyID,statecode,county,eq_site_limit,hu_site_limit,fl_site_limit,fr_site_limit,tiv_2011,tiv_2012,eq_site_deductible,hu_site_deductible,fl_site_deductible,fr_site_deductible,point_latitude,point_longitude,line,construction,point_granularity
119736,FL,CLAY COUNTY,498960,498960,498960,498960,498960,792148.9,0,9979.2,0,0,30.102261,-81.711777,Residential,Masonry,1

So, we have a total of 18 columns and 36634 rows to test with. Should be fine :)

How can we bring that into PostgreSQL? Clemens talked about SQL*Loader in his post. There is a similar project for PostgreSQL called pg_bulkload which we’ll not be talking about. We will look at two options you can use to load data from files into PostgreSQL which are available by default:

  1. copy
  2. file_fdw

No matter which option we go with, the first thing we need is the definition of the table. These are the columns we need:

postgres@pgbox:/home/postgres/ [PG961] head -1 FL_insurance_sample.csv | sed 's/,/,\n/g'
policyID,
statecode,
county,
eq_site_limit,
hu_site_limit,
fl_site_limit,
fr_site_limit,
tiv_2011,
tiv_2012,
eq_site_deductible,
hu_site_deductible,
fl_site_deductible,
fr_site_deductible,
point_latitude,
point_longitude,
line,
construction,
point_granularity

So the create table statement will look something like this:

(postgres@[local]:5439) [postgres] > create table exttab ( policyID int,
                                                           statecode varchar(2),
                                                           county varchar(50),
                                                           eq_site_limit numeric,
                                                           hu_site_limit numeric,
                                                           fl_site_limit numeric,
                                                           fr_site_limit numeric,
                                                           tiv_2011 numeric,
                                                           tiv_2012 numeric,
                                                           eq_site_deductible numeric,
                                                           hu_site_deductible numeric,
                                                           fl_site_deductible numeric,
                                                           fr_site_deductible numeric,
                                                           point_latitude numeric,
                                                           point_longitude numeric,
                                                           line varchar(50),
                                                           construction varchar(50),
                                                           point_granularity int);
CREATE TABLE

Now that we have the table we can use copy to load the data:

(postgres@[local]:5439) [postgres] > copy exttab from '/home/postgres/FL_insurance_sample.csv' with csv header;
COPY 36634
(postgres@[local]:5439) [postgres] > select count(*) from exttab;
 count 
-------
 36634
(1 row)

Quite fast. But there is a downside to this approach. As Clemens mentions in his post, one of the benefits of external tables in Oracle is that you can access the file via standard SQL and do transformations before the data arrives in the database. Can you do the same with PostgreSQL? Yes, if you use the file_fdw foreign data wrapper.

The file_fdw is available by default:

(postgres@[local]:5439) [postgres] > create extension file_fdw;
CREATE EXTENSION
Time: 442.777 ms
(postgres@[local]:5439) [postgres] > \dx
                        List of installed extensions
   Name   | Version |   Schema   |                Description                
----------+---------+------------+-------------------------------------------
 file_fdw | 1.0     | public     | foreign-data wrapper for flat file access
 plpgsql  | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [postgres] > create server srv_file_fdw foreign data wrapper file_fdw;
CREATE SERVER
(postgres@[local]:5439) [postgres] > create foreign table exttab2  ( policyID int,
                                statecode varchar(2),
                                county varchar(50),
                                eq_site_limit numeric,     
                                hu_site_limit numeric,     
                                fl_site_limit numeric,     
                                fr_site_limit numeric,     
                                tiv_2011 numeric,          
                                tiv_2012 numeric,          
                                eq_site_deductible numeric,
                                hu_site_deductible numeric,
                                fl_site_deductible numeric,
                                fr_site_deductible numeric,
                                point_latitude numeric,    
                                point_longitude numeric,   
                                line varchar(50),          
                                construction varchar(50),  
                                point_granularity int)     
server srv_file_fdw options ( filename '/home/postgres/FL_insurance_sample.csv', format 'csv', header 'true' );
CREATE FOREIGN TABLE

(postgres@[local]:5439) [postgres] > select count(*) from exttab2;
 count 
-------
 36634
(1 row)

From now on you can work with the file by accessing it using standard SQL, and all the options you have with SQL are available. Very much the same as Clemens states in his post: “Because external tables can be accessed through SQL. You have all possibilities SQL-queries offer. Parallelism, difficult joins with internal or other external tables and of course all complex operations SQL allows. ETL became much easier using external tables, because it allowed to process data through SQL joins and filters already before it was loaded in the database.”
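As a small sketch of what this allows, the CSV can be filtered and aggregated directly through the foreign table (the column names come from the definition above):

select county,
       count(*)             as policies,
       round(avg(tiv_2012)) as avg_tiv_2012
  from exttab2
 group by county
 order by policies desc
 limit 5;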

 

Cet article Can I do it with PostgreSQL? – 4 – External tables est apparu en premier sur Blog dbi services.

#DOAG2016

Fri, 2016-11-25 09:36

That’s a very late ‘wrap-up’ post about the DOAG Conference.
Well, not so late because it was last week but time flies and blog posts are already listed.

12.2

It started very well when Oracle released the full 12.2 on the DBaaS cloud a few days before the conference. Like a lot of speakers, my slides and demos were full of 12.2 features. Of course, as beta testers we could ask for permission to talk about it, but it’s not easy to replace every “12.2” with “the next generation of Oracle database”, especially in titles that are limited to 60 characters ;)
So 12cR2 came to the Oracle Public Cloud, easily available as a free trial, the full documentation became public and my demos were still working… this was perfect.
The 12.2 beta program started one year ago. At the time of abstract submission, all speakers believed that 12.2 would be out at DOAG conference time. This was just in time.

Book

Exactly one year ago, at DOAG 2015, Vit Spinka came to me with this crazy idea to write a book on Multitenant. It was not the perfect fit in my planning as I was starting to prepare the OCM 12c, but it was a really good idea. The book was mostly finished in June but we had to wait for the public availability of 12.2. Now Oracle Press got the authorization to publish it, so it will probably be available soon. I had lots of questions about the date at DOAG. We have no date yet, but I hope to see it before Christmas.

Sharing

Of course those international events are very good for networking. Yes, the DOAG conference is international: lots of speakers come from other continents. DOAG organized a nice dinner with speakers coming from abroad and/or being Oakies and/or ACEDs. Very nice conversations, and unconferences on different subjects: #cloud, #DBADev, PL/SQL… and those conversations will continue in two weeks in Birmingham.

Among the conversations, I was very happy to talk with Tobias Deml, whom I asked to explain the #DOAGNextGen community to me. Oracle DBA is a job where you don’t find a lot of young people. We always have that image of senior bearded guys in front of a text screen. It’s good to have experienced people, but this must be mixed with some fresh blood. I really like what the DOAG Next Generation Community is doing. They have a lot of energy and ideas.

Tokyo

Speaking in the Tokio room is great, especially when doing live demos: you don’t have to worry about your mintty fonts being too small.


I was talking about the Multitenant features that can help in avoiding too many roundtrips between Dev and Ops.


The future is about automation, fast provisioning, fast refresh and clear separation of roles. Call it #cloud if you like.

dbi services

DOAG is a big event for dbi services: lots of us participate, most as speakers, and we have a booth to meet our customers and anyone with questions about IT infrastructure. Of course, also a bit about the Cloud.

2016-11-17 08.42.41-crop

csm_DOAG_85_ba6c760e40

2017

I’m looking forward to DOAGDB in Dusseldorf (30-31 May 2017) – calls for paper currently opened. And next conference 21-24 November 2017

In the meantime if you want to learn more about 12.2, plan your upgrades and master the multitenant architecture… our 12c New Features workshop will be on 12cR2 in 2017 – first date in February: http://www.dbi-services.com/trainings/oracle-12c-new-features-workshop/

 

Cet article #DOAG2016 est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 3 – Tablespaces

Fri, 2016-11-25 09:02

In the last posts of this series we talked about restore points and how you could do things that would require the dual table in Oracle. In this post we’ll look at tablespaces. This is one of the features that is available in PostgreSQL but is totally different from what you know from Oracle. In Oracle you need to create a datafile which is attached to a tablespace. Once you have this you can start creating tables in there if you have the permissions to do so. How does this work in PostgreSQL?

Before we start playing with our own tablespaces you need to know that there are two default tablespaces in each PostgreSQL instance:

(postgres@[local]:5439) [postgres] > \db+
                                  List of tablespaces
    Name    |  Owner   | Location | Access privileges | Options |  Size  | Description 
------------+----------+----------+-------------------+---------+--------+-------------
 pg_default | postgres |          |                   |         | 21 MB  | 
 pg_global  | postgres |          |                   |         | 497 kB | 
(2 rows)

When you create a table and do not specify in which tablespace you want it created, it will be created in the pg_default tablespace (this is the default tablespace for template0 and template1 and therefore will be the default for every user-created database if not overwritten). pg_global contains the shared system catalog.
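
If you want to see which tablespace is the default one for each of your databases you can join pg_database to pg_tablespace (a small sketch, the output is what it looks like on this test instance):

(postgres@[local]:5439) [postgres] > select d.datname, t.spcname
                                       from pg_database d
                                       join pg_tablespace t on d.dattablespace = t.oid;
  datname  |  spcname
-----------+------------
 postgres  | pg_default
 template1 | pg_default
 template0 | pg_default
(3 rows)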

This means, whenever you create a table without specifying a tablespace in the create table statement it will go to the pg_default tablespace:

(postgres@[local]:5439) [postgres] > create table t1 ( a int );
CREATE TABLE
Time: 99.609 ms
(postgres@[local]:5439) [postgres] > select tablespace from pg_tables where tablename = 't1';
 tablespace 
------------
 NULL
(1 row)

NULL, in this case, means default. If you want to know exactly where the files that make up the table are, you can use oid2name:

postgres@pgbox:/home/postgres/ [PG961] oid2name -t t1
From database "postgres":
  Filenode  Table Name
----------------------
     24592          t1
postgres@pgbox:/home/postgres/ [PG961] find $PGDATA -name 2459*
/u02/pgdata/PG961/base/13322/24592

In addition, oid2name tells you more about the databases and the default tablespace associated with them:

postgres@pgbox:/home/postgres/ [PG961] oid2name 
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  13322       postgres  pg_default
  13321      template0  pg_default
      1      template1  pg_default
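
As a side note: if you prefer to stay inside psql, the built-in function pg_relation_filepath() returns the path of a relation's file relative to the data directory (a small sketch based on the table created above):

(postgres@[local]:5439) [postgres] > select pg_relation_filepath('t1');
 pg_relation_filepath 
----------------------
 base/13322/24592
(1 row)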

So much for the basics. Time to create our own tablespace. Looking at the syntax:

(postgres@[local]:5439) [postgres] > \h create tablespace
Command:     CREATE TABLESPACE
Description: define a new tablespace
Syntax:
CREATE TABLESPACE tablespace_name
    [ OWNER { new_owner | CURRENT_USER | SESSION_USER } ]
    LOCATION 'directory'
    [ WITH ( tablespace_option = value [, ... ] ) ]

… this is quite different from what you know from Oracle. The important point for now is the “LOCATION”. This refers to a directory the PostgreSQL owner has write access to. It can be a local directory, a directory on any storage the host has access to, and it can even be on a ramdisk. It really doesn't matter, as long as the PostgreSQL OS user has write permissions on it.

Let's create our first tablespace:

(postgres@[local]:5439) [postgres] > \! mkdir /var/tmp/tbs1
(postgres@[local]:5439) [postgres] > create tablespace tbs1 location '/var/tmp/tbs1';
CREATE TABLESPACE
Time: 26.362 ms
(postgres@[local]:5439) [postgres] > \db+
                                     List of tablespaces
    Name    |  Owner   |   Location    | Access privileges | Options |  Size   | Description 
------------+----------+---------------+-------------------+---------+---------+-------------
 pg_default | postgres |               |                   |         | 21 MB   | 
 pg_global  | postgres |               |                   |         | 497 kB  | 
 tbs1       | postgres | /var/tmp/tbs1 |                   |         | 0 bytes | 
(3 rows)

What happened? The first thing to notice is that the “Location” column is now populated when we display all the tablespaces, and that the size of our new tablespace is zero (not surprising, as nothing has been created in the tablespace yet). Did PostgreSQL already create datafiles in this location, you might ask?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/
total 0
drwx------. 2 postgres postgres 6 Nov 25 11:03 PG_9.6_201608131

At least a directory which contains the version of PostgreSQL was created. What is inside this directory?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/
total 0

Nothing, so let's create a table in this brand new tablespace:

(postgres@[local]:5439) [postgres] > create table t1 ( a int ) tablespace tbs1;
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d+ t1
                          Table "public.t1"
 Column |  Type   | Modifiers | Storage | Stats target | Description 
--------+---------+-----------+---------+--------------+-------------
 a      | integer |           | plain   |              | 
Tablespace: "tbs1"

What does the directory look like now?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/
total 0
drwx------. 2 postgres postgres 18 Nov 25 12:02 13322

Ok, 13322 is the OID of the database which the table belongs to:

(postgres@[local]:5439) [postgres] > \! oid2name
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  13322       postgres  pg_default
  13321      template0  pg_default
      1      template1  pg_default

And below that?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/13322/
total 0
-rw-------. 1 postgres postgres 0 Nov 25 12:02 24596

This is the OID of the table. So in summary this is the layout you get per tablespace:

|
|---[LOCATION]
|       |
|       | ----- [FIXED_VERSION_DIRECTORY]
|       |                  |
|       |                  |---------[DATABASE_OID]
|       |                  |              |
|       |                  |              |-----------[TABLE_AND_INDEX_FILES_OID]

One point that is often forgotten is that you can set various parameters at the tablespace level:

CREATE TABLESPACE tablespace_name
    [ OWNER { new_owner | CURRENT_USER | SESSION_USER } ]
    LOCATION 'directory'
    [ WITH ( tablespace_option = value [, ... ] ) ]

What you can set per tablespace are the planner and I/O related parameters: seq_page_cost, random_page_cost and effective_io_concurrency.

This can be very helpful when you have tablespaces on disks (a ramdisk?) with very different performance characteristics.
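
A small sketch of how this looks (the values here are just examples for a tablespace sitting on fast storage, not a recommendation):

(postgres@[local]:5439) [postgres] > alter tablespace tbs1 set ( random_page_cost = 1.1, effective_io_concurrency = 10 );
ALTER TABLESPACE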

A very important point to keep in mind: Each tablespace you create in PostgreSQL creates a symlink in the cluster's data directory:

postgres@pgbox:/home/postgres/ [PG961] ls -l $PGDATA/pg_tblspc 
total 0
lrwxrwxrwx. 1 postgres postgres 13 Nov 25 11:03 24595 -> /var/tmp/tbs1

Again, the number (24595) is the OID, in this case of the tablespace:

(postgres@[local]:5439) [postgres] > select oid,spcname from pg_tablespace where spcname = 'tbs1';
  oid  | spcname 
-------+---------
 24595 | tbs1
(1 row)

This is important to know because when you do backups of your PostgreSQL instance it is critical that you back up the tablespaces as well. You can find all the pointers/symlinks in the pg_tblspc directory.
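
If you prefer to resolve the symlinks from within the database, the built-in function pg_tablespace_location() returns the location of a tablespace (a small sketch, the output matches the demo setup above):

(postgres@[local]:5439) [postgres] > select spcname, pg_tablespace_location(oid) as location from pg_tablespace;
  spcname   |   location
------------+---------------
 pg_default |
 pg_global  |
 tbs1       | /var/tmp/tbs1
(3 rows)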

What else can you do with tablespaces? Of course you can change the default tablespace for the whole instance:

(postgres@[local]:5439) [postgres] > alter system set default_tablespace='tbs1';
ALTER SYSTEM
Time: 120.406 ms

(postgres@[local]:5439) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Time: 4.279 ms
(postgres@[local]:5439) [postgres] > show default_tablespace ;
 default_tablespace 
--------------------
 tbs1
(1 row)

You can assign a tablespace to a database:

(postgres@[local]:5439) [postgres] > create database db1 TABLESPACE = tbs1;
CREATE DATABASE
Time: 1128.020 ms
(postgres@[local]:5439) [postgres] > \l+
                                                                    List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   | Tablespace |                Description                 
-----------+----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------
 db1       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7233 kB | tbs1       | 
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7343 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7233 kB | pg_default | unmodifiable empty database
           |          |          |             |             | postgres=CTc/postgres |         |            | 
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7233 kB | pg_default | default template for new databases
           |          |          |             |             | postgres=CTc/postgres |         |            | 

You can make someone else the owner of a tablespace:

(postgres@[local]:5439) [postgres] > create user u1 password 'u1';
CREATE ROLE
Time: 31.414 ms
(postgres@[local]:5439) [postgres] > ALTER TABLESPACE tbs1 OWNER TO u1;
ALTER TABLESPACE
Time: 2.072 ms
(postgres@[local]:5439) [postgres] > \db
          List of tablespaces
    Name    |  Owner   |   Location    
------------+----------+---------------
 pg_default | postgres | 
 pg_global  | postgres | 
 tbs1       | u1       | /var/tmp/tbs1
(3 rows)

And finally you can set one or more tablespaces to be used as temporary tablespaces:

(postgres@[local]:5439) [postgres] > alter system set temp_tablespaces='tbs1';
ALTER SYSTEM
Time: 4.175 ms

(postgres@[local]:5439) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Time: 3.638 ms
(postgres@[local]:5439) [postgres] > show temp_tablespaces ;
 temp_tablespaces 
------------------
 tbs1
(1 row)

Conclusion: Yes, you can have tablespaces in PostgreSQL and they give you great flexibility in how you organize your PostgreSQL files on disk. The implementation is very different from that of other vendors, though.
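
One more example of that flexibility (a small sketch, not shown in the demo above): you can move an existing table to another tablespace at any time and PostgreSQL will move the underlying files for you (the table is locked while this happens):

(postgres@[local]:5439) [postgres] > alter table t1 set tablespace pg_default;
ALTER TABLE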

 

Cet article Can I do it with PostgreSQL? – 3 – Tablespaces est apparu en premier sur Blog dbi services.

Documentum story – Lockbox file is newer than the active Lockbox library

Fri, 2016-11-25 02:00

A few weeks ago at a customer, I was installing a completely new sandbox environment for one of our Application Teams. In this environment we used a new version stack with the most recent patch set (at that time): CS 7.2 P16, D2 4.6 P05, and so on… A new patch level means a lot of “fun” trying to install and execute the updated installers of Documentum/BPM/D2… and of course a lot of new bugs ;). In this blog I will talk about an issue we had never faced before, related to the Lockbox libraries.

 

With the older version stack, when we installed D2, we always kept the D2 Lockbox libraries under $DOCUMENTUM/d2-lib/lockbox on the CS and then referenced these libraries in the environment variables ($PATH, $LD_LIBRARY_PATH and $CLASSPATH) of our Installation Owner. The documentation does not ask you to make these variables “permanent” by adding them to the .bash_profile or something similar, but this is what we used to do to avoid setting them again and again each time we needed to update the D2 Lockbox.

 

For this new version stack, and since the Official Documentation didn't change, we kept the same approach. After installing D2, we had to restart the Content Server for maintenance at the OS level. As you probably already know if you are familiar with CS 7.2, there is also a Lockbox specific to the CS that is normally stored under $DOCUMENTUM/dba/secure/. Whenever you restart the host, you need to put the Lockbox into the Shared Memory again, and that's how it is done:

[dmadmin@content_server_01 ~]$ pwlockbox="L0ckb0xP4ssphr4se"
[dmadmin@content_server_01 ~]$ pwaek="A3kP4ssphr4se"
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -passphrase $pwaek -all

Please wait. This will take a few seconds ...

Please wait, this will take a few seconds..
Setting up the (single) passphrase for all keys in the shared memory region..
Operation succeeded
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -resetfingerprint
Lockbox lockbox.lb
Lockbox Path $DOCUMENTUM/dba/secure/lockbox.lb
Reset host done

 

These two commands completed properly, so the Lockbox has – normally – been put into the Shared Memory and the docbases should now be able to start. As explained before, we reference the D2 Lockbox libraries in the environment variables by default and not the CS-specific versions. So I tried to start the docbase as always and checked the logs to ensure that it was running properly:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_start_GR_DOCBASE
starting Documentum server for repository: [GR_DOCBASE]
with server log: [$DOCUMENTUM/dba/log/GR_DOCBASE.log]
server pid: 7056
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM/dba/log/GR_DOCBASE.log
The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
2016-09-29T12:02:38.314624      7056[7056]      0000000000000000        [DM_CRYPTO_F_KEYSTORE_INIT]fatal:  "Failed to initialize keystore at $DOCUMENTUM/dba/secure/CSaek. Internal error - 1057226514"

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$

 

So that’s the error I wanted to share in this blog. As you can see, the docbase wasn’t able to start because the lockbox file is “newer” than the active Lockbox libraries. So I started to do some tests to try to understand what was happening exactly and how to solve it.

 

Ok, so the first test I did was to remove the D2 Lockbox libraries from the environment variables (.bash_profile + reload of the shell session) and then try again:

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ exit
logout
[morganpatou@content_server_01 ~]$ su - dmadmin
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ pwlockbox="L0ckb0xP4ssphr4se"
[dmadmin@content_server_01 ~]$ pwaek="A3kP4ssphr4se"
[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -passphrase $pwaek -all

Please wait. This will take a few seconds ...

Please wait, this will take a few seconds..
Setting up the (single) passphrase for all keys in the shared memory region..
Operation succeeded
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -resetfingerprint
Lockbox lockbox.lb
Lockbox Path $DOCUMENTUM/dba/secure/lockbox.lb
Lockbox open failed The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
** Operation failed **
[dmadmin@content_server_01 ~]$

 

As you can see, after removing the D2 Lockbox libraries from the environment, the Lockbox can't be stored in the Shared Memory anymore… So based on this information, it is clear that the Content Server Lockbox libraries (stored under $DM_HOME/bin) are different from the D2 Lockbox libraries (in our case stored under $DOCUMENTUM/d2-lib/lockbox)… It's “fun” that two patches released at exactly the same time use different versions of the Lockbox libraries… The annoying point here is that the CS Lockbox isn't readable anymore using the CS Lockbox libraries, which can be checked using the following command:

[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -keyname CSaek -passphrase $pwaek -algorithm AES_256_CBC -check

The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
** No AEK store exists in lockbox lockbox.lb and got status code returned as '1057226514'.
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$

 

The problem I can see here is that when installing D2, the CS Lockbox has apparently somehow been corrupted by the D2 Lockbox libraries. The fact is that when a docbase tries to start, the start script (dm_start_GR_DOCBASE) sets the environment using the script $DM_HOME/bin/dm_set_server_env.sh, which overwrites our environment variables because this file uses the CS Lockbox libraries. That's the reason why the docbase wasn't able to start earlier: the docbase tries to start using the CS libraries (older) while the Lockbox has been tampered with by the D2 libraries (newer).

 

There are several ways to quickly work around this specific issue and I will try to describe them below.

 

First solution: Create a copy of the dm_crypto_create and dm_crypto_create.bin files and force the copy to use the environment variables defined in the .bash_profile instead of resetting them via dm_set_server_env.sh (by removing the 3 lines related to LD_LIBRARY_PATH):

[dmadmin@content_server_01 ~]$ cd $DM_HOME/bin
[dmadmin@content_server_01 bin]$ cp dm_crypto_create my_dm_crypto_create
[dmadmin@content_server_01 bin]$ cp dm_crypto_create.bin my_dm_crypto_create.bin
[dmadmin@content_server_01 bin]$ vi my_dm_crypto_create
[dmadmin@content_server_01 bin]$ diff dm_crypto_create my_dm_crypto_create
21,23d20
< else
<   LD_LIBRARY_PATH=${location}
<   export LD_LIBRARY_PATH

 

Once this is done, you can compare the result of the check command:

[dmadmin@content_server_01 bin]$ ./dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -keyname CSaek -passphrase $pwaek -algorithm AES_256_CBC -check

The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
** No AEK store exists in lockbox lockbox.lb and got status code returned as '1057226514'.
[dmadmin@content_server_01 bin]$
[dmadmin@content_server_01 bin]$
[dmadmin@content_server_01 bin]$ ./my_dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -keyname CSaek -passphrase $pwaek -algorithm AES_256_CBC -check

Key - CSaek uses algorithm AES_256_CBC.

** An AEK store with the given passphrase exists in lockbox lockbox.lb and got status code returned as '0'.
[dmadmin@content_server_01 bin]$
[dmadmin@content_server_01 bin]$

 

=> As you can see, this is now working with the updated script but that’s not really a sexy solution…

 

 

Second solution: Update the file dm_set_server_env.sh to force it to use the D2 Lockbox libraries instead of the CS ones. You can check the Official Documentation on how to set up the environment variables for D2 and then just put that at the end of this file. In our case, this was something like:

[dmadmin@content_server_01 bin]$ tail -6 $DM_HOME/bin/dm_set_server_env.sh
export LD_LIBRARY_PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
export PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$PATH
export CLASSPATH=$DOCUMENTUM/d2-lib/D2.jar:$DOCUMENTUM/d2-lib/LB.jar:$DOCUMENTUM/d2-lib/LBJNI.jar:$CLASSPATH
# The environment that exists after this script executes is output to
# the dm_set_server_env.log file in $DOCUMENTUM_SHARED/logs
env >> $DOCUMENTUM_SHARED/logs/dm_set_server_env.log

 

This solution is a little bit better, but as soon as you upgrade/migrate your Content Server, this file might be overwritten by the patch and you will face this issue again… :(

 

Solution N°2 is what EMC recommended to us, so that's what we did in the first place and we were able to start the docbase, and so on… But one week later we tried to install a new docbase, and the Server Configuration Program wasn't able to read the Lockbox anymore, so we weren't able to create any new docbase! This is an extract from the Server Configuration Program log file:

13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserSelection: "lockbox.lb"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserSelection: "AES_256_CBC"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserInput: "CSaek"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserInputPassword: "**********"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserSelection: "true"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserInput: "lockbox.lb"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserInputPassword: "**********"
13:58:10,507 ERROR [Thread-97] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - Check AEK key passphrase failed

 

Therefore solutions 1 and 2 are actually not good because we are able to start the docbase but not to install a new one… So we had to find another, better solution, and it all started with this question: which component(s) actually use the D2 Lockbox on the Content Server? Of course the answer to this question is: only the Java Method Server.

 

Based on this information, we decided to restrict access to the D2 Lockbox libraries to the JMS only. This was done by removing the environment variables mentioned above from the .bash_profile and/or from dm_set_server_env.sh and then adding the same to the JMS startup script:

[dmadmin@content_server_01 ~]$ head -6 $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh
#!/bin/sh

#for D2.lockbox
export LD_LIBRARY_PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
export PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$PATH
export CLASSPATH=$DOCUMENTUM/d2-lib/D2.jar:$DOCUMENTUM/d2-lib/LB.jar:$DOCUMENTUM/d2-lib/LBJNI.jar:$CLASSPATH

 

After that, we “simply” regenerated the Content Server Lockbox with the CS libraries (basically you need to completely remove the current encryption inside the docbase and force the CS to recreate it from scratch… Arnaud Berbier will most probably write a blog on this subject in the next few days/weeks). Then we restarted the whole Content Server, everything was working properly with the right Lockbox libraries and we were able to create new docbases.

 

Cet article Documentum story – Lockbox file is newer than the active Lockbox library est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 2 – Dual

Fri, 2016-11-25 00:42

In the first post of this series we talked about restore points. Another question that pops up from time to time is how you can do things in PostgreSQL that would require the dual table in Oracle. Let's go…

The question is: When do you need the dual table in Oracle? Well, every time you have nothing to select from, meaning there is no table you could provide in the from clause, and you need exactly one row. This could be the case when you want to do math:

SQL> select 1+2+3*4/2 from dual;

 1+2+3*4/2
----------
	 9

This can be the case when you want to generate test data:

SQL> select 'a' from dual connect by level <= 5;

'
-
a
a
a
a
a

This can be the case when you want to select from a PL/SQL function, such as:

SQL> create table ta (a number);

Table created.

SQL> select dbms_metadata.get_ddl('TABLE','TA',USER) from dual;

DBMS_METADATA.GET_DDL('TABLE','TA',USER)
--------------------------------------------------------------------------------

  CREATE TABLE "SYS"."TA"
   (	"A" NUMBER
   ) PCTFREE 10 PCTUSED 40 INITRANS

… and many more.

The easy answer to the question of whether you can do this in PostgreSQL is: you don't need to. Why? Because you can do things like this:

(postgres@[local]:5439) [postgres] > select 'Time for a beer';
    ?column?     
-----------------
 Time for a beer
(1 row)

… or this:

(postgres@[local]:5439) [postgres] > select 1+2+3*4/2;
 ?column? 
----------
        9
(1 row)

The same is true for getting the results of a function:

(postgres@[local]:5439) [postgres] > create function f1 (integer,integer) returns integer
as 'select $1 * $2;'
language sql;
CREATE FUNCTION
Time: 249.499 ms
(postgres@[local]:5439) [postgres] > select f1(5,5);
 f1 
----
 25
(1 row)

PostgreSQL does not force you to provide a table to select from; you can completely skip it. It looks strange when you are used to working with Oracle, I know, but hey: this is much easier. Why provide a from clause when it is not necessary?
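
The same is true for generating test data: what needs “connect by level” against dual in Oracle is just a set returning function in PostgreSQL (a small sketch):

(postgres@[local]:5439) [postgres] > select 'a' from generate_series(1,5);
 ?column? 
----------
 a
 a
 a
 a
 a
(5 rows)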

If you really, really can’t live without dual:

(postgres@[local]:5439) [postgres] > create table dual (dummy varchar(1));
CREATE TABLE
(postgres@[local]:5439) [postgres] > insert into dual (dummy) values ('a');
INSERT 0 1
(postgres@[local]:5439) [postgres] > select 'I can not live without dual' from dual;
          ?column?           
-----------------------------
 I can not live without dual
(1 row)
(postgres@[local]:5439) [postgres] > select 1+2+3*4/2 from dual;
 ?column? 
----------
        9
(1 row)

And here you go …
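
A slightly nicer variant, if you go down that road (just a sketch): create dual as a view, so nobody can accidentally add a second row to it:

(postgres@[local]:5439) [postgres] > drop table dual;
DROP TABLE
(postgres@[local]:5439) [postgres] > create view dual as select 'X'::varchar as dummy;
CREATE VIEW
(postgres@[local]:5439) [postgres] > select 1+2+3*4/2 from dual;
 ?column? 
----------
        9
(1 row)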

 

Cet article Can I do it with PostgreSQL? – 2 – Dual est apparu en premier sur Blog dbi services.

Observing DML restarts caused by invalidations

Thu, 2016-11-24 14:53

Usually, cursor invalidations do not interrupt the current execution of the cursor. You can even drop a table while a query on it is running and the query may still end successfully. However, some invalidations throw internal errors that are caught, and the cursor execution restarts transparently. I've reproduced one here to show the symptoms: each restart gets a different SQL_EXEC_ID, so the restarts are not accounted within the same SQL Monitoring report.

I had that question because I encountered a SQL Monitoring report where the ‘executions’ count of the INSERT operation was more than one. I wondered if it could be a case of DML restart, or if a DML restart would be accounted under a different SQL_EXEC_ID. I got a quick answer from a very reliable source:

@FranckPachot Yep!

— Stefan Koehler (@OracleSK) November 21, 2016

However, for my own understanding, I'm now reproducing a case of DML restart to show the symptoms in V$SQL and SQL Monitoring.

I need a table with some rows:

SQL> create table DEMO as select rownum n from xmltable('1 to 3');
Table created.

Initially my problem was with an insert into a GTT but now I’m reproducing the case with a partitioned table:

SQL> create table DEMOGTT (n number) partition by hash(n);
Table created.

For internal reasons, when a cursor is invalidated after it has acquired a lock on a partition, a DML restart occurs. If in doubt, “oerr ora 14403” explains that.

The easiest way to invalidate the cursor while it is running is to call a function that does it for each row. And as you will see, it can be restarted indefinitely, so I set a timeout timestamp 30 seconds ahead.


SQL> column limit new_value limit
SQL> select to_char(sysdate+30/24/60/60,'yyyymmddhh24miss') limit from dual;
 
LIMIT
--------------
20161124212154
 
SQL> create function DEMOSLOW(n number) return number as
2 pragma autonomous_transaction;
3 begin
4 if sysdate < to_date('&limit','yyyymmddhh24miss')
5 then
6 dbms_stats.gather_table_stats(user,'DEMOGTT',no_invalidate=>false);
7 end if;
8 return n;
9 end;
10 /
old 4: if sysdate < to_date('&limit','yyyymmddhh24miss')
new 4: if sysdate < to_date('20161124212154','yyyymmddhh24miss')
 
Function created.

Time to run the insert, calling the function for each row:

SQL> alter session set statistics_level=all;
Session altered.
 
SQL> set timing on time on
21:21:24 SQL> insert /*+ monitor */ into DEMOGTT select n from DEMO where n=DEMOSLOW(n);
3 rows created.
 
Elapsed: 00:00:29.30
21:21:54 SQL>

This took 30 seconds. Without the timeout, it never stops.

The execution plan for the last run shows only 1 in the ‘Starts’ column:

21:21:54 SQL> select * from table(dbms_xplan.display_cursor('','','allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 3fk1hyq9vnuzx, child number 0
-------------------------------------
insert /*+ monitor */ into DEMOGTT select n from DEMO where
n=DEMOSLOW(n)
 
Plan hash value: 4000794843
 
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:00.01 | 138 |
| 1 | LOAD TABLE CONVENTIONAL | DEMOGTT | 1 | | 0 |00:00:00.01 | 138 |
|* 2 | TABLE ACCESS FULL | DEMO | 1 | 1 | 3 |00:00:00.01 | 9 |
----------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("N"="DEMOSLOW"("N"))

This is proof that DML restarts are not accounted within the same execution: the ‘last’ statistics are only from the last restart.

V$SQL accounts for only one execution but thousands of invalidations:

21:21:54 SQL> select executions,invalidations from v$sql where sql_id='3fk1hyq9vnuzx';
 
EXECUTIONS INVALIDATIONS
---------- -------------
1 1571

ASH shows the different SQL_EXEC_ID:

21:21:54 SQL> select sample_time,sql_id,sql_exec_id from v$active_session_history where sql_id='3fk1hyq9vnuzx' and sample_time>sysdate-10/24/60/60 order by sample_time desc;
 
SAMPLE_TIME SQL_ID SQL_EXEC_ID
--------------------------------------------------------------------------- ------------- -----------
24-NOV-16 09.21.53.773 PM 3fk1hyq9vnuzx 16778774
24-NOV-16 09.21.51.773 PM 3fk1hyq9vnuzx
24-NOV-16 09.21.48.773 PM 3fk1hyq9vnuzx 16778501
24-NOV-16 09.21.46.773 PM 3fk1hyq9vnuzx 16778396
24-NOV-16 09.21.45.773 PM 3fk1hyq9vnuzx 16778341

and SQL Monitoring sees different executions:

21:21:54 SQL> select sql_id,sql_exec_id,status,last_refresh_time,plan_operation,starts from v$sql_plan_monitor where sql_id='3fk1hyq9vnuzx' and plan_line_id=0 order by last_refresh_time desc fetch first 10 rows only;
 
SQL_ID SQL_EXEC_ID STATUS LAST_REFRESH_TI PLAN_OPERATION STARTS
------------- ----------- ------------------- --------------- ------------------------------ ----------
3fk1hyq9vnuzx 16778762 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778765 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778767 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778766 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778771 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778769 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778764 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778770 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778763 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778768 DONE 24-nov 21:21:53 INSERT STATEMENT 1

So those are the symptoms of DML restart. And my initial problem is still there:

  • This test case does not reproduce the issue on a GTT
  • My issue on the GTT had only one SQL_EXEC_ID with multiple ‘starts’ of the plan operations.

#sqlmon #puzzled
I expected to see '1' as 'executions for the first line of a plan,
Here I have executions with 1,12,19,64… ?!? pic.twitter.com/yHPZXDprgX

— Franck Pachot (@FranckPachot) November 21, 2016

Any comments greatly appreciated :)

 

Cet article Observing DML restarts caused by invalidations est apparu en premier sur Blog dbi services.

SQL Server on Linux in Azure

Thu, 2016-11-24 14:23

I’m pleased to announce that Microsoft gives now the opportunity to choose Linux as a new operating system for SQL Server.

To try this new version, you can find a new template in the Azure Marketplace.

In the search bar, type “SQL Server vNext” and you'll find “SQL Server vNext on Red Hat Enterprise Linux 7.2”.

SQL Server vNext 01

Follow all the “classic” steps to create your new SQL Server on Linux VM:

SQL Server vNext 02

First step: choose your “deployment mode”.

SQL Server vNext 04

After this first step, choose a name, a disk type, a login/password pair and the location.

SQL Server vNext 05

The next step is to choose your storage:

SQL Server vNext 06

In the last step, configure your options:

SQL Server vNext 07

Wait a little bit for the provisioning and at the end you'll have your SQL Server on Linux VM ready to use:

SQL Server vNext 11

 

Et voilà! My first SQL Server on Linux VM in the Cloud! It's easy and fast to create…

The next blog will be about how to use it ;-)

 

Cet article SQL Server on Linux in Azure est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 1 – Restore points

Thu, 2016-11-24 11:03

When discussing PostgreSQL with customers we often hear that they can do things in another database that they cannot do in PostgreSQL. Most of the time this is not true and you actually can do it in PostgreSQL. Maybe not in exactly the same way, but this is not surprising, as PostgreSQL often implements features differently from other vendors.

To start this series we'll talk about restore points. Of course you can create restore points in PostgreSQL and then restore up to such a point in case you need to (e.g. after a failed schema or application upgrade, or just for testing purposes). Let's go…

We’ll use the latest version of PostgreSQL which is 9.6.1 currently:

postgres@pgbox:/home/postgres/ [PG961] sqh
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
(1 row)

Time: 47.119 ms
(postgres@[local]:5439) [postgres] > 

When we want to do point in time recovery we need to set up archiving. Without going into the details (as this is out of scope here), the parameters which need to be adjusted are these (if not already done):

(postgres@[local]:5439) [postgres] > alter system set wal_level = 'replica';
ALTER SYSTEM
Time: 28.056 ms
(postgres@[local]:5439) [postgres] > alter system set archive_command='test ! -f /u90/pgdata/PG961/%f && cp %p /u90/pgdata/PG961/%f'; 
ALTER SYSTEM
Time: 20.925 ms
(postgres@[local]:5439) [postgres] > alter system set archive_mode ='on';
ALTER SYSTEM
Time: 5.307 ms
(postgres@[local]:5439) [postgres] > select name,context from pg_settings where name in ('archive_mode','archive_command','wal_level');
      name       |  context   
-----------------+------------
 archive_command | sighup
 archive_mode    | postmaster
 wal_level       | postmaster
(3 rows)

Time: 1.460 ms

Be sure to restart your instance before you continue: changing archive_mode and wal_level cannot be done online. Once you have restarted, make sure that your archive_command really succeeds:

(postgres@[local]:5439) [postgres] > create database test1;
CREATE DATABASE
Time: 1705.539 ms
(postgres@[local]:5439) [postgres] > drop database test1;
DROP DATABASE
Time: 107.283 ms
(postgres@[local]:5439) [restore] > select pg_switch_xlog();
 pg_switch_xlog 
----------------
 0/22001798
(1 row)

Time: 214.216 ms
(postgres@[local]:5439) [postgres] > \! ls -l /u90/pgdata/PG961/
total 16384
-rw-------. 1 postgres postgres 16777216 Nov 24 17:34 000000020000000000000022

If you cannot see an archived WAL in the last step, you did something wrong. The next bit you need when you want to do point in time recovery with PostgreSQL is a base backup:

postgres@pgbox:/u02/pgdata/PG961/ [PG961] mkdir /u90/pgdata/PG961/basebackups
postgres@pgbox:/u02/pgdata/PG961/ [PG961] pg_basebackup -x -D /u90/pgdata/PG961/basebackups/
postgres@pgbox:/u02/pgdata/PG961/ [PG961] ls /u90/pgdata/PG961/basebackups/
backup_label  pg_commit_ts   pg_log        pg_replslot   pg_stat_tmp  PG_VERSION
base          pg_dynshmem    pg_logical    pg_serial     pg_subtrans  pg_xlog
global        pg_hba.conf    pg_multixact  pg_snapshots  pg_tblspc    postgresql.auto.conf
pg_clog       pg_ident.conf  pg_notify     pg_stat       pg_twophase  postgresql.conf

Fine. Let's generate some test data with this simple script:

(postgres@[local]:5439) [postgres] > \! cat a.sql
\c postgres
drop database if exists restore;
create database restore;
\c restore
create table t1 ( a int );
insert into t1 (a)
       values (generate_series(1,1000000));
select count(*) from t1;
\d t1

When you run this you’ll get a table (t1) containing 1 million rows:

(postgres@[local]:5439) [postgres] > \i a.sql
You are now connected to database "postgres" as user "postgres".
DROP DATABASE
Time: 114.000 ms
CREATE DATABASE
Time: 1033.245 ms
You are now connected to database "restore" as user "postgres".
CREATE TABLE
Time: 5.917 ms
INSERT 0 1000000
Time: 2226.599 ms
  count  
---------
 1000000
(1 row)

Time: 65.864 ms
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

Ok, fine. Now we are ready to test restore points. Let's say you want to do some modifications to your table and, to be on the safe side, you want to create a restore point beforehand. No problem:

(postgres@[local]:5439) [postgres] > select pg_create_restore_point('RP1');
 pg_create_restore_point 
-------------------------
 0/28D50EF8
(1 row)

Time: 0.825 ms

Quite easy and fast. Now let's play with our table:

(postgres@[local]:5439) [restore] > select count(*) from t1;
  count  
---------
 1000010
(1 row)

Time: 66.214 ms
(postgres@[local]:5439) [restore] > \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

(postgres@[local]:5439) [restore] > alter table t1 add column b varchar(10);
ALTER TABLE
Time: 1.810 ms
(postgres@[local]:5439) [restore] > update t1 set b='b';
UPDATE 1000010
Time: 11004.972 ms
(postgres@[local]:5439) [restore] > drop table t1;
DROP TABLE
Time: 238.329 ms

Oops, table gone. How can we now go back to the restore point created above? Quite easy:

Shut down your instance and copy back the base backup:

postgres@pgbox:/u02/pgdata/PG961/ [PG961] rm -rf pg_xlog
postgres@pgbox:/u02/pgdata/PG961/ [PG961] cp -pr /u90/pgdata/PG961/basebackups/* $PGDATA
cp: cannot overwrite non-directory ‘/u02/pgdata/PG961/pg_xlog’ with directory ‘/u90/pgdata/PG961/basebackups/pg_xlog’
postgres@pgbox:/u02/pgdata/PG961/ [PG961] ln -s /u03/pgdata/PG961/ pg_xlog

Then create a recovery.conf file (for telling PostgreSQL to go into recovery mode when it comes up) and specify the restore point you created above:

postgres@pgbox:/home/postgres/ [PG961] echo "restore_command = 'cp /u90/pgdata/PG961/%f %p'
> recovery_target_name = 'RP1'" > $PGDATA/recovery.conf
postgres@pgbox:/home/postgres/ [PG961] cat $PGDATA/recovery.conf
restore_command = 'cp /u90/pgdata/PG961/%f %p'
recovery_target_name = 'RP1'

Start the instance and check the log file:

LOG:  database system was interrupted; last known up at 2016-11-24 17:36:28 CET
LOG:  creating missing WAL directory "pg_xlog/archive_status"
LOG:  starting point-in-time recovery to "RP1"

If everything went fine your table should be back without the additional column:

(postgres@[local]:5439) [restore] > \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

(postgres@[local]:5439) [restore] > select count(*) from t1;
  count  
---------
 1000000
(1 row)

Time: 82.797 ms

So, yes, you can definitely use restore points with PostgreSQL :)
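
As a side note: instead of a named restore point you can also recover to a point in time or to a transaction id. The recovery.conf would then look like this (the parameters exist in 9.6, the values here are made up):

restore_command = 'cp /u90/pgdata/PG961/%f %p'
recovery_target_time = '2016-11-24 17:40:00'
# alternatively: recovery_target_xid = '123456'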

If you want me to blog about any feature you are not sure exists in PostgreSQL, let me know.

 

Cet article Can I do it with PostgreSQL? – 1 – Restore points est apparu en premier sur Blog dbi services.
