Yann Neuhaus

dbi services technical blog

Can I do it with PostgreSQL? – 5 – Generating DDL commands

Wed, 2016-11-30 11:17

From time to time it is very useful to be able to generate the DDL commands for existing objects (tables, indexes, a whole schema, …). In Oracle you can either use the dbms_metadata PL/SQL package for this or use expdp/impdp to generate the statements out of a dump file. What options do you have in PostgreSQL? Note: we’ll not look at any third-party tools you could use for that, only plain PostgreSQL.

As always we’ll need some objects to test with, so here we go:

\c postgres
drop database if exists ddl;
create database ddl;
\c ddl
create table t1 ( a int, b int );
insert into t1 (a,b)
       values (generate_series(1,1000000)
              ,generate_series(1,1000000));
select count(*) from t1;
create index i1 on t1(a);
create unique index i2 on t1(b);
create view v1 as select a from t1;
alter table t1 add constraint con1 check ( a < 2000000 );
\d t1
CREATE FUNCTION add(integer, integer) RETURNS integer
    AS 'select $1 + $2;'
    LANGUAGE SQL
    IMMUTABLE
    RETURNS NULL ON NULL INPUT;

PostgreSQL comes with a set of administration functions which can be used to query all kinds of metadata. Some of them return the definitions of your objects.

You can get the definition of a view:

(postgres@[local]:5439) [ddl] > select pg_get_viewdef('v1'::regclass, true);
 pg_get_viewdef 
----------------
  SELECT t1.a  +
    FROM t1;
(1 row)

You can get the definition of a constraint:

(postgres@[local]:5439) [ddl] > SELECT conname
                                     , pg_get_constraintdef(r.oid, true) as definition
                                  FROM pg_constraint r
                                 WHERE r.conrelid = 't1'::regclass;
 conname |     definition      
---------+---------------------
 con1    | CHECK (a < 2000000)

You can get the definition of a function:

(postgres@[local]:5439) [ddl] > SELECT proname
     , pg_get_functiondef(a.oid)
  FROM pg_proc a
 WHERE a.proname = 'add';
 proname |                   pg_get_functiondef                    
---------+---------------------------------------------------------
 add     | CREATE OR REPLACE FUNCTION public.add(integer, integer)+
         |  RETURNS integer                                       +
         |  LANGUAGE sql                                          +
         |  IMMUTABLE STRICT                                      +
         | AS $function$select $1 + $2;$function$                 +
         | 
--OR
(postgres@[local]:5439) [ddl] > SELECT pg_get_functiondef(to_regproc('add'));
                   pg_get_functiondef                    
---------------------------------------------------------
 CREATE OR REPLACE FUNCTION public.add(integer, integer)+
  RETURNS integer                                       +
  LANGUAGE sql                                          +
  IMMUTABLE STRICT                                      +
 AS $function$select $1 + $2;$function$                 +

You can get the definition of an index:

(postgres@[local]:5439) [ddl] > select pg_get_indexdef('i1'::regclass);
            pg_get_indexdef            
---------------------------------------
 CREATE INDEX i1 ON t1 USING btree (a)
(1 row)

But, surprisingly, you cannot get the DDL for a table. There is just no function available to do this. How can you do that without concatenating the definitions you can get out of the PostgreSQL catalog yourself? The only option I am aware of is pg_dump:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -s -t t1 ddl | egrep -v "^--|^$"
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
CREATE TABLE t1 (
    a integer,
    b integer,
    CONSTRAINT con1 CHECK ((a < 2000000))
);
ALTER TABLE t1 OWNER TO postgres;
CREATE INDEX i1 ON t1 USING btree (a);
CREATE UNIQUE INDEX i2 ON t1 USING btree (b);

Using the “-s” (schema only) and “-t” (table) options you get the DDL for the complete table. It is not as handy as in Oracle, where you can do this directly in sqlplus, but it works and produces a result you can work with.

Of course you can always create the DDL on your own by querying the catalog, e.g. pg_attribute, which holds the column definitions for all tables (a small query sketch follows the listing below):

    Table "pg_catalog.pg_attribute"
    Column     |   Type    | Modifiers 
---------------+-----------+-----------
 attrelid      | oid       | not null
 attname       | name      | not null
 atttypid      | oid       | not null
 attstattarget | integer   | not null
 attlen        | smallint  | not null
 attnum        | smallint  | not null
 attndims      | integer   | not null
 attcacheoff   | integer   | not null
 atttypmod     | integer   | not null
 attbyval      | boolean   | not null
 attstorage    | "char"    | not null
 attalign      | "char"    | not null
 attnotnull    | boolean   | not null
 atthasdef     | boolean   | not null
 attisdropped  | boolean   | not null
 attislocal    | boolean   | not null
 attinhcount   | integer   | not null
 attcollation  | oid       | not null
 attacl        | aclitem[] | 
 attoptions    | text[]    | 
 attfdwoptions | text[]    | 
Indexes:
    "pg_attribute_relid_attnam_index" UNIQUE, btree (attrelid, attname)
    "pg_attribute_relid_attnum_index" UNIQUE, btree (attrelid, attnum)

One nasty way, which is even documented on the PostgreSQL wiki, is this:

(postgres@[local]:5439) [ddl] > create extension plperlu;
CREATE EXTENSION
Time: 90.074 ms
(postgres@[local]:5439) [ddl] > \dx
                      List of installed extensions
  Name   | Version |   Schema   |              Description               
---------+---------+------------+----------------------------------------
 plperlu | 1.0     | pg_catalog | PL/PerlU untrusted procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [ddl] > CREATE OR REPLACE FUNCTION system(text) RETURNS text 
AS 'my $cmd=shift; return `cd /tmp;$cmd`;' LANGUAGE plperlu;
CREATE FUNCTION

(postgres@[local]:5439) [ddl] > select system('pg_dump -s -t t1 ddl | egrep -v "^--|^$"');
                    system                     
-----------------------------------------------
 SET statement_timeout = 0;                   +
 SET lock_timeout = 0;                        +
 SET idle_in_transaction_session_timeout = 0; +
 SET client_encoding = 'UTF8';                +
 SET standard_conforming_strings = on;        +
 SET check_function_bodies = false;           +
 SET client_min_messages = warning;           +
 SET row_security = off;                      +
 SET search_path = public, pg_catalog;        +
 SET default_tablespace = '';                 +
 SET default_with_oids = false;               +
 CREATE TABLE t1 (                            +
     a integer,                               +
     b integer,                               +
     CONSTRAINT con1 CHECK ((a < 2000000))    +
 );                                           +
 ALTER TABLE t1 OWNER TO postgres;            +
 CREATE INDEX i1 ON t1 USING btree (a);       +
 CREATE UNIQUE INDEX i2 ON t1 USING btree (b);+
 

This can be a workaround. Hope this helps…

 


Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE

Wed, 2016-11-30 07:03

I have set up a DataGuard environment and followed the instructions from Oracle to create the Standby Redo Logs. The Standby Redo Logs have to be the same size as the Online Redo Logs, otherwise the RFS process won’t attach them, and you should have at least one more Standby Redo Log group than Online Redo Log groups per thread.
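
A quick sanity check for that rule is to compare the Online and Standby Redo Log group counts per thread. This is only a rough sketch, built on the standard v$log and v$standby_log views used later in this post:

-- count Online vs. Standby Redo Log groups per thread (sketch, run on the standby)
select l.thread#,
       count(distinct l.group#) as orl_groups,
       count(distinct s.group#) as srl_groups
  from v$log l
  left join v$standby_log s on s.thread# = l.thread#
 group by l.thread#;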

For my single instance this should be quite straightforward, so I issued the following commands on the primary and the standby.

alter database add standby logfile group 4 size 1073741824;
alter database add standby logfile group 5 size 1073741824;
alter database add standby logfile group 6 size 1073741824;
alter database add standby logfile group 7 size 1073741824;

After setting everything up, I started the new cool Broker command “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” and surprisingly found that the validation complains about insufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       3                       Insufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       3                       Insufficient SRLs

After checking everything on the primary and the standby, the number of log groups and the sizes looked OK: I do have 3 Online Redo Log groups of 1G each and 4 Standby Redo Log groups of 1G each.

-- Standby

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1          0 UNUSED           1073741824
         1          3          0 UNUSED           1073741824
         1          2          0 UNUSED           1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5        552 ACTIVE     1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

-- Primary

SQL> select thread#, group#, sequence#, status, bytes from v$log;

   THREAD#     GROUP#  SEQUENCE# STATUS                BYTES
---------- ---------- ---------- ---------------- ----------
         1          1        550 INACTIVE         1073741824
         1          2        551 INACTIVE         1073741824
         1          3        552 CURRENT          1073741824

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4          0 UNASSIGNED 1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         0          7          0 UNASSIGNED 1073741824

 

The only strange thing is that Standby Redo Log group 7 shows up with thread 0 instead of thread 1.
I did not even know that a thread 0 exists. Thread numbering always starts with 1, and in case of RAC you might see thread 2, 3 or more. But if you want to, you can perfectly well create thread 0 without any issue; for what reason, I don’t know.

SQL> alter database add standby logfile thread 0 group 8 size 1073741824;

Database altered.

OK. Let’s correct the thread 0 issue, and then let’s see what the “DGMGRL> VALIDATE DATABASE VERBOSE ‘<DB>';” shows.

-- On Standby
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-OFF';
Succeeded.

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

SQL> select thread#, group#, sequence#, status, bytes from v$standby_log;

   THREAD#     GROUP#  SEQUENCE# STATUS          BYTES
---------- ---------- ---------- ---------- ----------
         1          4        553 ACTIVE     1073741824
         1          5          0 UNASSIGNED 1073741824
         1          6          0 UNASSIGNED 1073741824
         1          7          0 UNASSIGNED 1073741824
		 
DGMGRL> EDIT DATABASE 'OCM12C_SITE1' SET STATE = 'APPLY-ON';
Succeeded.
		 
-- On Primary

SQL> alter database drop standby logfile group 7;

Database altered.

SQL> alter database add standby logfile thread 1 group 7 size 1073741824;

Database altered.

And here we go. Now I have sufficient Standby Redo Logs.

  Current Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE2)          (OCM12C_SITE1)
    1         3                       4                       Sufficient SRLs

  Future Log File Groups Configuration:
    Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (OCM12C_SITE1)          (OCM12C_SITE2)
    1         3                       4                       Sufficient SRLs

 

Conclusion

Even on a single instance, use the thread number in your create Standby Redo Log statement.

alter database add standby logfile thread 1 group 4 size 1073741824;
alter database add standby logfile thread 1 group 5 size 1073741824;
alter database add standby logfile thread 1 group 6 size 1073741824;
alter database add standby logfile thread 1 group 7 size 1073741824;

Cheers,
William

 


Encryption in Oracle Public Cloud

Tue, 2016-11-29 12:04

Oracle Transparent Data Encryption is available without an extra option on the Oracle Public Cloud: Standard Edition as well as Enterprise Edition (EE, EE-HP, EE-EP, ECS). More than that, the DBaaS enforces TDE for any user tablespace, even when it is not specified in the CREATE TABLESPACE statement. If you are not familiar with TDE key management (wallets) then you have probably encountered ORA-28374: typed master key not found in wallet.
Rather than another tutorial on TDE I’ll try to explain it from the errors you may encounter when simply creating a tablespace.

I have created a new pluggable database PDB2 from the command line:

SQL> create pluggable database PDB2 admin user admin identified by "admin";
Pluggable database PDB2 created.
 
SQL> alter pluggable database PDB2 open read write;
Pluggable database PDB2 altered.
 
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------- --------- ----------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
7 PDB2 READ WRITE NO

I go to the PDB2 container and try to create a tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace mytablespace;
 
Error starting at line 1 in command -
create tablespace mytablespace
Error report -
ORA-28374: typed master key not found in wallet
28374. 0000 - "typed master key not found in wallet"
*Cause: You attempted to access encrypted tablespace or redo logs with
a typed master key not existing in the wallet.
*Action: Copy the correct Oracle Wallet from the instance where the tablespace
was created.

So, this message is related to the TDE wallet.

encrypt_new_tablespaces

I didn’t specify any encryption clause in the CREATE TABLESPACE command but it is activated by default by the following parameter:

SQL> show parameter encrypt_new_tablespaces
 
NAME TYPE VALUE
----------------------- ------ ----------
encrypt_new_tablespaces string CLOUD_ONLY

The possible values are DDL (the old behavior where encryption must be defined in the CREATE TABLESPACE statement), ALWAYS (AES128 encryption by default), and CLOUD_ONLY, which behaves like ALWAYS when the instance is in the Cloud and like DDL when the instance is on-premises. The default is CLOUD_ONLY.
This parameter has been introduced in 12.2 and has been backported to 11.2.0.4 and 12.1.0.2 with bug 21281607 that is applied on any Oracle Public Cloud DBaaS instance.

So, one solution to create our tablespace would be to set encrypt_new_tablespaces to DDL, but as it is recommended to encrypt all user tablespaces, let’s continue with encryption.
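
For completeness, this is what that alternative would look like; it is only a sketch, since we keep the default CLOUD_ONLY here:

SQL> alter system set encrypt_new_tablespaces=DDL scope=both;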

ORA-28374: typed master key not found in wallet

So the error message means that I don’t have a master key in the wallet for my newly created PDB, because in multitenant each PDB has its own master key (but there’s only one wallet for the CDB).
The wallet is opened:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

But empty (I’m still in the PDB2 container)

SQL> select * from v$encryption_keys order by creation_time;
no rows selected

SET KEY

So the idea is to set a key:

SQL> administer key management set key identified by "Ach1z0#d";

but:

Error starting at line 1 in command -
administer key management set key identified by "Ach1z0#d"
Error report -
ORA-28417: password-based keystore is not open
28417. 0000 - "password-based keystore is not open"
*Cause: Password-based keystore was not opened.
*Action: Close the auto login keystore, if required, and open a
password-based keystore.

Ok. An error because the wallet is not opened. Let’s try to open it:

SQL> administer key management set keystore open identified by "Ach1z0#d";
 
Error starting at line 1 in command -
administer key management set keystore open identified by "Ach1z0#d"
Error report -
ORA-28354: Encryption wallet, auto login wallet, or HSM is already open
28354. 0000 - "Encryption wallet, auto login wallet, or HSM is already open"
*Cause: Encryption wallet, auto login wallet, or HSM was already opened.
*Action: None.

Actually, the wallet is opened. We have seen that the opened wallet is AUTOLOGIN:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

On the DBaaS an AUTOLOGIN wallet is used so that the database can be restarted automatically without manual intervention. Without an AUTOLOGIN wallet you would have to provide the password at each startup.

But the AUTOLOGIN wallet can only be used to access the tablespaces.
When administering the wallet, we need to provide the password manually:

We need to close the AUTOLOGIN one:

SQL> administer key management set keystore close;
Key MANAGEMENT succeeded.

Now that it is closed, we can try to open it and open it with the password:

SQL> administer key management set keystore open identified by "Ach1z0#d";
 
Error starting at line : 1 in command -
administer key management set keystore open identified by "Ach1z0#d"
Error report -
ORA-28417: password-based keystore is not open
28417. 0000 - "password-based keystore is not open"
*Cause: Password-based keystore was not opened.
*Action: Close the auto login keystore, if required, and open a
password-based keystore.

Oh… it is opened AUTOLOGIN once again:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------------------- ------------ ------------- ---------------- ------
FILE OPEN_NO_MASTER_KEY AUTOLOGIN SINGLE UNDEFINED 7

CDB$ROOT

You need to open the wallet with the password from CDB$ROOT:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> administer key management set keystore close;
Key MANAGEMENT succeeded.
 
SQL> administer key management set keystore open identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

So here is the right way to start: in CDB$ROOT close the AUTOLOGIN wallet and open it with the password.

PDB

Now ready to go further in the PDB2.


SQL> alter session set container=PDB2;
Session altered.

The wallet is now closed for the PDB:

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- -------------- ------- ------------ ------------- ---------------- ------
FILE CLOSED UNKNOWN SINGLE UNDEFINED 7

Let’s open it manually:

SQL> administer key management set keystore open identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

We have no encryption key:

SQL> select * from v$encryption_keys order by creation_time;
no rows selected

Let’s do what we want to do from the get-go: create an encryption key for our PDB:

SQL> administer key management set key identified by "Ach1z0#d";
 
Error starting at line 1 in command -
administer key management set key identified by "Ach1z0#d"
Error report -
ORA-46631: keystore needs to be backed up
46631. 00000 - "keystore needs to be backed up"
*Cause: The keystore was not backed up. For this operation to proceed, the
keystore must be backed up.
*Action: Backup the keystore and try again.

Oh yes. Any change must be backed up. That’s easy:


SQL> administer key management set key identified by "Ach1z0#d" with backup;
Key MANAGEMENT succeeded.

Here we are. The key is there:


SQL> select * from v$encryption_keys order by creation_time;
 
KEY_ID TAG CREATION_TIME ACTIVATION_TIME CREATOR CREATOR_ID USER USER_ID KEY_USE KEYSTORE_TYPE ORIGIN BACKED_UP CREATOR_DBNAME CREATOR_DBID CREATOR_INSTANCE_NAME CREATOR_INSTANCE_NUMBER CREATOR_INSTANCE_SERIAL CREATOR_PDBNAME CREATOR_PDBID CREATOR_PDBUID CREATOR_PDBGUID ACTIVATING_DBNAME ACTIVATING_DBID ACTIVATING_INSTANCE_NAME ACTIVATING_INSTANCE_NUMBER ACTIVATING_INSTANCE_SERIAL ACTIVATING_PDBNAME ACTIVATING_PDBID ACTIVATING_PDBUID ACTIVATING_PDBGUID CON_ID
----------------------------------------------------- ---- --------------------------------------- --------------------------------------- -------- ----------- ----- -------- ----------- ------------------ ------- ---------- --------------- ------------- ---------------------- ------------------------ ------------------------ ---------------- -------------- --------------- --------------------------------- ------------------ ---------------- ------------------------- --------------------------- --------------------------- ------------------- ----------------- ------------------ --------------------------------- ------
AXP3BIrVW0+Evwfx7okZtcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.41.20.629496000 PM +00:00 28-NOV-16 08.41.20.629498000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL NO CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 7

All is perfect but the wallet is still opened with the password:

SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
-------- --------------- ------- ------------ ------------- --------------- -------
FILE OPEN PASSWORD SINGLE NO 7

In order to get back to the initial state, it is sufficient to close it (from the CDB$ROOT):


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> administer key management set keystore close;
 
Error starting at line 1 in command -
administer key management set keystore close
Error report -
ORA-28389: cannot close auto login wallet
28389. 00000 - "cannot close auto login wallet"
*Cause: Auto login wallet could not be closed because it was opened with
another wallet or HSM requiring a password.
*Action: Close the wallet or HSM with a password.

OK. The ‘close’ command needs the password because the wallet was not opened with the AUTOLOGIN one.


SQL> administer key management set keystore close identified by "Ach1z0#d";
Key MANAGEMENT succeeded.

It is immediately automatically re-opened with the AUTOLOGIN one:

SQL> select * from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- --------------------------------------- ------- ------------ ------------- ---------------- ------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ OPEN AUTOLOGIN SINGLE NO 1

and from the CDB$ROOT I can see all of them:

SQL> select * from v$encryption_keys order by creation_time;
 
KEY_ID TAG CREATION_TIME ACTIVATION_TIME CREATOR CREATOR_ID USER USER_ID KEY_USE KEYSTORE_TYPE ORIGIN BACKED_UP CREATOR_DBNAME CREATOR_DBID CREATOR_INSTANCE_NAME CREATOR_INSTANCE_NUMBER CREATOR_INSTANCE_SERIAL CREATOR_PDBNAME CREATOR_PDBID CREATOR_PDBUID CREATOR_PDBGUID ACTIVATING_DBNAME ACTIVATING_DBID ACTIVATING_INSTANCE_NAME ACTIVATING_INSTANCE_NUMBER ACTIVATING_INSTANCE_SERIAL ACTIVATING_PDBNAME ACTIVATING_PDBID ACTIVATING_PDBUID ACTIVATING_PDBGUID CON_ID
----------------------------------------------------- ---- --------------------------------------- --------------------------------------- -------- ----------- ----- -------- ----------- ------------------ ------- ---------- --------------- ------------- ---------------------- ------------------------ ------------------------ ---------------- -------------- --------------- --------------------------------- ------------------ ---------------- ------------------------- --------------------------- --------------------------- ------------------- ----------------- ------------------ --------------------------------- ------
ATxUk1G7gU/0v3Ygk1MbZj8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 27-NOV-16 09.02.18.050676000 PM +00:00 27-NOV-16 09.02.18.130705000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 CDB$ROOT 1 1 3D94C45E41CA19A9E05391E5E50AB8D8 CDB1 902797638 CDB1 1 4294967295 CDB$ROOT 1 1 3D94C45E41CA19A9E05391E5E50AB8D8 1
AWSs1Gr0WE86vyfWc123xccAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 27-NOV-16 09.02.18.089346000 PM +00:00 27-NOV-16 09.02.18.722365000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 PDB1 3 2687567370 424FA3D9C61927FFE053DA116A0A85F7 CDB1 902797638 CDB1 1 4294967295 PDB1 3 2687567370 424FA3D9C61927FFE053DA116A0A85F7 3
AfwqzZP/Rk+5v5WqiNK5nl0AAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.36.43.980717000 PM +00:00 28-NOV-16 08.36.43.980720000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL YES CDB1 902797638 CDB1 1 4294967295 PDB2 5 2602763579 42636D1380072BE7E053DA116A0A8E2D CDB1 902797638 CDB1 1 4294967295 PDB2 5 2602763579 42636D1380072BE7E053DA116A0A8E2D 5
AXP3BIrVW0+Evwfx7okZtcgAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 28-NOV-16 08.41.20.629496000 PM +00:00 28-NOV-16 08.41.20.629498000 PM +00:00 SYS 0 SYS 0 TDE IN PDB SOFTWARE KEYSTORE LOCAL NO CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 CDB1 902797638 CDB1 1 4294967295 PDB2 7 96676154 42637D7C7F7A3315E053DA116A0A2666 7

As you can see, I made two attempts with PDB2 while writing this blog post. The previous keys are all still in the wallet.

I check that the AUTOLOGIN is opened in PDB2:


SQL> alter session set container=PDB2;
Session altered.
 ;
SQL> select * from v$encryption_wallet;
WRL_TYPE WRL_PARAMETER STATUS WALLET_TYPE WALLET_ORDER FULLY_BACKED_UP CON_ID
--------- -------------- ------- ------------ ------------- ---------------- ------
FILE OPEN AUTOLOGIN SINGLE NO 7

And finally I can create my tablespace:


SQL> create tablespace mytablespace;
Tablespace MYTABLESPACE created.

Easy, isn’t it?

If you create your PDB with the DBaaS monitor interface, all of this is done automatically by the ‘create PDB’ button (see the sketch after this list for the manual equivalent):

  • Close the AUTOLOGIN wallet (from CDB$ROOT)
  • Open the wallet with password
  • Create the pluggable database and open it
  • Open the wallet from the PDB, with password
  • Set the masterkey for the PDB
  • Close the wallet to get it opened with AUTOLOGIN
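
Put together, the manual sequence above looks roughly like this. It is only a sketch: the password is the one used throughout this post and PDB3 is a hypothetical new PDB.

SQL> -- in CDB$ROOT: switch from the AUTOLOGIN wallet to the password-based one
SQL> administer key management set keystore close;
SQL> administer key management set keystore open identified by "Ach1z0#d";
SQL> -- create and open the new PDB
SQL> create pluggable database PDB3 admin user admin identified by "admin";
SQL> alter pluggable database PDB3 open read write;
SQL> -- in the PDB: open the wallet and set the master key (any change must be backed up)
SQL> alter session set container=PDB3;
SQL> administer key management set keystore open identified by "Ach1z0#d";
SQL> administer key management set key identified by "Ach1z0#d" with backup;
SQL> -- back in CDB$ROOT: close the password-based wallet so AUTOLOGIN takes over again
SQL> alter session set container=CDB$ROOT;
SQL> administer key management set keystore close identified by "Ach1z0#d";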
 


Documentum story – Authentication failed for Installation Owner with the correct password

Tue, 2016-11-29 01:00

When installing a new Remote Content Server (High Availability), everything was going according to plan until we tried to log in to DA using this new CS: the login using the Installation Owner (dmadmin) failed… We got the same result from dqMan or any other third-party tool, and only the iapi or idql sessions on the Content Server itself were still working, because of the local trust. When something strange is happening regarding the authentication of the Installation Owner, I think the first thing to do is always to verify whether dm_check_password is able to recognize your username/password:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
Enter user name: dmadmin
Enter user password:
Enter user extra #1 (not used):
Enter user extra #2 (not used):
$DOCUMENTUM/dba/dm_check_password: Result = (245) = (DM_CHKPASS_BAD_LOGIN)

 

As you can see above, this was actually not working, so apparently the CS thinks that the password isn’t the correct one… This might happen for several reasons:

 

1. Wrong permissions on the dm_check_password script

This script is one of the few scripts that need specific permissions to work… These are either set by the installer if you provide the root password during the installation, or you can run the $DOCUMENTUM/dba/dm_root_task script manually using the root account (via sudo/dzdo or by asking your UNIX admin team, for example). These are the permissions that are needed for this script:

[dmadmin@content_server_01 ~]$ ls -l $DOCUMENTUM/dba/dm_check_password
-rwsr-s---. 1 root dmadmin 14328 Oct 10 12:57 $DOCUMENTUM/dba/dm_check_password

 

If the permissions aren’t the right ones, or if you think that this file has been corrupted somehow, then you can re-execute dm_root_task as root. It will ask you whether you want to overwrite the current files and will in the end set the permissions properly.
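
A minimal sketch of that re-execution, assuming the dmadmin session has sudo rights and $DOCUMENTUM set (the overwrite prompts and the script output are omitted and may differ in your environment):

[dmadmin@content_server_01 ~]$ sudo $DOCUMENTUM/dba/dm_root_task
[dmadmin@content_server_01 ~]$ ls -l $DOCUMENTUM/dba/dm_check_password
-rwsr-s---. 1 root dmadmin 14328 Oct 10 12:57 $DOCUMENTUM/dba/dm_check_password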

 

2. Expired password/account

If the OS password/account you are testing (the Installation Owner in my case) is expired, there are several possible behaviors. If the account is expired, dm_check_password will return a DM_CHKPASS_ACCOUNT_EXPIRED error. If it is the password that is expired, then on some OS dm_check_password fails with the bad login error shown at the beginning of this blog. This can be checked pretty easily on most OS; on RedHat, for example, it would be something like:

[dmadmin@content_server_01 ~]$ chage -l dmadmin
Last password change                                    : Oct 10, 2016
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : never
Minimum number of days between password change          : 0
Maximum number of days between password change          : 4294967295
Number of days of warning before password expires       : 7

 

In our case we set the password/account to never expire to avoid such issues, so that’s not the problem here. By the way, an expired password will also prevent you from using the crontab, for example…

 

To change these parameters you will need root permissions. This is how it is done, for example (press Enter to keep the proposed value if it is fine for you). The value between brackets is the current/proposed value and you can put your desired value after the colon:

[root@content_server_01 ~]$ chage dmadmin
Changing the aging information for dmadmin
Enter the new value, or press ENTER for the default

        Minimum Password Age [0]: 0
        Maximum Password Age [4294967295]: 4294967295
        Last Password Change (YYYY-MM-DD) [2016-10-10]:
        Password Expiration Warning [7]:
        Password Inactive [-1]: -1
        Account Expiration Date (YYYY-MM-DD) [-1]: -1

 

3. Wrong OS password

Of course if the OS password isn’t the correct one, then dm_check_password will return a BAD_LOGIN… Makes sense, doesn’t it? What I wanted to explain in this section is that in our case we always use sudo/dzdo to switch the current user to the Installation Owner, and therefore we never really use the Installation Owner’s password at the OS level (only in DA, dqMan, and so on). To check whether the password is correct at the OS level, you can of course start an ssh session as dmadmin directly, or run any other command that requires the password to be entered, like a su to the same user in our case:

[dmadmin@content_server_01 ~]$ su - dmadmin
Password:
[dmadmin@content_server_01 ~]$

 

As you can see, the OS isn’t complaining and therefore this is working properly: the OS password is the correct one.

 

4. Wrong mount options

Finally, the last thing that can prevent you from logging in to your docbases remotely is wrong options on your mount points… For this paragraph I will suppose that $DOCUMENTUM has been installed on a mount point /app. So let’s check the current mount options, as root of course:

[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev,nosuid)
[root@content_server_01 ~]$
[root@content_server_01 ~]$ cat /etc/fstab | grep "/app"
/dev/mapper/VG01-LV00 /app                ext4        nodev,nosuid        1 2

 

That seems alright but actually it isn’t… For Documentum to work properly, the nosuid option shouldn’t be present on the mount point where it has been installed! Therefore we need to change this. First of all, update the file /etc/fstab so the change survives a reboot of the Linux host. Just remove “,nosuid” to get something like this:

[root@content_server_01 ~]$ cat /etc/fstab | grep "/app"
/dev/mapper/VG01-LV00 /app                ext4        nodev               1 2

 

Now, the configuration inside /etc/fstab isn’t applied or reloaded as long as /app is already mounted. Therefore you need to remount it with the right options, and this is how you can do it:

[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev,nosuid)
[root@content_server_01 ~]$
[root@content_server_01 ~]$ mount -o remount,nodev /app
[root@content_server_01 ~]$
[root@content_server_01 ~]$ mount | grep "/app"
/dev/mapper/VG01-LV00 on /app type ext4 (rw,nodev)

 

Now that the mount options are correct, we can run dm_check_password again:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
Enter user name: dmadmin
Enter user password:
Enter user extra #1 (not used):
Enter user extra #2 (not used):
$DOCUMENTUM/dba/dm_check_password: Result = (0) = (DM_EXT_APP_SUCCESS)

 

As you can see, this is now working… That’s a miracle ;).

 


Oracle 12c – Finding the DBID – The last resort

Mon, 2016-11-28 07:46

The DBID is a very important piece of information for Oracle databases. It is an internal, uniquely generated number that differentiates databases, and Oracle creates it automatically as soon as you create the database.

During normal operation, it is quite easy to find your DBID. Whenever you start your RMAN session, it displays the DBID.

oracle@oel001:/home/oracle/ [OCM121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Nov 28 10:32:47 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OCM121 (DBID=3827054096)

RMAN>

Or you can just simply select it from your v$database view.

SQL> select DBID from v$database;

DBID
----------
3827054096

But what happens if you have a restore/recovery scenario where you lost your database? In the NOMOUNT state it is not possible to retrieve the DBID.

SQL> select DBID from v$database;
select DBID from v$database
                 *
ERROR at line 1:
ORA-01507: database not mounted

You can take a look into the alert.log or any other trace file in your DIAG destination, but you will not find a DBID there.

So, if the only things you have left are your RMAN catalog and your datafile copies plus archivelogs in the FRA, then you need the DBID before you can restore/recover your database correctly.

There are three possibilities to get your DBID:

  • You could check your RMAN backup log files, if you have set it up correctly
  • You could connect to your RMAN catalog and query the “DB” table from the catalog owner. Be careful, there might be more than one entry for your DB name, and then it might become difficult to get the correct one.  In my example, I have only one entry
    SQL> select * from db;
    
        DB_KEY      DB_ID REG_DB_UNIQUE_NAME             CURR_DBINC_KEY S
    ---------- ---------- ------------------------------ -------------- -
             1 3827054096 OCM121                                      2 N
  • And as the last resort, you can startup nomount (either with a backup pfile or with the RMAN dummy instance), and afterwards you can dump out the header of your datafile copies in your FRA

Dumping out the first block is usually enough, and besides that, you are not limited to the SYSTEM datafile. You can use any of your datafile copies in the FRA (like SYSAUX, USERS and so on) to dump out the block, as shown in the following example:

-- Dump the first block from the SYSTEM datafile
SQL> alter session set tracefile_identifier = dbid_system;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/SYSTEM.457.926419155' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_6459_DBID_SYSTEM.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

-- Dump the first block from the SYSAUX datafile		
SQL> alter session set tracefile_identifier = dbid_sysaux;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/SYSAUX.354.926417851' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_7035_DBID_SYSAUX.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

-- Dump the first block from the USERS datafile
SQL> alter session set tracefile_identifier = dbid_users;
Session altered.

SQL> alter system dump datafile '+fra/OCM121/datafile/USERS.533.926419511' block min 1 block max 1;
System altered.

oracle@oel001:/u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/ [OCM121] cat OCM121_ora_7064_DBID_USERS.trc | grep "Db ID"
        Db ID=3827054096=0xe41c3610, Db Name='OCM121'

As soon as you have your DBID, the rest is straightforward. Connect to your target and the RMAN catalog, set the DBID and then run your restore/recovery scripts.

rman target sys/manager catalog rman/rman@rman
set dbid=3827054096
run {
restore spfile from autobackup;
}

run {
restore controlfile ....
}

run {
restore database ....
recover database ....
}

Conclusion

Don’t forget to save your DBID with your RMAN backup jobs somewhere. Recovering a database at 3 o’clock in the morning with a missing DBID might become a nightmare.
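
A simple way to do that, as a sketch (the spool path and database name are just examples), is to dump the DBID into a small text file whenever the backup job runs:

-- sqlplus snippet executed by the backup job (sketch)
set heading off feedback off
spool /u00/app/oracle/admin/OCM121/dbid_OCM121.txt
select 'DBID='||dbid||' NAME='||name from v$database;
spool off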
Cheers,
William

 

 


Documentum story – Download failed with ‘Exceeded stated content-length of 63000 bytes’

Mon, 2016-11-28 02:00

At one of our customers, we were in the middle of migrating some docbases from 6.7 to 7.2. A few days after the migration, we started seeing some failures/errors during simple downloads of documents from D2 4.5. The migration had been done by some EMC colleagues using the EMC EMA migration tool. The strange thing here is that these download failures only applied to a few documents, far from the majority, and only when opening the document using “View Native Content”. In addition, it appeared that the issue only affected migrated documents and wasn’t happening for new ones.

 

This is an example of the error message we were able to see in the D2 4.5 log files:

2016-07-04 12:00:20 [ERROR] [[ACTIVE] ExecuteThread: '326' for queue: 'weblogic.kernel.Default (self-tuning)'] - c.e.d.d.s.D2HttpServlet[                    ] : Download failed
java.net.ProtocolException: Exceeded stated content-length of: '63000' bytes
        at weblogic.servlet.internal.ServletOutputStreamImpl.checkCL(ServletOutputStreamImpl.java:217) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:162) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.d2fs.dctm.servlets.ServletUtil.download(ServletUtil.java:375) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.ServletUtil.download(ServletUtil.java:280) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.download.Download.processRequest(Download.java:132) [D2FS4DCTM-WEB-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.execute(D2HttpServlet.java:242) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGetAndPost(D2HttpServlet.java:498) [D2FS4DCTM-API-4.5.0.jar:na]
        at com.emc.d2fs.dctm.servlets.D2HttpServlet.doGet(D2HttpServlet.java:115) [D2FS4DCTM-API-4.5.0.jar:na]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:731) [weblogic.server.merged.jar:12.1.3.0.0]
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:844) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:280) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:254) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:136) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:346) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.x3.portal.server.filters.HttpHeaderFilter.doFilter(HttpHeaderFilter.java:77) [_wl_cls_gen.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:66) [guice-servlet-3.0.jar:na]
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:61) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AdviceFilter.executeChain(AdviceFilter.java:108) [shiro-web-1.1.0.jar:na]
        at com.company.d2.auth.NonSSOAuthenticationFilter.executeChain(NonSSOAuthenticationFilter.java:21) [_wl_cls_gen.jar:na]
        at org.apache.shiro.web.servlet.AdviceFilter.doFilterInternal(AdviceFilter.java:137) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:81) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.ProxiedFilterChain.doFilter(ProxiedFilterChain.java:66) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:359) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:275) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:344) [shiro-core-1.1.0.jar:1.1.0]
        at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:272) [shiro-web-1.1.0.jar:na]
        at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:81) [shiro-web-1.1.0.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:168) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.planetj.servlet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:270) [pjl-comp-filter-1.7.jar:na]
        at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) [guice-servlet-3.0.jar:na]
        at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) [guice-servlet-3.0.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.emc.x3.portal.server.filters.X3SessionTimeoutFilter.doFilter(X3SessionTimeoutFilter.java:34) [_wl_cls_gen.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at com.planetj.servlet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:270) [pjl-comp-filter-1.7.jar:na]
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:79) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3436) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3402) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120) [com.oracle.css.weblogic.security.wls_7.1.0.0.jar:CSS 7.1 0.0]
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:57) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2285) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2201) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1572) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:255) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311) [weblogic.server.merged.jar:12.1.3.0.0]
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:263) [weblogic.server.merged.jar:12.1.3.0.0]

 

So the error in this case is “Exceeded stated content-length of: ‘63000’ bytes”. Hum, what does that mean? Well, it is not really clear (who said not clear at all?)… So we checked several documents for which the download failed (dumping them via iAPI) and the only common points we were able to find for these documents were the following:

  • They all had a r_full_content_size of: 0
  • They all had a r_content_size bigger than: 63 000

 

The issue only appeared for objects which were assigned a r_full_content_size of 0 during the migration. We tried to set this property to 0 on a document for which the download was working, in order to try to reproduce the issue, but that didn’t change anything: the download was still working properly.

 

So here is some background regarding this parameter: the expected behavior is that it holds a real value (obviously). If the file in question is smaller than 2 GB, then r_full_content_size will have the same value as r_content_size, which is the size of the file in bytes. If the file is bigger than 2 GB, then the r_content_size field is too small to hold the real size in bytes and therefore the real size is only available in the r_full_content_size field… r_full_content_size is the attribute read by D2 when using the “View Native Content” action, while other actions like “View” or “Edit” behave like older wdk clients and therefore read the r_content_size attribute… So yes, there is a difference in behavior between the actions that perform a download, and that’s the reason why we only had this issue with the “View Native Content” action!

 

Unfortunately, as you probably already understood from the previous paragraph, r_content_size and r_full_content_size aren’t of the same type (integer vs. double) and therefore you can’t simply execute one single DQL to set the value of r_full_content_size equal to the value of r_content_size: you will get a DM_QUERY_E_UP_BAD_ATTR_TYPES error. So you will have to do things a little bit differently.
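
Just for illustration, this is the kind of single statement you might try first and which fails with the DM_QUERY_E_UP_BAD_ATTR_TYPES error mentioned above (shown here only as a sketch of what does not work):

> UPDATE dm_document objects SET r_full_content_size=r_content_size WHERE r_full_content_size='0' AND r_content_size>'63000';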

 

The first thing to do is obviously to gather a list of all documents that need to be updated, with their r_object_id and the value of their r_content_size (r_full_content_size doesn’t really matter since you will gather only affected documents, so this value is always 0):

> SELECT r_object_id, r_content_size FROM dm_document WHERE r_full_content_size='0' AND r_content_size>'63000';
r_object_id         r_content_size
------------------- -------------------
090f446780034513    289326
090f446780034534    225602
090f446780034536    212700
090f446780034540    336916
090f446780034559    269019
090f446780034572    196252
090f446780034574    261094
090f44678003459a    232887
...                 ...

 

Then a first solution would be to manually go through the list and execute one DQL query per document, setting r_full_content_size to the value of r_content_size. For the first document listed above, that would be the following DQL:

> UPDATE dm_document objects SET r_full_content_size='289326' WHERE r_object_id='090f446780034513';

 

Note 1: In case you want, at some point, to restore a previous version of a document for example, then you will most probably need to use “FROM dm_document(ALL)” instead of “FROM dm_document”. But then take care that non-current versions are immutable and therefore can’t be updated using a simple DQL: you will need to remove the immutable flag, update the object and restore the flag afterwards, so that’s a little bit more tricky ;)

Note 2: In case you have a few documents bigger than 2 GB, the r_content_size will not reflect the real value and therefore setting r_full_content_size to that value isn’t correct… I wasn’t able to test that since our customer didn’t have any document bigger than 2 GB, but you should most probably be able to use the full_content_size that is stored on the dmr_content object instead… Just like dm_document, the dmr_content object has two fields that you should be able to use to find the correct size: content_size (which should reflect r_content_size) and full_content_size (which should reflect r_full_content_size). If that doesn’t help, then a last solution would be to export and re-import all documents bigger than 2 GB…

 

OK, so updating all objects manually is possible but really boring, so a second solution, and probably a better one, is to use a script to prepare the list of DQL queries to be executed. Once you have the r_object_id and r_content_size of all affected documents, you can put them in a CSV file (copy/paste into Excel and save as CSV, for example) and write a small script (bash for example, as sketched below) that generates one DQL query per document. That’s really simple: even if thousands of documents are affected, it will take you only a few minutes to write the script and a second or two to generate thousands of DQL queries. Then you can put all these commands in a single file that can be executed against a docbase on the Content Server. That’s a better solution, but actually the simplest solution you can ever find will always be to use dqMan (or any similar tool). Indeed, dqMan has a dedicated feature that allows you to execute a “template DQL” on any list of objects returned by a specific query, so you don’t need any bash scripting if you are using dqMan and it does the job in a few seconds.
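
Here is a minimal sketch of such a script; the CSV file name, its layout (r_object_id,r_content_size, no header) and the output file name are just examples:

#!/bin/bash
# generate one UPDATE DQL per affected document out of a CSV file
while IFS=',' read -r object_id content_size; do
  echo "UPDATE dm_document objects SET r_full_content_size='${content_size}' WHERE r_object_id='${object_id}';"
done < affected_documents.csv > fix_full_content_size.dql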

 

A last solution would be to go directly to the database and execute SQL queries to set r_full_content_size equal to r_content_size, BUT I would NOT recommend doing that unless you have a very good knowledge of the Documentum content model and you absolutely know what you are doing and what you are messing with ;).

 

See you soon!

 


Can I do it with PostgreSQL? – 4 – External tables

Sun, 2016-11-27 12:59

In the last posts of this series we talked about restore points, how you could do things that would require the dual table in Oracle and how you can make use of tablespaces in PostgreSQL. In this post we’ll look at what my colleague Clemens thinks is one of the greatest features in Oracle: can you have external tables in PostgreSQL?

The easy answer is: yes, of course you can, and you can do it in various ways. To start with we’ll need a sample file we can load data from. For the test here we’ll use this one. Note that this file uses Windows line feeds, which you’ll need to convert to Unix style if you are working on Linux like me. You can use vi to do this.

Once you extracted the file the content looks like this:

postgres@pgbox:/home/postgres/ [PG961] head -2 FL_insurance_sample.csv
policyID,statecode,county,eq_site_limit,hu_site_limit,fl_site_limit,fr_site_limit,tiv_2011,tiv_2012,eq_site_deductible,hu_site_deductible,fl_site_deductible,fr_site_deductible,point_latitude,point_longitude,line,construction,point_granularity
119736,FL,CLAY COUNTY,498960,498960,498960,498960,498960,792148.9,0,9979.2,0,0,30.102261,-81.711777,Residential,Masonry,1

So, we have a total of 18 columns and 36634 rows to test with. Should be fine :)

How can we bring that into PostgreSQL? Clemens talked about SQL*Loader in his post. There is a similar project for PostgreSQL called pg_bulkload, which we’ll not be talking about here. We will look at two options for loading data from files into PostgreSQL which are available by default:

  1. copy
  2. file_fdw

No matter which of these options we go with, the first thing we need is the definition of the table. These are the columns we need:

postgres@pgbox:/home/postgres/ [PG961] head -1 FL_insurance_sample.csv | sed 's/,/,\n/g'
policyID,
statecode,
county,
eq_site_limit,
hu_site_limit,
fl_site_limit,
fr_site_limit,
tiv_2011,
tiv_2012,
eq_site_deductible,
hu_site_deductible,
fl_site_deductible,
fr_site_deductible,
point_latitude,
point_longitude,
line,
construction,
point_granularity

So the create table statement will look something like this:

(postgres@[local]:5439) [postgres] > create table exttab ( policyID int,
                                                           statecode varchar(2),
                                                           county varchar(50),
                                                           eq_site_limit numeric,
                                                           hu_site_limit numeric,
                                                           fl_site_limit numeric,
                                                           fr_site_limit numeric,
                                                           tiv_2011 numeric,
                                                           tiv_2012 numeric,
                                                           eq_site_deductible numeric,
                                                           hu_site_deductible numeric,
                                                           fl_site_deductible numeric,
                                                           fr_site_deductible numeric,
                                                           point_latitude numeric,
                                                           point_longitude numeric,
                                                           line varchar(50),
                                                           construction varchar(50),
                                                           point_granularity int);
CREATE TABLE

Now that we have the table we can use copy to load the data:

(postgres@[local]:5439) [postgres] > copy exttab from '/home/postgres/FL_insurance_sample.csv' with csv header;
COPY 36634
(postgres@[local]:5439) [postgres] > select count(*) from exttab;
 count 
-------
 36634
(1 row)

Quite fast. But there is a downside to this approach. As Clemens mentions in his post, one of the benefits of external tables in Oracle is that you can access the file via standard SQL and do transformations before the data arrives in the database. Can you do the same with PostgreSQL? Yes, if you use the file_fdw foreign data wrapper.

The file_fdw is available by default:

(postgres@[local]:5439) [postgres] > create extension file_fdw;
CREATE EXTENSION
Time: 442.777 ms
(postgres@[local]:5439) [postgres] > \dx
                        List of installed extensions
   Name   | Version |   Schema   |                Description                
----------+---------+------------+-------------------------------------------
 file_fdw | 1.0     | public     | foreign-data wrapper for flat file access
 plpgsql  | 1.0     | pg_catalog | PL/pgSQL procedural language

(postgres@[local]:5439) [postgres] > create server srv_file_fdw foreign data wrapper file_fdw;
CREATE SERVER
(postgres@[local]:5439) [postgres] > create foreign table exttab2  ( policyID int,
                                statecode varchar(2),
                                county varchar(50),
                                eq_site_limit numeric,     
                                hu_site_limit numeric,     
                                fl_site_limit numeric,     
                                fr_site_limit numeric,     
                                tiv_2011 numeric,          
                                tiv_2012 numeric,          
                                eq_site_deductible numeric,
                                hu_site_deductible numeric,
                                fl_site_deductible numeric,
                                fr_site_deductible numeric,
                                point_latitude numeric,    
                                point_longitude numeric,   
                                line varchar(50),          
                                construction varchar(50),  
                                point_granularity int)     
server srv_file_fdw options ( filename '/home/postgres/FL_insurance_sample.csv', format 'csv', header 'true' );
CREATE FOREIGN TABLE

(postgres@[local]:5439) [postgres] > select count(*) from exttab2;
 count 
-------
 36634
(1 row)

From now on you can work with the file by accessing it through standard SQL and all the options you have with SQL are available. Very much as Clemens states in his post: "Because external tables can be accessed through SQL, you have all the possibilities SQL queries offer: parallelism, difficult joins with internal or other external tables and of course all complex operations SQL allows. ETL became much easier using external tables, because it allowed processing data through SQL joins and filters already before it was loaded into the database."
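To illustrate this, here is a minimal sketch of such a transformation: instead of loading the file 1:1, we filter and reshape the rows with plain SQL while reading from the foreign table (the target table exttab3 and the chosen transformation are made up for this example):

create table exttab3 ( policyid int, county varchar(50), tiv_growth numeric );
insert into exttab3 (policyid, county, tiv_growth)
       select policyid
            , initcap(county)       -- normalize 'CLAY COUNTY' to 'Clay County'
            , tiv_2012 - tiv_2011   -- compute the delta before it ever hits the target table
         from exttab2
        where tiv_2012 > tiv_2011;  -- load only the rows we are interested in

Everything happens in a single SQL statement, the file is read only once and no intermediate staging table is required.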

 

Cet article Can I do it with PostgreSQL? – 4 – External tables est apparu en premier sur Blog dbi services.

#DOAG2016

Fri, 2016-11-25 09:36

That’s a very late ‘wrap-up’ post about the DOAG Conference.
Well, not that late because it was only last week, but time flies and other wrap-up posts are already out there.

12.2

It started very well when Oracle released the full 12.2 on the DBaaS cloud a few days before the conference. Like a lot of speakers, my slides and demos were full of 12.2 features. Of course, as beta testers we could ask for permission to talk about it, but it's not easy to replace every "12.2" with "the next generation of Oracle database", especially in titles that are limited to 60 characters ;)
So 12cR2 came to the Oracle Public Cloud, easily available as a free trial, the full documentation became public and my demos were still working… this was perfect.
The 12.2 beta program started one year ago. At the time of abstract submission, all speakers believed that 12.2 would be out at DOAG conference time. This was just in time.

Book

Exactly one year ago, at DOAG 2015, Vit Spinka came to me with this crazy idea to write a book on Multitenant. It was not the perfect timing for me as I was starting to prepare for the OCM 12c, but it was a really good idea. The book was mostly finished in June but we had to wait for the public availability of 12.2. Now Oracle Press has the authorization to publish it, so it will probably be available soon. I had a lot of questions about the date at DOAG. We have no date yet, but I hope to see it before Christmas.

Sharing

Of course those international events are very good for networking. Yes, the DOAG conference is international: lots of speakers come from other continents. DOAG organized a nice dinner with speakers coming from abroad and/or being Oakies and/or ACEDs. Very nice conversations, and unconferences on different subjects: #cloud, #DBADev, PL/SQL … and those conversations will continue in two weeks in Birmingham.

Among the conversations, I was very happy to talk with Tobias Deml, whom I asked to explain the #DOAGNextGen community to me. Oracle DBA is a job where you don't find a lot of young people. We always have that image of senior bearded guys in front of a text screen. It's good to have experienced people, but this must be mixed with some fresh blood. I really like what the DOAG Next Generation Community is doing. They have a lot of energy and ideas.

Tokyo

Speaking in the Tokyo room is great, especially when doing live demos: you don't have to worry about your mintty fonts being too small.


I was talking about the Multitenant features that can help in avoiding too many roundtrips between Dev and Ops.


The future is about automation, fast provisioning, fast refresh and clear separation of roles. Call it #cloud if you like.

dbi services

DOAG is a big event for dbi services: a lot of us participate, most as speakers, and we have a booth to meet our customers and anyone with questions about IT infrastructure. Of course, a bit about Cloud as well.



2017

I’m looking forward to DOAGDB in Dusseldorf (30-31 May 2017) – the call for papers is currently open. And the next DOAG Conference will take place 21-24 November 2017.

In the meantime if you want to learn more about 12.2, plan your upgrades and master the multitenant architecture… our 12c New Features workshop will be on 12cR2 in 2017 – first date in February: http://www.dbi-services.com/trainings/oracle-12c-new-features-workshop/

 

Cet article #DOAG2016 est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 3 – Tablespaces

Fri, 2016-11-25 09:02

In the last posts of this series we talked about restore points and how you could do things that would require the dual table in Oracle. In this post we’ll look at tablespaces. This is one of the features that is available in PostgreSQL but is totally different from what you know from Oracle. In Oracle you need to create a datafile which is attached to a tablespace. Once you have this you can start creating tables in there if you have the permissions to do so. How does this work in PostgreSQL?

Before we start playing with our own tablespaces you need to know that there are two default tablespaces in each PostgreSQL instance:

(postgres@[local]:5439) [postgres] > \db+
                                  List of tablespaces
    Name    |  Owner   | Location | Access privileges | Options |  Size  | Description 
------------+----------+----------+-------------------+---------+--------+-------------
 pg_default | postgres |          |                   |         | 21 MB  | 
 pg_global  | postgres |          |                   |         | 497 kB | 
(2 rows)

When you create a table and do not specify in which tablespace it should be created, it will be created in the pg_default tablespace (this is the default tablespace for template0 and template1 and therefore will be the default for every user-created database if not overridden). pg_global contains the shared system catalog.

This means, whenever you create a table without specifying a tablespace in the create table statement it will go to the pg_default tablespace:

(postgres@[local]:5439) [postgres] > create table t1 ( a int );
CREATE TABLE
Time: 99.609 ms
(postgres@[local]:5439) [postgres] > select tablespace from pg_tables where tablename = 't1';
 tablespace 
------------
 NULL
(1 row)

NULL, in this case, means default. If you want to know where exactly the files that make up the table are located, you can use oid2name:

postgres@pgbox:/home/postgres/ [PG961] oid2name -t t1
From database "postgres":
  Filenode  Table Name
----------------------
     24592          t1
postgres@pgbox:/home/postgres/ [PG961] find $PGDATA -name 2459*
/u02/pgdata/PG961/base/13322/24592

In addition, oid2name tells you more about the databases and the default tablespace associated with them:

postgres@pgbox:/home/postgres/ [PG961] oid2name 
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  13322       postgres  pg_default
  13321      template0  pg_default
      1      template1  pg_default

So much for the basics. Time to create our own tablespace. When you look at the syntax:

(postgres@[local]:5439) [postgres] > \h create tablespace
Command:     CREATE TABLESPACE
Description: define a new tablespace
Syntax:
CREATE TABLESPACE tablespace_name
    [ OWNER { new_owner | CURRENT_USER | SESSION_USER } ]
    LOCATION 'directory'
    [ WITH ( tablespace_option = value [, ... ] ) ]

… this is quite different from what you know when you work with Oracle. The important point for now is the "LOCATION". This refers to a directory the PostgreSQL OS user has write access to. It can be a local directory, a directory on any storage the host has access to, or even a ramdisk. It really doesn't matter, as long as the PostgreSQL OS user has write permissions on it.

Let's create our first tablespace:

(postgres@[local]:5439) [postgres] > \! mkdir /var/tmp/tbs1
(postgres@[local]:5439) [postgres] > create tablespace tbs1 location '/var/tmp/tbs1';
CREATE TABLESPACE
Time: 26.362 ms
(postgres@[local]:5439) [postgres] > \db+
                                     List of tablespaces
    Name    |  Owner   |   Location    | Access privileges | Options |  Size   | Description 
------------+----------+---------------+-------------------+---------+---------+-------------
 pg_default | postgres |               |                   |         | 21 MB   | 
 pg_global  | postgres |               |                   |         | 497 kB  | 
 tbs1       | postgres | /var/tmp/tbs1 |                   |         | 0 bytes | 
(3 rows)

What happened? The first thing to notice is that the "Location" column is now populated when we display all the tablespaces and that the size of our new tablespace is zero (not surprising, as nothing has been created in the tablespace yet). Did PostgreSQL already create data files in this location, you might ask?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/
total 0
drwx------. 2 postgres postgres 6 Nov 25 11:03 PG_9.6_201608131

At least a directory which contains the version of PostgreSQL was created. What is inside this directory?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/
total 0

Nothing, so let's create a table in this brand new tablespace:

(postgres@[local]:5439) [postgres] > create table t1 ( a int ) tablespace tbs1;
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d+ t1
                          Table "public.t1"
 Column |  Type   | Modifiers | Storage | Stats target | Description 
--------+---------+-----------+---------+--------------+-------------
 a      | integer |           | plain   |              | 
Tablespace: "tbs1"

How does the directory look now?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/
total 0
drwx------. 2 postgres postgres 18 Nov 25 12:02 13322

Ok, 13322 is the OID of the database which the table belongs to:

(postgres@[local]:5439) [postgres] > \! oid2name
All databases:
    Oid  Database Name  Tablespace
----------------------------------
  13322       postgres  pg_default
  13321      template0  pg_default
      1      template1  pg_default

And below that?

(postgres@[local]:5439) [postgres] > \! ls -l /var/tmp/tbs1/PG_9.6_201608131/13322/
total 0
-rw-------. 1 postgres postgres 0 Nov 25 12:02 24596

This is the OID of the table. So in summary this is the layout you get per tablespace:

|
|---[LOCATION]
|       |
|       | ----- [FIXED_VERSION_DIRECTORY]
|       |                  |
|       |                  |---------[DATABASE_OID]
|       |                  |              |
|       |                  |              |-----------[TABLE_AND_INDEX_FILES_OID]
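By the way: if you later want to move an existing table from one tablespace to another you can do that with a simple ALTER TABLE. A quick sketch (keep in mind that the data files are physically copied and the table is locked exclusively while this happens):

alter table t1 set tablespace pg_default;
alter table t1 set tablespace tbs1;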

One point that is often forgotten is that you can set various parameters on a tablespace level:

CREATE TABLESPACE tablespace_name
    [ OWNER { new_owner | CURRENT_USER | SESSION_USER } ]
    LOCATION 'directory'
    [ WITH ( tablespace_option = value [, ... ] ) ]

What you can set per tablespace are the planner cost parameters seq_page_cost and random_page_cost as well as effective_io_concurrency.

This can be very helpful when you have tablespaces on disks (ramdisk?) that have very different performance specifications.
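For example, if tbs1 lives on much faster storage than the rest of the cluster you could tell the planner that random I/O is cheap there (just a sketch, the values are only an illustration):

alter tablespace tbs1 set ( random_page_cost = 1.1 );
alter tablespace tbs1 set ( effective_io_concurrency = 200 );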

A very important point to keep in mind: each tablespace you create in PostgreSQL creates a symlink in the cluster's data directory:

postgres@pgbox:/home/postgres/ [PG961] ls -l $PGDATA/pg_tblspc 
total 0
lrwxrwxrwx. 1 postgres postgres 13 Nov 25 11:03 24595 -> /var/tmp/tbs1

Again, the number (24595) is the OID, in this case of the tablespace:

(postgres@[local]:5439) [postgres] > select oid,spcname from pg_tablespace where spcname = 'tbs1';
  oid  | spcname 
-------+---------
 24595 | tbs1
(1 row)

This is important to know because when you do backups of your PostgreSQL instance it is critical that you back up the tablespaces as well. You can find all the pointers/symlinks in the pg_tblspc directory.

What else can you do with tablespaces? Of course you can change the default tablespace for the whole instance:

(postgres@[local]:5439) [postgres] > alter system set default_tablespace='tbs1';
ALTER SYSTEM
Time: 120.406 ms

(postgres@[local]:5439) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Time: 4.279 ms
(postgres@[local]:5439) [postgres] > show default_tablespace ;
 default_tablespace 
--------------------
 tbs1
(1 row)
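From now on every table created without an explicit tablespace clause will go to tbs1. A quick check (t2 is just a scratch table for this demo):

create table t2 ( a int );
select tablespace from pg_tables where tablename = 't2';

The tablespace column should now show tbs1 instead of NULL.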

You can assign a tablespace to a database:

(postgres@[local]:5439) [postgres] > create database db1 TABLESPACE = tbs1;
CREATE DATABASE
Time: 1128.020 ms
(postgres@[local]:5439) [postgres] > \l+
                                                                    List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   | Tablespace |                Description                 
-----------+----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------
 db1       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7233 kB | tbs1       | 
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7343 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7233 kB | pg_default | unmodifiable empty database
           |          |          |             |             | postgres=CTc/postgres |         |            | 
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7233 kB | pg_default | default template for new databases
           |          |          |             |             | postgres=CTc/postgres |         |            | 

You can make someone else the owner of a tablespace:

(postgres@[local]:5439) [postgres] > create user u1 password 'u1';
CREATE ROLE
Time: 31.414 ms
(postgres@[local]:5439) [postgres] > ALTER TABLESPACE tbs1 OWNER TO u1;
ALTER TABLESPACE
Time: 2.072 ms
(postgres@[local]:5439) [postgres] > \db
          List of tablespaces
    Name    |  Owner   |   Location    
------------+----------+---------------
 pg_default | postgres | 
 pg_global  | postgres | 
 tbs1       | u1       | /var/tmp/tbs1
(3 rows)

And finally you can set one or more tablespaces to be used as temporary tablespaces:

(postgres@[local]:5439) [postgres] > alter system set temp_tablespaces='tbs1';
ALTER SYSTEM
Time: 4.175 ms

(postgres@[local]:5439) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Time: 3.638 ms
(postgres@[local]:5439) [postgres] > show temp_tablespaces ;
 temp_tablespaces 
------------------
 tbs1
(1 row)

Conclusion: Yes, you can have tablespaces in PostgreSQL and they give you great flexibility on how you can organize your PostgreSQL files on disk. The implementation is very different from other vendors, though.

 

Cet article Can I do it with PostgreSQL? – 3 – Tablespaces est apparu en premier sur Blog dbi services.

Documentum story – Lockbox file is newer than the active Lockbox library

Fri, 2016-11-25 02:00

A few weeks ago at a customer, I was installing a completely new sandbox environment for one of our Application Teams. On this environment we used a new version stack with the most recent patch set (at that time): CS 7.2 P16, D2 4.6 P05, and so on… A new patch level means a lot of "fun" trying to install and execute the updated installers of Documentum/BPM/D2… and of course a lot of new bugs ;). In this blog I will talk about an issue related to the Lockbox libraries that we had never faced before.

 

With the older version stack, when we installed D2, we always kept the D2 Lockbox libraries under $DOCUMENTUM/d2-lib/lockbox on the CS and then referenced these libraries in the environment variables ($PATH, $LD_LIBRARY_PATH and $CLASSPATH) of our Installation Owner. The documentation does not ask you to make these variables "permanent" by adding them to the .bash_profile or something similar, but this is what we used to do to avoid setting them again and again each time we needed to update the D2 Lockbox.

 

For this new version stack, and since the Official Documentation didn't change, we kept the same approach. After installing D2, we had to restart the Content Server for maintenance at the OS level. As you probably already know if you are familiar with CS 7.2, there is also a Lockbox specific to the CS that is normally stored under $DOCUMENTUM/dba/secure/. Whenever you restart the host, you need to put the Lockbox back into the Shared Memory, and that's how it is done:

[dmadmin@content_server_01 ~]$ pwlockbox="L0ckb0xP4ssphr4se"
[dmadmin@content_server_01 ~]$ pwaek="A3kP4ssphr4se"
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -passphrase $pwaek -all

Please wait. This will take a few seconds ...

Please wait, this will take a few seconds..
Setting up the (single) passphrase for all keys in the shared memory region..
Operation succeeded
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -resetfingerprint
Lockbox lockbox.lb
Lockbox Path $DOCUMENTUM/dba/secure/lockbox.lb
Reset host done

 

These two commands are working properly so the Lockbox has been put in the Shared Memory – normally – and the docbases should now be able to start. As explained before, we are referencing the D2 Lockbox libraries inside the environment variables by default and not the CS specific versions. So I tried to start the docbase as always and checked the logs to ensure that it was running properly:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_start_GR_DOCBASE
starting Documentum server for repository: [GR_DOCBASE]
with server log: [$DOCUMENTUM/dba/log/GR_DOCBASE.log]
server pid: 7056
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM/dba/log/GR_DOCBASE.log
The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
2016-09-29T12:02:38.314624      7056[7056]      0000000000000000        [DM_CRYPTO_F_KEYSTORE_INIT]fatal:  "Failed to initialize keystore at $DOCUMENTUM/dba/secure/CSaek. Internal error - 1057226514"

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$

 

So that’s the error I wanted to share in this blog. As you can see, the docbase wasn’t able to start because the lockbox file is “newer” than the active Lockbox libraries. So I started to do some tests to try to understand what was happening exactly and how to solve it.

 

Ok, so the first test I did was removing the D2 Lockbox libraries from the environment variables (.bash_profile + reload of the shell session) and then trying to start the docbase again:

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ exit
logout
[morganpatou@content_server_01 ~]$ su - dmadmin
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ pwlockbox="L0ckb0xP4ssphr4se"
[dmadmin@content_server_01 ~]$ pwaek="A3kP4ssphr4se"
[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -passphrase $pwaek -all

Please wait. This will take a few seconds ...

Please wait, this will take a few seconds..
Setting up the (single) passphrase for all keys in the shared memory region..
Operation succeeded
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -resetfingerprint
Lockbox lockbox.lb
Lockbox Path $DOCUMENTUM/dba/secure/lockbox.lb
Lockbox open failed The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
** Operation failed **
[dmadmin@content_server_01 ~]$

 

As you can see, after removing the D2 Lockbox libraries from the environment, the Lockbox can't be stored in the Shared Memory anymore… Based on this, it is clear that the Content Server Lockbox libraries (stored under $DM_HOME/bin) are different from the D2 Lockbox libraries (in our case stored under $DOCUMENTUM/d2-lib/lockbox)… It's "fun" that two patches released at exactly the same time are using different versions of the Lockbox libraries… The annoying point here is that the CS Lockbox isn't readable anymore using the CS Lockbox libraries, which can be checked using the following command:

[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -keyname CSaek -passphrase $pwaek -algorithm AES_256_CBC -check

The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
** No AEK store exists in lockbox lockbox.lb and got status code returned as '1057226514'.
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$

 

The problem I can see here is that when installing D2, the CS Lockbox has apparently somehow been corrupted by the D2 Lockbox libraries. The fact is that when a docbase is trying to start, the start script (dm_start_GR_DOCBASE) sets the environment using the script $DM_HOME/bin/dm_set_server_env.sh, which overwrites our environment variables because this file uses the CS Lockbox libraries. That's the reason why the docbase wasn't able to start earlier: the docbase is trying to start using the CS libraries (older) while the Lockbox has been tampered with by the D2 libraries (newer).

 

There are several ways to quickly work around this specific issue and I will try to describe them below.

 

First solution: Create a copy of the dm_crypto_create and dm_crypto_create.bin files and force the copy to use the environment variables defined in the .bash_profile instead of the ones set via dm_set_server_env.sh (by removing the 3 lines related to LD_LIBRARY_PATH):

[dmadmin@content_server_01 ~]$ cd $DM_HOME/bin
[dmadmin@content_server_01 bin]$ cp dm_crypto_create my_dm_crypto_create
[dmadmin@content_server_01 bin]$ cp dm_crypto_create.bin my_dm_crypto_create.bin
[dmadmin@content_server_01 bin]$ vi my_dm_crypto_create
[dmadmin@content_server_01 bin]$ diff dm_crypto_create my_dm_crypto_create
21,23d20
< else
<   LD_LIBRARY_PATH=${location}
<   export LD_LIBRARY_PATH

 

Once this is done, you can compare the result of the check command:

[dmadmin@content_server_01 bin]$ ./dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -keyname CSaek -passphrase $pwaek -algorithm AES_256_CBC -check

The Lockbox file is newer than the active Lockbox library. Retry with a newer version of the Lockbox library.
** No AEK store exists in lockbox lockbox.lb and got status code returned as '1057226514'.
[dmadmin@content_server_01 bin]$
[dmadmin@content_server_01 bin]$
[dmadmin@content_server_01 bin]$ ./my_dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase $pwlockbox -keyname CSaek -passphrase $pwaek -algorithm AES_256_CBC -check

Key - CSaek uses algorithm AES_256_CBC.

** An AEK store with the given passphrase exists in lockbox lockbox.lb and got status code returned as '0'.
[dmadmin@content_server_01 bin]$
[dmadmin@content_server_01 bin]$

 

=> As you can see, this is now working with the updated script but that’s not really a sexy solution…

 

 

Second solution: Update the file dm_set_server_env.sh to force it to use the D2 Lockbox libraries instead of the CS ones. You can check the Official Documentation about how to set up the environment variables for D2 and then just put that at the end of this file. In our case, this was something like:

[dmadmin@content_server_01 bin]$ tail -6 $DM_HOME/bin/dm_set_server_env.sh
export LD_LIBRARY_PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
export PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$PATH
export CLASSPATH=$DOCUMENTUM/d2-lib/D2.jar:$DOCUMENTUM/d2-lib/LB.jar:$DOCUMENTUM/d2-lib/LBJNI.jar:$CLASSPATH
# The environment that exists after this script executes is output to
# the dm_set_server_env.log file in $DOCUMENTUM_SHARED/logs
env >> $DOCUMENTUM_SHARED/logs/dm_set_server_env.log

 

This solution is a little bit better, but as soon as you upgrade/migrate your Content Server this file might be overwritten by the patch and you will face this issue again… :(

 

Solution 2 is what EMC recommended us to do, so that's what we did in the first place and we were able to start the docbase and so on… But one week later we tried to install a new docbase and the Server Configuration Program wasn't able to read the Lockbox anymore, and therefore we weren't able to create any new docbase! This is an extract from the Server Configuration Program log file:

13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserSelection: "lockbox.lb"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserSelection: "AES_256_CBC"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserInput: "CSaek"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserInputPassword: "**********"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserSelection: "true"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserInput: "lockbox.lb"
13:58:10,266  INFO [AWT-EventQueue-0] com.documentum.install.server.ui.panels.DiWPServerUseExistingLockBox - UserInputPassword: "**********"
13:58:10,507 ERROR [Thread-97] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - Check AEK key passphrase failed

 

Therefore the solutions 1 and 2 are actually not good because we are able to start the docbase but not to install a new one… So we had to find another, better solution and it all started with this question: which component(s) is/are actually using the D2 Lockbox on the Content Server? Of course the answer to this question is: only the Java Method Server.

 

Based on this information, we decided to restrict the access to the D2 Lockbox libraries to the JMS only. This has been done by removing the environment variables mentioned above from the .bash_profile and/or from the dm_set_server_env.sh and then by adding the same variables to the JMS startup script:

[dmadmin@content_server_01 ~]$ head -6 $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh
#!/bin/sh

#for D2.lockbox
export LD_LIBRARY_PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
export PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$PATH
export CLASSPATH=$DOCUMENTUM/d2-lib/D2.jar:$DOCUMENTUM/d2-lib/LB.jar:$DOCUMENTUM/d2-lib/LBJNI.jar:$CLASSPATH

 

After that, we “simply” regenerated the Content Server Lockbox with the CS libraries (basically you need to completely remove the current encryption inside the docbase and force the CS to recreate it from scratch… Arnaud Berbier will most probably write a blog on this subject in the next few days/weeks). Then we restarted the whole Content Server, everything was working properly with the right Lockbox libraries and we were able to create new docbases.

 

Cet article Documentum story – Lockbox file is newer than the active Lockbox library est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 2 – Dual

Fri, 2016-11-25 00:42

In the first post of this series we talked about restore points. Another question that pops up from time to time is how you can do things in PostgreSQL that would require the dual table in Oracle. Let's go …

The question is: when do you need the dual table in Oracle? Well, every time you have nothing to select from, meaning there is no table you could provide in the from clause but you still need exactly one row. This could be the case when you want to do math:

SQL> select 1+2+3*4/2 from dual;

 1+2+3*4/2
----------
	 9

This can be the case when you want to generate test data:

SQL> select 'a' from dual connect by level <= 5;

'
-
a
a
a
a
a

This can be the case when you want to select from a PL/SQL function, such as:

SQL> create table ta (a number);

Table created.

SQL> select dbms_metadata.get_ddl('TABLE','TA',USER) from dual;

DBMS_METADATA.GET_DDL('TABLE','TA',USER)
--------------------------------------------------------------------------------

  CREATE TABLE "SYS"."TA"
   (	"A" NUMBER
   ) PCTFREE 10 PCTUSED 40 INITRANS

… and many more.

The easy answer to the question whether you can do it in PostgreSQL is: you don't need to. Why? Because you can do things like this:

(postgres@[local]:5439) [postgres] > select 'Time for a beer';
    ?column?     
-----------------
 Time for a beer
(1 row)

… or this:

(postgres@[local]:5439) [postgres] > select 1+2+3*4/2;
 ?column? 
----------
        9
(1 row)
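The same is true for the connect by trick to generate test data: generate_series() is a set returning function, so again no dummy table is required. A quick sketch:

select 'a' from generate_series(1,5);

This returns five rows containing 'a', just like the Oracle example above.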

The same is true for getting the results of a function:

(postgres@[local]:5439) [postgres] > create function f1 (integer,integer) returns integer
as 'select $1 * $2;'
language sql;
CREATE FUNCTION
Time: 249.499 ms
(postgres@[local]:5439) [postgres] > select f1(5,5);
 f1 
----
 25
(1 row)

PostgreSQL does not force you to provide a table to select from; you can completely skip the from clause. This looks strange when you are used to working with Oracle, I know, but hey, it is much easier: why provide a from clause when it is not necessary?

If you really, really can’t live without dual:

(postgres@[local]:5439) [postgres] > create table dual (dummy varchar(1));
CREATE TABLE
(postgres@[local]:5439) [postgres] > insert into dual (dummy) values ('a');
INSERT 0 1
(postgres@[local]:5439) [postgres] > select 'I can not live without dual' from dual;
          ?column?           
-----------------------------
 I can not live without dual
(1 row)
(postgres@[local]:5439) [postgres] > select 1+2+3*4/2 from dual;
 ?column? 
----------
        9
(1 row)

And here you go …

 

Cet article Can I do it with PostgreSQL? – 2 – Dual est apparu en premier sur Blog dbi services.

Observing DML restarts caused by invalidations

Thu, 2016-11-24 14:53

Usually, cursor invalidations do not interrupt the current execution of the cursor. You can even drop a table while a query on it is running and the query may end successfully. However, some invalidations throw internal errors that are caught, and the cursor execution restarts transparently. I've reproduced one here to show the symptoms: each restart gets a different SQL_EXEC_ID, so the restarts are not accounted within the same SQL Monitoring report.

I had that question because I encountered a SQL Monitoring report where the 'executions' count of the INSERT operation was more than one. I wondered whether it could be a case of DML restart, or whether a DML restart would be accounted under a different SQL_EXEC_ID. I got a quick answer from a very reliable source:

@FranckPachot Yep!

— Stefan Koehler (@OracleSK) November 21, 2016

However, for my own better understanding, I'm now reproducing a case of DML restart to show the symptoms in V$SQL and SQL Monitoring.

I need a table with some rows:

SQL> create table DEMO as select rownum n from xmltable('1 to 3');
Table created.

Initially my problem was with an insert into a GTT but now I’m reproducing the case with a partitioned table:

SQL> create table DEMOGTT (n number) partition by hash(n);
Table created.

For internal reasons, when a cursor is invalidated after it has acquired a lock on a partition, a DML restart occurs. If you have a doubt, "oerr ora 14403" explains that.

The easiest way to invalidate the cursor while it is running is to call a function that does it for each row. And as you will see it can be restarted indefinitely, so I set a timeout timestamp 30 seconds ahead.


SQL> column limit new_value limit
SQL> select to_char(sysdate+30/24/60/60,'yyyymmddhh24miss') limit from dual;
 
LIMIT
--------------
20161124212154
 
SQL> create function DEMOSLOW(n number) return number as
2 pragma autonomous_transaction;
3 begin
4 if sysdate < to_date('&limit','yyyymmddhh24miss')
5 then
6 dbms_stats.gather_table_stats(user,'DEMOGTT',no_invalidate=>false);
7 end if;
8 return n;
9 end;
10 /
old 4: if sysdate < to_date('&limit','yyyymmddhh24miss')
new 4: if sysdate < to_date('20161124212154','yyyymmddhh24miss')
 
Function created.

Time to run the insert, calling the function for each row:

SQL> alter session set statistics_level=all;
Session altered.
 
SQL> set timing on time on
21:21:24 SQL> insert /*+ monitor */ into DEMOGTT select n from DEMO where n=DEMOSLOW(n);
3 rows created.
 
Elapsed: 00:00:29.30
21:21:54 SQL>

This took 30 seconds. Without the timeout, it would never stop.

The execution plan for the last run shows only one 'Start':

21:21:54 SQL> select * from table(dbms_xplan.display_cursor('','','allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID 3fk1hyq9vnuzx, child number 0
-------------------------------------
insert /*+ monitor */ into DEMOGTT select n from DEMO where
n=DEMOSLOW(n)
 
Plan hash value: 4000794843
 
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:00.01 | 138 |
| 1 | LOAD TABLE CONVENTIONAL | DEMOGTT | 1 | | 0 |00:00:00.01 | 138 |
|* 2 | TABLE ACCESS FULL | DEMO | 1 | 1 | 3 |00:00:00.01 | 9 |
----------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter("N"="DEMOSLOW"("N"))

This is proof that DML restarts are not accounted within the same execution: the 'last' statistics are only from the last restart.

V$SQL accounts for only one execution but more than a thousand invalidations:

21:21:54 SQL> select executions,invalidations from v$sql where sql_id='3fk1hyq9vnuzx';
 
EXECUTIONS INVALIDATIONS
---------- -------------
1 1571

ASH shows the different SQL_EXEC_ID:

21:21:54 SQL> select sample_time,sql_id,sql_exec_id from v$active_session_history where sql_id='3fk1hyq9vnuzx' and sample_time>sysdate-10/24/60/60 order by sample_time desc;
 
SAMPLE_TIME SQL_ID SQL_EXEC_ID
--------------------------------------------------------------------------- ------------- -----------
24-NOV-16 09.21.53.773 PM 3fk1hyq9vnuzx 16778774
24-NOV-16 09.21.51.773 PM 3fk1hyq9vnuzx
24-NOV-16 09.21.48.773 PM 3fk1hyq9vnuzx 16778501
24-NOV-16 09.21.46.773 PM 3fk1hyq9vnuzx 16778396
24-NOV-16 09.21.45.773 PM 3fk1hyq9vnuzx 16778341

and SQL Monitoring sees different executions:

21:21:54 SQL> select sql_id,sql_exec_id,status,last_refresh_time,plan_operation,starts from v$sql_plan_monitor where sql_id='3fk1hyq9vnuzx' and plan_line_id=0 order by last_refresh_time desc fetch first 10 rows only;
 
SQL_ID SQL_EXEC_ID STATUS LAST_REFRESH_TI PLAN_OPERATION STARTS
------------- ----------- ------------------- --------------- ------------------------------ ----------
3fk1hyq9vnuzx 16778762 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778765 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778767 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778766 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778771 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778769 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778764 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778770 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778763 DONE 24-nov 21:21:53 INSERT STATEMENT 1
3fk1hyq9vnuzx 16778768 DONE 24-nov 21:21:53 INSERT STATEMENT 1

So those are the symptoms of DML restart. And my initial problem is still there:

  • This test case does not reproduce the issue on a GTT
  • My issue on the GTT had only one SQL_EXEC_ID with multiple ‘starts’ of the plan operations.

#sqlmon #puzzled
I expected to see '1' as 'executions for the first line of a plan,
Here I have executions with 1,12,19,64… ?!? pic.twitter.com/yHPZXDprgX

— Franck Pachot (@FranckPachot) November 21, 2016

Any comments greatly appreciated :)

 

Cet article Observing DML restarts caused by invalidations est apparu en premier sur Blog dbi services.

SQL Server on Linux in Azure

Thu, 2016-11-24 14:23

I’m pleased to announce that Microsoft now gives you the opportunity to choose Linux as an operating system for SQL Server.

To try this new version, you can find a new template in the Azure marketplace.

In the search bar, type “SQL Server vnext” and you’ll find the “SQL Server vNext on Red Hat Enterprise Linux 7.2” image.

Follow all the “classic” steps to create your new SQL Server on Linux VM:


First step, choose your “deployment mode”


After this first step, choose a name, a disk type, a login/password pair and the location.


The next step is to choose your storage:


In the last step, configure your options:


Wait a little bit for the provisioning and at the end, you’ll have your VM SQL Server on Linux ready to use:


 

Et voilà! My first SQL Server on Linux VM in the cloud! It’s easy and fast to create…

The next blog will show how to use it ;-)

 

Cet article SQL Server on Linux in Azure est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 1 – Restore points

Thu, 2016-11-24 11:03

When discussing PostgreSQL with customers we often hear that they can do things in other databases that they cannot do in PostgreSQL. Most of the time this is not true and you actually can do it in PostgreSQL. Maybe not in exactly the same way, but that is not surprising as PostgreSQL implements some features differently than other vendors do.

To start this series we’ll talk about restore points. Of course you can create restore points in PostgreSQL and then restore up to such a point in case you need to (e.g. after a failed schema or application upgrade, or just for testing purposes). Let’s go…

We’ll use the latest version of PostgreSQL which is 9.6.1 currently:

postgres@pgbox:/home/postgres/ [PG961] sqh
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit
(1 row)

Time: 47.119 ms
(postgres@[local]:5439) [postgres] > 

When we want to do point in time recovery we need to set up archiving. Without going into the details (as this is out of scope here), these are the parameters which need to be adjusted (if not already done):

(postgres@[local]:5439) [postgres] > alter system set wal_level = 'replica';
ALTER SYSTEM
Time: 28.056 ms
(postgres@[local]:5439) [postgres] > alter system set archive_command='test ! -f /u90/pgdata/PG961/%f && cp %p /u90/pgdata/PG961/%f'; 
ALTER SYSTEM
Time: 20.925 ms
(postgres@[local]:5439) [postgres] > alter system set archive_mode ='on';
ALTER SYSTEM
Time: 5.307 ms
(postgres@[local]:5439) [postgres] > select name,context from pg_settings where name in ('archive_mode','archive_command','wal_level');
      name       |  context   
-----------------+------------
 archive_command | sighup
 archive_mode    | postmaster
 wal_level       | postmaster
(3 rows)

Time: 1.460 ms

Be sure to restart your instance before you continue, as changing archive_mode and wal_level cannot be done online.
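Such a restart could, for example, be done with pg_ctl (a minimal sketch, assuming PGDATA points to your data directory as in the examples of this post):

pg_ctl -D $PGDATA restart -m fast

Once the instance is back up, make sure that your archive_command really succeeds: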

(postgres@[local]:5439) [postgres] > create database test1;
CREATE DATABASE
Time: 1705.539 ms
(postgres@[local]:5439) [postgres] > drop database test1;
DROP DATABASE
Time: 107.283 ms
(postgres@[local]:5439) [restore] > select pg_switch_xlog();
 pg_switch_xlog 
----------------
 0/22001798
(1 row)

Time: 214.216 ms
(postgres@[local]:5439) [postgres] > \! ls -l /u90/pgdata/PG961/
total 16384
-rw-------. 1 postgres postgres 16777216 Nov 24 17:34 000000020000000000000022

If you can not see an archived WAL segment after the last step, something went wrong. The next bit you need when you want to do point in time recovery with PostgreSQL is a base backup:

postgres@pgbox:/u02/pgdata/PG961/ [PG961] mkdir /u90/pgdata/PG961/basebackups
postgres@pgbox:/u02/pgdata/PG961/ [PG961] pg_basebackup -x -D /u90/pgdata/PG961/basebackups/
postgres@pgbox:/u02/pgdata/PG961/ [PG961] ls /u90/pgdata/PG961/basebackups/
backup_label  pg_commit_ts   pg_log        pg_replslot   pg_stat_tmp  PG_VERSION
base          pg_dynshmem    pg_logical    pg_serial     pg_subtrans  pg_xlog
global        pg_hba.conf    pg_multixact  pg_snapshots  pg_tblspc    postgresql.auto.conf
pg_clog       pg_ident.conf  pg_notify     pg_stat       pg_twophase  postgresql.conf

Fine. Let's generate some test data with this simple script:

(postgres@[local]:5439) [postgres] > \! cat a.sql
\c postgres
drop database if exists restore;
create database restore;
\c restore
create table t1 ( a int );
insert into t1 (a)
       values (generate_series(1,1000000));
select count(*) from t1;
\d t1

When you run this you’ll get a table (t1) containing 1 million rows:

(postgres@[local]:5439) [postgres] > \i a.sql
You are now connected to database "postgres" as user "postgres".
DROP DATABASE
Time: 114.000 ms
CREATE DATABASE
Time: 1033.245 ms
You are now connected to database "restore" as user "postgres".
CREATE TABLE
Time: 5.917 ms
INSERT 0 1000000
Time: 2226.599 ms
  count  
---------
 1000000
(1 row)

Time: 65.864 ms
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

Ok, fine. Now we are ready for testing restore points. Let's say you want to do some modifications to your table and, to be on the safe side, you want to create a restore point beforehand. No problem:

(postgres@[local]:5439) [postgres] > select pg_create_restore_point('RP1');
 pg_create_restore_point 
-------------------------
 0/28D50EF8
(1 row)

Time: 0.825 ms

Quite easy and fast. Now let's play with our table:

(postgres@[local]:5439) [restore] > select count(*) from t1;
  count  
---------
 1000010
(1 row)

Time: 66.214 ms
(postgres@[local]:5439) [restore] > \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

(postgres@[local]:5439) [restore] > alter table t1 add column b varchar(10);
ALTER TABLE
Time: 1.810 ms
(postgres@[local]:5439) [restore] > update t1 set b='b';
UPDATE 1000010
Time: 11004.972 ms
(postgres@[local]:5439) [restore] > drop table t1;
DROP TABLE
Time: 238.329 ms

Oops, table gone. How can we now go back to the restore point created above? Quite easily:

Shut down your instance and copy back the base backup:

postgres@pgbox:/u02/pgdata/PG961/ [PG961] rm -rf pg_xlog
postgres@pgbox:/u02/pgdata/PG961/ [PG961] cp -pr /u90/pgdata/PG961/basebackups/* $PGDATA
cp: cannot overwrite non-directory ‘/u02/pgdata/PG961/pg_xlog’ with directory ‘/u90/pgdata/PG961/basebackups/pg_xlog’
postgres@pgbox:/u02/pgdata/PG961/ [PG961] ln -s /u03/pgdata/PG961/ pg_xlog

Then create a recovery.conf file (for telling PostgreSQL to go into recovery mode when it comes up) and specify the restore point you created above:

postgres@pgbox:/home/postgres/ [PG961] echo "restore_command = 'cp /u90/pgdata/PG961/%f %p'
> recovery_target_name = 'RP1'" > $PGDATA/recovery.conf
postgres@pgbox:/home/postgres/ [PG961] cat $PGDATA/recovery.conf
restore_command = 'cp /u90/pgdata/PG961/%f %p'
recovery_target_name = 'RP1'
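Optionally you could also control what happens once the restore point is reached by adding recovery_target_action ('pause', 'promote' or 'shutdown') to the recovery.conf. For this demo the default behavior is fine, so the line below is only shown for reference:

recovery_target_action = 'promote'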

Start the instance and check the log file:

LOG:  database system was interrupted; last known up at 2016-11-24 17:36:28 CET
LOG:  creating missing WAL directory "pg_xlog/archive_status"
LOG:  starting point-in-time recovery to "RP1"

If everything went fine your table should be back without the additional column:

(postgres@[local]:5439) [restore] > \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

(postgres@[local]:5439) [restore] > select count(*) from t1;
  count  
---------
 1000000
(1 row)

Time: 82.797 ms

So, yes, you can definitely use restore points with PostgreSQL :)

If you want me to blog about any feature you are not sure is there in PostgreSQL let me know.

 

Cet article Can I do it with PostgreSQL? – 1 – Restore points est apparu en premier sur Blog dbi services.

Documentum story – Restrict the number of jpeg renditions

Thu, 2016-11-24 03:00

As already explained in this blog, we had to remove the PDF renditions for a customer and only keep the jpeg renditions on the ADTS side because the PDF was generated by another third-party tool. If you take a look at the dmr_content items attached to a specific document (parent_id), you will see that by default there is only one PDF content while there might be a lot more jpeg renditions. Why? The answer is pretty simple: the PDF rendition contains all the pages of the document while each jpeg rendition covers only a single page… Therefore if the ADTS processes a document of 25 pages, it will create 25 jpeg renditions. Actually it's even more than that… By default the ADTS generates two types/formats of jpeg renditions for each page, which doubles the total number of renditions. Here are the two formats available by default:

  • jpeg_lres: (Low Resolution) a real-size preview of the page in low resolution
  • jpeg_story: (Storyboard) a reduced-size preview of the page on which it is quite hard to read what's written…

 

jpeg_lres is the format used by the Thumbnail Server and also by D2 for the preview widget. On the other hand, jpeg_story isn't used at all in D2 4.5. According to EMC, it *might* have been used by D2 4.1 and previous versions but they aren't sure about it… ;)

 

Now that this has been said, let's go back to the title of this blog. Because of this behavior of the ADTS, it might happen that one day you will see hundreds or even thousands of dmr_content items deleted. You might think that something is wrong, that the cleanup jobs deleted too many objects or something like that… So if this happens to you, please take a look at the format of these dmr_content objects! Several months ago, the cleanup jobs were inactive for a few weeks because of a bug and when we reactivated them, this happened to us: we finally found out that 95% of these items were only jpeg renditions and that this was actually the expected behavior!
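If you want to quickly check which formats make up the bulk of the dmr_content objects in your repository, a simple DQL grouping helps (just a sketch, to be executed for example with the ?,c, syntax used in the iapi sessions below):

?,c,select full_format, count(*) from dmr_content group by full_format

This immediately tells you whether you are looking mostly at jpeg renditions or at real primary content.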

 

After that, we started thinking about how we should handle these jpeg renditions for really big documents. Having the preview of the documents available in D2 is great, but is it really needed? Generating a preview of the first page or of the first 10 pages might make sense, but would it make sense to generate a preview for each page of a document bigger than 10 pages? 100 pages? 1 000 pages? These previews are used in D2 and you need to move from one page to the next starting with page 1. If you absolutely want to see the preview of page *364* in D2, you will need 20 minutes just to reach that page… I think downloading the document is a little bit faster ;).

 

So is it possible to only generate previews for a few pages and set up a maximum number of jpeg renditions per document? The short answer is: yes! And that's the purpose of this blog, wonderful, isn't it?!

 

First check out the configuration file that will need to be updated:

[dmadmin@content_server_01 workspace]$ iapi DOCBASE -Udmadmin -Pxxx


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2015
        All rights reserved.
        Client Library Release 7.2.0050.0084


Connecting to Server using docbase DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 013f245a802173d6 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0050.0214  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_document where folder('/System/Media Server/Command Line Files') and object_name = 'storyboard_pdfstoryboard.xml'
...
093f245a801c9075
API> checkout,c,l
...
093f245a801c9075
API> getfile,c,l,/tmp/workspace/storyboard_pdfstoryboard.xml
...
/tmp/workspace/storyboard_pdfstoryboard.xml
API> flushcache,c
...
OK

 

Then retrieve a test document that will be used to see how it is working:

API> retrieve,c,dm_document where r_object_id='093f245a801a8f56'
...
093f245a801a8f56
API> getfile,c,l,/tmp/workspace/TestRenditions.docx
...
/tmp/workspace/TestRenditions.docx

 

Ok, so now the two files are stored locally. I retrieved the content of the file TestRenditions.docx in order to be able to regenerate the renditions later on, as you will see. So let's check how many renditions this document currently has:

API> ?,c,select r_object_id, full_format, parent_id, content_size, full_content_size, set_time, set_file from dmr_content where any parent_id='093f245a801a8f56'
r_object_id       full_format         parent_id         content_size  full_content_size       set_time                   set_file                                                                              
----------------  ------------------  ----------------  ------------  ----------------------  -------------------------  ---------------------------------------------------------------------------------------
063f245a801d95cc  jpeg_lres           093f245a801a8f56         60467                   60467  7/4/2016 13:22:39          C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile353776780929575961.tar
063f245a801d95cd  jpeg_lres           093f245a801a8f56        138862                  138862  7/4/2016 13:22:39          C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile353776780929575961.tar
063f245a801d95ce  jpeg_lres           093f245a801a8f56         29596                   29596  7/4/2016 13:22:39          C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile353776780929575961.tar
.....
063f245a8024b99e  jpeg_story          093f245a801a8f56          3392                    3392  7/4/2016 13:22:39          C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile7193259325098763580.tar
063f245a8024b99f  jpeg_story          093f245a801a8f56          4718                    4718  7/4/2016 13:22:39          C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile7193259325098763580.tar
063f245a8024b9a0  jpeg_story          093f245a801a8f56          1567                    1567  7/4/2016 13:22:39          C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile7193259325098763580.tar
.....
063f245a8024b622  msw12               093f245a801a8f56         90535                   90535  7/4/2016 13:22:30          /app/weblogic/tmp/DOCBASE/msD2-01/DefaultFileRenamePolicy.rename8572010254156028259.docx
(52 rows affected)

API> exit
Bye

 

As explained previously in this blog, the document “TestRenditions.docx” (093f245a801a8f56) has 25 pages and therefore there are 25 (jpeg_lres) + 25 (jpeg_story) + 1 (pdf) + 1 (real document) = 52 dmr_content items attached to it and therefore 51 renditions. Now let’s see the content of the configuration file and what to do to change the number of renditions we need:

[dmadmin@content_server_01 workspace]$ pwd
/tmp/workspace
[dmadmin@content_server_01 workspace]$ cat storyboard_pdfstoryboard.xml
<PDFSTORYBOARD_MP_PROPERTIES>
    <FORMAT name="JPEG">
        <PROP name="Format" type="string">JPEG</PROP>
        <PROP name="Width" type="unsigned long" token="doc_token_width">200</PROP>
        <PROP name="Height" type="unsigned long" token="doc_token_height">200</PROP>
        <PROP name="Dpi" type="unsigned long" token="doc_token_dpi">72</PROP>
        <PROP name="KeepRatio" type="boolean">true</PROP>
        <PROP name="Password" type="string">your_password_be_here</PROP>
        <PROP name="Max Pages" type="unsigned long" token="doc_token_maxPages">-1</PROP>
        <PROP name="Frames Requested" type="unsigned long" token="doc_token_frames_requested">-1</PROP>
    </FORMAT>
</PDFSTORYBOARD_MP_PROPERTIES>
[dmadmin@content_server_01 workspace]$ 
[dmadmin@content_server_01 workspace]$ sed -i 's/doc_token_maxPages">-1</doc_token_maxPages">1</' storyboard_pdfstoryboard.xml
[dmadmin@content_server_01 workspace]$ 
[dmadmin@content_server_01 workspace]$ cat storyboard_pdfstoryboard.xml
<PDFSTORYBOARD_MP_PROPERTIES>
    <FORMAT name="JPEG">
        <PROP name="Format" type="string">JPEG</PROP>
        <PROP name="Width" type="unsigned long" token="doc_token_width">200</PROP>
        <PROP name="Height" type="unsigned long" token="doc_token_height">200</PROP>
        <PROP name="Dpi" type="unsigned long" token="doc_token_dpi">72</PROP>
        <PROP name="KeepRatio" type="boolean">true</PROP>
        <PROP name="Password" type="string">your_password_be_here</PROP>
        <PROP name="Max Pages" type="unsigned long" token="doc_token_maxPages">1</PROP>
        <PROP name="Frames Requested" type="unsigned long" token="doc_token_frames_requested">-1</PROP>
    </FORMAT>
</PDFSTORYBOARD_MP_PROPERTIES>

 

As you can see above, I just changed the value assigned to the “doc_token_maxPages” from -1 (unlimited) to 1 (1 page) and that should be it! To apply this change, we need to check in the storyboard_pdfstoryboard.xml file:

[dmadmin@content_server_01 workspace]$ iapi DOCBASE -Udmadmin -Pxxx


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2015
        All rights reserved.
        Client Library Release 7.2.0050.0084


Connecting to Server using docbase DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 013f245a80217406 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0050.0214  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_document where folder('/System/Media Server/Command Line Files') and object_name = 'storyboard_pdfstoryboard.xml'
...
093f245a801c9075
API> setfile,c,l,/tmp/workspace/storyboard_pdfstoryboard.xml
...
OK
API> checkin,c,l
...
093f245a8027254e
API> flushcache,c
...
OK

 

Once this is done, we can remove all current renditions of this document (we saw above pdf, jpeg_lres and jpeg_story renditions) so that only the docx/msw12 file remains:

API> retrieve,c,dm_document where r_object_id='093f245a801a8f56'
...
093f245a801a8f56
API> removerendition,c,l,pdf
...
OK
API> save,c,l
...
OK
API> flushcache,c
...
OK
API> removerendition,c,l,jpeg_lres
...
OK
API> save,c,l
...
OK
API> flushcache,c
...
OK
API> removerendition,c,l,jpeg_story
...
OK
API> save,c,l
...
OK
API> flushcache,c
...
OK
API> ?,c,select r_object_id, full_format, parent_id, content_size, full_content_size, set_time, set_file from dmr_content where any parent_id='093f245a801a8f56'
r_object_id       full_format         parent_id         content_size  full_content_size       set_time                   set_file                                                                              
----------------  ------------------  ----------------  ------------  ----------------------  -------------------------  ---------------------------------------------------------------------------------------
063f245a8024b622  msw12               093f245a801a8f56         90535                   90535  7/4/2016 13:22:30          /app/weblogic/tmp/DOMAIN/msD2-01/DefaultFileRenamePolicy.rename8572010254156028259.docx
(1 row affected)

 

The last step is to request the recreation of these renditions (using a setfile), wait 20 seconds or so and then check how many renditions have been recreated:

API> retrieve,c,dm_document where r_object_id='093f245a801a8f56'
...
093f245a801a8f56
API> setfile,c,l,/tmp/workspace/TestRenditions.docx
...
OK
API> save,c,l
...
OK
API> ?,c,select r_object_id, full_format, parent_id, content_size, full_content_size, set_time, set_file from dmr_content where any parent_id='093f245a801a8f56';
r_object_id       full_format         parent_id         content_size  full_content_size       set_time                   set_file                                                                              
----------------  ------------------  ----------------  ------------  ----------------------  -------------------------  ---------------------------------------------------------------------------------------
063f245a8024b622  msw12               093f245a801a8f56         90535                   90535  11/13/2016 12:49:48        /tmp/workspace/TestRenditions.docx
063f245a8024b957  jpeg_lres           093f245a801a8f56         60467                   60467  11/13/2016 12:49:56        C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile1116397023537525059.tar
063f245a8024b958  jpeg_story          093f245a801a8f56          3392                    3392  11/13/2016 12:49:57        C:\Users\SYS_AD~1\AppData\Local\Temp\batchFile6054850753334521340.tar
(3 rows affected)

 

As you can see, there is now only one jpeg rendition per format, and it covers the first page only, so that’s a success! If you want to keep only X jpeg renditions per document, you now know how to do it :)
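If you want to double-check the result for another document, a simple DQL aggregation counts the dmr_content items per format. This is only a sketch (the r_object_id below is the one of my test document, adapt it to yours):

API> ?,c,select full_format, count(*) from dmr_content where any parent_id='093f245a801a8f56' group by full_format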

 

Cet article Documentum story – Restrict the number of jpeg renditions est apparu en premier sur Blog dbi services.

MongoDB Installation on Windows

Wed, 2016-11-23 08:15

In this blog post, we will show how to install MongoDB on Windows.

Download installer

First, go to the MongoDB download center: https://www.mongodb.com/download-center#community. According to your Windows Server version, click on the green button and download the installer (.msi).

In our example, we will use the MongoDB Community version for Windows Server 2008 and later.

mongodb1

Important: MongoDB is available exclusively for 64-bit Windows.

To check the Windows version you are running:

mongodb2

 

Install MongoDB

Get the installer:

mongodb3

Now, follow the next installation steps.

 

mongodb4

 

Accept the terms.

mongodb5

 

You can choose the installation method. For a customized installation, use the “Custom” option: it lets you choose, for instance, the installation drive and location.

mongodb6

 

 

After choosing your installation path, click on next.

mongodb8

 

Your installation is now finished.

Check the installation from the command line (PowerShell):

cd "C:\Program Files\MongoDB\Server\3.2\bin"
PS C:\Program Files\MongoDB\Server\3.2\bin> ls


 Directory: C:\Program Files\MongoDB\Server\3.2\bin


Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 11/18/2016 5:45 PM 7145663 bsondump.exe
-a--- 9/2/2016 10:43 PM 1991680 libeay32.dll
-a--- 11/18/2016 5:52 PM 9494016 mongo.exe
-a--- 11/23/2016 12:20 PM 321 mongod.cfg
-a--- 11/23/2016 12:19 PM 325 mongod.cs
-a--- 11/18/2016 5:56 PM 19520512 mongod.exe
-a--- 11/18/2016 5:56 PM 161075200 mongod.pdb
-a--- 11/18/2016 5:48 PM 9228030 mongodump.exe
-a--- 11/18/2016 5:47 PM 7393229 mongoexport.exe
-a--- 11/18/2016 5:46 PM 7318208 mongofiles.exe
-a--- 11/18/2016 5:47 PM 7432777 mongoimport.exe
-a--- 11/18/2016 5:48 PM 7154618 mongooplog.exe
-a--- 11/18/2016 5:57 PM 16767488 mongoperf.exe
-a--- 11/18/2016 5:47 PM 10548404 mongorestore.exe
-a--- 11/18/2016 5:56 PM 7959040 mongos.exe
-a--- 11/18/2016 5:56 PM 85716992 mongos.pdb
-a--- 11/18/2016 5:46 PM 7401410 mongostat.exe
-a--- 11/18/2016 5:48 PM 7287874 mongotop.exe
-a--- 9/2/2016 10:43 PM 321536 ssleay32.dll
PS C:\Program Files\MongoDB\Server\3.2\bin>

Setup the MongoDB Environment

MongoDB can be set up in two ways:

  • Command line options
  • Configuration file options

We will use the configuration file method.

Create the following MongoDB data directory:

C:\mongodb\data\

You can also create a directory per MongoDB instance in order to separate the data files between instances:

C:\mongodb\data\MDBx

Create the log directory:

C:\mongodb\logs\

As for the data files, the logs should be separated between instances:

C:\mongodb\logs\MDBx
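For a first instance called MDB1 (the instance name is just an example), the directories can be created from PowerShell like this:

PS C:\> New-Item -ItemType Directory -Force -Path C:\mongodb\data\MDB1
PS C:\> New-Item -ItemType Directory -Force -Path C:\mongodb\logs\MDB1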

 

MongoDB configuration file:

To run MongoDB with your own configuration parameters, create a MongoDB configuration file and use it when you start the mongod process.

Insert the following lines in your configuration file:

#MongoDB Configuration file example
dbpath = C:\mongodb\data\MDB1
port = 27001
logpath = C:\mongodb\logs\MDB1\mongod.log
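Note that MongoDB also accepts a YAML configuration format (supported since version 2.6). The same settings would look roughly like the sketch below; we will stick to the format above for the rest of this post:

# equivalent YAML configuration (sketch)
storage:
  dbPath: C:\mongodb\data\MDB1
net:
  port: 27001
systemLog:
  destination: file
  path: C:\mongodb\logs\MDB1\mongod.log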

Save the file with the .cfg extension (full path) and start the process from the command prompt:

 

.\mongod.exe -f mongod.cfg

mongodb10

Connect to the mongo shell:

PS C:\Program Files\MongoDB\Server\3.2\bin> .\mongo.exe --host 127.0.0.1:27001
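Once connected, a couple of quick checks in the shell confirm that the instance answers on the configured port (the output will depend on your environment):

> db.version()
> show dbs
> exit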

 

Create a MongoDB Windows service:

Install a Windows service with the following options:

PS C:\Program Files\MongoDB\Server\3.2\bin> .\mongod.exe -f "C:\Program Files\MongoDB\Server\3.2\bin\mongod.cfg" --install --serviceName mdb1

A new service is now created:

mongodb11
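The service can now be managed like any other Windows service, for instance from PowerShell (mdb1 is the service name given with --serviceName above):

PS C:\> Start-Service mdb1
PS C:\> Get-Service mdb1
PS C:\> Stop-Service mdb1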

 

Cet article MongoDB Installation on Windows est apparu en premier sur Blog dbi services.

12cR2: Upgrade by remote clone with TDE in DBaaS

Wed, 2016-11-23 00:20

Upgrading from 12.1 to 12.2 is easy in Oracle Public Cloud DBaaS because you are in multitenant. Here is how to clone a 12.1 PDB to 12.2 service.

I have a service HP121 in 12.1.0.2 with one pluggable database PDB1, and a service HP122 in 12.2.0.1 with an empty CDB (only the CDB$ROOT and PDB$SEED containers).

Export TDE key

The Oracle Public Cloud uses Transparent Data Encryption to secure the datafiles. When you move a pluggable database, you need to export/import its encryption keys.

Here is the key:

18:42:58 HP121 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- ---------------------------------------- ---------
FILE /u01/app/oracle/admin/HP121/tde_wallet/ AUTOLOGIN
 
18:42:58 HP121 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';
 
KEY_ID
------------------------------------------------------------------------------
AQqCc8XWV09uvxkaw0Bm5XUAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

The instance uses an auto-login wallet and you cannot export the keys from that:

18:42:58 HP121 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

You need to open it with the password:

18:42:58 HP121 SQL>administer key management set keystore close;
keystore altered.
 
18:42:58 HP121 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
 
18:42:58 HP121 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- ---------------------------------------- ---------
FILE /u01/app/oracle/admin/HP121/tde_wallet/ PASSWORD

And then you can export it:

18:42:58 HP121 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

Import TDE key

I copy the file /tmp/cdb2pdb1.p12 to the destination with scp (see the sketch below) and then I can import it, giving the same ‘secret’ identifier. Here again, I have to open the wallet with the password, because the keys cannot be imported while it is opened in auto-login mode.
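The copy itself could look like this (a sketch, assuming ssh is already set up between the two VMs and the 12.2 host is reachable as HP122):

[oracle@HP121 ~]$ scp /tmp/cdb2pdb1.p12 HP122:/tmp/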

18:43:04 HP122 SQL>administer key management set keystore close;
keystore altered.
18:43:04 HP122 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
18:43:04 HP122 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

Database link

We need to create a database link to the source (don’t forget to open the port for the listener):

18:43:04 HP122 SQL>select dbms_tns.resolve_tnsname('//HP121/HP121.demnov.oraclecloud.internal') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('//HP121/HP121.DEMNOV.ORACLECLOUD.INTERNAL')
--------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=HP121.demnov.oraclecloud.internal)(CID=
(PROGRAM=oracle)(HOST=HP122.compute-demnov.oraclecloud.internal)(USER=oracle)))(
ADDRESS=(PROTOCOL=TCP)(HOST=10.196.202.47)(PORT=1521)))
 
18:43:04 HP122 SQL>create database link HP121@HP121 connect to system identified by "Ach1z0#d" using '//HP121/HP121.demnov.oraclecloud.internal';
Database link created.
 
18:43:04 HP122 SQL>select host_name from v$instance@HP121@HP121;
 
HOST_NAME
----------------------------------------------------------------
HP121.compute-demnov.oraclecloud.internal

Remote clone

You need to have the source PDB1 opened read-only, and the cloning is only one command:

18:43:09 HP122 SQL>create pluggable database PDB1 from PDB1@HP121@HP121 keystore identified by "Ach1z0#d";
Pluggable database created.

Upgrade

Now that you have the PDB, you can open it (because you have imported the TDE key), but the dictionary is still in 12.1, so you have to run:

[oracle@HP122 ~]$ dbupgrade -c PDB1

This is described in a previous post: http://blog.dbi-services.com/12cr2-how-long-to-upgrade-a-pdb/

 

Cet article 12cR2: Upgrade by remote clone with TDE in DBaaS est apparu en premier sur Blog dbi services.

12cR2: Upgrade by remote clone – workaround ORA-17630 in DBaaS

Sun, 2016-11-20 13:42

Easier than unplug/plug, you can move pluggable databases with remote cloning. It’s the same idea, but you don’t have to manage the files yourself: they are shipped through the database link. However, this uses the ‘remote file protocol’ and it fails on a version mismatch:
ORA-17628: Oracle error 17630 returned by remote Oracle server
ORA-17630: Mismatch in the remote file protocol version client server

Remote cloning

I’ll describe the full operation of remote cloning in a future post. This is the error I got when I tried to remote clone from 12.1 to 12.2:
13:43:55 HP122A SQL> create pluggable database PDB1 from PDB1@HP121A@HP121A keystore identified by "Ach1z0#d" relocate;
create pluggable database PDB1 from PDB1@HP121A@HP121A keystore identified by "Ach1z0#d"
*
ERROR at line 1:
ORA-17628: Oracle error 17630 returned by remote Oracle server
ORA-17630: Mismatch in the remote file protocol version client server

Alert.log in target

The error is received from the remote side. There is not a lot in the local alert.log:
create pluggable database PDB1 from PDB1@HP121A@HP121A keystore identified by * relocate
Errors in file /u01/app/oracle/diag/rdbms/hp122a/HP122A/trace/HP122A_ora_29385.trc:
ORA-17628: Oracle error 17630 returned by remote Oracle server
ORA-17630: Mismatch in the remote file protocol version client server

Alert.log in source

More information about versions in the remote alert.log:
Errors in file /u01/app/oracle/diag/rdbms/hp121a/HP121A/trace/HP121A_ora_21344.trc:
ORA-17630: Mismatch in the remote file protocol version client 3 server 2

Patch

Fortunately, remote file protocol version mismatches have already been a problem in previous versions with other features that transport files, and a patch exists to bypass this version check:

Patch 18633374: COPYING ACROSS REMOTE SERVERS: ASMCMD-8016, ORA-17628, ORA-17630, ORA-06512

And you can download it at https://updates.oracle.com/download/18633374.html

My 12.1.0.2 DBaaS has the July 2016 PSU applied, as well as a merge of patches specific to the cloud:
[oracle@HP121 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
19469538;
24310028;
22366322;
20475845;
18043064;
21132297;
23177536;Database PSU 12.1.0.2.160719, Oracle JavaVM Component (JUL2016)
23054246;Database Patch Set Update : 12.1.0.2.160719 (23054246)

Today, there is no patch to download for this configuration. There is one for the April 2016 PSU, but it still conflicts with patch 24310028.

To be able to continue, I removed the following patches from the 12.1 source:
[oracle@HP121 ~]$ $ORACLE_HOME/OPatch/opatch rollback -id 23177536
[oracle@HP121 ~]$ $ORACLE_HOME/OPatch/opatch rollback -id 24310028

But then the parameter “encrypt_new_tablespaces”, which was introduced by 24310028, is unknown:

SQL> startup
ORA-01078: failure in processing system parameters
LRM-00101: unknown parameter name 'encrypt_new_tablespaces'

You have to remove this parameter from the SPFILE. Basically, it forces TDE when running in the cloud, even when encryption is not specified in the DDL.
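Since the instance cannot even start with the unknown parameter, one way to remove it is to go through a pfile. This is only a sketch and the pfile path is just an example:

SQL> create pfile='/tmp/init.ora' from spfile;
-- edit /tmp/init.ora, remove the encrypt_new_tablespaces line, then:
SQL> create spfile from pfile='/tmp/init.ora';
SQL> startup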

So what?

I hope this patch will be included in future DBaaS versions. Currently, the Oracle Public Cloud has no simple button to upgrade a service from 12.1 to 12.2, and the easiest way to do it should be remote cloning of PDBs. But with this version mismatch and the patch to apply, unplug/plug is probably easier.

 

Cet article 12cR2: Upgrade by remote clone – workaround ORA-17630 in DBaaS est apparu en premier sur Blog dbi services.

12cR2: How long to upgrade a PDB?

Sun, 2016-11-20 09:39

In the previous post I described how simple it is to unplug a PDB and plug it into a newer version CDB. One goal of dictionary separation in the multitenant architecture is to keep system objects in CDB$ROOT only. Knowing that an upgrade does not touch the application metadata and data, does this make a PDB upgrade as fast as a simple refresh of metadata links?

CDB$ROOT upgrade

As a point of comparison I’ve run an upgrade on an empty CDB from 12.1.0.2 to 12.2.0.1 and here is the summary:

Oracle Database 12.2 Post-Upgrade Status Tool 11-19-2016 14:04:51
[CDB$ROOT]  
Component Current Version Elapsed Time
Name Status Number HH:MM:SS
 
Oracle Server UPGRADED 12.2.0.1.0 00:11:19
JServer JAVA Virtual Machine UPGRADED 12.2.0.1.0 00:04:29
Oracle Real Application Clusters UPGRADED 12.2.0.1.0 00:00:00
Oracle Workspace Manager UPGRADED 12.2.0.1.0 00:00:41
OLAP Analytic Workspace UPGRADED 12.2.0.1.0 00:00:14
Oracle OLAP API UPGRADED 12.2.0.1.0 00:00:08
Oracle Label Security UPGRADED 12.2.0.1.0 00:00:05
Oracle XDK UPGRADED 12.2.0.1.0 00:01:01
Oracle Text UPGRADED 12.2.0.1.0 00:00:31
Oracle XML Database UPGRADED 12.2.0.1.0 00:01:33
Oracle Database Java Packages UPGRADED 12.2.0.1.0 00:00:07
Oracle Multimedia UPGRADED 12.2.0.1.0 00:01:22
Spatial UPGRADED 12.2.0.1.0 00:04:46
Oracle Application Express VALID 5.0.0.00.31 00:00:02
Oracle Database Vault UPGRADED 12.2.0.1.0 00:00:15
Final Actions 00:01:50
Post Upgrade 00:00:12
 
Total Upgrade Time: 00:29:17 [CDB$ROOT]

This was running on an Oracle Public Cloud DBaaS with two OCPUs, which means four threads. It takes about 30 minutes to upgrade the system dictionary and all components.
Those are the times we are used to. Since 12c, some operations are parallelized to make it faster than in previous versions.

The more components you install, the longer it takes. Even if it is recommended to install all components in a CDB in case a PDB needs them, this is something to think about.
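To see which components are installed in a given container, and therefore how much work the upgrade will have to do, you can query dba_registry, for example:

SQL> select comp_id, version, status from dba_registry order by comp_id;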

PDB upgrade

When you plug a PDB, you should not have all this work to do. You can expect that the metadata links and data links just work, now pointing to the new version. At most, a quick check or refresh may be necessary to ensure that object types did not change.

At UKOUG TECH16, in “12c Multitenant: Not a Revolution, Just an Evolution”, I demo how those links work internally and I show that running a full catupgrd.sql on each container should not be required. However, the dbupgrade script runs it. Let’s see if it is optimized for pluggable databases.

In 12.2 the command is easy:

[oracle@HP122A tmp]$ $ORACLE_HOME/bin/dbupgrade -c PDB1

You can see that it runs the catctl.pl command that we used in 12.1:

Start processing of PDB1
[/u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catctl.pl -c 'PDB1' -I -i pdb1 -n 2 -l /home/oracle /u01/app/oracle/product/12.2.0/dbhome_1/rdbms/admin/catupgrd.sql]

Here is what will be run.

Number of Cpus = 2
Database Name = HP122A
DataBase Version = 12.2.0.1.0
Generated PDB Inclusion:[PDB1]
CDB$ROOT Open Mode = [OPEN]
Components in [PDB1]
  Installed [APEX APS CATALOG CATJAVA CATPROC CONTEXT DV JAVAVM OLS ORDIM OWM SDO XDB XML XOQ]
  Not Installed [EM MGW ODM RAC WK]

Summary is here:

Oracle Database 12.2 Post-Upgrade Status Tool 11-19-2016 15:25:15
[PDB1]  
Component Current Version Elapsed Time
Name Status Number HH:MM:SS
 
Oracle Server UPGRADED 12.2.0.1.0 00:08:59
JServer JAVA Virtual Machine UPGRADED 12.2.0.1.0 00:02:16
Oracle Real Application Clusters UPGRADED 12.2.0.1.0 00:00:00
Oracle Workspace Manager UPGRADED 12.2.0.1.0 00:00:27
OLAP Analytic Workspace UPGRADED 12.2.0.1.0 00:00:22
Oracle OLAP API UPGRADED 12.2.0.1.0 00:00:07
Oracle Label Security UPGRADED 12.2.0.1.0 00:00:03
Oracle XDK UPGRADED 12.2.0.1.0 00:00:40
Oracle Text UPGRADED 12.2.0.1.0 00:00:18
Oracle XML Database UPGRADED 12.2.0.1.0 00:01:25
Oracle Database Java Packages UPGRADED 12.2.0.1.0 00:00:03
Oracle Multimedia UPGRADED 12.2.0.1.0 00:01:13
Oracle Application Express VALID 5.0.0.00.31 00:00:02
Oracle Database Vault UPGRADED 12.2.0.1.0 00:00:40
Final Actions 00:01:49
Post Upgrade 00:01:17
 
Total Upgrade Time: 00:23:55 [PDB1]  
Database time zone version is 18. It is older than current release time
zone version 26. Time zone upgrade is needed using the DBMS_DST package.
 
Grand Total Upgrade Time: [0d:0h:25m:0s]

When you compare with the CDB$ROOT upgrade, the gain is very small. We saved 25% of the Oracle Server time, and JVM and XDK were about twice as fast. But in the end, that’s only 5 minutes.

It is important to understand that the upgrade time depends on the components installed. Here is the percentage of time per component:

CapturedbupgradePDB

For the core of the database, what we know as catalog/catproc, here is the detail showing which phases are run in parallel.
Note that the phase number is important because, in 12.2, you can restart a failed upgrade from where it stopped.


------------------------------------------------------
Phases [0-117]          Start Time:[2016_11_19 15:00:37]
Container Lists Inclusion:[PDB1] Exclusion:[NONE]
------------------------------------------------------
*********** Executing Change Scripts ***********
Serial Phase #:0 [PDB1] Files:1 Time: 36s
*************** Catalog Core SQL ***************
Serial Phase #:1 [PDB1] Files:5 Time: 39s
Restart Phase #:2 [PDB1] Files:1 Time: 1s
*********** Catalog Tables and Views ***********
Parallel Phase #:3 [PDB1] Files:19 Time: 23s
Restart Phase #:4 [PDB1] Files:1 Time: 0s
************* Catalog Final Scripts ************
Serial Phase #:5 [PDB1] Files:6 Time: 15s
***************** Catproc Start ****************
Serial Phase #:6 [PDB1] Files:1 Time: 12s
***************** Catproc Types ****************
Serial Phase #:7 [PDB1] Files:2 Time: 9s
Restart Phase #:8 [PDB1] Files:1 Time: 0s
**************** Catproc Tables ****************
Parallel Phase #:9 [PDB1] Files:70 Time: 48s
Restart Phase #:10 [PDB1] Files:1 Time: 1s
************* Catproc Package Specs ************
Serial Phase #:11 [PDB1] Files:1 Time: 12s
Restart Phase #:12 [PDB1] Files:1 Time: 1s
************** Catproc Procedures **************
Parallel Phase #:13 [PDB1] Files:97 Time: 8s
Restart Phase #:14 [PDB1] Files:1 Time: 1s
Parallel Phase #:15 [PDB1] Files:118 Time: 11s
Restart Phase #:16 [PDB1] Files:1 Time: 1s
Serial Phase #:17 [PDB1] Files:13 Time: 3s
Restart Phase #:18 [PDB1] Files:1 Time: 1s
***************** Catproc Views ****************
Parallel Phase #:19 [PDB1] Files:33 Time: 25s
Restart Phase #:20 [PDB1] Files:1 Time: 0s
Serial Phase #:21 [PDB1] Files:3 Time: 8s
Restart Phase #:22 [PDB1] Files:1 Time: 1s
Parallel Phase #:23 [PDB1] Files:24 Time: 82s
Restart Phase #:24 [PDB1] Files:1 Time: 1s
Parallel Phase #:25 [PDB1] Files:11 Time: 42s
Restart Phase #:26 [PDB1] Files:1 Time: 0s
Serial Phase #:27 [PDB1] Files:1 Time: 0s
Serial Phase #:28 [PDB1] Files:3 Time: 5s
Serial Phase #:29 [PDB1] Files:1 Time: 0s
Restart Phase #:30 [PDB1] Files:1 Time: 0s
*************** Catproc CDB Views **************
Serial Phase #:31 [PDB1] Files:1 Time: 2s
Restart Phase #:32 [PDB1] Files:1 Time: 1s
Serial Phase #:34 [PDB1] Files:1 Time: 0s
***************** Catproc PLBs *****************
Serial Phase #:35 [PDB1] Files:283 Time: 17s
Serial Phase #:36 [PDB1] Files:1 Time: 0s
Restart Phase #:37 [PDB1] Files:1 Time: 0s
Serial Phase #:38 [PDB1] Files:1 Time: 3s
Restart Phase #:39 [PDB1] Files:1 Time: 1s
*************** Catproc DataPump ***************
Serial Phase #:40 [PDB1] Files:3 Time: 49s
Restart Phase #:41 [PDB1] Files:1 Time: 1s
****************** Catproc SQL *****************
Parallel Phase #:42 [PDB1] Files:13 Time: 51s
Restart Phase #:43 [PDB1] Files:1 Time: 0s
Parallel Phase #:44 [PDB1] Files:12 Time: 8s
Restart Phase #:45 [PDB1] Files:1 Time: 1s
Parallel Phase #:46 [PDB1] Files:2 Time: 2s
Restart Phase #:47 [PDB1] Files:1 Time: 1s
************* Final Catproc scripts ************
Serial Phase #:48 [PDB1] Files:1 Time: 5s
Restart Phase #:49 [PDB1] Files:1 Time: 1s
************** Final RDBMS scripts *************
Serial Phase #:50 [PDB1] Files:1 Time: 16s

In the summary, when we compare with the CDB$ROOT upgrade, we don’t see the Spatial part that took 4 minutes there, but we do see it in the phase detail:

***************** Upgrading SDO ****************
Restart Phase #:81 [PDB1] Files:1 Time: 1s
Serial Phase #:83 [PDB1] Files:1 Time: 23s
Serial Phase #:84 [PDB1] Files:1 Time: 4s
Restart Phase #:85 [PDB1] Files:1 Time: 1s
Serial Phase #:86 [PDB1] Files:1 Time: 5s
Restart Phase #:87 [PDB1] Files:1 Time: 0s
Parallel Phase #:88 [PDB1] Files:3 Time: 110s
Restart Phase #:89 [PDB1] Files:1 Time: 0s
Serial Phase #:90 [PDB1] Files:1 Time: 4s
Restart Phase #:91 [PDB1] Files:1 Time: 1s
Serial Phase #:92 [PDB1] Files:1 Time: 4s
Restart Phase #:93 [PDB1] Files:1 Time: 0s
Parallel Phase #:94 [PDB1] Files:4 Time: 30s
Restart Phase #:95 [PDB1] Files:1 Time: 0s
Serial Phase #:96 [PDB1] Files:1 Time: 3s
Restart Phase #:97 [PDB1] Files:1 Time: 1s
Serial Phase #:98 [PDB1] Files:1 Time: 22s
Restart Phase #:99 [PDB1] Files:1 Time: 0s
Serial Phase #:100 [PDB1] Files:1 Time: 3s
Restart Phase #:101 [PDB1] Files:1 Time: 1s
Serial Phase #:102 [PDB1] Files:1 Time: 2s
Restart Phase #:103 [PDB1] Files:1 Time: 1s

So what?

From what we see, the multitenant architecture, which consolidates the system dictionary in only one place (the CDB$ROOT), brings no gain for upgrades. In the current implementation (12.2.0.1), the same work is done in all containers, with only minimal optimization for pluggable databases, where we have metadata links instead of full object metadata.
In summary:

  • Upgrading by plug-in or remote clone is faster than upgrading the whole CDB, because the CDB has more containers, such as PDB$SEED
  • But upgrading a single PDB, whatever the method is, is not faster than upgrading a non-CDB

I’m talking about upgrade of the container here. Transportable tablespaces/database is a different thing.

More about the multitenant internals, dictionary separation, metadata links and data links (called object links in 12.1) at the UKOUG TECH 2016 conference next month.

CaptureUpgradePres

 

Cet article 12cR2: How long to upgrade a PDB? est apparu en premier sur Blog dbi services.

12cR2: Upgrade by unplug/plug in the Oracle Cloud Service

Sat, 2016-11-19 15:01

12.2 is available in the Oracle Public Cloud DBaaS. If you have a 12.1 DBaaS service, there is no button to upgrade it. I’ll describe all the possible upgrade procedures; the first one, and the simplest, is to create a new DBaaS service in 12.2 and unplug/plug the PDBs into it.

Here is my DBaaS in 12.1

[oracle@HP121A ~]$ sqlplus / as sysdba
 
SQL*Plus: Release 12.1.0.2.0 Production on Sat Nov 19 14:47:04 2016
 
Copyright (c) 1982, 2014, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 12c EE High Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Advanced Analytics
and Real Application Testing options
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO

Unplug

I close PDB1 and unplug it:

SQL> alter pluggable database PDB1 close;
 
Pluggable database altered.
 
SQL> alter pluggable database PDB1 unplug into '/tmp/PDB1.xml';
 
Pluggable database altered.

Copy files

I’ve opened ssh between the two VMs and copied the xml file:
[oracle@HP122A tmp]$ scp 141.144.32.166:/tmp/PDB1.xml .
The authenticity of host '141.144.32.168 (141.144.32.168)' can't be established.
RSA key fingerprint is 84:e4:e3:db:67:20:e8:e2:f7:ff:a6:4d:9e:ee:a4:08.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '141.144.32.168' (RSA) to the list of known hosts.
PDB1.xml 100% 6118 6.0KB/s 00:00

From the xml file I see which files are referenced:
[oracle@HP121A ~]$ grep path /tmp/PDB1.xml
<path>/u02/app/oracle/oradata/HP121A/41A8A48F54195236E0534EC5C40A569E/datafile/o1_mf_system_d30owr5v_.dbf</path>
<path>/u02/app/oracle/oradata/HP121A/41A8A48F54195236E0534EC5C40A569E/datafile/o1_mf_sysaux_d30owr69_.dbf</path>
<path>/u02/app/oracle/oradata/HP121A/41A8A48F54195236E0534EC5C40A569E/datafile/o1_mf_temp_d30owr6h_.dbf</path>

and copy them

[oracle@HP122A tmp]$ scp -r 141.144.32.168:/u02/app/oracle/oradata/HP121A/41A8A48F54195236E0534EC5C40A569E /u02/app/oracle/oradata/HP121A
o1_mf_temp_d30owr6h_.dbf 100% 20MB 20.0MB/s 00:00
o1_mf_system_d30owr5v_.dbf 100% 270MB 135.0MB/s 00:02
o1_mf_sysaux_d30owr69_.dbf 100% 570MB 114.0MB/s 00:05

Plug

It’s only one command to plug it into the 12.2 CDB:

[oracle@HP122A tmp]$ sqlplus / as sysdba
 
SQL*Plus: Release 12.2.0.1.0 Production on Sat Nov 19 14:50:26 2016
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c EE High Perf Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
 
SQL> create pluggable database PDB1 using '/tmp/PDB1.xml';
Pluggable database created.

At that point, you can drop it from the source, but you will probably just remove the whole 12.1 service once you are sure the migration is OK.
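For reference, dropping the unplugged PDB on the 12.1 source is a single statement. Run it only once you no longer need the files there (this is just a sketch):

SQL> drop pluggable database PDB1 including datafiles;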

Upgrade

That was easy, but it was only the transportation of the PDB to another CDB: it cannot be opened so easily in a newer version CDB. When we open the PDB, we get a warning:
SQL> alter pluggable database PDB1 open;
 
Warning: PDB altered with errors.

and we have to look at PDB_PLUG_IN_VIOLATIONS:

SQL> select MESSAGE from pdb_plug_in_violations order by time
 
MESSAGE
-----------------------------------------------------------------------------------------------------------
APEX mismatch: PDB installed version NULL CDB installed version 5.0.4.00.12
CDB is using local undo, but no undo tablespace found in the PDB.
CDB parameter compatible mismatch: Previous '12.1.0.2.0' Current '12.2.0'
Database option APS mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option CATALOG mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option CATJAVA mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option CATPROC mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option CONTEXT mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option DV mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option JAVAVM mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option OLS mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option ORDIM mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option OWM mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option SDO mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option XDB mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option XML mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
Database option XOQ mismatch: PDB installed version 12.1.0.2.0. CDB installed version 12.2.0.1.0.
PDB's version does not match CDB's version: PDB's version 12.1.0.2.0. CDB's version 12.2.0.1.0.

Each component reports an older version in the PDB than in the CDB. We have to upgrade them by running catupgrd.sql.
In 12.2 we have a new script, dbupgrade, that calls catctl.pl and catupgrd.sql to make this easier. It is a shell script located in $ORACLE_HOME/bin. As with catcon.pl, we have the ‘-c’ argument to run it on PDB1:

[oracle@HP122A tmp]$ $ORACLE_HOME/bin/dbupgrade -c PDB1

How long does it take? Documentation says that:
It is easier to apply a patch to one CDB than to multiple non-CDBs and to upgrade one CDB than to upgrade several non-CDBs.
So this suggests that the upgrade work is mostly done at the CDB level: PDBs have only metadata links to it, so their dictionary is mostly virtual.

More than 3 years after the multitenant architecture was released, there are big doubts about the time it takes to upgrade a PDB plugged from a previous version:

#dbupgrade @OracleDatabase
Guess how many minutes to catupgrd a 12.1 PDB plugged into an empty 12.2 CDB?
[ doc says https://t.co/hQTknmKl1y

— Franck Pachot (@FranckPachot) November 19, 2016

So I keep the answer for the next blog post.

 

Cet article 12cR2: Upgrade by unplug/plug in the Oracle Cloud Service est apparu en premier sur Blog dbi services.
