Yann Neuhaus

dbi services technical blog

Re-assigning all objects from one role to another in PostgreSQL

Mon, 2017-07-31 04:07

From time to time it might be required to move objects (tables, indexes, whatever) from one user to another user in a database system. You could do that by dumping all the objects with pg_dump and then restoring them as the target user. But there are other solutions which are faster than that. Let's go.
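For reference, such a dump-and-reload approach could look roughly like this (a minimal sketch, not what is used below; it assumes the objects live in schema “a” of the database “postgres” and should end up owned by user “b” in a hypothetical target database “targetdb”; the --no-owner flag makes the restored objects belong to the user doing the restore):

# Dump schema "a" without ownership information and restore it as user "b" (illustration only).
pg_dump --no-owner -n a postgres | psql -U b targetdb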

Obviously we need two users when we want to move objects from one user to another (actually it is not really moving the objects but more about changing the ownership):

postgres=# create role a login password 'a';
CREATE ROLE
postgres=# create role b login password 'b';
CREATE ROLE
postgres=# create schema a authorization a;
CREATE SCHEMA
postgres=# create schema b authorization b;
CREATE SCHEMA
postgres=# alter role a set search_path=a;
ALTER ROLE
postgres=# alter role b set search_path=b;
ALTER ROLE
postgres=# 

Let's create some objects in schema “a” owned by user “a”:

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> \! cat a.sql
create table a ( a int );
create table b ( a int );
create table c ( a int );
create table d ( a int );
create index i1 on a (a);
create index i2 on b (a);
create index i3 on c (a);
create index i4 on d (a);

postgres=> \i a.sql
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX

By joining pg_class and pg_roles we can verify who is actually the owner of the objects:

postgres=> select t.relname, d.rolname 
             from pg_class t, pg_roles d 
            where t.relowner = d.oid and d.rolname = 'a';
 relname | rolname 
---------+---------
 a       | a
 b       | a
 c       | a
 d       | a
 i1      | a
 i2      | a
 i3      | a
 i4      | a
(8 rows)

The easiest way to make these objects owned by another user (call it “c”) would be:

postgres=# alter user a rename to c;
NOTICE:  MD5 password cleared because of role rename
ALTER ROLE
postgres=# select t.relname, d.rolname from pg_class t, pg_roles d where t.relowner = d.oid and d.rolname = 'c';
 relname | rolname 
---------+---------
 a       | c
 b       | c
 c       | c
 d       | c
 i1      | c
 i2      | c
 i3      | c
 i4      | c
(8 rows)

Not a good idea though, as the schema is still named “a” and this will at least create some confusion with the naming. Of course we could rename the schema as well:

postgres=# alter schema a rename to c;
ALTER SCHEMA
postgres=# \c postgres c
You are now connected to database "postgres" as user "c".
postgres=> select count(*) from a;
2017-07-28 15:51:25.499 CEST [3415] ERROR:  relation "a" does not exist at character 22
2017-07-28 15:51:25.499 CEST [3415] STATEMENT:  select count(*) from a;
ERROR:  relation "a" does not exist
LINE 1: select count(*) from a;

… but now we have another mess. Because the search_path is still set to “a” we cannot see the objects by default and need to use the fully qualified name:

postgres=> select count(*) from c.a;
 count 
-------
     0
(1 row)

Finally we would need to adjust the search_path to get back the previous behavior:

postgres=> set search_path=c;
SET
postgres=> select count(*) from a;
 count 
-------
     0
(1 row)

That is a lot of steps to follow. An easier way is:

postgres=# reassign owned by c to b;
REASSIGN OWNED
postgres=# alter user b set search_path=c,b;
ALTER ROLE
postgres=# \c postgres b
You are now connected to database "postgres" as user "b".
postgres=> select count(*) from a;
 count 
-------
     0
(1 row)
postgres=> select t.relname, d.rolname from pg_class t, pg_roles d where t.relowner = d.oid and d.rolname = 'b';
 relname | rolname 
---------+---------
 d       | b
 i4      | b
 c       | b
 i3      | b
 b       | b
 i2      | b
 a       | b
 i1      | b
(8 rows)

Cool :) There is also a command to drop all objects owned by a user:

postgres=> drop owned by b;
DROP OWNED
postgres=> select t.relname, d.rolname from pg_class t, pg_roles d where t.relowner = d.oid and d.rolname = 'b';
 relname | rolname 
---------+---------
(0 rows)

Nice …

 


Documentum – Change password – 8 – FT – Dsearch & xDB

Sat, 2017-07-29 06:19

Here we are, already, at the end of my series of blogs about the “Change Password”. This blog will, as already mentioned in a previous one, talk about the Dsearch and xDB passwords. I could have created a lot more blogs in this series but I already presented the most important and most interesting ones so this blog will be the last one – at least for now ;).

 

Please note that below, I will set the same password for the Dsearch admin account and for the xDB. The reason for that is pretty simple: this is what Documentum does by default when you install a PrimaryDsearch. Indeed, when you install a Dsearch, the installer will ask you for the password of the Dsearch admin account, which is the JBoss instance admin account (this blog explained how to change a JBoss Admin password), and it will use this password for the xDB too.

If you want to use a different password, then you can potentially define three passwords here: xDB Admin password, xDB Superuser password and Dsearch JBoss instance password.

 

The xDB (find more info here about what it is) provides a command line interface to manage it (repair, create, list, and so on), which is the “XHCommand”, but you aren’t able to change the different passwords through this CLI ;(. Therefore, before even starting with this blog, you will have to start an X server or something similar, to be able to open the xDB Admin GUI (“XHAdmin”). For this blog, I installed a new VM and I also installed the complete graphical interface on it (my first time in a very long time) in order to have some screenshots that aren’t too ugly…

 

So first of all, let’s login to the Full Text Server and open the xDB Admin GUI:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/dsearch/xhive/admin/
[xplore@full_text_server_01 ~]$ ./XHAdmin

 

Just a small funny note: when you close the XHAdmin tool, you will see that EMC thanks you for using it: “Thank you for using EMC Documentum xDB”. I guess this is your reward for having to open a GUI to change a password, in 2017 ;).

 

At this point, the xDB Admin Client will be opened but not connected. So the first thing to do is to connect to the xDB with the CURRENT Dsearch Admin credentials:

  • Click on the “connect” button
  • Set the “Database name” to: xhivedb
  • Set the “Username” to: Administrator
  • Set the “Password” to: ###CURRENT_DSEARCH_ADMIN_PWD###
  • Click on the “OK” button

Dsearch-xDB_Change_Password_1

 

You will then be connected to the xDB, so it is now time to change the Superuser password:

  • Click on the “Federation > Change superuser password” menu
  • Set the “New superuser password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Set the “Retype password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Click on the “OK” button
  • A pop-up will be opened with a title “Enter superuser password”. Enter the ###CURRENT_DSEARCH_ADMIN_PWD### in it and click on OK to validate the change.

Dsearch-xDB_Change_Password_2

 

After that, it is time to change the Admin password itself. Here, you have to use the new Superuser password that has been defined above and then define the new Admin password:

  • Click on the “Database > Reset admin password” menu
  • Set the “Database name” to: xhivedb
  • Set the “Superuser password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Set the “Administrator password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Set the “Retype password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Click on the “OK” button

Dsearch-xDB_Change_Password_3

 

At this point, the xDB Superuser and Admin passwords have been changed so you can close the xDB Admin Client (Database > Exit) and we can go back to the command line. OK, so now let’s stop the xPlore components in order to reflect these changes in the configuration files:

[xplore@full_text_server_01 ~]$ read -s -p "Please enter the NEW Dsearch Admin password: " newpw; echo
Please enter the NEW Dsearch Admin password:
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ for i in `ls stop*.sh`; do ./${i}; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/
[xplore@full_text_server_01 ~]$ cp indexserver-bootstrap.properties indexserver-bootstrap.properties_bck_$(date +"%Y%m%d-%H%M%S")
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep "password=" indexserver-bootstrap.properties
superuser-password=AAAAEF1hfFDU0zff6A6Mmd1f6Le8VvxPmLxF11ZdzjVo5KU/
adminuser-password=AAAAEF1hfFDU0zff6A6Mmd1f6Le8VvxPmLxF11ZdzjVo5KU/
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sed -i "s,password=.*,password=${newpw}," indexserver-bootstrap.properties
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep "password=" indexserver-bootstrap.properties
superuser-password=New_Dsearch_Admin_Pwd
adminuser-password=New_Dsearch_Admin_Pwd
[xplore@full_text_server_01 ~]$

 

As you can see above, I’m defining a shell variable with the NEW Dsearch Admin password and then I’m replacing the encrypted OLD superuser and adminuser passwords inside the file indexserver-bootstrap.properties with the non-encrypted NEW password. The password was the same for both before and it is still the same for both after. Don’t worry about putting the non-encrypted NEW password in this properties file, it will be encrypted automatically at the next start of the PrimaryDsearch. So now, let’s start the Dsearch only, to verify that the passwords have been encrypted successfully, and then we can update the Dsearch JBoss Admin password:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ nohup ./startPrimaryDsearch.sh >> nohup-PrimaryDsearch.out 2>&1 &
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 60
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep "password=" ./DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties
superuser-password=AAAAFAq2fFwiHf21P98hRch982+1hEDGe824E21eL21e9c57f6A
adminuser-password=AAAAFAq2fFwiHf21P98hRch982+1hEDGe824E21eL21e9c57f6A
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server/DctmServer_PrimaryDsearch/configuration
[xplore@full_text_server_01 ~]$ cp dctm-users.properties dctm-users.properties_bck_$(date +"%Y%m%d-%H%M%S")
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ encrypted_newpw=`grep "adminuser-password=" ../deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties | sed 's,adminuser-password=,,'`
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ echo "# users.properties file to use with UsersRolesLoginModule" > dctm-users.properties
[xplore@full_text_server_01 ~]$ echo "admin=${encrypted_newpw}" >> dctm-users.properties
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cat dctm-users.properties
# users.properties file to use with UsersRolesLoginModule
admin=AAAAFAq2fFwiHf21P98hRch982+1hEDGe824E21eL21e9c57f6A
[xplore@full_text_server_01 ~]$

 

If you refer to the JBoss Admin password blog I already published and linked above, you could think that there is nothing more to be done. However, there is actually one more file that needs to be updated when it comes to the Dsearch Admin password and this file is xplore.properties:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/dsearch/admin/
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep -B2 "password=" xplore.properties
# Specify admin password for xPlore primary instance
# It is highly suggested to input the encrypted password, you can get it from indexserver-bootstrap.properties
password=AAAAEF1hfFDU0zff6A6Mmd1f6Le8VvxPmLxF11ZdzjVo5KU/
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sed -i "s,password=.*,password=${encrypted_newpw}," xplore.properties
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep -B2 "password=" xplore.properties
# Specify admin password for xPlore primary instance
# It is highly suggested to input the encrypted password, you can get it from indexserver-bootstrap.properties
password=AAAAFAq2fFwiHf21P98hRch982+1hEDGe824E21eL21e9c57f6A
[xplore@full_text_server_01 ~]$

 

To complete the changes, we can now restart all xPlore components and verify that the PrimaryDsearch has been started properly:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ ./stopPrimaryDsearch.sh
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ nohup ./startPrimaryDsearch.sh >> nohup-PrimaryDsearch.out 2>&1 &
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ curl -g --user admin -D - http://localhost:9305/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Sun, 23 Jul 2017 08:14:56 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

To be sure, you can also open the Dsearch Admin UI (E.g.: https://hostname:9302/dsearchadmin) and then log in with the NEW Dsearch Admin password. If the login is working, you should be good to go! :)

 

 


Documentum – Change password – 7 – DB – Schema Owner

Sat, 2017-07-29 04:48

In this series, I have now covered all the Content Server passwords I wanted to talk about. Therefore in this blog, I will talk about the only Database Account that is relevant for Documentum: the Database Schema Owner. Since there are a few steps to be done on the Content Server, I’m just doing everything from there… In this blog, I will assume there is one Global Registry (GR_DOCBASE) and one normal Repository (DocBase1). Each docbase has a different Database Schema Owner of course but both Schemas are on the same Database and therefore the same SID will be used.

 

In High Availability setups, you will have to execute the steps below for all Content Servers. Of course, when it comes to changing the password inside the DB, this needs to be done only once since the Database Schema Owner is shared between the different Content Servers of the HA setup.

 

In this blog, I’m using a CS 7.2. Please note that in CS 7.2, there is a property inside the dfc.properties of the Content Server ($DOCUMENTUM_SHARED/config/dfc.properties) that defines the crypto repository (dfc.crypto.repository). The repository that is used for this property is the one that Documentum will use for encryption/decryption of passwords and therefore I will use this one below to encrypt the password. By default, the Repository used for this property is the last one created… I tend to use the Global Registry instead, but it’s really up to you.
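To quickly check which repository is configured for this property (this is the value that will be extracted and used in the iapi command below), a simple grep could be used (illustration only):

[dmadmin@content_server_01 ~]$ grep "dfc.crypto.repository" $DOCUMENTUM_SHARED/config/dfc.properties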

 

As said before, I’m considering two different repositories and therefore two different accounts and two different passwords. So, let’s start with encrypting these two passwords:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW GR_DOCBASE Schema Owner's password: " new_gr_pw; echo
Please enter the NEW GR_DOCBASE Schema Owner's password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW DocBase1 Schema Owner's password: " new_doc1_pw; echo
Please enter the NEW DocBase1 Schema Owner's password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ iapi `cat $DOCUMENTUM_SHARED/config/dfc.properties | grep crypto | tail -1 | sed 's/.*=//'` -Udmadmin -Pxxx << EOF
> encrypttext,c,${new_gr_pw}
> encrypttext,c,${new_doc1_pw}
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0150.0154


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f12345605ae7b started for user dmadmin."


Connected to Documentum Server running Release 7.2.0160.0297  Linux64.Oracle
Session id is s0
API> ...
DM_ENCR_TEXT_V2=AAAAEH7UNwFub2ubf92h+21/rc8HEc3rd1C82hc52c8bz2cFl1cQ721zex2nxWDEegwqgdotwncZVVqgZlDLmfflWK6+f8AGf0dSRzi5rr3h3::GR_DOCBASE
API> ...
DM_ENCR_TEXT_V2=AAAAEGBQ6Zy7FxQ10idQdFj+Gn20nFlif02ieMx+AGBHLz+vQfmGu2GAiv8KeIN2PhPOf1oiF9u2fP98zEFhhuBAmxY+d5AoBCGNf61ZRavpa::GR_DOCBASE
API> Bye
[dmadmin@content_server_01 ~]$

 

If you have more repositories, you will have to encrypt their passwords too, if you want to change them of course. Once the new passwords have been encrypted, we can change them in the Database. To avoid any issues and error messages, let’s first stop Documentum (the docbases at the very least) and then print the Database Connection information:

[dmadmin@content_server_01 ~]$ service documentum stop
  ** JMS stopped
  ** DocBase1 stopped
  ** GR_DOCBASE stopped
  ** Docbroker stopped
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora
<sid> =
    (DESCRIPTION =
        (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP)(HOST = <database_hostname>)(PORT = <database_port>))
        )
        (CONNECT_DATA =
            (SERVICE_NAME = <service_name>)
        )
    )
[dmadmin@content_server_01 ~]$

 

Once you know what the SID is, you can log in to the database to change the password, so I will do that for both repositories. This could also be scripted to retrieve the list of docbases, create new passwords for them, encrypt them all automatically and then connect to each database using different SQL scripts to change the passwords; however, I will use manual steps here (a scripted variant is sketched after the manual example below):

[dmadmin@content_server_01 ~]$ sqlplus GR_DOCBASE@<sid>

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jul 22 15:05:08 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:
    -->> Enter here the OLD GR_DOCBASE Schema Owner's password
Last Successful login time: Sat Jul 22 2017 15:04:18 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> PASSWORD
Changing password for GR_DOCBASE
Old password:
    -->> Enter here the OLD GR_DOCBASE Schema Owner's password
New password:
    -->> Enter here the NEW GR_DOCBASE Schema Owner's password
Retype new password:
    -->> Re-enter here the NEW GR_DOCBASE Schema Owner's password
Password changed

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sqlplus DocBase1@<sid>

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jul 22 15:08:20 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:
    -->> Enter here the OLD DocBase1 Schema Owner's password
Last Successful login time: Sat Jul 22 2017 15:07:10 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> PASSWORD
Changing password for DocBase1
Old password:
    -->> Enter here the OLD DocBase1 Schema Owner's password
New password:
    -->> Enter here the NEW DocBase1 Schema Owner's password
Retype new password:
    -->> Re-enter here the NEW DocBase1 Schema Owner's password
Password changed

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
[dmadmin@content_server_01 ~]$
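As mentioned above, the same change could also be scripted instead of using the interactive PASSWORD command. A minimal sketch (hypothetical; it assumes the OLD passwords are stored in the shell variables old_gr_pw/old_doc1_pw in addition to the NEW ones defined earlier, and <sid> is the same placeholder as above):

[dmadmin@content_server_01 ~]$ sqlplus -s GR_DOCBASE/"${old_gr_pw}"@<sid> << EOF
> ALTER USER GR_DOCBASE IDENTIFIED BY "${new_gr_pw}" REPLACE "${old_gr_pw}";
> EOF
[dmadmin@content_server_01 ~]$ sqlplus -s DocBase1/"${old_doc1_pw}"@<sid> << EOF
> ALTER USER DocBase1 IDENTIFIED BY "${new_doc1_pw}" REPLACE "${old_doc1_pw}";
> EOF

Note that putting passwords on the command line exposes them in the process list, so this is only meant as an illustration of the scripted approach.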

 

At this point, the passwords have been changed in the database and we encrypted them properly. The next step is therefore to update the password files for each repository with the encrypted password so that the repositories can start again:

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM/dba/config
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `ls -d *`; do echo "  ** dbpasswd.txt for ${i} **"; cat ./${i}/dbpasswd.txt; echo; done
  ** dbpasswd.txt for GR_DOCBASE **
DM_ENCR_TEXT_V2=AAAAEH7UNwFgncubfd1C82hc5l1cwqgdotwQ7212c8bz2cFZVVqgZub2zex8bz2cFWK92h+21EelDLmffl2/rc82c8bz2cFf0dSRazi5rr3h3::GR_DOCBASE

  ** dbpasswd.txt for DocBase1 **
DM_ENCR_TEXT_V2=AAAAQ10idQdFj+Gn2EGBPZy7e0niF9uQfAGBHLz+vv8KQ62fP98zE+02iFhhuBAmxY+FFxeMxIN2Phl1od5AoBCGNf61ZRifmGu2GAiOfavpa::GR_DOCBASE

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `ls -d *`; do cp ./${i}/dbpasswd.txt ./${i}/dbpasswd.txt_bck_$(date +"%Y%m%d-%H%M%S"); done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo "DM_ENCR_TEXT_V2=AAAAEH7UNwFub2ubf92h+21/rc8HEc3rd1C82hc52c8bz2cFl1cQ721zex2nxWDEegwqgdotwncZVVqgZlDLmfflWK6+f8AGf0dSRzi5rr3h3::GR_DOCBASE" > ./GR_DOCBASE/dbpasswd.txt
[dmadmin@content_server_01 ~]$ echo "DM_ENCR_TEXT_V2=AAAAEGBQ6Zy7FxQ10idQdFj+Gn20nFlif02ieMx+AGBHLz+vQfmGu2GAiv8KeIN2PhPOf1oiF9u2fP98zEFhhuBAmxY+d5AoBCGNf61ZRavpa::GR_DOCBASE" > ./DocBase1/dbpasswd.txt
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `ls -d *`; do echo "  ** dbpasswd.txt for ${i} **"; cat ./${i}/dbpasswd.txt; echo; done
  ** dbpasswd.txt for GR_DOCBASE **
DM_ENCR_TEXT_V2=AAAAEH7UNwFub2ubf92h+21/rc8HEc3rd1C82hc52c8bz2cFl1cQ721zex2nxWDEegwqgdotwncZVVqgZlDLmfflWK6+f8AGf0dSRzi5rr3h3::GR_DOCBASE

  ** dbpasswd.txt for DocBase1 **
DM_ENCR_TEXT_V2=AAAAEGBQ6Zy7FxQ10idQdFj+Gn20nFlif02ieMx+AGBHLz+vQfmGu2GAiv8KeIN2PhPOf1oiF9u2fP98zEFhhuBAmxY+d5AoBCGNf61ZRavpa::GR_DOCBASE

[dmadmin@content_server_01 ~]$

 

Once the dbpasswd.txt files have been updated with the new encrypted passwords generated at the beginning of this blog, we can restart Documentum and verify that the docbases are up & running. If they are, then the passwords have been changed properly!

[dmadmin@content_server_01 ~]$ service documentum start
  ** Docbroker started
  ** GR_DOCBASE started
  ** DocBase1 started
  ** JMS started
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ ps -ef | grep "documentum.*docbase_name"
...
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ grep -C3 "DM_DOCBROKER_I_PROJECTING" $DOCUMENTUM/dba/log/GR_DOCBASE.log
2017-07-22T15:28:40.657360      9690[9690]      0000000000000000        [DM_SERVER_I_START]info:  "Sending Initial Docbroker check-point "

2017-07-22T15:28:40.671878      9690[9690]      0000000000000000        [DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 9870, session 010f123456000456) is started sucessfully."
2017-07-22T15:28:40.913699      9869[9869]      010f123456000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (content_server_01) with port (1490).  Information: (Config(GR_DOCBASE), Proximity(1), Status(Open), Dormancy Status(Active))."
Tue Jul 22 15:29:38 2017 [INFORMATION] [AGENTEXEC 10309] Detected during program initialization: Version: 7.2.0160.0297  Linux64
Tue Jul 22 15:29:44 2017 [INFORMATION] [AGENTEXEC 10309] Detected during program initialization: Agent Exec connected to server GR_DOCBASE:  [DM_SESSION_I_SESSION_START]info:  "Session 010f123456056d00 started for user dmadmin."

[dmadmin@content_server_01 ~]$

 

When the docbase has been registered to the Docbroker, you are sure that it was able to contact and log in to the database so that the new password is now used properly. To be sure that everything in Documentum is working properly however, I would still check the complete log file…
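A quick way to scan the repository logs for problems could be something like the following (simple sketch; adjust the list of log files to your docbases):

[dmadmin@content_server_01 ~]$ grep -iE "error|fatal" $DOCUMENTUM/dba/log/GR_DOCBASE.log $DOCUMENTUM/dba/log/DocBase1.log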

 

 


Documentum – Change password – 6 – CS/FT – JKS

Sat, 2017-07-29 04:29

Just like for the JBoss Admin password (here), this blog will be for both Content Servers and Full Text Servers. I will provide commands below to change the passwords of the Java KeyStore for the Java Method Server (JMS) and Full Text Servers (Dsearch/IndexAgent). Again, JKS aren’t only used in the scope of Documentum so if you are here for the JKS and not for Documentum, that’s fine too ;).

 

The steps are exactly the same for all JKS files, it’s just a matter of integrating that inside Documentum. Therefore, I will continue to use the JMS for a single JKS update and I will use the Dsearch/IndexAgent for multiple updates. The steps are pretty simple:

  1. Store the current and new password in variables
  2. Backup the old configuration and JKS files
  3. Update the JKS password
  4. Restart the components
  5. Verify that the components are running over https

 

I. JMS Java KeyStore

For the JMS Java KeyStore, you obviously need to connect to all Content Servers and then perform the steps. Below, I’m using a JKS named “my.keystore” which is placed right next to the standalone.xml file. So let’s do that:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT JKS password: " currentpw; echo
Please enter the CURRENT JKS password:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW JKS password: " newpw; echo
Please enter the NEW JKS password:
[dmadmin@content_server_01 ~]$ read -p "Please enter the name of the JKS file with extension: " jks_name
Please enter the name of the JKS file with extension: my.keystore
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/
[dmadmin@content_server_01 ~]$ cp ${jks_name} ${jks_name}_bck_$(date +"%Y%m%d-%H%M%S")
[dmadmin@content_server_01 ~]$ cp standalone.xml standalone.xml_bck_$(date +"%Y%m%d-%H%M%S")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ keytool -storepasswd -keystore ${jks_name} -storepass ${currentpw} -new ${newpw}
[dmadmin@content_server_01 ~]$ keytool -keypasswd -keystore ${jks_name} -storepass ${newpw} -alias jms_alias -keypass ${currentpw} -new ${newpw}
[dmadmin@content_server_01 ~]$

 

These last two commands are the ones updating the Java KeyStore and the key passwords. In case your JKS and its included key do not have the same password, you will have to use the real passwords at the end of the second line. If the last command (the 2nd keytool command) is working, it means that you changed the JKS password properly in the first keytool command because you are now able to change the key password using the new JKS password (-storepass ${newpw}). Still following me?
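To double-check the store password independently, a simple listing with the new password can be used (it fails if the store password does not match; illustration only):

[dmadmin@content_server_01 ~]$ keytool -list -keystore ${jks_name} -storepass ${newpw}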

Once this has been done, the next step is simply to update the password in the JMS configuration file and restart the JMS:

[dmadmin@content_server_01 ~]$ grep "password=" standalone.xml
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i 's,\(<[sk][se][ly].*password="\)[^"]*,\1'${newpw}',' standalone.xml
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ grep "password=" standalone.xml
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server
[dmadmin@content_server_01 ~]$ ./stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup ./startMethodServer.sh >> nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sleep 30
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ curl -k -D - https://localhost:9082/DmMethods/servlet/DoMethod; echo
HTTP/1.1 200 OK
Content-Length: 144
Date: Sat, 22 Jul 2017 09:58:41 GMT
Server: MethodServer

<html><title>Documentum Java Method Server</title><body><font face="Verdana, Arial" size="-1"><p>Documentum Java Method Server</p></body></html>
[dmadmin@content_server_01 ~]$

 

If the password of the JKS has been changed properly for the JKS file as well as in the configuration file, then you should get an HTTP 200 OK return code.

 

II. Dsearch/IndexAgent Java KeyStores

For the Dsearch/IndexAgent Java KeyStores, you obviously need to connect to all Full Text Servers and then perform the steps again. Below, I’m using a JKS named “my.keystore”. It doesn’t matter where this file is placed since the commands below will find it anyway. However, by default this file will be placed right next to the standalone.xml file: this is the default setup if you used the “ConfigSSL.groovy” script to set up xPlore in SSL (see this blog for information about that and a lot of my other blogs for issues related to this script/setup in SSL). These commands are adapted in case you have several IndexAgents installed. Please note that the commands below will set the same JKS password for all JBoss instances (all Dsearch/IndexAgents). Therefore, if that’s not what you want (if you have Subject Alternative Names for example), you will have to execute the commands for each keystore, one by one.

[xplore@full_text_server_01 ~]$ read -s -p "Please enter the CURRENT JKS password: " currentpw; echo
Please enter the CURRENT JKS password:
[xplore@full_text_server_01 ~]$ read -s -p "Please enter the NEW JKS password: " newpw; echo
Please enter the NEW JKS password:
[xplore@full_text_server_01 ~]$ read -p "Please enter the name of the JKS file with extension: " jks_name
Please enter the name of the JKS file with extension: my.keystore
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name ${jks_name}`; do cp ${i} ${i}_bck_$(date +"%Y%m%d-%H%M%S"); done
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name standalone.xml`; do if [[ ${i} != */template/* ]]; then cp ${i} ${i}_bck_$(date +"%Y%m%d-%H%M%S"); fi; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name ${jks_name}`; do keytool -storepasswd -keystore ${i} -storepass ${currentpw} -new ${newpw}; done
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name ${jks_name}`; do keytool -keypasswd -keystore ${i} -storepass ${newpw} -alias ft_alias -keypass ${currentpw} -new ${newpw}; done
[xplore@full_text_server_01 ~]$

 

At this point, all the Java KeyStores have been backed up and updated and the related standalone.xml files have been backed up too. The only remaining steps are to replace the passwords in the standalone.xml files, restart the components and test the availability of the xPlore components again:

[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name standalone.xml`; do if [[ ${i} != */template/* ]]; then grep "password=" ${i}; fi; done
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_PrimaryDsearch/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase1/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase2/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase3/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name standalone.xml`; do if [[ ${i} != */template/* ]]; then sed -i 's,\(<[sk][se][ly].*password="\)[^"]*,\1'${newpw}',' ${i}; fi; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name standalone.xml`; do if [[ ${i} != */template/* ]]; then grep "password=" ${i}; fi; done
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_PrimaryDsearch/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase1/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase2/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase3/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ for i in `ls stop*.sh`; do ./${i}; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ nohup ./startPrimaryDsearch.sh >> nohup-PrimaryDsearch.out 2>&1 &
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ curl -k -D - https://localhost:9302/dsearch/; echo
HTTP/1.1 259
Server: Apache-Coyote/1.1
Pragma: No-cache
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 UTC
Content-Type: text/html;charset=UTF-8
Content-Length: 65
Date: Sat, 22 Jul 2017 11:33:38 GMT

The xPlore instance PrimaryDsearch [version=1.5.0020.0048] normal
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do port=`grep '<socket-binding .*name="https"' ./${i}/configuration/standalone.xml|sed 's,.*port="\([0-9]*\).*,\1,'`; echo; echo "  ** Accessing IndexAgent URL of '${i}' (${port})"; curl -k -D - https://localhost:${port}/IndexAgent; done

  ** Accessing IndexAgent URL of 'DctmServer_Indexagent_DocBase1' (9202)
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Location: https://localhost:9202/IndexAgent/
Transfer-Encoding: chunked
Date: Sat, 22 Jul 2017 11:36:57 GMT


  ** Accessing IndexAgent URL of 'DctmServer_Indexagent_DocBase2' (9222)
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Location: https://localhost:9222/IndexAgent/
Transfer-Encoding: chunked
Date: Sat, 22 Jul 2017 11:36:57 GMT


  ** Accessing IndexAgent URL of 'DctmServer_Indexagent_DocBase3' (9242)
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Location: https://localhost:9242/IndexAgent/
Transfer-Encoding: chunked
Date: Sat, 22 Jul 2017 11:36:58 GMT
[xplore@full_text_server_01 ~]$

 

For the Dsearch, the proper answer is an HTTP 259 return code while for the IndexAgent, I didn’t put a trailing “/” in the URL so that we don’t load the full page of the IA but only some headers. Therefore HTTP 302 Moved Temporarily is the proper return code there.

 

Please note that for the “sed” commands above, I deliberately used “<[sk][se][ly]” even if, for all JBoss 7.1.1 instances, the SSL configuration will always start with “<ssl “. The reason for that is to make the steps compatible with WildFly 9.0.1 too (xPlore 1.6). There are a few differences between JBoss 7.1.1 and WildFly 9.0.1 and one of them is that the JKS password is now on a line starting with “<keystore ” so that’s why :).

 

 


Setting up default parameters for roles in PostgreSQL

Fri, 2017-07-28 08:31

As you might know you can set various parameters on the session level in PostgreSQL by using the “set” command:

postgres=# \h set
Command:     SET
Description: change a run-time parameter
Syntax:
SET [ SESSION | LOCAL ] configuration_parameter { TO | = } { value | 'value' | DEFAULT }
SET [ SESSION | LOCAL ] TIME ZONE { timezone | LOCAL | DEFAULT }

This allows a session to adjust parameters at runtime and can be a handy way of on-the-fly configuration when you need special settings. Wouldn’t it be great if we could have a default set of parameters for a role or user? Maybe there is one user who needs a special setting for work_mem and another one who needs a special setting for search_path. Instead of setting that in the session each time after connecting, you can also do that on the server side.

Let’s create two users, user a and user b:

postgres=# create user a login password 'a';
CREATE ROLE
postgres=# create user b login password 'b';
CREATE ROLE
postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 a         |                                                            | {}
 b         |                                                            | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

When we want a special setting for work_mem every time user a creates a new connection and a special setting for search_path every time user b creates a connection we can do it like this:

postgres=# alter user a set work_mem = '1MB';
ALTER ROLE
postgres=# alter user b set search_path='b';
ALTER ROLE
postgres=# 

When user a connects from now on:

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> show work_mem;
 work_mem 
----------
 1MB
(1 row)

When user b connects from now on:

postgres=> \c postgres b
You are now connected to database "postgres" as user "b".
postgres=> show search_path ;
 search_path 
-------------
 b
(1 row)

Notice that this does not prevent a user from overriding the setting:

postgres=> select current_user;
 current_user 
--------------
 b
(1 row)

postgres=> set search_path=c;
SET
postgres=> show search_path ;
 search_path 
-------------
 c
(1 row)

… this is meant more as a way of setting defaults that differ from the main server configuration where it makes sense. And how can you then know which settings are configured for a specific role? Easy, there is pg_roles:

postgres=> select rolname,rolconfig from pg_roles where rolname in ('a','b');
 rolname |    rolconfig    
---------+-----------------
 a       | {work_mem=1MB}
 b       | {search_path=b}
(2 rows)
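Should such a default need to be removed again later, it can simply be reset (a small example, executed as a superuser; the per-role setting is cleared and the server default applies again for new sessions):

postgres=# alter user a reset work_mem;
ALTER ROLE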

Good to know…

 


Alfresco 5.2 our DMS of choice!

Thu, 2017-07-27 06:46
Introduction

Nowadays companies have to deal with lots of electronic documents, some of them being mission critical. The insurance, banking and pharma industries are good candidates for ECM/DMS solutions since they produce and deal with lots of documentation, contracts and receipts. Usually the largest ECM/DMS infrastructures can be found at customers which initiate large digitalization processes. However, even for smaller businesses, managing e-documents like sales quotations, offers, answers to RFIs and RFPs becomes mission critical. Indeed, while creating such quotations and offers, collaboration is often required between salesmen and possibly with the technical department too. The ECM/DMS solutions must offer the means to share and work together on the same document. Unfortunately these documents are, most of the time, simply lying around on a basic Windows Share, if the users even took the time to copy the documents to this share. As a result, there is no concurrency management to prevent data loss and the “locking strategy” is quite simple: “last write wins”. It’s even incredible to see how many “larger” companies still work like that. All companies follow the digitalization trends but sometimes in a quite elementary way.

So basically, what prevents the usage of an ECM/DMS solution in all companies? From my point of view, most of the time ECM/DMS projects are wrongly sized and approached. Indeed, each customer has lots of good intentions at the beginning of the project. Therefore, instead of focusing on the essentials, those responsible for the project want to implement almost everything, and maybe too much:

  • workflow management
  • complex user/group management and security rules
  • full text indexing
  • infrastructure redundancy
  • full integration in existing solutions
  • complex business processes (mixing up BPM and ECM/DMS)
  • and so on …

As a result the proposed ECM/DMS solutions can become quite complex to set up and quite expensive in terms of licenses. That’s exactly where those kinds of projects usually get stuck and die. We want to do too much, it gets too complex, so let’s do nothing! :-)

Is there a way and a technology which allows to start smoothly in the ECM/DMS area?

Why a DMS?

First of all, let’s summarize again which core functionalities we need from a DMS. In other words, what do we want to achieve with such a solution?

As a salesman, and in particular as a Chief Sales Officer, I need to keep a clear track of all changes. Indeed, while editing/changing documents, and in particular quotations, we should keep a trace of each modification. Release management and traceability are a “must have” nowadays. Document validation (workflow processes) would be nice to have in a second step.

Of course, in the current context of cyber-attacks, I need a high security level. I also need to protect the documents against unauthorized users: we do not want/need all people in the company to know the sales engineering policy. Furthermore, we do not want viruses encrypting all our documents lying on a simple Windows Share. If the ECM/DMS solution requires identification to the system and CheckOut/CheckIn procedures to work on documents, a virus has almost no chance of easily accessing all files.

If this CheckOut/CheckIn procedure is integrated in the Microsoft Office suite, it won’t even decrease the efficiency of the users or of the salesmen. Users are always afraid when they have to do more than simple double clicks :-)

Why Alfresco?

As explained in the introduction, the costs and the oversizing of ECM/DMS projects may sometimes kill them before they are even born.

Alfresco is an Open Source ECM/DMS solution which allows implementing the needed core functions quite quickly and easily without license costs. Of course, the Enterprise version offers some additional features like:

  • content encryption at rest and encrypted configuration files
  • clustering
  • synchronization of content between Cloud and On-Premises installations

At dbi services, since we are ECM/DMS experts, we decided to implement Alfresco on our own. However, the setup and documentation of such a solution can be limited to several days, not weeks or years. We do not need a bunch of senior technical experts and presales over several months to set it up, like for some unnamed ERP solutions :-)

Out of the box, and in particular with the version 5.x, Alfresco really covers 100% of what I do expect from an ECM/DMS, as a salesman:

  • integrated release management
  • protection and management of concurrency between users
  • protection against viruses since some identification is needed and you can always revert to a previous version if needed
  • easy drag & drop functionality to copy documents/folders into alfresco

Below is an example of the smooth integration of Alfresco into a typical small or medium business environment using MS Office. With this integration it is possible to work directly on a document and save it into Alfresco without an explicit “CheckOut/CheckIn”, since this operation is handled by the Office connector. Once a so-called “SharePoint online location” (compatible with Alfresco) has been created, you can directly open documents in Word from the Alfresco repository (check-in/check-out happens in the background):

alfresco_5.2_open_MS_Office_4

Another example of smooth integration in the MS or Mac world is the drag-and-drop feature from the file browser directly into the Alfresco browser using any web browser:

alfresco_5.2_drag_and_drop_4

It is even possible to save a newly created MS Office document directly into Alfresco; the integration has been really maximized in the latest Alfresco release (5.x).

Another strong advantage of Alfresco basically comes from the Open Source culture. Despite the fact that some companies still have the feeling that they have to pay expensive software licenses, it may sometimes be possible to think about the “service only” model. This approach, used by Open Source software, allows the product to improve and grow through contributors offering their services around the product. That’s the case for dbi services, providing support around Alfresco which allows a professional usage of the solution. In the same vein, lots of contributors developed Alfresco extensions which improve the core functionalities and integrate the product in lots of other solutions or products (e.g. in ERP solutions like Odoo, SAP, Salesforce, and so on). Some of these add-ons that were developed by the community are even integrated directly into the next Alfresco releases to improve the product (HTML5 Previewer, Trashcan Cleaner, and so on).

Providing the complete set of required core features, easy to deploy, manage and administer, cost efficient and extensible, Alfresco has become an optimal choice for our company’s development while ensuring the quality of our Sales activities.

Conclusion

As for each IT project, we strongly advise following a pragmatic approach, ideally proceeding with POCs (Proofs of Concept), in order to validate the solution step by step. Furthermore, it is advised to focus on essential functionalities first, avoiding huge and complex specifications which give the impression that we will never reach the end of the project.

Combining efficiency and security and providing the required features, Alfresco was the most competitive price/feature solution and helped us grow as we did over the last years. The last version we just migrated to (version 5.2) even increased user acceptance since the integration into the usual office tools has been seriously improved.

 


Can I do it with PostgreSQL? – 17 – Identifying a blocking session

Tue, 2017-07-25 13:49

One single blocking session in a database can completely halt your application, so identifying which session is blocking other sessions is a task you must be able to perform quickly. In Oracle you can query v$session to get that information (blocking_session, final_blocking_session). Can you do the same in PostgreSQL? Yes, you definitely can, let’s go.

As usual we’ll start by creating a test table:

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta2 dbi services build)
Type "help" for help.

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# 

One way to force other sessions to wait is to start a new transaction and modify the table:

postgres=# begin;
BEGIN
postgres=# alter table t1 add column t2 text;
ALTER TABLE
postgres=#  

… and then try to insert data into the same table from another session:

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta2 dbi services build)
Type "help" for help.

postgres=# insert into t1 (a) values (1);

The insert statement will hang/wait because the modification of the table is still ongoing (the transaction has neither committed nor rolled back; remember that DDLs in PostgreSQL are transactional). Now that we have a blocking session, how can we identify it?

What “v$session” is in Oracle, pg_stat_activity is in PostgreSQL (Note: I am using PostgreSQL 10Beta2 here):

postgres=# \d pg_stat_activity 
                      View "pg_catalog.pg_stat_activity"
      Column      |           Type           | Collation | Nullable | Default 
------------------+--------------------------+-----------+----------+---------
 datid            | oid                      |           |          | 
 datname          | name                     |           |          | 
 pid              | integer                  |           |          | 
 usesysid         | oid                      |           |          | 
 usename          | name                     |           |          | 
 application_name | text                     |           |          | 
 client_addr      | inet                     |           |          | 
 client_hostname  | text                     |           |          | 
 client_port      | integer                  |           |          | 
 backend_start    | timestamp with time zone |           |          | 
 xact_start       | timestamp with time zone |           |          | 
 query_start      | timestamp with time zone |           |          | 
 state_change     | timestamp with time zone |           |          | 
 wait_event_type  | text                     |           |          | 
 wait_event       | text                     |           |          | 
 state            | text                     |           |          | 
 backend_xid      | xid                      |           |          | 
 backend_xmin     | xid                      |           |          | 
 query            | text                     |           |          | 
 backend_type     | text                     |           |          | 

There is no column which identifies a blocking session but there are other interesting columns:

postgres=# select datname,pid,usename,wait_event_type,wait_event,state,query from pg_stat_activity where backend_type = 'client backend' and pid != pg_backend_pid();
 datname  | pid  | usename  | wait_event_type | wait_event |        state        |               query                
----------+------+----------+-----------------+------------+---------------------+------------------------------------
 postgres | 2572 | postgres | Client          | ClientRead | idle in transaction | alter table t1 add column t2 text;
 postgres | 2992 | postgres | Lock            | relation   | active              | insert into t1 (a) values (1);
(2 rows)

This shows only client connections (excluding all the backend connections) and does not show the current session. In this case it is easy to identify the session which is blocking because we only have two sessions. When you have hundreds of sessions it becomes more tricky to identify the session which is blocking by looking at pg_stat_activity.

When you want to know which locks are currently being held/granted in PostgreSQL you can query pg_locks:

postgres=# \d pg_locks
                   View "pg_catalog.pg_locks"
       Column       |   Type   | Collation | Nullable | Default 
--------------------+----------+-----------+----------+---------
 locktype           | text     |           |          | 
 database           | oid      |           |          | 
 relation           | oid      |           |          | 
 page               | integer  |           |          | 
 tuple              | smallint |           |          | 
 virtualxid         | text     |           |          | 
 transactionid      | xid      |           |          | 
 classid            | oid      |           |          | 
 objid              | oid      |           |          | 
 objsubid           | smallint |           |          | 
 virtualtransaction | text     |           |          | 
 pid                | integer  |           |          | 
 mode               | text     |           |          | 
 granted            | boolean  |           |          | 
 fastpath           | boolean  |           |          | 

What can we see here:

postgres=# select locktype,database,relation,pid,mode,granted from pg_locks where pid != pg_backend_pid();
   locktype    | database | relation | pid  |        mode         | granted 
---------------+----------+----------+------+---------------------+---------
 virtualxid    |          |          | 2992 | ExclusiveLock       | t
 virtualxid    |          |          | 2572 | ExclusiveLock       | t
 relation      |    13212 |    24576 | 2992 | RowExclusiveLock    | f
 relation      |    13212 |    24581 | 2572 | AccessExclusiveLock | t
 transactionid |          |          | 2572 | ExclusiveLock       | t
 relation      |    13212 |    24579 | 2572 | ShareLock           | t
 relation      |    13212 |    24576 | 2572 | AccessExclusiveLock | t
(7 rows)

There is one lock for session 2992 which is not granted and that is the session which currently is trying to insert a row in the table (see above). We can get more information by joining pg_locks with pg_database and pg_class taking the pids from above:

select b.locktype,d.datname,c.relname,b.pid,b.mode 
  from pg_locks b 
     , pg_database d
     , pg_class c
 where b.pid in (2572,2992)
   and b.database = d.oid
   and b.relation = c.oid;

 locktype | datname  | relname | pid  |        mode         
----------+----------+---------+------+---------------------
 relation | postgres | t1      | 2992 | RowExclusiveLock
 relation | postgres | t1      | 2572 | AccessExclusiveLock
(2 rows)

Does that help us, besides knowing that both sessions want to do some stuff against the t1 table? Not really. So how can we then identify a blocking session? Easy, use the pg_blocking_pids system information function, passing in the session which is blocked:

postgres=# select pg_blocking_pids(2992);
 pg_blocking_pids 
------------------
 {2572}
(1 row)

This gives you a list of sessions which are blocking. Can we kill it? Yes, of course, PostgreSQL comes with a rich set of system administration functions:

postgres=# select pg_terminate_backend(2572);
 pg_terminate_backend 
----------------------
 t

… and the insert succeeds. Hope this helps …
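When there are many sessions, combining pg_blocking_pids with pg_stat_activity gives a quick overview of which sessions are currently blocked and by whom (a small sketch, not tied to the example above; it returns no rows when nothing is blocked):

select pid
     , usename
     , pg_blocking_pids(pid) as blocked_by
     , query
  from pg_stat_activity
 where cardinality(pg_blocking_pids(pid)) > 0;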

PS: There is a great page on the PostgreSQL Wiki about locks.

 


Documentum – Change password – 5 – CS/FT – JBoss Admin

Sat, 2017-07-22 02:57

The next password I wanted to blog about is the JBoss Admin password. As you know, there are several JBoss Application Servers in Documentum, the most used being the ones for the Java Method Server (JMS) and for the Full Text Servers (Dsearch/IndexAgent). In this blog, I will only talk about the JBoss Admin password of the JMS and IndexAgents simply because I will include the Dsearch JBoss instance in another blog which will talk about the xDB.

 

The steps are exactly the same for all JBoss instances, it’s just a matter of checking/updating the right file. In this blog, I will still separate the steps for JMS and IndexAgents but that’s because I usually have more than one IndexAgent on the same FT and therefore I’m also providing a way to update all JBoss instances at the same time using the right commands.

 

As always, I will define an environment variable to store the password to avoid using clear text passwords in the shell. The generic steps to change a JBoss Admin password, in Documentum, are pretty simple:

  1. Store the password in a variable
  2. Encrypt the password
  3. Backup the old configuration file
  4. Replace the password file with the new encrypted password
  5. Restart the component
  6. Checking the connection with the new password

 

As you can see above, there is actually nothing in these steps to change the password… We are just replacing a string inside a file with another string and that’s done, the password is changed! That’s really simple but that’s also a security issue since you do NOT need to know the old password… That’s how Documentum works with JBoss…

 

I. JMS JBoss Admin

For the JMS JBoss Admin, you obviously need to connect to all Content Servers and then perform the steps. Below are the commands I use to set the variable, encrypt the password and then update the password file with the new encrypted password (I’m just overwriting it):

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW JBoss admin password: " jboss_admin_pw; echo
Please enter the NEW JBoss admin password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp "$DOCUMENTUM_SHARED/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${jboss_admin_pw}
AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/
[dmadmin@content_server_01 ~]$ mv dctm-users.properties dctm-users.properties_bck_$(date "+%Y%m%d")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo "# users.properties file to use with UsersRolesLoginModule" > dctm-users.properties
[dmadmin@content_server_01 ~]$ echo "admin=AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x" >> dctm-users.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cat dctm-users.properties
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x
[dmadmin@content_server_01 ~]$

 

At this point, the new password has been put in the file dctm-users.properties in its encrypted form, so you can now restart the component and check the status of the JBoss Application Server. To check that, I will use below a small curl command which is really useful… If, just like me, you always restrict the JBoss Administration Console to 127.0.0.1 (localhost only) for security reasons, then this is really awesome since you don’t need to start an X server, a browser and all this stuff: simply provide the password when asked and voila!

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server
[dmadmin@content_server_01 ~]$ ./stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup ./startMethodServer.sh >> nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sleep 30
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ curl -g --user admin -D - http://localhost:9085/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Wed, 15 Jul 2017 11:16:51 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[dmadmin@content_server_01 ~]$

 

If everything has been done properly, you should get an “HTTP/1.1 200 OK” status, meaning that the JBoss Application Server is up & running, and the “result” should be “running”. This proves that the password provided in the command matches the encrypted one from the file dctm-users.properties, because the JMS is able to answer your request.
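
By the way, if you want to script this check and skip the interactive prompt, you can pass the password directly to curl (just a variant of the command above, nothing Documentum specific; be aware that the password will then be visible in the shell history and process list):

[dmadmin@content_server_01 ~]$ curl -g --user admin:${jboss_admin_pw} -D - http://localhost:9085/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'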

 

II. IndexAgent JBoss Admin

For the IndexAgent JBoss Admin, you obviously need to connect to all Full Text Servers and then perform the steps again. Below are the commands to do that. These commands are adapted in case you have several IndexAgents installed. Please note that the commands below will set the same Admin password for all JBoss instances (all IndexAgents JBoss Admin). Therefore, if that’s not what you want, you will have to take the commands from the JMS section but adapt the paths.

[xplore@full_text_server_01 ~]$ read -s -p "Please enter the NEW JBoss admin password: " jboss_admin_pw; echo
Please enter the NEW JBoss admin password:
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ $JAVA_HOME/bin/java -cp "$XPLORE_HOME/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${jboss_admin_pw}
AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server/
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do mv ./$i/configuration/dctm-users.properties ./$i/configuration/dctm-users.properties_bck_$(date "+%Y%m%d"); done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "# users.properties file to use with UsersRolesLoginModule" > ./$i/configuration/dctm-users.properties; done
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x" >> ./$i/configuration/dctm-users.properties; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "--$i:"; cat ./$i/configuration/dctm-users.properties; echo; done
--DctmServer_Indexagent_DocBase1:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

--DctmServer_Indexagent_DocBase2:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

--DctmServer_Indexagent_DocBase3:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

[xplore@full_text_server_01 ~]$

 

At this point, the new password has been put in its encrypted form in the file dctm-users.properties for each IndexAgent. So, the next step is to restart all the components and check the status of the JBoss instances. Just like for the JMS, I will use below the curl command to check the status of a specific IndexAgent:

[xplore@full_text_server_01 ~]$ for i in `ls stopIndexag*.sh`; do ./$i; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ curl -g --user admin -D - http://localhost:9205/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Wed, 15 Jul 2017 11:16:51 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

If you want to check all IndexAgents at once, you can use this command instead (it’s a long one I know…):

[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do port=`grep '<socket-binding .*name="management-http"' ./$i/configuration/standalone.xml|sed 's,.*http.port:\([0-9]*\).*,\1,'`; echo; echo "  ** Please enter below the password for '$i' ($port)"; curl -g --user admin -D - http://localhost:$port/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'; done

  ** Please enter below the password for 'DctmServer_Indexagent_DocBase1' (9205)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:35 GMT

{
    "outcome" : "success",
    "result" : "running"
}
  ** Please enter below the password for 'DctmServer_Indexagent_DocBase2' (9225)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:42 GMT

{
    "outcome" : "success",
    "result" : "running"
}
  ** Please enter below the password for 'DctmServer_Indexagent_DocBase3' (9245)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:45 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

If everything has been done properly, you should get a “HTTP/1.1 200 OK” status for all IndexAgents.

 

 

Cet article Documentum – Change password – 5 – CS/FT – JBoss Admin est apparu en premier sur Blog dbi services.

Documentum – Change password – 4 – CS – Presets & Preferences

Sat, 2017-07-22 01:58

In a previous blog (see this one), I already provided the steps to change the BOF password and I mentioned that this was more or less the only important account in the Global Registry. Well in this blog, I will show you how to change the passwords for the two other important accounts: the Presets and Preferences accounts.

 

These two accounts can actually be created in a dedicated repository for performance reasons but by default they will be taken from the Global Registry and they are used – as you can easily understand – to create Presets and Preferences…

 

As said above, these accounts are docbase accounts, so let’s start with setting up some environment variables containing the passwords and then update the passwords of these two accounts on a Content Server:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW Preset password: " prespw; echo
Please enter the NEW Preset password:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW Preferences password: " prefpw; echo
Please enter the NEW Preferences password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dmc_wdk_presets_owner'
> set,c,l,user_password
> $prespw
> save,c,l
> retrieve,c,dm_user where user_login_name='dmc_wdk_preferences_owner'
> set,c,l,user_password
> $prefpw
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000907 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> ...
110f123456000145
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Again, to verify that the passwords have been set properly, you can try to log in to the respective accounts:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udmc_wdk_presets_owner -P$prespw


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000908 started for user dmc_wdk_presets_owner."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udmc_wdk_preferences_owner -P$prefpw


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000909 started for user dmc_wdk_preferences_owner."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

When the docbase accounts have been updated, the first part is done. That’s good, but just like for the BOF account, you still need to update the references everywhere… Fortunately, for the Presets and Preferences accounts there are fewer references, so it’s less a pain in the… ;)

 

There are references to these two accounts in the WDK-based Applications. Below I will use Documentum Administrator as an example, which is deployed as a WAR file on a WebLogic Server; however, the steps would be the same for other Application Servers, except that you might use exploded folders instead of WAR files… Below I will use:

  • $WLS_APPLICATIONS as the directory where the DA WAR file is present.
  • $WLS_APPS_DATA as the directory where the Data are present (log files, dfc.keystore, cache, …).

 

These two folders might be the same depending on how you configured your Application Server. So, first of all, let’s encrypt the two passwords on the Application Server using the DA libraries:

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war wdk/app.xml WEB-INF/classes WEB-INF/lib/dfc.jar WEB-INF/lib
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW Preset password: " prespw; echo
Please enter the NEW Preset password:
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW Preferences password: " prefpw; echo
Please enter the NEW Preferences password:
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ java -Djava.security.egd=file:///dev/./urandom -classpath WEB-INF/classes:WEB-INF/lib/dfc.jar:WEB-INF/lib/commons-io-1.2.jar com.documentum.web.formext.session.TrustedAuthenticatorTool $prespw $prefpw
Encrypted: [jpQm5FfqdD3HWqP4mgoIIw==], Decrypted: [Pr3seTp4sSwoRd]
Encrypted: [YaGqNkj2FqfQDn3gfna8Nw==], Decrypted: [Pr3feRp4sSwoRd]
[weblogic@weblogic_server_01 ~]$

 

Once this has been done, let’s check the old passwords, update them in the app.xml file of DA and then verify that the update has been done. The sed commands below are pretty simple: the address range restricts the substitution to the parent XML tag (so either <presets>…</presets> or <preferencesrepository>…</preferencesrepository>) and the substitution replaces the <password>…</password> line inside that tag with the new password we encrypted before. So, again, just replace my encrypted passwords with what you got:

[weblogic@weblogic_server_01 ~]$ grep -C20 "<password>.*</password>" wdk/app.xml | grep -E "dmc_|</password>|presets>|preferencesrepository>"
         <presets>
            <!-- Encrypted password for default preset user "dmc_wdk_presets_owner" -->
            <password>tqQd5gfWGF3tVacfmgwL2w==</password>
         </presets>
         <preferencesrepository>
            <!-- Encrypted password for default preference user "dmc_wdk_preferences_owner" -->
            <password>LdFinAwf2F2fuB29cqfs2w==</password>
         </preferencesrepository>
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sed -i "/<presets>/,/<\/presets>/ s,<password>.*</password>,<password>jpQm5FfqdD3HWqP4mgoIIw==</password>," wdk/app.xml
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sed -i "/<preferencesrepository>/,/<\/preferencesrepository>/ s,<password>.*</password>,<password>YaGqNkj2FqfQDn3gfna8Nw==</password>," wdk/app.xml
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ grep -C20 "<password>.*</password>" wdk/app.xml | grep -E "dmc_|</password>|presets>|preferencesrepository>"
         <presets>
            <!-- Encrypted password for default preset user "dmc_wdk_presets_owner" -->
            <password>jpQm5FfqdD3HWqP4mgoIIw==</password>
         </presets>
         <preferencesrepository>
            <!-- Encrypted password for default preference user "dmc_wdk_preferences_owner" -->
            <password>YaGqNkj2FqfQDn3gfna8Nw==</password>
         </preferencesrepository>
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war wdk/app.xml
[weblogic@weblogic_server_01 ~]$ rm -rf WEB-INF/ wdk/
[weblogic@weblogic_server_01 ~]$

 

Normally the passwords returned by the second grep command should now differ from the old ones and should match the values returned by the java command executed previously to encrypt the Presets and Preferences passwords. Once that is done, simply repack the war file (done above with jar -uvf) and redeploy it (if needed).
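
The redeployment itself can be done via the WebLogic Administration Console or on the command line. Below is just a small sketch using weblogic.Deployer; the application name ‘da’, the admin URL and the ‘weblogic’ user are assumptions, so adapt them to your domain (the CLASSPATH for weblogic.Deployer comes from sourcing setWLSEnv.sh, and the tool will prompt for the admin password if you do not pass it on the command line):

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ java weblogic.Deployer -adminurl t3://weblogic_server_01:7001 -username weblogic -redeploy -name da -source da.war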

 

To verify that the passwords are properly set you can simply stop DA, remove the cache containing the Presets’ jars and restart DA. If the jars are automatically re-created, then the passwords should be OK:

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPS_DATA/documentum.da/dfc.data/cache
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l
total 4
drwxr-x---. 4 weblogic weblogic 4096 Jul 15 20:58 7.3.0000.0205
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l ./7.3.*/bof/*/
...
[weblogic@weblogic_server_01 ~]$

 

This last ‘ls’ command will display a list of 10 or 15 jars (12 for me in the DA 7.3 GA release) as well as a few files (content.lck, content.xml and GR_DOCBASE.lck usually). If you don’t see any jar files before the restart, it means the old password was probably not correct… Ok so now, to verify that the new passwords have been put properly in the app.xml file, simply stop the Managed Server hosting DA in whatever way you prefer (I will use “msDA-01” for the example below), then remove the cache folder and restart DA. Once DA is up & running again, it will re-create this cache folder in a few seconds and all the jars should be back:

[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop stop msDA-01
  ** Managed Server msDA-01 stopped
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ rm -rf ./7.3*/
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop start msDA-01
  ** Managed Server msDA-01 started
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sleep 30
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l ./7.3.*/bof/*/
...
[weblogic@weblogic_server_01 ~]$

 

If you did it properly, the jars will be back. If you want a list of the jars that should be present, take a look at the file “./7.3.*/bof/*/content.xml”. Obviously above I was using the DA 7.3 GA so my cache folder starts with 7.3.xxx. If you are using another version of DA, the name of this folder will change so just keep that in mind.

 

 

Cet article Documentum – Change password – 4 – CS – Presets & Preferences est apparu en premier sur Blog dbi services.

Documentum – Change password – 3 – CS – Installation Owner

Sat, 2017-07-22 00:36

In this blog, I will describe the few steps needed to change the Documentum Installation Owner’s password. As you all know, the Installation Owner is (one of) the most important password in Documentum and it is probably the first you define even before starting the installation.

 

As always, I will use a linux environment and in this case, I’m assuming the “dmadmin” account is a local account to each Content Server and therefore the change of the password must be done on all of them. In case you have an AD integration or something similar, you can just change the password at the AD level so that’s not funny, right?!

 

So, let’s start with logging in to all Content Servers using the Installation Owner’s account. In case you don’t remember the old password, you will have to use the root account instead. Changing the dmadmin password is pretty simple: you just have to change it at the OS level (again, this is the default… If you changed the dmadmin account type, then…):

[dmadmin@content_server_01 ~]$ passwd
    Changing password for user dmadmin.
    Changing password for dmadmin.
    (current) UNIX password:
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
[dmadmin@content_server_01 ~]$

 

To verify that the dmadmin’s password has been changed successfully, you can use the dm_check_password utility as follows (leave the extra #1 and #2 empty):

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
    Enter user name: dmadmin
    Enter user password:
    Enter user extra #1 (not used):
    Enter user extra #2 (not used):
    $DOCUMENTUM/dba/dm_check_password: Result = (0) = (DM_EXT_APP_SUCCESS)
[dmadmin@content_server_01 ~]$

 

Once you are sure that the password is set properly, one could think that it’s over but actually, it’s not… There is one additional place where this password must be set, and I’m not talking about new installations which will obviously require you to enter the new password. For that, let’s first encrypt this password:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW dmadmin's password: " dmadmin_pw; echo
    Please enter the NEW dmadmin's password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${dmadmin_pw}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example (“AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0”) but this will be the encrypted password of dmadmin. I will use this value in the commands below so whenever you see this, just replace it with what your “java -cp ..” command returned.

 

So where should this be used? On the Full Text Server! Inside the watchdog configuration, the dmadmin’s password is used for the IndexAgent connection, so log in to your FT. The commands below will take a backup of the configuration file and then update it to use the new encrypted password:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/watchdog/config/
[xplore@full_text_server_01 config]$ cp dsearch-watchdog-config.xml dsearch-watchdog-config.xml_bck_$(date "+%Y%m%d")
[xplore@full_text_server_01 config]$
[xplore@full_text_server_01 config]$ sed -i 's,<property name="docbase_password" value="[^"]*",<property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0",' dsearch-watchdog-config.xml
[xplore@full_text_server_01 config]$

 

Small (but important) note on the above commands: if you are using the same FT for different environments or if one of the IndexAgents is linked to a different dmadmin account (and therefore a different password), then you will need to open the file manually and replace the passwords for the corresponding xml tags (or use a different sed command which will be more complicated). Each IndexAgent will have the following lines for its configuration:

<application-config instance-name="<hostname>_9200_IndexAgent" name="IndexAgent">
        <properties>
                <property name="application_url" value="https://<hostname>:9202/IndexAgent"/>
                <property name="docbase_user" value="dmadmin"/>
                <property name="docbase_name" value="DocBase1"/>
                <property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0"/>
                <property name="servlet_wait_time" value="3000"/>
                <property name="servlet_max_retry" value="5"/>
                <property name="action_on_servlet_if_stopped" value="notify"/>
        </properties>
        <tasks>
                ...
        </tasks>
</application-config>

 

Once the above modification has been done, simply restart the xPlore components.
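
The restart itself is not shown here; below is a minimal sketch of what it could look like, reusing the start/stop scripts already used for the IndexAgents in the JBoss Admin blog (the PrimaryDsearch script names are an assumption, adapt them to your instance names):

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ for i in `ls stopIndexag*.sh`; do ./$i; done
[xplore@full_text_server_01 ~]$ ./stopPrimaryDsearch.sh
[xplore@full_text_server_01 ~]$ nohup ./startPrimaryDsearch.sh >> nohup-PrimaryDsearch.out 2>&1 &
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done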

 

Another thing that must be done is linked to D2 and D2-Config… If you are using these components, then you will need to update the D2 Lockbox on the Content Server side, and you probably defined the LoadOnStartup property, which will require you to put the dmadmin’s password in the D2 Lockbox on the Web Application side too. In this blog, I won’t discuss the full recreation of the D2 Lockbox with new password/passphrases since this is pretty simple and most likely known by everybody; I’m just going to update the dmadmin’s password inside the D2 Lockbox for the different properties instead. If you would like a more complete blog for the lockbox, just let me know! This only applies to “not so old nor so recent” D2 versions, since the D2 Lockbox has been introduced only a few years ago but is not present anymore with D2 4.7, so…

 

On the Content Server – I’m just setting up the environment to contain the libraries needed to update the D2 Lockbox and then updating the D2-JMS properties inside the lockbox. I’m using $DOCUMENTUM/d2-lib as the root folder under which the D2 Installer put the libraries and initial lockbox:

[dmadmin@content_server_01 ~]$ export LD_LIBRARY_PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
[dmadmin@content_server_01 ~]$ export PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$PATH
[dmadmin@content_server_01 ~]$ export CLASSPATH=$DOCUMENTUM/d2-lib/D2.jar:$DOCUMENTUM/d2-lib/LB.jar:$DOCUMENTUM/d2-lib/LBJNI.jar:$CLASSPATH
[dmadmin@content_server_01 ~]$ cp -R $DOCUMENTUM/d2-lib/lockbox $DOCUMENTUM/d2-lib/lockbox-bck_$(date "+%Y%m%d-%H%M%S")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.password ${dmadmin_pw}; done
[dmadmin@content_server_01 ~]$ for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.${docbase}.password ${dmadmin_pw}; done

 

The last command above mentions “${docbase}.${docbase}”… Actually the first one is indeed the name of the docbase but the second one is the name of the local dm_server_config. Therefore, for a single Content Server the above commands are probably enough (since by default the dm_server_config name = docbase name) but if you have a HA setup, then you will need to also include the remote dm_server_config names for each docbase (alternatively you can also use wildcards…). Once that is done, just replace the old lockbox with the new one in the JMS.

 

On the Web Application Server, the same environment variables are needed but of course the paths will change and you might need to include the C6-Common jar file too (which is known to cause issues with WebLogic if it is still in the CLASSPATH when you start it). So on the Web Application Server, I’m also setting up the environment variables with the dmadmin’s password and the D2 Lockbox passphrase as well as another variable for the list of docbases to loop on:
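
The setup itself is not shown here, so below is a minimal sketch of what it could look like. The library locations under $WLS_DOMAIN/D2, the $DOCBASES list and the ${d2method_pp} passphrase variable are assumptions (they simply mirror the Content Server layout and the command that follows), so adapt them to your installation:

[weblogic@weblogic_server_01 ~]$ export LD_LIBRARY_PATH=$WLS_DOMAIN/D2/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
[weblogic@weblogic_server_01 ~]$ export CLASSPATH=$WLS_DOMAIN/D2/D2.jar:$WLS_DOMAIN/D2/LB.jar:$WLS_DOMAIN/D2/LBJNI.jar:$CLASSPATH
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW dmadmin's password: " dmadmin_pw; echo
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the D2 Lockbox passphrase: " d2method_pp; echo
[weblogic@weblogic_server_01 ~]$ export DOCBASES="DocBase1 DocBase2 DocBase3"

With these variables in place, the LoadOnStartup property can then be updated for each docbase: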

[weblogic@weblogic_server_01 ~]$ for docbase in $DOCBASES; do java com.emc.common.java.crypto.SetLockboxProperty $WLS_DOMAIN/D2/lockbox LoadOnStartup.${docbase}.password ${dmadmin_pw} ${d2method_pp}; done
[weblogic@weblogic_server_01 ~]$

 

With the D2 Lockbox, you will need to restart the components using them when you recreate it from scratch. However, when you update a property inside it, like above, it’s usually not needed. The next time the password is needed, it will be picked from the Lockbox.

 

Last comment on this: if you are using an ADTS and if you used the dmadmin’s account to manage it (I wouldn’t recommend this! Please use a dedicated user instead), then the password is also encrypted in a password file for each docbase under “%ADTS_HOME%/CTS/docbases/”.

 

 

Cet article Documentum – Change password – 3 – CS – Installation Owner est apparu en premier sur Blog dbi services.

OUD 11.1.2.3 – What are the minimum privileges for administering connect strings in OUD?

Thu, 2017-07-20 10:44

Security in OUD plays a very important role. Keeping your patch level up to date is just one of those topics, as explained in my recent blog.

https://blog.dbi-services.com/oud-11-1-2-3-patch-25840309-oud-bundle-patch-11-1-2-3-170718/

Another one is privileges. You don’t need to do all of your work with the cn=Directory Manager. The cn=Directory Manager is the most powerful user in an Oracle Unified Directory. You can see it as the root user. However, for just managing Connect strings via the netmgr it is kinda oversized.
OK. You could create another root user in cn=Root DNs,cn=config with fewer privileges, but this would have the disadvantage that it is not replicated.

I would just like to have a user, e.g. cn=oud,ou=people,dc=dbi,dc=com, which I can use only for creating/deleting TNS Connect strings. But what are the minimum privileges for doing this job?

First of all, we need to create the user. Specifying the Common Name, the Lastname and the password is enough.
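
The user creation can be done via the ODSM or on the command line. Below is just a minimal sketch using ldapmodify; the objectclasses and the dummy password are only an example, so adapt them to your needs:

[dbafmw@dbidg01 bin]$ ldapmodify --hostname dbidg01 --port 1389 --bindDN cn="Directory Manager" --bindPasswordFile ~/.oudpwd --defaultAdd
dn: cn=oud,ou=people,dc=dbi,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: oud
sn: oud
userPassword: MyNewPassword01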

If we try now to add a new connect string via the netmgr, we will get immediately the famous “TNS-04411: Directory service: permission denied” error.

...
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
oracle.net.common.dataStore.DataStoreException: error writing ServiceAlias to: LDAPDataStore [svr: dbidg01:1389, type: OID, ctxt: cn=OracleContext,dc=dbi,dc=com, home: /u01/app/oracle/product/12.2.0/dbhome_1]
original exception message: TNS-04411: Directory service: permission denied
  caused by: oracle.net.config.DirectoryServiceException: TNS-04411: Directory service: permission denied
  caused by: oracle.net.ldap.NNFLException
original stack trace: oracle.net.config.ServiceAliasException: TNS-04411: Directory service: permission denied
  caused by: oracle.net.config.DirectoryServiceException: TNS-04411: Directory service: permission denied
  caused by: oracle.net.ldap.NNFLException
oracle.net.config.DirectoryServiceException: TNS-04411: Directory service: permission denied
  caused by: oracle.net.ldap.NNFLException
oracle.net.ldap.NNFLException

To resolve this error, we need to make sure that the user has the password-reset privilege and is listed in the uniqueMember attribute of the following groups:

  • cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com
  • cn=oraclecontextadmins,cn=groups,cn=oraclecontext,dc=dbi,dc=com

To create net service names or net service aliases with the netmgr, you need to add the user to the OracleNetAdmins group (cn=OracleNetAdmins,cn=OracleContext…).

The OracleContextAdmins group is a super-user group for the Oracle Context and members of the OracleContextAdmins group can add all supported types of entries to the Oracle Context. The directory user (in most cases the cn=Directory Manager) that created Oracle Context is automatically added to these groups. Other users can be added to these groups by the directory administrator.

To add the cn=oud,ou=people,dc=dbi,dc=com user to the particular groups, run the following ldapmodify command:

[dbafmw@dbidg01 bin]$ ldapmodify --hostname dbidg01 --port 1389 --bindDN cn="Directory Manager" --bindPasswordFile ~/.oudpwd
dn: cn=oud,ou=people,dc=dbi,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: password-reset
Processing MODIFY request for cn=oud,ou=people,dc=dbi,dc=com
MODIFY operation successful for DN cn=oud,ou=people,dc=dbi,dc=com

dn: cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com
changetype: modify
add: uniquemember
uniquemember:  cn=oud,ou=people,dc=dbi,dc=com
Processing MODIFY request for cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com
MODIFY operation successful for DN cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com

dn: cn=oraclecontextadmins,cn=groups,cn=oraclecontext,dc=dbi,dc=com
changetype: modify
add: uniquemember
uniquemember:  cn=oud,ou=people,dc=dbi,dc=com
Processing MODIFY request for cn=oraclecontextadmins,cn=groups,cn=oraclecontext,dc=dbi,dc=com
MODIFY operation successful for DN cn=oraclecontextadmins,cn=groups,cn=oraclecontext,dc=dbi,dc=com

[dbafmw@dbidg01 bin]$
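
Before testing with the netmgr again, you can quickly double-check the group membership with an ldapsearch (just a sketch, using the same connection options as above):

[dbafmw@dbidg01 bin]$ ldapsearch --hostname dbidg01 --port 1389 --bindDN cn="Directory Manager" --bindPasswordFile ~/.oudpwd --baseDN "cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com" --searchScope base "(objectclass=*)" uniquemember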

Now, let’s connect again via netmgr and add a new entry.

That’s it. :-)

Conclusion

Not only patching, but also a minimum set of privileges makes your OUD more secure.

 

Cet article OUD 11.1.2.3 – What are the minimum privileges for administering connect strings in OUD? est apparu en premier sur Blog dbi services.

OUD 11.1.2.3 – Patch 25840309: OUD BUNDLE PATCH 11.1.2.3.170718

Wed, 2017-07-19 05:55

A new OUD bundle patch was released on the 18th of July 2017, and this gives me the opportunity to show how fast an OUD can be patched, if configured correctly. In case you have an OUD multi master replication with a load balancer virtual IP on top, then zero downtime patching is possible. This is the configuration that I usually recommend, not only for high availability, but also for maintenance.

A typical OUD installation consists not only of the OUD itself, but also of a few other components like the JDK, WebLogic, the ODSM (which is deployed in WebLogic) and, last but not least, the OPatch utility.

In the end, for successfully upgrading the OUD to bundle patch 11.1.2.3.170718 we need the following patches:

  • Patch 25840309: OUD BUNDLE PATCH 11.1.2.3.170718
  • Patch 13079846: Oracle JDK 1.7.0 (build 1.7.0_151-b15)
  • Patch 25869650: SU Patch [B25A]: WLS PATCH SET UPDATE 10.3.6.0.170718
  • Patch 6880880: OPatch patch of version 11.1.0.12.9 for Oracle software releases 11.1.0.x (OCT 2015)

A few steps can be done beforehand, like patching the OPatch itself and rolling out the new JDK. No downtime is required for those steps. A very important part is the Java environment. If you have set it up like in the following blog, you can change to the new JDK by simply changing a symbolic link.

https://blog.dbi-services.com/oud-oracle-unified-directory-11-1-2-3-how-to-change-the-java-version-of-oud-11-1-2-3-with-weblogic-and-odsm/

Ok. Let’s get started. Currently we do have the following patches in the staging area.

[dbafmw@dbidg01 20170718]$ ls -l
total 698560
-rw-r--r-- 1 dbafmw oinstall 467017155 Jul 19 07:37 p13079846_17000_Linux-x86-64.zip
-rw-r--r-- 1 dbafmw oinstall  98414546 Jul 19 07:33 p25840309_111230_Generic.zip
-rw-r--r-- 1 dbafmw oinstall  95019336 Jul 19 07:38 p25869650_1036_Generic.zip
-rw-r--r-- 1 dbafmw oinstall  54867592 Jul 19 08:34 p6880880_111000_Linux-x86-64.zip

And the following versions.

-- Current Java Version

[dbafmw@dbidg01 bin]$ ./java -version
java version "1.7.0_141"
Java(TM) SE Runtime Environment (build 1.7.0_141-b11)
Java HotSpot(TM) 64-Bit Server VM (build 24.141-b11, mixed mode)

-- Current WebLogic Version

. /u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3/server/bin/setWLSEnv.sh
java weblogic.version -verbose
	
WebLogic Server 10.3.6.0.170117 PSU Patch for BUG24667634 FRI NOV 25 18:34:42 IST 2016 ImplVersion: 10.3.6.0
WebLogic Server 10.3.6.0  Tue Nov 15 08:52:36 PST 2011 1441050  ImplVersion: 10.3.6.0

-- Current OUD Version and OPatch version

[dbafmw@dbidg01 OPatch]$ ./opatch lsinv
Oracle Interim Patch Installer version 11.1.0.11.0
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/oraInst.loc
OPatch version    : 11.1.0.11.0
OUI version       : 11.1.0.11.0
Log file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/opatch2017-07-19_08-53-32AM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/product/Middleware/11.1.2.3"

Lsinventory Output file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/lsinv/lsinventory2017-07-19_08-53-32AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Unified Directory                                             11.1.2.3.0
There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch  25383162     : applied on Fri Jun 02 11:39:44 CEST 2017
Unique Patch ID:  21007593
   Created on 8 Feb 2017, 02:26:53 hrs PST8PDT
   Bugs fixed:
     21110317, 24731779, 19596240, 20987332, 20786014, 19271756, 21024218
     24333856, 21184039, 17409345, 22555819, 21569267, 19987242, 17416498
     23703560, 22927723, 21835208, 21278352, 22308304, 24409239, 21620706
     21225831, 21634829, 23317872, 21126991, 20771322, 21189204, 19670854
     23327572, 22843404, 21097296, 23066020, 21298039, 21967348, 20688926
     20813995, 17488044, 18311877, 21249082, 20198154, 20415424, 23032811
     21207635, 21415093, 21831011, 21575993, 21882589, 17568241, 22611713
     20363721, 21498205, 21301617, 24577814, 19991366, 21098103, 21932111
     24390289, 21282258, 19636259, 24399724, 22916463, 20235234, 20630323
     20113230, 20554639, 24481116, 24681490, 21924319, 20979061, 20469049
     24490948, 22017920, 23195988, 22564810, 24565597, 22950438, 20881625
     21169776, 20736946, 21032490, 22641166, 21644986, 21337744, 23262289
     21509636, 21183856, 22369027, 22075202, 21464932, 20692543, 20805136
     21754188, 21889432, 20924952, 21683538, 20554436, 24372668, 20881496
     21527426, 23703466, 21605154, 24310538, 21959508, 22689164, 20875947
     22119339, 20778847, 23207713, 21503767, 21310463, 20554511, 21387378
     21075558, 22180844, 21665762, 20987758, 24405001, 20529805, 20470402
     20381973, 21881962, 20830152, 23090053, 17406282, 21461976, 20989144
     20180870, 20312191, 21111836, 22732230, 21387387, 19504096, 24544860, 18312432

Patch  21197325     : applied on Mon Mar 27 12:25:39 CEST 2017
Unique Patch ID:  18996262
   Created on 3 Jul 2015, 07:28:59 hrs PST8PDT
   Bugs fixed:
     21197325

--------------------------------------------------------------------------------

OPatch succeeded.

Patching the OPatch itself is an operation which requires no downtime. You simply unzip the new OPatch in the OUD home. Please be aware that the OPatch version 11.1.x is needed for patching the OUD. Any other version is not supported. In my case I patch the OPatch from version 11.1.0.11.0 to version 11.1.0.12.9.

[dbafmw@dbidg01 20170718]$ cp p6880880_111000_Linux-x86-64.zip /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/
[dbafmw@dbidg01 20170718]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/
[dbafmw@dbidg01 Oracle_OUD1]$ unzip p6880880_111000_Linux-x86-64.zip
Archive:  p6880880_111000_Linux-x86-64.zip
  inflating: OPatch/operr
   creating: OPatch/scripts/oms/
  inflating: OPatch/scripts/oms/opatchauto.bat
  inflating: OPatch/scripts/oms/opatchauto
  inflating: OPatch/scripts/oms/generateMultiOMSPatchingScripts.pl
   creating: OPatch/scripts/oms/oms_child_scripts/
  inflating: OPatch/scripts/oms/oms_child_scripts/opatchauto_fmw
  inflating: OPatch/scripts/oms/oms_child_scripts/opatchauto_fa
   creating: OPatch/scripts/oms/wlskeys/
  inflating: OPatch/scripts/oms/wlskeys/createkeys.sh
...
...

[dbafmw@dbidg01 OPatch]$ ./opatch lsinv | grep "OPatch version"
OPatch version    : 11.1.0.12.9

As a next step, we roll out the new JDK, which is 1.7.0_151-b15. This is also a step which requires no downtime, because we simply untar the new JDK in the appropriate directory. We are not changing the symbolic link at this point.

[dbafmw@dbidg01 20170718]$ cp jdk-7u151-linux-x64.tar.gz /u01/app/oracle/product/Middleware/11.1.2.3/
[dbafmw@dbidg01 20170718]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/
[dbafmw@dbidg01 11.1.2.3]$ tar -xzvf jdk-7u151-linux-x64.tar.gz
jdk1.7.0_151/
jdk1.7.0_151/LICENSE
jdk1.7.0_151/release
jdk1.7.0_151/jre/
jdk1.7.0_151/jre/LICENSE
jdk1.7.0_151/jre/Welcome.html
jdk1.7.0_151/jre/plugin/
jdk1.7.0_151/jre/plugin/desktop/
jdk1.7.0_151/jre/plugin/desktop/sun_java.png
jdk1.7.0_151/jre/plugin/desktop/sun_java.desktop
jdk1.7.0_151/jre/lib/
jdk1.7.0_151/jre/lib/fontconfig.SuSE.10.properties.src
jdk1.7.0_151/jre/lib/alt-rt.jar
jdk1.7.0_151/jre/lib/meta-index
jdk1.7.0_151/jre/lib/net.properties
jdk1.7.0_151/jre/lib/oblique-fonts/
jdk1.7.0_151/jre/lib/oblique-fonts/LucidaSansDemiOblique.ttf
jdk1.7.0_151/jre/lib/oblique-fonts/fonts.dir
jdk1.7.0_151/jre/lib/oblique-fonts/LucidaTypewriterOblique.ttf
...
jdk1.7.0_151/db/bin/sysinfo.bat
jdk1.7.0_151/db/bin/setNetworkClientCP
jdk1.7.0_151/db/bin/derby_common.bat
jdk1.7.0_151/db/bin/startNetworkServer.bat
jdk1.7.0_151/db/bin/startNetworkServer
jdk1.7.0_151/db/bin/setNetworkServerCP.bat
jdk1.7.0_151/db/bin/stopNetworkServer
jdk1.7.0_151/db/README-JDK.html
jdk1.7.0_151/db/NOTICE
jdk1.7.0_151/README.html
jdk1.7.0_151/THIRDPARTYLICENSEREADME.txt

The symbolic link is still pointing to the old JDK version, which is 1.7.0_141.

[dbafmw@dbidg01 11.1.2.3]$ rm jdk-7u151-linux-x64.tar.gz
rm: remove write-protected regular file ‘jdk-7u151-linux-x64.tar.gz’? y
[dbafmw@dbidg01 11.1.2.3]$ ls -l | grep jdk
lrwxrwxrwx  1 dbafmw oinstall     12 May 29 14:10 jdk -> jdk1.7.0_141
drwxr-x---  8 dbafmw oinstall   4096 Dec 12  2016 jdk1.7.0_131
drwxr-xr-x  8 dbafmw oinstall   4096 Mar 14 06:10 jdk1.7.0_141
drwxr-xr-x  8 dbafmw oinstall   4096 Jul 12 11:31 jdk1.7.0_151

Now the downtime for the WebLogic and OUD server starts. From a downtime perspective, the WebLogic server is not so important, because it is only used for ODSM. If the WebLogic server is down, it means only that you can’t use the graphical ODSM tool anymore. It is much more important to get the OUD up and running again.

[dbafmw@dbidg01 ~]$ ./stopOUD.sh
Stopping Weblogic Server...
...
Stopping Derby Server...
Stopping Server...
...

After the WebLogic and OUD servers are stopped, it is time to switch the symbolic link for the JDK to the new version.

[dbafmw@dbidg01 11.1.2.3]$ rm jdk
[dbafmw@dbidg01 11.1.2.3]$ ln -s jdk1.7.0_151 jdk
[dbafmw@dbidg01 11.1.2.3]$ ls -l | grep jdk
lrwxrwxrwx  1 dbafmw oinstall     12 Jul 19 09:02 jdk -> jdk1.7.0_151
drwxr-x---  8 dbafmw oinstall   4096 Dec 12  2016 jdk1.7.0_131
drwxr-xr-x  8 dbafmw oinstall   4096 Mar 14 06:10 jdk1.7.0_141
drwxr-xr-x  8 dbafmw oinstall   4096 Jul 12 11:31 jdk1.7.0_151

Now we can patch the OUD to 11.1.2.3.170718.

[dbafmw@dbidg01 OPatch]$ cdh
[dbafmw@dbidg01 11.1.2.3]$ cd Oracle_OUD1/OPatch/
[dbafmw@dbidg01 OPatch]$ ./opatch apply /u01/app/oracle/product/Middleware/stage/PSU/20170718/25840309
Oracle Interim Patch Installer version 11.1.0.12.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/oraInst.loc
OPatch version    : 11.1.0.12.9
OUI version       : 11.1.0.11.0
Log file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/25840309_Jul_19_2017_09_05_01/apply2017-07-19_09-05-01AM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/product/Middleware/11.1.2.3"

Applying interim patch '25840309' to OH '/u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1'
Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:

Patch : 25840309

        Bug Superset of 25383162
        Super set bugs are:
        21110317,  24731779,  19596240,  20987332,  20786014,  19271756,  21024218,  24333856,  21184039,  17409345,  22555819,  21569267,  19987242,  17416498,  23703560,  22927723,  21835208,  21278352,  22308304,  24409239,  21620706,  21225831,  21634829,  23317872,  21126991,  20771322,  21189204,  19670854,  23327572,  22843404,  21097296,  23066020,  21298039,  21967348,  20688926,  20813995,  17488044,  18311877,  21249082,  20198154,  20415424,  23032811,  21207635,  21415093,  21831011,  21575993,  21882589,  17568241,  22611713,  20363721,  21498205,  21301617,  24577814,  19991366,  21098103,  21932111,  24390289,  21282258,  19636259,  24399724,  22916463,  20235234,  20630323,  20113230,  20554639,  24481116,  24681490,  21924319,  20979061,  20469049,  24490948,  22017920,  23195988,  22564810,  24565597,  22950438,  20881625,  21169776,  20736946,  21032490,  22641166,  21644986,  21337744,  23262289,  21509636,  21183856,  22369027,  22075202,  21464932,  20692543,  20805136,  21754188,  21889432,  20924952,  21683538,  20554436,  24372668,  20881496,  21527426,  23703466,  21605154,  24310538,  21959508,  22689164,  20875947,  22119339,  20778847,  23207713,  21503767,  21310463,  20554511,  21387378,  21075558,  22180844,  21665762,  20987758,  24405001,  20529805,  20470402,  20381973,  21881962,  20830152,  23090053,  17406282,  21461976,  20989144,  20180870,  20312191,  21111836,  22732230,  21387387,  19504096,  24544860,  18312432
Interim patch 25840309 is a superset of the patch(es) [  25383162 ] in the Oracle Home
OPatch will roll back the subset patches and apply the given patch.
All checks passed.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y

Backing up files...
Rolling back interim patch '25383162' from OH '/u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1'

Patching component oracle.idm.oud, 11.1.2.3.0...
RollbackSession removing interim patch '25383162' from inventory

OPatch back to application of the patch '25840309' after auto-rollback.

Patching component oracle.idm.oud, 11.1.2.3.0...

Verifying the update...
Patch 25840309 successfully applied
Log file location: /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/25840309_Jul_19_2017_09_05_01/apply2017-07-19_09-05-01AM_1.log

OPatch succeeded.
[dbafmw@dbidg01 OPatch]$

[dbafmw@dbidg01 OPatch]$ ./opatch lsinv
Oracle Interim Patch Installer version 11.1.0.12.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/oraInst.loc
OPatch version    : 11.1.0.12.9
OUI version       : 11.1.0.11.0
Log file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/opatch2017-07-19_09-05-45AM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/product/Middleware/11.1.2.3"

Lsinventory Output file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/lsinv/lsinventory2017-07-19_09-05-45AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Unified Directory                                             11.1.2.3.0
There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch  25840309     : applied on Wed Jul 19 09:05:10 CEST 2017
Unique Patch ID:  21265541
   Created on 14 Jul 2017, 00:55:06 hrs PST8PDT
   Bugs fixed:
     21310463, 21503767, 23531972, 25396981, 25201795, 25411721, 20554639
     24372668, 20415424, 17406282, 22180844, 24409239, 17416498, 20987332
     21298039, 21075558, 21575993, 24577814, 21110317, 22927723, 20235234
     22308304, 17409345, 21301617, 21846477, 20989144, 22641166, 21098103
     24390289, 24753992, 22564810, 21415093, 20554511, 20469049, 22732230
     21569267, 23607708, 21889432, 22963292, 21882589, 25022506, 25326997
     19596240, 21387378, 21278352, 21683538, 21387387, 18325609, 20786014
     20692543, 21665762, 21184039, 20778847, 20805136, 21932111, 20881625
     20736946, 22555819, 25176210, 19504096, 23703466, 23032811, 20771322
     20830152, 24677562, 21461976, 22017920, 19670854, 23090053, 21527426
     20813995, 21169776, 20113230, 20875947, 21282258, 24490948, 24731779
     18312432, 20529805, 21097296, 25173266, 24681490, 21464932, 21881962
     21498205, 21605154, 24793179, 21249082, 21509636, 22611713, 23703560
     22369027, 21126991, 20470402, 22075202, 21207635, 24902148, 24763428
     25451319, 25167070, 21111836, 22950438, 19991366, 20312191, 19987242
     20198154, 25166847, 21967348, 23066020, 24544860, 21032490, 24481116
     24701083, 25951621, 24405001, 21835208, 23327572, 24399724, 20979061
     20630323, 21959508, 19636259, 17568241, 25437857, 23319809, 19947883
     20987758, 20881496, 20924952, 20554436, 21754188, 20381973, 21183856
     24333856, 21620706, 24310538, 22916463, 21634829, 21337744, 22843404
     21924319, 20180870, 17488044, 20688926, 21811333, 21189204, 21644986
     23195988, 24565597, 22119339, 22689164, 23207713, 25686699, 25166924
     21831011, 18311877, 19271756, 21024218, 21680391, 21225831, 20979247
     20363721, 23262289, 23317872

Patch  21197325     : applied on Mon Mar 27 12:25:39 CEST 2017
Unique Patch ID:  18996262
   Created on 3 Jul 2015, 07:28:59 hrs PST8PDT
   Bugs fixed:
     21197325

--------------------------------------------------------------------------------

OPatch succeeded.

After the patching has completed successfully, we can start the OUD instance.

[dbafmw@dbidg01 ~]$ /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/start-ds
[19/Jul/2017:09:06:48 +0200] category=CORE severity=INFORMATION msgID=132 msg=The Directory Server is beginning the configuration bootstrapping process
[19/Jul/2017:09:06:48 +0200] category=CORE severity=NOTICE msgID=458886 msg=Oracle Unified Directory 11.1.2.3.170718 (build 20170621135318Z, R1706210545) starting up
[19/Jul/2017:09:06:52 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381717 msg=Installation Directory:  /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
[19/Jul/2017:09:06:52 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381719 msg=Instance Directory:      /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD
...
...
[19/Jul/2017:09:06:57 +0200] category=CORE severity=INFORMATION msgID=720 msg=No worker queue thread pool size specified: sizing automatically to use 24 threads
[19/Jul/2017:09:06:57 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on Administration Connector 0.0.0.0 port 4444
[19/Jul/2017:09:06:57 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on LDAP Connection Handler 192.168.56.201 port 1389
[19/Jul/2017:09:06:57 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on LDAP Connection Handler 192.168.56.201 port 1636
[19/Jul/2017:09:06:57 +0200] category=CORE severity=NOTICE msgID=458887 msg=The Directory Server has started successfully
[19/Jul/2017:09:06:57 +0200] category=CORE severity=NOTICE msgID=458891 msg=The Directory Server has sent an alert notification generated by class org.opends.server.core.DirectoryServer (alert type org.opends.server.DirectoryServerStarted, alert ID 458887):  The Directory Server has started successfully
[dbafmw@dbidg01 ~]$

Ok. OUD is up and running again and ready to use. The whole downtime for patching it, in case you don’t have a multi master replication, was only a few minutes. That is very fast from my point of view.
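
If you want to double-check the new version without digging through the start-ds output, you can also query the root DSE, which exposes the vendorVersion attribute (just a small sketch, using the same bind options as above):

[dbafmw@dbidg01 ~]$ ldapsearch --hostname dbidg01 --port 1389 --bindDN cn="Directory Manager" --bindPasswordFile ~/.oudpwd --baseDN "" --searchScope base "(objectclass=*)" vendorVersion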

Now let’s take care of the WebLogic Server. First of all, we need to unzip the new WebLogic patch into the bsu/cache_dir directory, or any other directory.

[dbafmw@dbidg01 ~]$ echo $JAVA_HOME
/u01/app/oracle/product/Middleware/11.1.2.3/jdk
[dbafmw@dbidg01 ~]$ /u01/app/oracle/product/Middleware/11.1.2.3/jdk/bin/java -version
java version "1.7.0_151"
Java(TM) SE Runtime Environment (build 1.7.0_151-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.151-b15, mixed mode)

[dbafmw@dbidg01 20170718]$ cp p25869650_1036_Generic.zip /u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir/
[dbafmw@dbidg01 20170718]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir/
[dbafmw@dbidg01 cache_dir]$ unzip p25869650_1036_Generic.zip
Archive:  p25869650_1036_Generic.zip
  inflating: README.txt
  inflating: B25A.jar
  inflating: patch-catalog_25250.xml
[dbafmw@dbidg01 cache_dir]$

Patching the WebLogic server is a little different than patching the OUD. It does not come with the OPatch utility but, first and foremost, it is not cumulative. That means you can’t apply patch B over patch A: you have to remove patch A beforehand. In case you have not removed it, you will end up with a conflict like the one shown below.
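
If you don’t remember which patch ID is currently applied (XIDD in my case), you can list the applied patches first. This is just a small sketch using the -view option documented in the BSU command line reference:

[dbafmw@dbidg01 ~]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu
[dbafmw@dbidg01 bsu]$ ./bsu.sh -view -status=applied -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3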

[dbafmw@dbidg01 cache_dir]$ . /u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3/server/bin/setWLSEnv.sh

[dbafmw@dbidg01 cache_dir]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/

[dbafmw@dbidg01 bsu]$ ./bsu.sh -install -patch_download_dir=/u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir -patchlist=XIDD -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3
Result: Failure
Unable to locate patch archive: XIDD
[dbafmw@dbidg01 bsu]$ ./bsu.sh -install -patch_download_dir=/u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir -patchlist=B25A -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3
Checking for conflicts.....
Conflict(s) detected - resolve conflict condition and execute patch installation again
Conflict condition details follow:
Patch B25A is mutually exclusive and cannot coexist with patch(es): XIDD

[dbafmw@dbidg01 bsu]$ ./bsu.sh -remove -patchlist=XIDD -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3
Checking for conflicts....
No conflict(s) detected

Removing Patch ID: XIDD..
Result: Success

[dbafmw@dbidg01 bsu]$

[dbafmw@dbidg01 bsu]$ ./bsu.sh -install -patch_download_dir=/u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir -patchlist=B25A -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3
Checking for conflicts.....
No conflict(s) detected

Installing Patch ID: B25A..
Result: Success

[dbafmw@dbidg01 bsu]$

[dbafmw@dbidg01 ~]$ nohup /u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/bin/startWebLogic.sh &
[1] 20001
[dbafmw@dbidg01 ~]$ nohup: ignoring input and appending output to ‘nohup.out’

Ready. We have now patched the WebLogic server to 10.3.6.0.170718. In case you have issues with the bsu utility, you might want to use the -log option for debugging: bsu.sh -log=bsu_debug.trc. Or take a quick look at the BSU command line interface reference guide: http://docs.oracle.com/cd/E14759_01/doc.32/e14143/commands.htm

Ok. OUD and WebLogic are fully patched and running. But what about the ODSM? Do I have to redeploy it? No, you don’t. The odsm.ear is redeployed automatically. The odsm.ear file is shipped along with every OUD bundle patch (bppatchnumber/files/odsm/odsm.ear). A new odsm.ear file is included with Patch 11.1.2.3.170718 at the following location:

[dbafmw@dbidg01 odsm]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/odsm
[dbafmw@dbidg01 odsm]$ ls -l
total 36660
-rw-r----- 1 dbafmw oinstall 37535804 Jul 14 09:55 odsm.ear

When you restart the server for the first time after applying the patch, it will automatically redeploy the new odsm.ear file. Therefore, you might experience a slower restart of the WebLogic server. If you want, you can take a look at the logs related to the odsm.ear file deployment.

[dbafmw@dbidg01 logs]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/servers/AdminServer/logs
[dbafmw@dbidg01 logs]$ cat AdminServer.log | grep odsm.ear
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <J2EE> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602820> <BEA-160166> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602830> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <J2EE> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602830> <BEA-160166> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602832> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602833> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602863> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602863> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602863> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602863> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>

If for any reason the odsm.ear deployment fails during the first restart of the server, you will need to redeploy the file manually; a rough sketch of how this could be done is shown below. That's it. We have now successfully upgraded the JDK, OPatch, OUD, and WebLogic/ODSM to the latest release.
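
The manual redeployment can be done either through the WebLogic Admin Console or with the weblogic.Deployer utility. The following is only a sketch: the admin URL t3://dbidg01:7001, the user weblogic and the deployment name odsm are assumptions and have to be adapted to your domain (the password can be passed with -password or entered when prompted):

[dbafmw@dbidg01 ~]$ . /u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3/server/bin/setWLSEnv.sh
[dbafmw@dbidg01 ~]$ java weblogic.Deployer -adminurl t3://dbidg01:7001 -username weblogic \
                    -name odsm -redeploy -source /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/odsm/odsm.ear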

At the end, repeat the whole procedure on the second OUD, if you have one.

Conclusion

Patching an OUD requires quite a few steps; however, it can be done quickly if it is well prepared. And in case you have multi-master replication with a load balancer VIP on top, you can do it with no downtime at all.

 


Recommended DB Parameters on the Oracle Database Appliance (ODA)

Tue, 2017-07-18 10:06

When creating a DB on the ODA using


# oakcli create database -db <db-name>

a template is used to set a couple of DB parameters, e.g.:


_datafile_write_errors_crash_instance=FALSE
_db_writer_coalesce_area_size=16777216
_disable_interface_checking=TRUE
_ENABLE_NUMA_SUPPORT=FALSE
_FILE_SIZE_INCREASE_INCREMENT=2143289344
_gc_policy_time=0
_gc_undo_affinity=FALSE
db_block_checking='FULL'
db_block_checksum='FULL'
db_lost_write_protect='TYPICAL'
filesystemio_options='setall'
parallel_adaptive_multi_user=FALSE
parallel_execution_message_size=16384
parallel_threads_per_cpu=2
use_large_pages='ONLY'

In recent projects I have seen a couple of DBs running on ODA which did not have (all of) those parameters set, because the DBs were migrated from a non-ODA platform and the customer kept the previous settings.

The questions are: are the above parameters mandatory on ODA, and where are they defined?

Actually Oracle writes in the documentation

http://docs.oracle.com/cd/E83239_01/doc.121/e83201/database-templates-oracle-database-appliance.htm#CMTAR269

“Oracle strongly recommends that you use the Oracle Database Appliance templates, because they implement best practices and are configured specifically for Oracle Database Appliance.”

So it’s not mandatory, but it’s “strongly recommended” by Oracle to set those parameters on ODA.

The parameters are actually defined in the XML-files

/opt/oracle/oak/onecmd/templates/OAK_oltp.dbt
/opt/oracle/oak/onecmd/templates/OAK_dss.dbt

E.g. on a virtualized X5-2 HA with 12.1.2.8.0 installed:


# grep "initParam name" /opt/oracle/oak/onecmd/templates/OAK_oltp.dbt
<initParam name="AUDIT_SYS_OPERATIONS" value="TRUE"/>
<initParam name="AUDIT_TRAIL" value="DB"/>
<initParam name="GLOBAL_NAMES" value="TRUE"/>
<initParam name="OS_AUTHENT_PREFIX" value=""/>
<initParam name="SQL92_SECURITY" value="TRUE"/>
<initParam name="PARALLEL_ADAPTIVE_MULTI_USER" value="FALSE"/>
<initParam name="PARALLEL_EXECUTION_MESSAGE_SIZE" value="16384"/>
<initParam name="PARALLEL_THREADS_PER_CPU" value="2"/>
<initParam name="_disable_interface_checking" value="TRUE"/>
<initParam name="_gc_undo_affinity" value="FALSE"/>
<initParam name="_gc_policy_time" value="0"/>
<initParam name="SESSION_CACHED_CURSORS" value="100"/>
<initParam name="OPEN_CURSORS" value="1000"/>
<initParam name="CURSOR_SHARING" value="EXACT"/>
<initParam name="_ENABLE_NUMA_SUPPORT" value="FALSE"/>
<initParam name="DB_LOST_WRITE_PROTECT" value="TYPICAL"/>
<initParam name="DB_BLOCK_CHECKSUM" value="FULL"/>
<initParam name="DB_BLOCK_CHECKING" value="FULL"/>
<initParam name="FAST_START_MTTR_TARGET" value="300"/>
<initParam name="UNDO_RETENTION" value="900"/>
<initParam name="_FILE_SIZE_INCREASE_INCREMENT" value="2143289344"/>
<initParam name="FILESYSTEMIO_OPTIONS" value="setall"/>
<initParam name="use_large_pages" value="only"/>
<initParam name="DB_FILES" value="1024"/>
<initParam name="processes" value="4800"/>
<initParam name="pga_aggregate_target" value="49152" unit="MB"/>
<initParam name="sga_target" value="98304" unit="MB"/>
<initParam name="db_create_file_dest" value="+DATA"/>
<initParam name="log_buffer" value="64000000" />
<initParam name="cpu_count" value="48"/>
<initParam name="pga_aggregate_limit" value="49152" unit="MB"/>
<initParam name="_datafile_write_errors_crash_instance" value="false"/>
<initParam name="_fix_control" value="18960760:on"/>
<initParam name="db_block_size" value="8" unit="KB"/>
<initParam name="compatible" value="11.2.0.x.0"/>
<initParam name="undo_tablespace" value="UNDOTBS1"/>
<initParam name="control_files" value="("{ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/control01.ctl")"/>
<initParam name="audit_file_dest" value="{ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump"/>
<initParam name="audit_trail" value="db"/>
<initParam name="diagnostic_dest" value="{ORACLE_BASE}"/>
<initParam name="remote_login_passwordfile" value="EXCLUSIVE"/>
<initParam name="dispatchers" value="(PROTOCOL=TCP) (SERVICE={SID}XDB)"/>
<initParam name="db_recovery_file_dest" value="+RECO"/>
<initParam name="db_recovery_file_dest_size" value="1843200" unit="MB"/>
<initParam name="db_create_online_log_dest_1" value="+REDO" />
<initParam name="_db_writer_coalesce_area_size" value="16777216"/>

Oracle does not apply those parameters blindly when creating a DB with oakcli, but adjusts some of them, as described e.g. in the blog post

https://blog.dbi-services.com/oda-32gb-template-but-got-a-database-with-16gb-sga/

I.e. the SGA_TARGET and PGA_AGGREGATE_TARGET parameters are adjusted based on the chosen database class, and the COMPATIBLE parameter is set to the current release (e.g. 12.1.0.2.0).

So if you’re not able to create the DB on ODA using

# oakcli create database -db <db-name>

then I recommend checking the XML file OAK_oltp.dbt (or, in case of a Decision Support System/Warehouse DB, the file OAK_dss.dbt) and setting the parameters in your database accordingly. Alternatively (and probably even better) you may create a dummy DB using oakcli, check Oracle's settings, use them in your migrated DB and drop the dummy DB afterwards. A quick way to compare a migrated DB against the template is sketched below.
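
This is only a minimal sketch, assuming the 12.1.2.8.0 template location shown above and a local SYSDBA connection; hidden parameters which are not set at all are simply not listed by "show parameter":

# extract the parameter names from the OLTP template and show the
# current values of the corresponding parameters in the running DB
for p in $(grep "initParam name" /opt/oracle/oak/onecmd/templates/OAK_oltp.dbt \
           | sed 's/.*name="\([^"]*\)".*/\1/'); do
  echo "show parameter ${p}"
done | sqlplus -s / as sysdba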

Here are the parameters of a 12c DB created on a virtualized ODA X6-2 HA 12.1.2.11.0 with oakcli create database, using the smallest setting with 1 core and the DSS template:


*._datafile_write_errors_crash_instance=false
*._db_writer_coalesce_area_size=16777216
*._disable_interface_checking=TRUE
*._ENABLE_NUMA_SUPPORT=FALSE
*._FILE_SIZE_INCREASE_INCREMENT=2143289344
*._fix_control='18960760:on'
*._gc_policy_time=0
*._gc_undo_affinity=FALSE
*.audit_file_dest='/u01/app/oracle/admin/C12TEST/adump'
*.audit_sys_operations=TRUE
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='/u01/app/oracle/oradata/datastore/C12TEST/C12TEST/controlfile/o1_mf_dpw4ljnv_.ctl'
*.cpu_count=2
*.cursor_sharing='EXACT'
*.db_block_checking='FULL'
*.db_block_checksum='FULL'
*.db_block_size=8192
*.db_create_file_dest='/u02/app/oracle/oradata/datastore/.ACFS/snaps/C12TEST'
*.db_create_online_log_dest_1='/u01/app/oracle/oradata/datastore/C12TEST'
*.db_domain=''
*.db_files=1024
*.db_lost_write_protect='TYPICAL'
*.db_name='C12TEST'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area/datastore/C12TEST'
*.db_recovery_file_dest_size=476160m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=C12TESTXDB)'
*.fast_start_mttr_target=300
*.filesystemio_options='setall'
*.global_names=TRUE
*.inmemory_size=0m
*.log_archive_format='%t_%s_%r.dbf'
*.log_buffer=16000000
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.os_authent_prefix=''
*.parallel_adaptive_multi_user=TRUE
*.parallel_degree_policy='MANUAL'
*.parallel_execution_message_size=16384
*.parallel_force_local=FALSE
*.parallel_max_servers=80
*.parallel_min_servers=8
*.parallel_threads_per_cpu=2
*.pga_aggregate_limit=2048m
*.pga_aggregate_target=2048m
*.processes=200
*.remote_login_passwordfile='exclusive'
*.session_cached_cursors=100
*.sga_target=1024m
*.sql92_security=TRUE
*.undo_retention=900
*.use_large_pages='ONLY'

 


Documentum – Change password – 2 – CS – dm_bof_registry

Sat, 2017-07-15 03:30

When installing a Global Registry on a Content Server, you will be asked to set up the BOF username and password. The name of this user is by default "dm_bof_registry", so even if you can change it, I will use this value in this blog. This is one of the important accounts that are created inside the Global Registry. So, what are the steps needed to change the password of this account?

 

Let's start with the simple part: changing the password of the account in the Global Registry. For this, I will use iapi below, but you can do the same thing using Documentum Administrator, idql, dqMan or anything else that works. First, let's log in to the Content Server, switch to the Installation Owner's account and start by defining an environment variable that will contain the NEW password to be used:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the dm_bof_registry password: " bof_pwd; echo
Please enter the dm_bof_registry password:
[dmadmin@content_server_01 ~]$

 

Once that is done, we can execute the iapi commands below to update the password for the dm_bof_registry account. As there is a local trust on the Content Server with the Installation Owner, I don't need to enter the password, so I use "xxx" instead to log in to the Global Registry (GR_DOCBASE). Execute the commands below one after the other and don't include the "> " characters: just paste the iapi commands, and after pasting the final EOF an iapi session will be opened and all commands will be executed, like this:

[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dm_bof_registry'
> set,c,l,user_password
> $bof_pwd
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000905 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Then, to verify that the password has been set properly in the Global Registry, we can try to log in with the dm_bof_registry account:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udm_bof_registry -P$bof_pwd


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000906 started for user dm_bof_registry."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

If the password has been changed properly, the output will be similar to the one above: a session will be opened and the only command executed will be “quit” which will close the iapi session automatically. That was pretty easy, right? Well that’s clearly not all there is to do to change the BOF password, unfortunately…

 

The "problem" with the dm_bof_registry account is that it is used on all DFC clients to register them, to establish trust, and so on. Therefore, if you change the password of this account, you will need to reflect this change on all clients that connect to your Content Servers. In the steps below, I will provide some commands that can be used to do that on the typical DFC clients (JMS, xPlore, DA, D2, …). If I'm not covering one of your DFC clients, the steps are basically always the same, it's just the commands that differ:

  • Listing all dfc.keystore
  • Updating the dfc.properties
  • Removing/renaming the dfc.keystore files
  • Restarting the DFC clients
  • Checking that the dfc.keystore files have been recreated

 

Before going through the different DFC clients, you first need to encrypt the BOF user's password because it is always used in its encrypted form, so let's encrypt it on a Content Server:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${bof_pwd}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example ("AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0") but this will be the encrypted password of our user. I will use this value in the commands below, so whenever you see it, just replace it with what your "java -cp ..." command returned.

 

I. Content Server

On the Content Server, the main DFC client is the JMS. You will have one dfc.properties for each JMS application, one global one for the CS, and so on. So, let's update all that with a few commands only. Normally you should only have the definition of dfc.globalregistry.password in the file $DOCUMENTUM_SHARED/config/dfc.properties. If you have this definition elsewhere, you should consider using the "#include" statement to avoid duplicating the definitions.

[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' $DOCUMENTUM_SHARED/config/dfc.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

If you do it properly, all the dfc.keystore files will be recreated with the restart and you can verify that by comparing the output of the first and last commands.

 

II. WebLogic Server

In this part, I will assume a WebLogic Server is used for the D2, D2-Config and DA applications. If you are using Tomcat instead, then just adapt the path. Below I will use:

  • $WLS_APPLICATIONS as the directory where all the Application WAR files are present. If you are using exploded applications (it’s just a folder, not a WAR file) OR if you are using an external dfc.properties file (it’s possible even with a WAR file to extract the dfc.properties for it), then the “jar -xvf” and “jar -uvf” commands aren’t needed.
  • $WLS_APPS_DATA as the directory where the Application Data are present (Application log files, dfc.keystore, cache, …)

 

These two folders might be the same depending on how you configured your Application Server. All I’m doing below is just updating the dfc.properties files for D2, D2-Config and DA in order to use the new encrypted password.

[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; done
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done

 

Once done, the next steps depend, again, on how you configured your Application Server. If you are using WAR files, you will need to redeploy them. If not, you might have to restart your Application Server for the change to be taken into account and for the keystore file to be re-created.

 

III. Full Text Server

On the Full Text Server, it's the same procedure again, but this time for all Index Agents.

[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `ls $XPLORE_HOME/jboss7.1.1/server/DctmServer_*/deployments/IndexAgent.war/WEB-INF/classes/dfc.properties`; do sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ service xplore stop
[xplore@xplore_server_01 ~]$ service xplore start
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

Again if you do it properly, all the dfc.keystore files will be recreated with the restart.

 

When everything has been done, just keep the environment up & running for some time and check the logs for authentication failures regarding the dm_bof_registry user. As you saw above, changing the dm_bof_registry password isn't really complicated, but it's quite repetitive and time consuming, so better script all this! A very rough starting point for such a script is sketched below. :)
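
This is only a minimal sketch (the script name update_bof_pwd.sh and its arguments are hypothetical); it simply wraps the find/sed/mv commands used in the sections above, and the DFC clients below the given base directory still need to be restarted or redeployed afterwards:

[dmadmin@content_server_01 ~]$ cat update_bof_pwd.sh
#!/bin/bash
# $1 = base directory of the DFC client, $2 = encrypted dm_bof_registry password
base_dir="$1"
enc_pwd="$2"
# update all dfc.properties below the base directory with the new encrypted password
for f in $(find ${base_dir} -type f -name "dfc.properties"); do
  sed -i "s,dfc.globalregistry.password=.*,dfc.globalregistry.password=${enc_pwd}," ${f}
done
# back up the dfc.keystore files so that they get recreated on the next restart
for k in $(find ${base_dir} -type f -name "dfc.keystore"); do
  mv "${k}" "${k}_bck_$(date "+%Y%m%d")"
done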

 

 

 


Documentum – Change password – 1 – CS – AEK and Lockbox

Sat, 2017-07-15 02:53

This blog is the first one of a series that I will publish in the next few days/weeks regarding how you can change some passwords in Documentum. In these blogs, I will talk about a lot of accounts: the Installation Owner of course, the Preset and Preferences accounts, the JKS passwords, the JBoss Admin passwords, the xPlore xDB passwords, and so on.

 

So, let's dig in with the first ones: the AEK and Lockbox passphrases. In this blog, I will only talk about the Content Server Lockbox, not about the D2 Lockbox (which is also under the JMS). I'm assuming here that the AEK key is stored in the Content Server Lockbox, as is recommended starting with CS 7.2 for security reasons.

 

In this blog, I will use "dmadmin" as the Installation Owner. First, you need to connect to the Content Server using the Installation Owner account. In case you have a High Availability environment, you will obviously need to do this on all Content Servers.

 

Then, I'm defining some environment variables so that I'm sure I'm using the right passphrases and there is no typo in the commands. The first two commands below will be used to store the CURRENT and NEW passphrases for the AEK; the last two commands are for the Lockbox. When you execute the "read" command, the prompt isn't returned: just paste the passphrase (it's hidden) and press enter. Then the prompt is returned and the passphrase is stored in the environment variable. I'm describing this in this blog only; in the next blogs, I will just use the commands without explanation:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT AEK passphrase: " c_aek_pp; echo
Please enter the CURRENT AEK passphrase:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW AEK passphrase: " n_aek_pp; echo
Please enter the NEW AEK passphrase:
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT lockbox passphrase: " c_lb_pp; echo
Please enter the CURRENT lockbox passphrase:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW lockbox passphrase: " n_lb_pp; echo
Please enter the NEW lockbox passphrase:
[dmadmin@content_server_01 ~]$

 

Maybe a small backup of the Lockbox, just in case…:

[dmadmin@content_server_01 ~]$ cp -R $DOCUMENTUM/dba/secure $DOCUMENTUM/dba/secure_bck_$(date "+%Y%m%d-%H%M")
[dmadmin@content_server_01 ~]$

 

Ok, then to ensure that the commands will go smoothly, let's just verify that the environment variables are defined properly (I'm adding "__" at the end of the echo commands to be sure there is no space at the end of the passwords). Obviously, the "read -s" commands above have been executed to hide the passphrases, so if you don't want the passphrases to be stored in the shell history, don't execute the two commands below.

[dmadmin@content_server_01 ~]$ echo "CURRENT_AEK_PP=${c_aek_pp}__"; echo "NEW_AEK_PP=${n_aek_pp}__"
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ echo "CURRENT_LOCKBOX_PP=${c_lb_pp}__"; echo "NEW_LOCKBOX_PP=${n_lb_pp}__"
[dmadmin@content_server_01 ~]$

 

To verify that the CURRENT AEK and Lockbox passphrases are correct, you can execute the following commands. Just a note: when you first create the Lockbox, the Documentum Installer will ask you which algorithm you want to use. I always choose the strongest one for security reasons, so I'm using "AES_256_CBC" below. If you are using something else, just adapt it:

[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -passphrase ${c_aek_pp} -all

Please wait. This will take a few seconds ...

Please wait, this will take a few seconds..
Setting up the (single) passphrase for all keys in the shared memory region..
Operation succeeded
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -resetfingerprint
Lockbox lockbox.lb
Lockbox Path /app/dctm/server/dba/secure/lockbox.lb
Reset host done
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -algorithm AES_256_CBC -check


Key - CSaek uses algorithm AES_256_CBC.

** An AEK store with the given passphrase exists in lockbox lockbox.lb and got status code returned as '0'.

 

For the three commands above, the result should always be "Operation succeeded", "Reset host done" and "got status code returned as '0'". If the second command fails, then obviously it's the Lockbox passphrase that isn't set properly; otherwise it's the AEK passphrase.

 

Ok, now that all the passphrases are defined and the current ones are working, we can start updating them. Let's start with the AEK:

[dmadmin@content_server_01 ~]$ dm_crypto_change_passphrase -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -newpassphrase ${n_aek_pp}
[dmadmin@content_server_01 ~]$

 

Then the Lockbox:

[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -changepassphrase -newpassphrase ${n_lb_pp}
[dmadmin@content_server_01 ~]$

 

To verify that the NEW passphrases are now used, you can again run the three above commands. The only difference is that you need to use the environment variables for the NEW passphrases and not the CURRENT (old) ones:

[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -passphrase ${n_aek_pp} -all
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -resetfingerprint
[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -keyname CSaek -passphrase ${n_aek_pp} -algorithm AES_256_CBC -check

 

Now we are almost done. If the three previous commands gave the correct output, it's pretty certain that everything is OK. Nevertheless, to be 100% sure that the Content Server Lockbox isn't corrupted in some way, it is always good to reboot the Linux host too. Once the Linux host is up & running again, you will have to execute the first command above (the dm_crypto_boot) to store the Lockbox information in the Shared Memory so that the docbase(s) can start, as sketched below. If you are able to start the docbase(s) using the NEW passphrases, then the AEK and Lockbox have been updated successfully!
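
A minimal sketch of these post-reboot steps (GR_DOCBASE and the dm_start_GR_DOCBASE start script are just placeholders, replace them with your own docbase name; note that the environment variables are gone after the reboot, so the NEW passphrases have to be entered again first):

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW AEK passphrase: " n_aek_pp; echo
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW lockbox passphrase: " n_lb_pp; echo
[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -passphrase ${n_aek_pp} -all
[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_start_GR_DOCBASE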

 

As a side note, if the Server Fingerprint has been updated (e.g. after recent OS patching or something similar), then you might need to execute the second command (dm_crypto_manage_lockbox) as well, and also regenerate the D2 Lockbox (which isn't described in this blog but will be in a later one).

 

 

 


Can I do it with PostgreSQL? – 16 – DDL triggers

Fri, 2017-07-14 09:52

A question I received recently from a customer: Oracle gives you the possibility to create DDL triggers. DDL triggers fire (as the name implies) when DDL events occur. Can we do that in PostgreSQL? Yes, this feature is called event triggers. There is a nice list in the documentation which shows for which DDL statements an event trigger can fire: the Event Trigger Firing Matrix. Let's go…

As usual, let's create a dummy table we can work with:

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

What we want to do is prevent any modification of the table structure. How do we do this? Obviously we need a way to catch the alter statement against our table and then raise an exception displaying some text. What we need is a function which returns the pseudo type "event_trigger":

CREATE OR REPLACE FUNCTION no_ddl() RETURNS event_trigger AS $$
DECLARE r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        RAISE EXCEPTION 'You are not allowed to change %', r.object_identity;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

What this function does is iterate over the result set of pg_event_trigger_ddl_commands() and then raise an exception. In addition to that, we need the event trigger that calls the function:

CREATE EVENT TRIGGER no_ddl_allowed
  ON ddl_command_end WHEN TAG IN ('ALTER TABLE')
  EXECUTE PROCEDURE no_ddl();

Let's see if it works:

postgres=# alter table t1 add column g text;
ERROR:  You are not allowed to change public.t1
CONTEXT:  PL/pgSQL function no_ddl() line 5 at RAISE

Cool, but there is an issue with the current implementation:

postgres=# create table t2 ( a int );
CREATE TABLE
postgres=# alter table t2 add column b text;
ERROR:  You are not allowed to change public.t2
CONTEXT:  PL/pgSQL function no_ddl() line 5 at RAISE

What we effectively did was deny all alter statements for all objects in that database. This is probably not what you want. A better approach is this:

CREATE OR REPLACE FUNCTION no_ddl() RETURNS event_trigger AS $$
DECLARE r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
      IF ( r.objid::regclass::text = 't1' )
      THEN
            RAISE EXCEPTION 'You are not allowed to change %', r.object_identity;
      END IF;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

This way we only raise the exception when the table "t1" is involved and do nothing for all other tables:

postgres=# alter table t2 add column b text;
ALTER TABLE
postgres=# alter table t1 add column b text;
ERROR:  You are not allowed to change public.t1
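
And when a change to "t1" really is required, the event trigger can simply be disabled temporarily (a small sketch, the output should look similar to this):

postgres=# alter event trigger no_ddl_allowed disable;
ALTER EVENT TRIGGER
postgres=# alter table t1 add column b text;
ALTER TABLE
postgres=# alter event trigger no_ddl_allowed enable;
ALTER EVENT TRIGGER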

Hope this helps …

 


What are typed tables in PostgreSQL?

Thu, 2017-07-13 15:29

While reading the PostgreSQL documentation for “CREATE TABLE” I came across this:

“OF type_name: Creates a typed table, which takes its structure from the specified composite type (name optionally schema-qualified). A typed table is tied to its type; for example the table will be dropped if the type is dropped (with DROP TYPE … CASCADE).”

Sounds interesting, let's have a look.

Obviously we’ll need a composite type to make use of the feature described above:

postgres=# create type ctyp1 as ( a int, b varchar(10), c date );
CREATE TYPE
postgres=# \d ctyp1
       Composite type "public.ctyp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 

Using the “CREATE TABLE” statement we can now create a table which is based on that type:

postgres=# create table t1 of ctyp1;
CREATE TABLE
postgres=# \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
Typed table of type: ctyp1

But is that useful? At first I thought no, but there is at least one use case for which this is a great help. Imagine you need another table with the same structure, maybe in another schema. All you need to do is:

postgres=# create table t2 of ctyp1;
CREATE TABLE
postgres=# \d t2
             Table "public.t2"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
Typed table of type: ctyp1

Not much of a help, you might think, as you could just create another table the usual way by specifying all the columns once more. But wait: how would you then manage changes to the table structure? Having both tables based on the same type gives you this:

postgres=# alter type ctyp1 add attribute d numeric cascade;
ALTER TYPE
postgres=# \d ctyp1
       Composite type "public.ctyp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 

The magic is in the keyword “cascade”. What happened is that both our tables now look like this:

postgres=# \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 
Typed table of type: ctyp1

postgres=# \d t2
             Table "public.t2"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 
Typed table of type: ctyp1

With one command we changed the structure of all the tables which are based on the composite type. Btw: When you skip the “cascade” keyword you’ll get an error:

postgres=# alter type ctyp1 add attribute e numeric;
ERROR:  cannot alter type "ctyp1" because it is the type of a typed table
HINT:  Use ALTER ... CASCADE to alter the typed tables too.
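
And, as the documentation quote at the beginning states, a typed table is tied to its type: dropping the type with CASCADE drops the tables as well (the output should look similar to this):

postgres=# drop type ctyp1 cascade;
NOTICE:  drop cascades to 2 other objects
DETAIL:  drop cascades to table t1
drop cascades to table t2
DROP TYPE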

This can be quite useful… good to know that it is possible.

 


Did you ever wonder what PostgreSQL is executing in the background when you use the psql shortcuts?

Thu, 2017-07-13 08:19

When you work with PostgreSQL you are probably using psql, and when you use psql you are probably using one or more of the shortcuts it provides. These shortcuts are a quick and convenient way to get meta data out of the PostgreSQL catalog, which saves you a lot of typing and typos. The only issue is that they hide the statements which are executed to get the meta data, so you don't know where the information is actually coming from. Of course you can check either the information_schema or the PostgreSQL system catalog and then write your own queries for what you are looking for. But, hey, there is a much easier way.

Lets start by creating a dummy table and an index:

postgres=# create table dummy ( a int primary key, b varchar(50), c timestamp with time zone );
CREATE TABLE
postgres=# create index i_dummy on dummy ( c );
CREATE INDEX
postgres=# 

The fastest way to get the definition of the table is:

postgres=# \d dummy
             Table "public.dummy"
 Column |           Type           | Modifiers 
--------+--------------------------+-----------
 a      | integer                  | not null
 b      | character varying(50)    | 
 c      | timestamp with time zone | 
Indexes:
    "dummy_pkey" PRIMARY KEY, btree (a)
    "i_dummy" btree (c)

As you can see, you do not only get the definition of the table itself but also information about the primary key and the index. But where does this information come from? As information about the index and the primary key is displayed as well, it must be coming from more than one catalog table, but from which ones? That's easy to find out when you check the options of psql:

postgres@pgbox:/home/postgres/ [PG962] psql --help | grep "hidden"
  -E, --echo-hidden        display queries that internal commands generate

When you fire up psql with this option, all the internal statements will be displayed when you use a shortcut:

postgres@pgbox:/home/postgres/ [PG962] psql -E postgres
postgres=# \d dummy
********* QUERY **********
SELECT c.oid,
  n.nspname,
  c.relname
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(dummy)$'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2, 3;
**************************

********* QUERY **********
SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, c.relhasoids, '', c.reltablespace, CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, c.relpersistence, c.relreplident
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)
WHERE c.oid = '16679';
**************************

********* QUERY **********
SELECT a.attname,
  pg_catalog.format_type(a.atttypid, a.atttypmod),
  (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)
   FROM pg_catalog.pg_attrdef d
   WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),
  a.attnotnull, a.attnum,
  (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t
   WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND a.attcollation <> t.typcollation) AS attcollation,
  NULL AS indexdef,
  NULL AS attfdwoptions
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = '16679' AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
**************************

********* QUERY **********
SELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered, i.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),
  pg_catalog.pg_get_constraintdef(con.oid, true), contype, condeferrable, condeferred, i.indisreplident, c2.reltablespace
FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
  LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND conindid = i.indexrelid AND contype IN ('p','u','x'))
WHERE c.oid = '16679' AND c.oid = i.indrelid AND i.indexrelid = c2.oid
ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname;
**************************

********* QUERY **********
SELECT pol.polname,
CASE WHEN pol.polroles = '{0}' THEN NULL ELSE array_to_string(array(select rolname from pg_roles where oid = any (pol.polroles) order by 1),',') END,
pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),
pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),
CASE pol.polcmd 
WHEN 'r' THEN 'SELECT'
WHEN 'a' THEN 'INSERT'
WHEN 'w' THEN 'UPDATE'
WHEN 'd' THEN 'DELETE'
WHEN '*' THEN 'ALL'
END AS cmd
FROM pg_catalog.pg_policy pol
WHERE pol.polrelid = '16679' ORDER BY 1;
**************************

********* QUERY **********
SELECT c.oid::pg_catalog.regclass FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i WHERE c.oid=i.inhparent AND i.inhrelid = '16679' ORDER BY inhseqno;
**************************

********* QUERY **********
SELECT c.oid::pg_catalog.regclass FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i WHERE c.oid=i.inhrelid AND i.inhparent = '16679' ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text;
**************************

             Table "public.dummy"
 Column |           Type           | Modifiers 
--------+--------------------------+-----------
 a      | integer                  | not null
 b      | character varying(50)    | 
 c      | timestamp with time zone | 
Indexes:
    "dummy_pkey" PRIMARY KEY, btree (a)
    "i_dummy" btree (c)

Here you go. Quite a lot of stuff is happening in the background and you can see exactly what it is. This is a great way to get to know the catalog. When you are already inside psql and want to switch the display of the hidden statements on, you can do that as well:

postgres=# \set ECHO_HIDDEN on
postgres=# \d dummy
********* QUERY **********
SELECT c.oid,
  n.nspname,
  c.relname
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(dummy)$'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2, 3;
**************************

...

When you want to make this permanent, add it to your .psqlrc (scroll down to the "Files" section of the psql documentation), as sketched below. Have fun…
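
A minimal sketch, assuming the default ~/.psqlrc location (if you only want to see the generated queries without executing the shortcut, ECHO_HIDDEN can also be set to "noexec"):

postgres@pgbox:/home/postgres/ [PG962] echo '\set ECHO_HIDDEN on' >> ~/.psqlrc
postgres@pgbox:/home/postgres/ [PG962] psql postgres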

 


One-day workshop with Slava Oks for SQL Server on Linux

Thu, 2017-07-13 05:26

A few weeks ago, David Barbarin and I went to Tugar IT in Lisbon to attend the workshop by Slava Oks: Bringing SQL Server to Linux: Experience


It was a deep dive into the code of SQL Server, and Slava showed us how they achieved a universal platform regardless of the operating system.

To my mind, the challenge was how to deal with the Windows operating system dependencies, which include all the Win32/NT Kernel calls, and make SQL Server OS agnostic.

Indeed, SQL Server makes more than 800 calls to Win32 and more than 400 calls to the NT Kernel through the SQL OS. Slava explained to us how challenging it would be to rewrite all the SQL Server code to support the Linux operating system.

To achieve this, the development team implemented a containerized approach called Drawbridge.

Drawbridge combines 2 technologies:

  • Picoprocess: Process-based isolation container with a minimal kernel API Surface
  • Library OS: Version of Windows enlightened to run efficiently

Here a link for more information about Drawbridge: https://www.microsoft.com/en-us/research/project/drawbridge/#

This technology reduces the dependencies to 45 calls to the host OS through a proper Platform Abstraction Layer (PAL): the SQLPAL.

(Figure: SQLPAL architecture. Source: Microsoft)

SQLPAL manages all resources in the process. The goal is to merge SOS and the Library OS as the core of SQLPAL.

Then we got deeper into the SQLPAL code and Slava moved on to the complex topic of debugging… I must admit it was a headache moment! :-?

Finally, I can conclude that it was a very interesting workshop with a lot of explanations about how the Microsoft SQL Server team addressed the challenge of making SQL Server compatible with Linux. This helped me understand the architecture of the next version of SQL Server. 8-)

And it was a nice day in Lisbon in the Microsoft building!


 


Azure Compare Total Cost of Ownership (TCO) with comparable Azure deployment

Thu, 2017-07-13 05:18

Microsoft Azure allows you to quickly deploy infrastructures and services to meet all of your business needs. You can run Windows and Linux based applications in 36 Azure datacenter regions, delivered with enterprise grade SLAs.

Microsoft Azure lets you quickly compare the total cost of ownership (TCO) of your on-premises infrastructure with a comparable Azure deployment using the TCO Calculator, and estimate the savings you can realize by moving to Azure.

In this article I will show you a small example of how to calculate the TCO for a small architecture.

To access the TCO Calculator, click on this link: TCO Calculator, then click on the TCO Calculator image (in the red rectangle) and follow the instructions.


My use case:

This example is based on a small architecture with two Virtual Machines, just to give you an insight into the use of this TCO calculator.

The first step is to enter the input parameters.


After clicking on the Calculate button, the results are displayed.


The details are available by clicking on "Export to Word" at the top of the results frame.

Conclusion:

To be honest, this calculator may not predict the TCO accurately, but it offers a good estimate of the kind of savings one can expect. It is a very good tool for companies looking to move their IT infrastructure to the Azure cloud.

