Yann Neuhaus

dbi services technical blog

Oracle Cloud: script to stop all PaaS services

Wed, 2017-08-02 00:56

With metered cloud services, keeping all your instances running may become expensive. The goal is to start them only when you need them. Here is a script that stops all instances you have on the Oracle Cloud Service PaaS. You can schedule it, for example, to stop them at the end of the business day, or when they have not been active for a long time. The script calls the REST API with curl, parses the JSON output with jq, and looks up the HTTP status codes in the documentation with links.

In the first part of the script, I set the variables. Set them to your user:password, identity domain, cloud service URL, and ssh key:

u="MyEmail@Domain.net:MyPassword"
i=a521642
r=https://dbcs.emea.oraclecloud.com
k="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCxAEm1WHYbJa50t61YhM53u4sljjSFGK458fgdljjkNqfihcRxSf2ENw6iaYhiBTPogG9IDaEqW+SbwpororD2/bep16/hHybGswD34jU7bf9kaaKi5gOgASChid4e322zrnwOtlzfHiiquhiUDgLwpQxCYVV5zU1RQ2NS3F3a45bepqkn/GuPY5x/KSn576+7HBCYVbao/PTwZAeTVbo6Xb4ZQQrOIsLJxxDCQsr0/g7ZS8/OJHt8kotydu13n3rANB2y312XtTsW9mAwpfXuCuqDM5+dIjUdmtflkYtqsfrqSpLevVfVt1L7hqo+JGo7OBO0doVs6lQSCblZhYHh Me@MyLaptop"

Here is the script. It starts by downloading the certificate if it is not already there. Then it queries all services that are not stopped and stops them. Finally, the last line displays the status of all services.


[ -f cacert.pem ] || curl --remote-name --time-cond cacert.pem https://curl.haxx.se/ca/cacert.pem
 
for s in $( curl -s --request GET --cacert cacert.pem --user $u --header "X-ID-TENANT-NAME:$i" $r/paas/service/dbcs/api/v1.1/instances/$i | jq -r '.services[]|select(.status!="Stopped")|.service_name' )
do
# call the 'Stop service' REST API and get the http status
httpstatus=$(curl --include --request POST --cacert cacert.pem --user $u --header "X-ID-TENANT-NAME:$i" --header "Content-Type:application/json" --data '{"lifecycleState":"Stop"}' $r/paas/service/dbcs/api/v1.1/instances/$i/$s | awk '{print >"/dev/stderr"} /^HTTP/{print $2}')
# look for http status in documentation
links -dump -width 300 https://docs.oracle.com/en/cloud/paas/java-cloud/jsrmr/Status%20Codes.html | grep -B 1 -A 1 " $httpstatus "
done
 
sleep 1
curl -s --request GET --cacert cacert.pem --user $u --header "X-ID-TENANT-NAME:$i" $r/paas/service/dbcs/api/v1.1/instances/$i | jq .
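
If you prefer a compact status list rather than the full JSON, jq can extract just the name and the state of each service; a small sketch in the same style as the script above:

# one line per service: "name: status"
curl -s --request GET --cacert cacert.pem --user $u --header "X-ID-TENANT-NAME:$i" $r/paas/service/dbcs/api/v1.1/instances/$i | jq -r '.services[]|"\(.service_name): \(.status)"'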

The script requires:

  • curl to call the REST API
  • jq to format and extract the returned JSON
  • links to get the HTTP status description from the documentation

The Cloud is all about automation, and the REST API makes it very easy to automate from the command line or from a script.
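
To stop everything at the end of the business day, a cron entry is enough; a minimal sketch, assuming the script above has been saved as /home/oracle/stop_all_paas.sh (a name I made up for the example):

# stop all PaaS instances at 19:00 on weekdays
0 19 * * 1-5 /home/oracle/stop_all_paas.sh >> /home/oracle/stop_all_paas.log 2>&1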

 


Display Data Guard configuration in SQL Developer

Tue, 2017-08-01 13:34

The latest version of SQL Developer, 17.2, released after Q2 of 2017, has a new item in the DBA view showing the Data Guard configuration. This is the occasion to show how you can cascade the log shipping in Oracle 12c.

A quick note about the new versioning scheme: this is the release for 2017 Q2 and the version number now has more digits, encoding the exact build time. Here the version is labeled 17.2.0.188.1159 and we can decode when it was built:

SQL> select to_date('17.x.0.188.1159','rr."x.0".ddd.hh24mi') build_time from dual;
 
BUILD_TIME
--------------------
07-JUL-2017 11:59:00

Non-Cascading Standby

Here is my configuration with two standby databases:

DGMGRL> show configuration
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 9 seconds ago)

I have only the LogXptMode defined here, without any RedoRoutes:

DGMGRL> show database orcla LogXptMode
LogXptMode = 'SYNC'

With this configuration, the broker has set the following log destinations on orcla, orclb and orclc:

INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLA log_archive_dest_1 location=USE_DB_RECOVERY_FILE_DEST, valid_for=(ALL_LOGFILES, ALL_ROLES)
ORCLA log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
ORCLA log_archive_dest_3 service="ORCLC", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclc" net_timeout=30, valid_for=(online_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLB log_archive_dest_1 location=/u01/fast_recovery_area
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLC log_archive_dest_1 location=/u01/fast_recovery_area

In the latest SQL Developer you have the graphical representation of it from the DBA view / Dataguard / console:

[Screenshot SDDG001: SQL Developer Data Guard console showing the non-cascading configuration]

Cascading Standby

In 12c we can define a cascading standby: instead of the primary shipping the redo to all standby databases, the primary can ship to one standby only, and this one forwards the redo to another one. You define that with the RedoRoutes property:


DGMGRL> edit database orcla set property redoroutes = '(local:orclb) (orclb:orclc async)';
Property "redoroutes" updated
DGMGRL> edit database orclb set property redoroutes = '(orcla:orclc async) (local:orcla)';
Property "redoroutes" updated

The first route defined in each property is applied when orcla is the primary database:

  • on orcla, (local:orclb) means that orcla sends redo to orclb when it is the primary
  • on orclb, (orcla:orclc async) means that orclb sends redo to orclc when orcla is the primary. LogXptMode is SYNC but overridden here with ASYNC

The second route defined in each property is applied when orclb is the primary database:

  • on orcla, (orclb:orclc async) means that orcla sends redo to orclc when orclb is the primary. LogXptMode is SYNC but overridden here with ASYNC
  • on orclb, (local:orcla) means that orclb sends redo to orcla when it is the primary

With this configuration, and orcla still being the primary, the broker has set the following log destination on orcla, orclb and orclc:


INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLA log_archive_dest_1 location=USE_DB_RECOVERY_FILE_DEST, valid_for=(ALL_LOGFILES, ALL_ROLES)
ORCLA log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300
db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLB log_archive_dest_1 location=/u01/fast_recovery_area
ORCLB log_archive_dest_2 service="ORCLC", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=3
00 db_unique_name="orclc" net_timeout=30, valid_for=(standby_logfile,all_roles)
 
INSTANCE_NAME NAME VALUE
---------------- -------------------- -------------------------------------------------------------------------------------------------------------
ORCLC log_archive_dest_1 location=/u01/fast_recovery_area

The show configuration command from DGMGRL displays the members indented, to show the cascading redo shipping:

DGMGRL> show configuration
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database (receiving current redo)
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 27 seconds ago)

And SQL Developer Data Guard console shows:
[Screenshot SDDG002: SQL Developer Data Guard console showing the cascading configuration]

Switchover

Now, the goal of defining several routes is to have all log destinations automatically changed when the database roles change.
I’m doing a switchover:


Connected to "orclb"
Connected as SYSDG.
DGMGRL> switchover to orclb;
Performing switchover NOW, please wait...
New primary database "orclb" is opening...
Operation requires start up of instance "ORCLA" on database "orcla"
Starting instance "ORCLA"...
ORACLE instance started.
Database mounted.
Database opened.
Connected to "orcla"
Switchover succeeded, new primary is "orclb"

Now it is orcla which cascades the orclb redo to orclc:

DGMGRL> show configuration;
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orclb - Primary database
orcla - Physical standby database
orclc - Physical standby database (receiving current redo)
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 74 seconds ago)

Here is how it is displayed from SQL Developer:

[Screenshot SDDG003: SQL Developer Data Guard console after the switchover]

We have seen how the configuration is displayed from DGMGRL and graphically from SQL Developer. Of course, you can also query the Data Guard configuration:

SQL> select * from V$DATAGUARD_CONFIG;
 
DB_UNIQUE_NAME PARENT_DBUN DEST_ROLE        CURRENT_SCN CON_ID
-------------- ----------- ---------------- ----------- ------
orcla          orclb       PHYSICAL STANDBY     3407900      0
orclc          orcla       PHYSICAL STANDBY     3408303      0
orclb          NONE        PRIMARY DATABASE           0      0

and the broker configuration:

SQL> select * from V$DG_BROKER_CONFIG;
 
DATABASE CONNECT_IDENTIFIER DATAGUARD_ROLE   REDO_SOURCE ENABLED STATUS VERSION CON_ID
-------- ------------------ ---------------- ----------- ------- ------ ------- ------
orcla    ORCLA              PHYSICAL STANDBY -UNKNOWN-   TRUE         0    11.0      0
orclb    ORCLB              PRIMARY          -N/A-       TRUE         0    11.0      0
orclc    ORCLC              PHYSICAL STANDBY orcla       TRUE         0    11.0      0

This is another reason to use the broker. Once the configuration is set up and tested, you have nothing else to think about when you do a switchover: the log archive destinations are automatically updated depending on the database roles.

 


Postgres vs. Oracle access paths I – Seq Scan

Tue, 2017-08-01 10:58

Here is the first test I've done for my Postgres vs. Oracle access paths series, and the first query does a sequential scan. It illustrates the first constant you find in the documentation for the query planner:
seq_page_cost (floating point)
Sets the planner’s estimate of the cost of a disk page fetch that is part of a series of sequential fetches. The default is 1.0.

Table creation

I start by creating a very simple table with 10000 rows and 3 columns. The first column (n) is indexed:

create table demo1 as select generate_series n , 1 a , lpad('x',1000,'x') x from generate_series(1,10000);
SELECT 10000
create unique index demo1_n on demo1(n);
CREATE INDEX
 
analyze verbose demo1;
INFO: analyzing "public.demo1"
INFO: "demo1": scanned 1429 of 1429 pages, containing 10000 live rows and 0 dead rows; 10000 rows in sample, 10000 estimated total rows
ANALYZE
select relkind,relname,reltuples,relpages from pg_class where relname='demo1';
relkind | relname | reltuples | relpages
---------+---------+-----------+----------
r | demo1 | 10000 | 1429
 
select relkind,relname,reltuples,relpages from pg_class where relname='demo1_n';
relkind | relname | reltuples | relpages
---------+---------+-----------+----------
i | demo1_n | 10000 | 30

I checked the table and index statistics that will be used by the optimizer: 10000 rows, all indexed, 1429 table blocks and 30 index blocks. Note that blocks are called pages, but that’s the same idea: the minimal size read and written to disk. They are also called buffers as they are read into a buffer and cached in the buffer cache.

Here is how I create a similar table in Oracle:

create table demo1 as select rownum n , 1 a , lpad('x',1000,'x') x from xmltable('1 to 10000');
Table created.
create unique index demo1_n on demo1(n);
Index created.
exec dbms_stats.gather_table_stats(user,'demo1');
PL/SQL procedure successfully completed.
 
select table_name,num_rows,blocks from user_tables where table_name='DEMO1';
 
TABLE_NAME NUM_ROWS BLOCKS
---------- ---------- ----------
DEMO1 10000 1461
 
select index_name,num_rows,leaf_blocks,blevel from user_indexes where table_name='DEMO1';
 
INDEX_NAME NUM_ROWS LEAF_BLOCKS BLEVEL
---------- ---------- ----------- ----------
DEMO1_N 10000 20 1

The same rows are stored in 1461 table blocks and the index entries in 20 blocks. Both use 8k blocks, but with different storage layouts and different defaults. This is about 7 rows per table block, for rows that are a bit larger than 1k, and about 500 index entries per index block, storing the number from column N plus the pointer to the table row (a few bytes called TID in Postgres or ROWID in Oracle). I'll not get into the details of those numbers here.
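
As a quick sanity check of the rows-per-block figure on the Postgres side, it can be derived from the statistics we have already seen (this query is my own verification, not part of the original output):

select round(reltuples/relpages) as rows_per_page from pg_class where relname='demo1';
 rows_per_page
---------------
             7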

My goal is to detail the execution plans and the execution statistics.

Postgres Seq Scan

I start with a very simple query on my table: SELECT SUM(N) from DEMO1;


explain (analyze,verbose,costs,buffers) select sum(n) from demo1 ;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1554.00..1554.01 rows=1 width=8) (actual time=4.616..4.616 rows=1 loops=1)
Output: sum(n)
Buffers: shared hit=1429
-> Seq Scan on public.demo1 (cost=0.00..1529.00 rows=10000 width=4) (actual time=0.011..3.614 rows=10000 loops=1)
Output: n, a, x
Buffers: shared hit=1429
Planning time: 0.468 ms
Execution time: 4.661 ms

This query does a sequential scan (Seq Scan), which is the equivalent of Oracle Full Table Scan: read all rows from the table. You might tell me that it would be cheaper to scan the index because the index I’ve created holds all required columns. We will see that in the next post. Here, after having created the table as I did above, the query planner prefers to scan the table.

Here are the maths: my table has 1429 pages and each page access during a sequential scan has cost=1 as defined by:

show seq_page_cost;
seq_page_cost
---------------
1

Here, I see a cost estimated from 0 to 1529 for the Seq Scan operation.
The first number, 0.00, is the initialization cost, estimating the work done before returning any rows. A Seq Scan has nothing to do beforehand, and reading the first block can already return rows.
The second number is the cost to return all rows. We have seen that the scan itself costs 1429, but the rows (tuples) must also be read and processed. This is evaluated using the following constant:

show cpu_tuple_cost;
cpu_tuple_cost
----------------
0.01

For 10000 rows, the cost to process them is 0.01*10000=100, which is added to the 1429 of the Seq Scan to get 1529. This explains cost=0.00..1529.00

Then there is a SUM operation applied to 10000 rows and there is a single parameter for the CPU cost of operators and functions:

show cpu_operator_cost;
cpu_operator_cost
-------------------
0.0025

[Screenshot Capturepgoraseqscan001: Postgres execution plan with the cost details]
The sum (Aggregate) operation adds 0.0025*10000=25 to the cost and then the cost is 1554. You can see this cost in the minimal cost for the query, the first number in cost=1554.00..1554.01, which is the cost before retrieving any rows. This makes sense because before retrieving the first row we need to read (Seq Scan) and process (Aggregate) all rows, which is exactly what the cost of 1554 is.

Then there is an additional cost when we retrieve all rows. It is only one row here because it is a sum without group by, and this adds the default cpu_tuple_cost=0.01 to the initial cost: 1554.01

In summary, the total cost of the query is cost=1554.00..1554.01 and, as the quick check after this list confirms, it depends on:
– the number of pages in the table
– the number of rows from the result of the scan (we have no where clause here)
– the number of rows summed and retrieved
– the planner parameters seq_page_cost, cpu_tuple_cost, and cpu_operator_cost
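
Here is the quick check mentioned above: the planner constants can be read with current_setting() and combined with the pg_class statistics. This query is my own verification of the arithmetic, not something from the plan output:

select relpages * current_setting('seq_page_cost')::numeric
     + reltuples::numeric * ( current_setting('cpu_tuple_cost')::numeric
                            + current_setting('cpu_operator_cost')::numeric )
       as cost_before_first_row
  from pg_class where relname='demo1';
-- returns 1554; add cpu_tuple_cost (0.01) for the single retrieved row to get 1554.01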

Oracle Full Table Scan

When I run the same query on Oracle, the optimizer chooses an index fast full scan rather than a table full scan because all rows and columns are in the index that I’ve created:

  • all rows, because SUM(N) does not need the rows where N is null (and those are not stored in the index anyway)
  • all columns, because I need nothing else than the values of N

We will see that in the next post, for the moment, in order to compare with Postgres, I forced a full table scan with the FULL() hint.

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID bhsjquhh6y08q, child number 0
-------------------------------------
select /*+ full(demo1) */ sum(n) from demo1
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 397 (100)| 1 |00:00:00.01 | 1449 |
| 1 | SORT AGGREGATE | | 1 | 1 | | 1 |00:00:00.01 | 1449 |
| 2 | TABLE ACCESS FULL| DEMO1 | 1 | 10000 | 397 (0)| 10000 |00:00:00.01 | 1449 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) SUM("N")[22] 2 - (rowset=256) "N"[NUMBER,22]

We have seen that Postgres cost=1 is for sequential scans (similar to what we call multiblock reads in Oracle) and random reads (single block reads) have by default cost=4 according to random_page_cost.

Oracle cost unit is based on single block reads and this is why the cost here (397) is lower than the number of blocks (1461). Different units. Postgres counts cost=1 for reads and counts a higher cost when a seek is involved. Oracle counts cost=1 for single block reads (including seek) and lower cost for larger I/O size.
[Screenshot Capturepgoraseqscan002: Oracle execution plan with the cost details]
With the default system statistics, latency is estimated at 10 milliseconds and transfer at 4KB/ms, so the single block read time is estimated at 12 milliseconds (10 + 8192/4096).
Again with the default system statistics, where the optimizer estimates 8 blocks per multiblock read, the multiblock read time is estimated at 26 milliseconds (10 + 8*8192/4096), which is on average 26/8=3.25 milliseconds per block. This means that the per-block cost ratio of multiblock vs. single block reads is very similar for Oracle (3.25/12=0.27083333) and Postgres (seq_page_cost/random_page_cost=1/4=0.25) with default parameters.

Our table is stored in 1461 blocks and the full table scan involves reading all of them, plus some segment header blocks: 1461*0.27083333≈396.

There is also the costing of CPU (the equivalent of cpu_tuple_cost), which is included here, but I'll not go into those details, which are more complex than in Postgres and depend on your processor frequency. The goal of these posts is Postgres. For Oracle, all this is explained in Jonathan Lewis's and Chris Antognini's books.

But basically, the idea is the same: Postgres Seq Scan and Oracle Full table Scan read the contiguous table blocks sequentially and the cost mainly depends on the size of the table (number of blocks) and the estimated time for sequential I/O (where bandwidth counts more than latency).

Buffers

In my tests, I’ve not only explained the query, but I executed it to get execution statistics. This is done with EXPLAIN ANALYZE in Postgres and DBMS_XPLAN.DISPLAY_CURSOR in Oracle. The statistics include the number of blocks read at each plan operation, with the BUFFERS option in Postgres and with STATISTICS_LEVEL=ALL in Oracle.


explain (analyze,buffers) select sum(n) from demo1 ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------
Aggregate (cost=1554.00..1554.01 rows=1 width=8) (actual time=3.622..3.622 rows=1 loops=1)
Buffers: shared hit=1429
-> Seq Scan on demo1 (cost=0.00..1529.00 rows=10000 width=4) (actual time=0.008..1.724 rows=10000 loops=1)
Buffers: shared hit=1429
Planning time: 0.468 ms
Execution time: 4.661 ms

‘Buffers’ displays the number of blocks that have been read by the Seq Scan and is exactly the number of pages in my table. ‘shared hit’ means that they come from the buffer cache.

Let’s run the same when the cache is empty:

explain (analyze,verbose,costs,buffers) select sum(n) from demo1 ;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1554.00..1554.01 rows=1 width=8) (actual time=13.837..13.837 rows=1 loops=1)
Output: sum(n)
Buffers: shared read=1429
-> Seq Scan on public.demo1 (cost=0.00..1529.00 rows=10000 width=4) (actual time=0.042..12.506 rows=10000 loops=1)
Output: n, a, x
Buffers: shared read=1429
Planning time: 3.754 ms
Execution time: 13.906 ms

The buffers are now ‘shared read’ instead of ‘shared hit’. In Postgres, the number of logical reads, as we know them in Oracle, is the sum of hits and reads. In Oracle, all blocks are counted as logical reads, and the physical reads are the smaller subset of them that were not found in the cache.

IO calls

Here is more about the reads when the block is not in the buffer cache. On Linux, we can trace the system calls to see how those sequential I/Os are implemented.

I get the ‘relfilenode':

postgres=# select relname,relnamespace,reltype,relowner,relfilenode,relpages,reltuples from pg_class where relname='demo1';
relname | relnamespace | reltype | relowner | relfilenode | relpages | reltuples
---------+--------------+---------+----------+-------------+----------+-----------
demo1 | 2200 | 42429 | 10 | 42427 | 1429 | 10000

I get the pid of my session process:

select pg_backend_pid();
-[ RECORD 1 ]--+------
pg_backend_pid | 30732

I can trace system calls:

strace -p 30732

And look at the trace concerning my file (identified with its ‘relfilenode’):

30732 open("base/12924/42427", O_RDWR) = 33
30732 lseek(33, 0, SEEK_END) = 11706368
30732 open("base/12924/42427_vm", O_RDWR) = 43
30732 lseek(33, 0, SEEK_END) = 11706368
30732 lseek(33, 0, SEEK_END) = 11706368
30732 lseek(33, 0, SEEK_SET) = 0
30732 read(33, "\4004\220\3 \4 \360\233\30\10\340\227\30\10"..., 8192) = 8192
30732 read(33, "\4004\220\3 \4 \360\233\30\10\340\227\30\10"..., 8192) = 8192
30732 read(33, "\4004\220\3 \4 \360\233\30\10\340\227\30\10"..., 8192) = 8192
... 1429 read(33) in total

We see two open() calls with the relfilenode of my table in the file name: one for the table and one for the visibility map.
The file descriptor for the table file is 33 and I've grepped only the related calls.
The lseek(33,0,SEEK_END) goes to the end of the file (11706368 bytes, which is 11706368/8192=1429 pages).
The lseek(33,0,SEEK_SET) goes to the beginning of the file.
Subsequent read() calls read the whole file, page per page (8192 bytes), in sequential order.

This is how sequential reads are implemented in Postgres: one lseek() and then sequential read() calls. The I/O size is always the same (8k here). The benefit of a sequential scan is not larger I/O calls but simply the absence of seek() in between. The optimization is left to the underlying layers: filesystem and read-ahead.

This is very different from Oracle. Not going into the details, here are the kind of system calls you see during the full table scan:

open("/u01/oradata/CDB1A/PDB/users01.dbf", O_RDWR|O_DSYNC) = 9
fcntl(9, F_SETFD, FD_CLOEXEC) = 0
fcntl(9, F_DUPFD, 256) = 258
...
pread(258, "\6\242\2\5\3\276\25%\2\4\24\270\1\313!\1x\25%"..., 1032192, 10502144) = 1032192
pread(258, "\6\242\202\5\3\300\25%\2\4\16\247\1\313!\1x\25%"..., 1032192, 11550720) = 1032192
pread(258, "\6\242\2\6\3\302\25%\2\4x\226\1\313!\1x\25%"..., 417792, 12599296) = 417792

Those are also sequential reads of contiguous blocks but done with larger I/O size (126 blocks here). So in addition to the absence of seek() calls, it is optimized to do less I/O calls, not relying on the underlying optimization at OS level.

Oracle can also trace the system calls with wait events, which gives more information about the database calls:

WAIT #140315986764280: nam='db file scattered read' ela= 584 file#=12 block#=1282 blocks=126 obj#=74187 tim=91786554974
WAIT #140315986764280: nam='db file scattered read' ela= 485 file#=12 block#=1410 blocks=126 obj#=74187 tim=91786555877
WAIT #140315986764280: nam='db file scattered read' ela= 181 file#=12 block#=1538 blocks=51 obj#=74187 tim=91786556380

The name ‘scattered’ is misleading. ‘db file scattered read’ are actually multiblock reads: read more than one block in one I/O call. Oracle does not rely on the Operating System read-ahead and this is why we can (and should) use direct I/O and Async I/O if the database buffer cache is correctly sized.
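
For reference, these wait events come from SQL trace; the classic way to enable it for the current session, with wait events included, is the well-known event 10046 at level 8:

SQL> alter session set events '10046 trace name context forever, level 8';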

Output and Projection

I’ve run the EXPLAIN with the VERBOSE option which shows the ‘Output’ for each operation, and I’ve done the equivalent in Oracle by adding the ‘+projection’ format in DBMS_XPLAN.

In the Oracle execution plan, we see the columns remaining in the result of each operation, after the projection:

Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) SUM("N")[22] 2 - (rowset=256) "N"[NUMBER,22]

The operation 2, the Full Table Scan, reads all rows with all columns, but selects only the one we need: N

In the Postgres equivalent, it seems that the Output mentions the columns available before the projection because we see all table columns here:

explain verbose select sum(n) from demo1 ;
QUERY PLAN
-------------------------------------------------------------------------
Aggregate (cost=1554.00..1554.01 rows=1 width=8)
Output: sum(n)
-> Seq Scan on public.demo1 (cost=0.00..1529.00 rows=10000 width=4)
Output: n, a, x

I prefer to see the columns after the projection and I use it a lot in Oracle to know which columns are needed from the table. A great optimization can be done when we have a covering index where all selected columns are present so that we don’t have to go to the table. But we will see that in the next post about Index Only Scan.

 


Postgres unique constraint

Tue, 2017-08-01 02:11

I'll start a series on Postgres vs. Oracle access paths because I know Oracle and I'm learning Postgres. While preparing it, I came upon some surprises because I'm so used to Oracle that I take some behavior for granted in any SQL database. I recently posted a tweet about one of them, comparing the latest Postgres version to the earliest Oracle version I have on my laptop.
The goal of the tweet was exactly what I said above: show my surprise, using Oracle 7 as a reference because this is the version where I started to learn SQL. And there's no judgment behind this surprise: I can't compare software I have used for more than 20 years with one I'm just learning. I have a big admiration for the Oracle design and architecture choices. But I also have a big admiration for what the Postgres community is doing.

In my tweet I’ve updated a primary key. I think I’ve never designed in real life a primary key that has to be updated later. For each table we need a key that is immutable to identify rows for referential integrity constraints, or for replication. The value must be known from the first insert (which means the columns are declared not null) and the value is never updated. It makes sense to use a primary key for that as it is unique and not null.

Actually, a better case would be a simple unique constraint where we just exchange two rows. A real-life example is a list of items, having probably a surrogate key as the primary key, and a unique key including an item number. When the user wants to move up one item, we just run an update on two rows, exchanging their numbers. The unique constraint just ensures that we have only distinct values so that a select … order by will always return the values in the same order.

All similar cases have the same consequence: when you process row by row the update, the uniqueness may be violated. But at the end of the statement, the constraint is still valid.

Here is the initial example with updating all rows:


create table demo as select generate_series n from generate_series(1,2);
SELECT 2
alter table demo add constraint demo_pk primary key(n);
ALTER TABLE
select * from demo;
n
---
1
2
(2 rows)
 
begin transaction;
BEGIN
update demo set n=n-1;
UPDATE 2
select * from demo;
n
---
0
1
(2 rows)

This works. I’ve inserted the rows in ascending order of n. Decreasing the value doesn’t violate the uniqueness at any time because it reads rows from the beginning to the end.

However, when we increase the value, we have a duplicate value until we process the next row. And by default, Postgres fails:

update demo set n=n+1;
ERROR: duplicate key value violates unique constraint "demo_pk"
DETAIL: Key (n)=(1) already exists.

The bad thing is that the behavior of the application depends on the physical order of the rows and the order in which they are processed. This violates Codd's rule about physical independence. In addition, SQL statements should behave as processing a set of rows rather than doing low-level row-by-row processing.

But there is also a very good thing: because the constraint is validated row by row, you know which value violates the constraint (here: “DETAIL: Key (n)=(1) already exists” ).

So my statement failed, and in Postgres this fails the whole transaction:

commit;
ROLLBACK

My second surprise is that the failure of one statement cancels the whole transaction. I see no error at commit, but it simply tells me that it has done a rollback instead of the commit.

deferrable

So, I compared with Oracle, where this statement always succeeds, because temporary violations that are resolved later, within the same statement, do not violate the constraint. I compared with the oldest version I have on my laptop (Oracle 7.3) to show that this is nothing I've ever seen arrive as a new feature, because I started with Oracle 7. And this kind of thing is the reason why I like SQL. Doing the same with a procedural language requires an intermediate update to be sure that there is no duplicate at any time.

The Postgres community is very responsive, especially when we may think that something works better in Oracle than Postgres (which was not the case here and which was not the goal of my tweet anyway – but tweets are short and may not express the tone properly).

A solution was quickly proposed: deferred constraints (example in this blog post).

I know deferred constraints in Oracle. They are similar in Postgres and here is the solution proposed:


alter table demo drop constraint demo_pk;
ALTER TABLE
alter table demo add constraint demo_pk primary key(n) deferrable initially deferred;
ALTER TABLE
begin transaction;
BEGIN
update demo set n=n-1;
UPDATE 2
select * from demo;
n
---
0
1
(2 rows)
 
update demo set n=n+1;
UPDATE 2

That seems good. Because the constraint validation is deferred, the update is successful.

However, this is not what I want. I want the previous statement to succeed, but I want the following statement to fail:

insert into demo values(1);
INSERT 0 1

Because constraint is deferred, this statement is successful and it is only at commit that it fails:

commit;
ERROR: duplicate key value violates unique constraint "demo_pk"
DETAIL: Key (n)=(1) already exists.

Why do I think this is not the good solution? First, because I want the statement to fail as soon as possible. And in addition to that, I want the commit to be fast. Doing expensive things at commit should be avoided, if possible. It is the point where all work is supposed to be done and you just want to save it (make it durable and visible to others).

deferrable initially immediate

Actually, the solution is to declare the constraint as deferrable, but not deferred.

alter table demo drop constraint demo_pk;
ALTER TABLE
alter table demo add constraint demo_pk primary key(n) deferrable initially immediate;
ALTER TABLE

This says that it is deferrable, but not deferred (unless you decide to set the constraint deferred for your transaction). That way, it accepts temporary constraint violations if they are resolved by the end of the statement.
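
And if one specific transaction really does need to defer the check to commit time, a deferrable constraint can still be switched on demand with SET CONSTRAINTS; a minimal sketch:

begin transaction;
set constraints demo_pk deferred;
-- within this transaction only, demo_pk is now checked at commit time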

Now, my update statement is successful:

begin transaction;
BEGIN
update demo set n=n-1;
UPDATE 2
select * from demo;
n
---
0
1
(2 rows)
update demo set n=n+1;
UPDATE 2

Any other statement that violates the constraint fails immediately:

insert into demo values(1);
ERROR: duplicate key value violates unique constraint "demo_pk"
DETAIL: Key (n)=(1) already exists.
commit;
ROLLBACK

Documentation

The nice thing is that this is documented! I didn't find it immediately because it is in the 'Compatibility' part of the 'CREATE TABLE' documentation. I'm not yet used to the Postgres documentation. I stopped at the 'DEFERRED' definition, which mentions: A constraint that is not deferrable will be checked immediately after every command

But later, the Compatibility section adds something more specific about unique constraints:

Non-deferred Uniqueness Constraints
When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL checks for uniqueness immediately whenever a row is inserted or modified. The SQL standard says that uniqueness should be enforced only at the end of the statement; this makes a difference when, for example, a single command updates multiple key values. To obtain standard-compliant behavior, declare the constraint as DEFERRABLE but not deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be significantly slower than immediate uniqueness checking.

That’s another good point. Postgres documentation is clear and gives the right solution. We just have to read it to the end.

A side note for my French-speaking readers to mention that the Postgres documentation has been translated into French by Guillaume Lelarge, who also translated Markus Winand's book and website. In both cases, the translation is as good as the original.

Performance

The documentation mentions ‘significantly slower’. Here is a test on 100000 rows with a non-deferrable constraint:

create table demo as select generate_series n from generate_series(1,100000);
SELECT 100000
alter table demo add constraint demo_pk primary key(n);
ALTER TABLE
vacuum demo;
VACUUM
select * from pgstatindex('demo_pk');
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
2 | 1 | 2260992 | 3 | 1 | 274 | 0 | 0 | 89.83 | 0
(1 row)

Here is the update n=n-1 where all rows are updated but none violates the constraint at any time:

explain (analyze,verbose,costs,buffers)update demo set n=n-1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Update on public.demo (cost=0.00..1693.00 rows=100000 width=10) (actual time=425.699..425.699 rows=0 loops=1)
Buffers: shared hit=578646 read=1202 dirtied=1267
-> Seq Scan on public.demo (cost=0.00..1693.00 rows=100000 width=10) (actual time=0.013..16.186 rows=100000 loops=1)
Output: (n - 1), ctid
Buffers: shared hit=443

This update has read 578646+1202=579848 buffers.

Now creating the deferrable constraint:

alter table demo drop constraint demo_pk;
ALTER TABLE
alter table demo add constraint demo_pk primary key(n) deferrable initially immediate;
ALTER TABLE
vacuum demo;
VACUUM
select * from pgstatindex('demo_pk');
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
2 | 1 | 2260992 | 3 | 1 | 274 | 0 | 0 | 89.83 | 0

And do the n=n+1 update:

explain (analyze,verbose,costs,buffers)update demo set n=n+1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Update on public.demo (cost=0.00..2135.00 rows=100000 width=10) (actual time=481.868..481.868 rows=0 loops=1)
Buffers: shared hit=679405 read=760 dirtied=825
-> Seq Scan on public.demo (cost=0.00..2135.00 rows=100000 width=10) (actual time=0.268..16.329 rows=100000 loops=1)
Output: (n + 1), ctid
Buffers: shared hit=885
Planning time: 0.237 ms
Trigger PK_ConstraintTrigger_75314 for constraint demo_pk: time=174.976 calls=99999
Execution time: 663.799 ms

This reads more buffers, and we can see that an internal trigger (PK_ConstraintTrigger_75314) has been run to re-check the unique constraint at the end of the statement. But it is only 17% more here, for this special case where all rows are updated.

However, a more realistic test case exchanging only two values is much cheaper:


explain (analyze,verbose,costs,buffers) update demo set n=case when n=2 then 2000 when n=2000 then 2 end where n in (2,2000);
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
Update on public.demo (cost=8.85..16.60 rows=2 width=10) (actual time=0.079..0.079 rows=0 loops=1)
Buffers: shared hit=23
-> Bitmap Heap Scan on public.demo (cost=8.85..16.60 rows=2 width=10) (actual time=0.016..0.055 rows=2 loops=1)
Output: CASE WHEN (n = 2) THEN 2000 WHEN (n = 2000) THEN 2 ELSE NULL::integer END, ctid
Recheck Cond: (demo.n = ANY ('{2,2000}'::integer[]))
Heap Blocks: exact=3
Buffers: shared hit=9
-> Bitmap Index Scan on demo_pk (cost=0.00..8.85 rows=2 width=0) (actual time=0.009..0.009 rows=4 loops=1)
Index Cond: (demo.n = ANY ('{2,2000}'::integer[]))
Buffers: shared hit=6
Planning time: 0.137 ms
Trigger PK_ConstraintTrigger_75322 for constraint demo_pk: time=0.005 calls=1
Execution time: 0.120 ms

In my opinion, the overhead here is totally acceptable, especially given the fact that this re-check displays exactly which value violates the constraint in case there is a duplicate.

But I’m going too fast here. I’ve not even started my blog series about access paths where I’ll explain the cost of the execution plans, starting from the most simple: Seq Scan. Follow my blog or twitter to get informed. There will be nothing about ‘which is better, Oracle or Postgres?’. But I’m convinced that knowing the difference helps to understand how it works, and to design an application that has the correct behavior if ported from one to the other.

 


Postgres vs. Oracle access paths – intro

Tue, 2017-08-01 00:00

This is the start of a series on PostgreSQL execution plans, access paths, join methods, hints and execution statistics. The approach will compare Postgres and Oracle. It is not a comparison to see which one is better, but rather to see what is similar and where the approaches diverge. I have long experience of reading Oracle execution plans and no experience at all with Postgres. This is my way to learn and to share what I learn. You will probably be interested if you are in the same situation: an Oracle DBA wanting to learn about Postgres. But you may also be an experienced Postgres DBA who wants to see a different point of view from a different ‘culture’.

I’ll probably use the Oracle terms more often as I’m more familiar with them: blocks for pages, optimizer for query planner, rows for tuples, tables for relations…

Please don't hesitate to comment on the blog posts or through twitter (@FranckPachot) if you find some mistakes in my Postgres interpretation. I tend to verify any assumption in the same way I do it with Oracle: the documented behavior and the test result should match. My tests should be fully reproducible (using Postgres 9.6.2 here, with all defaults). But as I said above, I don't have the same experience as with Oracle when interpreting execution statistics.

Postgres

I'm using the latest version here: Postgres 9.6.2 (the one I installed here).
I’ve installed pg_hint_plan to be able to control the execution plan with hints. This is mandatory when doing some research. In order to understand an optimizer (query planner) choice, we need to see the estimated cost for different possibilities. Most of my tests will be done with: EXPLAIN (ANALYZE,VERBOSE,COSTS,BUFFERS)

fpa=# explain (analyze,verbose,costs,buffers) select 1;
 
QUERY PLAN
------------------------------------------------------------------------------------
Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
Output: 1
Planning time: 0.060 ms
Execution time: 0.036 ms
(4 rows)
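
As an example of pg_hint_plan usage, a hint in a comment at the start of the statement forces the access path; a sketch assuming the demo1 table used later in the series:

fpa=# load 'pg_hint_plan';
LOAD
fpa=# /*+ SeqScan(demo1) */ explain select sum(n) from demo1;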

I may go further with unix tools (like strace, to see the system calls).

Oracle

I’m using Oracle 12.2 here and the tests are done by running the statement after setting ALTER SESSION SET STATISTICS_LEVEL=ALL and displaying the execution plan with DBMS_XPLAN:
select * from dbms_xplan.display_cursor(format=>'+cost allstats last -plan_hash +projection');
Note that if you are in lower Oracle versions, you need to call dbms_xplan through the table() function:
select * from table(dbms_xplan.display_cursor(format=>'+cost allstats last -plan_hash +projection'));
Example:

SQL> set arraysize 5000 linesize 150 trimspool on pagesize 1000 feedback off termout off
SQL> alter session set statistics_level=all;
SQL> select 1 from dual;
SQL> set termout on
SQL> select * from dbms_xplan.display_cursor(format=>'+cost allstats last -plan_hash +projection');
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 520mkxqpf15q8, child number 0
-------------------------------------
select 1 from dual
--------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 1 |00:00:00.01 |
| 1 | FAST DUAL | | 1 | 1 | 2 (0)| 1 |00:00:00.01 |
--------------------------------------------------------------------------------------

I’ll probably never compare the execution time, as this depends on the system and makes no sense on artificial small examples. But I’ll try to compare all other statistics: estimated cost, the actual number of pages/blocks read, etc.

Table of content

I’ll update (or rather insert /*+ append */) the links to the series posts as soon as they are published.

  1. Postgres vs. Oracle access paths I – Seq Scan
  2. Postgres vs. Oracle access paths II – Index Only Scan
 


PostgreSQL on Cygwin

Mon, 2017-07-31 23:00

I run my laptop with Windows 10 for office programs, and VirtualBox machines with Linux for the big stuff (Oracle databases). I also have Cygwin installed on Windows for GNU programs. I wanted to quickly install PostgreSQL and, rather than installing it in a Linux VM or as a Windows program, I installed the Cygwin version of it. Here is how.

Cygwin

Cygwin is easy to install: just run the setup-x86_64.exe from https://www.cygwin.com/ and choose the packages you want to install. Here is what is related to PostgreSQL:
[Screenshot CapturePGCY0001: Cygwin setup, PostgreSQL package selection]

Note that if you want to install postgres extensions you may need pg_config, and then you need to install libpq-devel in addition to postgresql-devel, as well as gcc and make. Those are not displayed in the screenshot above, but if you don't have them you may get something like the following when installing an extension:
pg_config: Command not found

Of course, PostgreSQL is Open Source and you can also compile it yourself.

Cygserver

Cygwin can run daemons through a Windows service (Cygserver) and you need to set it up if not already done. For this step, you will need to run the Cygwin Terminal as Administrator.
fpa@dell-fpa ~
$ /usr/bin/cygserver-config
Overwrite existing /etc/cygserver.conf file? (yes/no) yes
Generating /etc/cygserver.conf file
 
Warning: The following function requires administrator privileges!
 
Do you want to install cygserver as service?
(Say "no" if it's already installed as service) (yes/no) yes
 
The service has been installed under LocalSystem account.
To start it, call `net start cygserver' or `cygrunsrv -S cygserver'.
 
Further configuration options are available by editing the configuration
file /etc/cygserver.conf. Please read the inline information in that
file carefully. The best option for the start is to just leave it alone.
 
Basic Cygserver configuration finished. Have fun!

You start this service as any Windows service:

fpa@dell-fpa ~
$ net start cygserver
The CYGWIN cygserver service is starting.
The CYGWIN cygserver service was started successfully.

You can check that the service is running:

fpa@dell-fpa ~
$ cygstart services.msc

[Screenshot CapturePGCY0002: Windows services console showing the CYGWIN cygserver service]

PostgreSQL database cluster

Here is the creation of the PostgreSQL database cluster.
fpa@dell-fpa ~
$ /usr/sbin/initdb -D /usr/share/postgresql/data
The files belonging to this database system will be owned by user "fpa".
This user must also own the server process.
 
The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".
 
Data page checksums are disabled.
 
creating directory /usr/share/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 30
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
 
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
 
Success. You can now start the database server using:
 
/usr/sbin/pg_ctl -D /usr/share/postgresql/data -l log.txt start

Start PostgreSQL database server

I add my network to /usr/share/postgresql/data/pg_hba.conf:

# IPv4 local connections:
host all all 127.0.0.1/32 trust
host all all 192.168.78.0/24 trust

I define the interface and port the server listens on in /usr/share/postgresql/data/postgresql.conf:

listen_addresses = 'localhost,192.168.78.1' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 30 # (change requires restart)

Now ready to start the PostgreSQL server:
fpa@dell-fpa ~
$ /usr/sbin/pg_ctl -D /usr/share/postgresql/data -l log.txt start
server starting

Username

My Windows username is ‘FPA’, and so is the Cygwin user which started the database server. I check that I can connect to the maintenance database with this user:

fpa@dell-fpa ~
$ psql -U fpa postgres
psql (9.6.2)
Type "help" for help.
 
postgres=# \du
 
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
fpa | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 
postgres=# \q
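
From another machine in the 192.168.78.0/24 network configured above, the same connection can be tested over TCP (a quick sketch):

psql -h 192.168.78.1 -p 5432 -U fpa postgres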

PgAdmin

As I am on Windows, I install the graphical console PgAdmin and set up the connection to this database:
[Screenshot CapturePGCY0003: PgAdmin connection setup]

SQL Developer

As an Oracle fan, I prefer to connect with SQL Developer. Just download the JDBC driver for PostgreSQL: https://jdbc.postgresql.org/download.html

In SQL Developer you can declare this .jar from Tools -> Preferences -> Third Party JDBC Drivers

[Screenshot CapturePGCY0004: SQL Developer Third Party JDBC Drivers preference]

And create the connection with the new ‘PostgreSQL’ tab:

[Screenshot CapturePGCY0005: SQL Developer PostgreSQL connection tab]
Then, with ‘Choose Database’, you can populate the drop-down list and choose the database you want to connect to.

As I have no database with the same name as the username, I have to append the database name to the hostname, suffixed with ‘?’, to get the proper JDBC URL. What you put in the drop-down list will then be ignored. I don't really know the reason, but this is how I got the correct URL.

[Screenshot CapturePGCY0006: JDBC URL with the database name appended to the hostname]

Extensions

You can install extensions. For example, I’ve installed pg_hint_plan to be able to hint the access path and join methods.

wget https://osdn.net/dl/pghintplan/pg_hint_plan96-1.2.1.tar.gz
tar -zxvf pg_hint_plan96-1.2.1.tar.gz
cd pg_hint_plan96-1.2.1
make
make install

And I’m now able to load it:

$ psql
psql (9.6.2)
Type "help" for help.
 
fpa=# load 'pg_hint_plan';
LOAD

But Why?

You may wonder why I don't install it directly on Linux. My laptop is on Windows and, of course, I have a lot of VirtualBox VMs. But this way I don't need to start a VM.
You may wonder why I don't install the Windows version? I want to investigate the Linux behaviour. And I may want to trace the postgres processes. For example, Cygwin has a strace.exe which shows output similar to strace on Linux. Here are the I/O calls from a full table scan (Seq Scan):
[Screenshot CaptureStraceCygwinPostgres: strace output of a Seq Scan on Cygwin]
I can see that postgres sequential reads are done through one lseek() and sequential 8k read().

This was simple. Just get the pid of the session process:

fpa=# select pg_backend_pid();
pg_backend_pid
----------------
11960

and strace it:

$ strace -p 11960

I've done all that in about one hour: download, install, setup, and writing this blog post. Without any virtual machine, you can have a Linux-style Postgres database server running on Windows.

 


Re-assigning all objects from one role to another in PostgreSQL

Mon, 2017-07-31 04:07

From time to time it might be required to move objects (tables, indexes, whatever) from one user to another in a database system. You could do that by dumping all the objects with pg_dump and then loading them as the target user. But there are other solutions which are faster than that. Let's go.
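
For reference, here is a rough sketch of the dump/reload approach (assuming the schema and users created below, and that user "b" has the privileges to create the objects; note that the objects would still live in the original schema):

pg_dump --schema=a --no-owner postgres | psql -U b postgres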

Obviously we need two users when we want to move objects from one user to another (actually it is not really moving the objects, but rather changing their ownership):

postgres=# create role a login password 'a';
CREATE ROLE
postgres=# create role b login password 'b';
CREATE ROLE
postgres=# create schema a authorization a;
CREATE SCHEMA
postgres=# create schema b authorization b;
CREATE SCHEMA
postgres=# alter role a set search_path=a;
ALTER ROLE
postgres=# alter role b set search_path=b;
ALTER ROLE
postgres=# 

Lets create some objects in schema “a” owned by user “a”:

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> \! cat a.sql
create table a ( a int );
create table b ( a int );
create table c ( a int );
create table d ( a int );
create index i1 on a (a);
create index i2 on b (a);
create index i3 on c (a);
create index i4 on d (a);

postgres=> \i a.sql
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX

By joining pg_class and pg_roles we can verify who is actually the owner of the objects:

postgres=> select t.relname, d.rolname 
             from pg_class t, pg_roles d 
            where t.relowner = d.oid and d.rolname = 'a';
 relname | rolname 
---------+---------
 a       | a
 b       | a
 c       | a
 d       | a
 i1      | a
 i2      | a
 i3      | a
 i4      | a
(8 rows)
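
The table part of this (indexes excluded) is also visible without the join, through the pg_tables view:

postgres=> select tablename, tableowner from pg_tables where schemaname = 'a';
 tablename | tableowner
-----------+------------
 a         | a
 b         | a
 c         | a
 d         | a
(4 rows)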

The easiest way to make these objects owned by another user (call it “c”) would be:

postgres=# alter user a rename to c;
NOTICE:  MD5 password cleared because of role rename
ALTER ROLE
postgres=# select t.relname, d.rolname from pg_class t, pg_roles d where t.relowner = d.oid and d.rolname = 'c';
 relname | rolname 
---------+---------
 a       | c
 b       | c
 c       | c
 d       | c
 i1      | c
 i2      | c
 i3      | c
 i4      | c
(8 rows)

Not a good idea, though, as the schema is still named “a” and this will at least create some confusion with the naming. Of course we could rename the schema as well:

postgres=# alter schema a rename to c;
ALTER SCHEMA
postgres=# \c postgres c
You are now connected to database "postgres" as user "c".
postgres=> select count(*) from a;
2017-07-28 15:51:25.499 CEST [3415] ERROR:  relation "a" does not exist at character 22
2017-07-28 15:51:25.499 CEST [3415] STATEMENT:  select count(*) from a;
ERROR:  relation "a" does not exist
LINE 1: select count(*) from a;

… but now we have another mess. Because the search_path is still set to “a” we cannot see the objects by default and need to use the fully qualified name:

postgres=> select count(*) from c.a;
 count 
-------
     0
(1 row)

Finally we would need to adjust the search_path to get back the previous behavior:

postgres=> set search_path=c;
SET
postgres=> select count(*) from a;
 count 
-------
     0
(1 row)

That is a lot of steps to follow. Much easier is:

postgres=# reassign owned by c to b;
REASSIGN OWNED
postgres=# alter user b set search_path=c,b;
ALTER ROLE
postgres=# \c postgres b
You are now connected to database "postgres" as user "b".
postgres=> select count(*) from a;
 count 
-------
     0
(1 row)
postgres=> select t.relname, d.rolname from pg_class t, pg_roles d where t.relowner = d.oid and d.rolname = 'b';
 relname | rolname 
---------+---------
 d       | b
 i4      | b
 c       | b
 i3      | b
 b       | b
 i2      | b
 a       | b
 i1      | b
(8 rows)

Cool :) There is also a command to drop all objects of a user:

postgres=> drop owned by b;
DROP OWNED
postgres=> select t.relname, d.rolname from pg_class t, pg_roles d where t.relowner = d.oid and d.rolname = 'b';
 relname | rolname 
---------+---------
(0 rows)

Nice …

 


Documentum – Change password – 8 – FT – Dsearch & xDB

Sat, 2017-07-29 06:19

Here we are, already, at the end of my series of blogs about the “Change Password”. This blog will, as already mentioned in a previous one, talk about the Dsearch and xDB passwords. I could have created a lot more blogs in this series but I already presented the most important and most interesting ones so this blog will be the last one – at least for now ;).

 

Please note that below I will set the same password for the Dsearch admin account and for the xDB. The reason is pretty simple: this is what Documentum does by default when you install a PrimaryDsearch. Indeed, when you install a Dsearch, the installer will ask you for the Dsearch admin password, which is the JBoss instance admin account (this blog explained how to change a JBoss admin password), and it will use this password for the xDB too.

If you want to use a different password, then you can potentially define three passwords here: xDB Admin password, xDB Superuser password and Dsearch JBoss instance password.

 

The xDB (find more info here about what it is) provides a command line interface to manage it (repair, create, list, and so on), which is “XHCommand”, but you aren't able to change the different passwords through this CLI ;(. Therefore, before even starting with this blog, you will have to start an X Server or something similar to be able to open the xDB Admin GUI (“XHAdmin”). For this blog, I installed a new VM and I also installed the complete graphical interface on it (my first time in a very long time) in order to have some screenshots that aren't too ugly…
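
Before launching the GUI, just make sure the DISPLAY variable points to your X server; a quick sketch (the display address obviously depends on your setup):

[xplore@full_text_server_01 ~]$ export DISPLAY=:0.0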

 

So first of all, let’s login to the Full Text Server and open the xDB Admin GUI:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/dsearch/xhive/admin/
[xplore@full_text_server_01 ~]$ ./XHAdmin

 

Just a small funny note: when you close the XHAdmin tool, you will see that EMC thanks you for using it: “Thank you for using EMC Documentum xDB”. I guess this is your reward for having to open a GUI to change a password in 2017 ;).

 

At this point, the xDB Admin Client will be opened but not connected. So the first thing to do is to connect to the xDB with the CURRENT Dsearch Admin credentials:

  • Click on the “connect” button
  • Set the “Database name” to: xhivedb
  • Set the “Username” to: Administrator
  • Set the “Password” to: ###CURRENT_DSEARCH_ADMIN_PWD###
  • Click on the “OK” button

[Screenshot: Dsearch-xDB_Change_Password_1]

 

Then you will be connected to the xDB. Therefore, it is now time to change the Superuser password:

  • Click on the “Federation > Change superuser password” menu
  • Set the “New superuser password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Set the “Retype password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Click on the “OK” button
  • A pop-up will be opened with a title “Enter superuser password”. Enter the ###CURRENT_DSEARCH_ADMIN_PWD### in it and click on OK to validate the change.

[Screenshot: Dsearch-xDB_Change_Password_2]

 

After that, it is time to change the Admin password itself. Here, you have to use the new Superuser password that has been defined above and then define the new Admin password:

  • Click on the “Database > Reset admin password” menu
  • Set the “Database name” to: xhivedb
  • Set the “Superuser password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Set the “Administrator password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Set the “Retype password” to: ###NEW_DSEARCH_ADMIN_PWD###
  • Click on the “OK” button

[Screenshot: Dsearch-xDB_Change_Password_3]

 

At this point, the xDB Superuser and Admin passwords have been defined so you can close the xDB Admin Client (Database > Exit) and we can go back to the command line. Ok so now let’s stop the xPlore components in order to reflect these changes on the configuration files:

[xplore@full_text_server_01 ~]$ read -s -p "Please enter the NEW Dsearch Admin password: " newpw; echo
Please enter the NEW Dsearch Admin password:
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ for i in `ls stop*.sh`; do ./${i}; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/
[xplore@full_text_server_01 ~]$ cp indexserver-bootstrap.properties indexserver-bootstrap.properties_bck_$(date +"%Y%m%d-%H%M%S")
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep "password=" indexserver-bootstrap.properties
superuser-password=AAAAEF1hfFDU0zff6A6Mmd1f6Le8VvxPmLxF11ZdzjVo5KU/
adminuser-password=AAAAEF1hfFDU0zff6A6Mmd1f6Le8VvxPmLxF11ZdzjVo5KU/
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sed -i "s,password=.*,password=${newpw}," indexserver-bootstrap.properties
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep "password=" indexserver-bootstrap.properties
superuser-password=New_Dsearch_Admin_Pwd
adminuser-password=New_Dsearch_Admin_Pwd
[xplore@full_text_server_01 ~]$

 

As you can see above, I’m defining the environment variable with the NEW Dsearch Admin Password and then I’m replacing the encrypted OLD superuser and adminuser passwords inside the file indexserver-bootstrap.properties with the non-encrypted NEW password. It was the same for both before and it is still the same for both after. Don’t worry about putting the non-encrypted NEW password in this properties file: it will be encrypted automatically at the next start of the PrimaryDsearch. So now, let’s start only the Dsearch to verify that the passwords have been encrypted successfully, and then we can update the Dsearch JBoss Admin password:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ nohup ./startPrimaryDsearch.sh >> nohup-PrimaryDsearch.out 2>&1 &
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 60
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep "password=" ./DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties
superuser-password=AAAAFAq2fFwiHf21P98hRch982+1hEDGe824E21eL21e9c57f6A
adminuser-password=AAAAFAq2fFwiHf21P98hRch982+1hEDGe824E21eL21e9c57f6A
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server/DctmServer_PrimaryDsearch/configuration
[xplore@full_text_server_01 ~]$ cp dctm-users.properties dctm-users.properties_bck_$(date +"%Y%m%d-%H%M%S")
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ encrypted_newpw=`grep "adminuser-password=" ../deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties | sed 's,adminuser-password=,,'`
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ echo "# users.properties file to use with UsersRolesLoginModule" > dctm-users.properties
[xplore@full_text_server_01 ~]$ echo "admin=${encrypted_newpw}" >> dctm-users.properties
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cat dctm-users.properties
# users.properties file to use with UsersRolesLoginModule
admin=AAAAFAq2fFwiHf21P98hRch982+1hEDGe824E21eL21e9c57f6A
[xplore@full_text_server_01 ~]$

 

If you refer to the JBoss Admin password blog I already published and linked above, you could think that there is nothing more to be done, but there is actually one more file that needs to be updated when it comes to the Dsearch Admin password: the xplore.properties file:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/dsearch/admin/
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep -B2 "password=" xplore.properties
# Specify admin password for xPlore primary instance
# It is highly suggested to input the encrypted password, you can get it from indexserver-bootstrap.properties
password=AAAAEF1hfFDU0zff6A6Mmd1f6Le8VvxPmLxF11ZdzjVo5KU/
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sed -i "s,password=.*,password=${encrypted_newpw}," xplore.properties
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ grep -B2 "password=" xplore.properties
# Specify admin password for xPlore primary instance
# It is highly suggested to input the encrypted password, you can get it from indexserver-bootstrap.properties
password=AAAAFAq2fFwiHf21P98hRch982+1hEDGe824E21eL21e9c57f6A
[xplore@full_text_server_01 ~]$

 

To complete the changes, we can now restart all xPlore components and verify that the PrimaryDsearch has been started properly:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ ./stopPrimaryDsearch.sh
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ nohup ./startPrimaryDsearch.sh >> nohup-PrimaryDsearch.out 2>&1 &
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ curl -g --user admin -D - http://localhost:9305/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Sun, 23 Jul 2017 08:14:56 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

To be sure, you can also open the Dsearch Admin UI (E.g.: https://hostname:9302/dsearchadmin) and then log in with the NEW Dsearch Admin password. If the login is working, you should be good to go! :)

 

 

Cet article Documentum – Change password – 8 – FT – Dsearch & xDB est apparu en premier sur Blog dbi services.

Documentum – Change password – 7 – DB – Schema Owner

Sat, 2017-07-29 04:48

In this series, I completed the passwords I wanted to talk about on the Content Server. Therefore, in this blog, I will talk about the only Database Account that is relevant for Documentum: the Database Schema Owner. Since there are a few steps to be done on the Content Server, I’m just doing everything from there… In this blog, I will assume there is one Global Registry (GR_DOCBASE) and one normal Repository (DocBase1). Each docbase has a different Database Schema Owner of course, but both Schemas are on the same Database and therefore the same SID will be used.

 

In High Availability setups, you will have to execute the steps below for all Content Servers. Of course, when it comes to changing the password inside the DB, this needs to be done only once since the Database Schema Owner is shared between the different Content Servers of the HA setup.

 

In this blog, I’m using a CS 7.2. Please note that in CS 7.2, there is a property inside the dfc.properties of the Content Server ($DOCUMENTUM_SHARED/config/dfc.properties) that defines the crypto repository (dfc.crypto.repository). The repository that is used for this property is the one that Documentum will use for encryption/decryption of passwords and therefore I will use this one below to encrypt the password. By default, the Repository used for this property is the last one created… I tend to use the Global Registry instead, but it’s really up to you.
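
To quickly check which repository is currently configured for that purpose (the value shown below is just an example):

[dmadmin@content_server_01 ~]$ grep dfc.crypto.repository $DOCUMENTUM_SHARED/config/dfc.properties
dfc.crypto.repository=GR_DOCBASE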

 

As said before, I’m considering two different repositories and therefore two different accounts and two different passwords. So, let’s start with encrypting these two passwords:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW GR_DOCBASE Schema Owner's password: " new_gr_pw; echo
Please enter the NEW GR_DOCBASE Schema Owner's password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW DocBase1 Schema Owner's password: " new_doc1_pw; echo
Please enter the NEW DocBase1 Schema Owner's password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ iapi `cat $DOCUMENTUM_SHARED/config/dfc.properties | grep crypto | tail -1 | sed 's/.*=//'` -Udmadmin -Pxxx << EOF
> encrypttext,c,${new_gr_pw}
> encrypttext,c,${new_doc1_pw}
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0150.0154


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f12345605ae7b started for user dmadmin."


Connected to Documentum Server running Release 7.2.0160.0297  Linux64.Oracle
Session id is s0
API> ...
DM_ENCR_TEXT_V2=AAAAEH7UNwFub2ubf92h+21/rc8HEc3rd1C82hc52c8bz2cFl1cQ721zex2nxWDEegwqgdotwncZVVqgZlDLmfflWK6+f8AGf0dSRzi5rr3h3::GR_DOCBASE
API> ...
DM_ENCR_TEXT_V2=AAAAEGBQ6Zy7FxQ10idQdFj+Gn20nFlif02ieMx+AGBHLz+vQfmGu2GAiv8KeIN2PhPOf1oiF9u2fP98zEFhhuBAmxY+d5AoBCGNf61ZRavpa::GR_DOCBASE
API> Bye
[dmadmin@content_server_01 ~]$

 

If you have more repositories, you will have to encrypt those passwords too, if you want to change them of course. Once the new passwords have been encrypted, we can change them in the Database. To avoid any issues and error messages, let’s first stop Documentum (the docbases at the very least) and then print the Database Connection information:

[dmadmin@content_server_01 ~]$ service documentum stop
  ** JMS stopped
  ** DocBase1 stopped
  ** GR_DOCBASE stopped
  ** Docbroker stopped
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora
<sid> =
    (DESCRIPTION =
        (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP)(HOST = <database_hostname>)(PORT = <database_port>))
        )
        (CONNECT_DATA =
            (SERVICE_NAME = <service_name>)
        )
    )
[dmadmin@content_server_01 ~]$

 

Once you know what the SID is, you can log in to the database to change the password, so I will do that for both repositories. This could also be scripted to retrieve the list of docbases, create new passwords for them, encrypt them all automatically and then connect to each database to change the passwords. However, I will use the manual steps here (a small scripted sketch follows them):

[dmadmin@content_server_01 ~]$ sqlplus GR_DOCBASE@<sid>

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jul 22 15:05:08 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:
    -->> Enter here the OLD GR_DOCBASE Schema Owner's password
Last Successful login time: Sat Jul 22 2017 15:04:18 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> PASSWORD
Changing password for GR_DOCBASE
Old password:
    -->> Enter here the OLD GR_DOCBASE Schema Owner's password
New password:
    -->> Enter here the NEW GR_DOCBASE Schema Owner's password
Retype new password:
    -->> Re-enter here the NEW GR_DOCBASE Schema Owner's password
Password changed

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sqlplus DocBase1@<sid>

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jul 22 15:08:20 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:
    -->> Enter here the OLD DocBase1 Schema Owner's password
Last Successful login time: Sat Jul 22 2017 15:07:10 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> PASSWORD
Changing password for DocBase1
Old password:
    -->> Enter here the OLD DocBase1 Schema Owner's password
New password:
    -->> Enter here the NEW DocBase1 Schema Owner's password
Retype new password:
    -->> Re-enter here the NEW DocBase1 Schema Owner's password
Password changed

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
[dmadmin@content_server_01 ~]$
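
As mentioned above, these manual steps could also be scripted. Below is a minimal sketch of the idea (a hypothetical example: the docbase list and the SID are placeholders to adapt, and it assumes each Schema Owner is allowed to change its own password):

#!/bin/bash
# Hypothetical sketch: change the password of each Database Schema Owner in a loop.
db_sid="MYSID"                   # adapt to your tnsnames.ora entry
for docbase in GR_DOCBASE DocBase1; do
  read -s -p "OLD password for ${docbase}: " old_pw; echo
  read -s -p "NEW password for ${docbase}: " new_pw; echo
  # sqlplus reads the password from the first line of stdin;
  # the REPLACE clause is required when a password verify function is enforced
  sqlplus -s ${docbase}@${db_sid} << EOF
${old_pw}
alter user ${docbase} identified by "${new_pw}" replace "${old_pw}";
quit
EOF
done

The encryption of the new passwords with RegistryPasswordUtils and the update of the dbpasswd.txt files could then follow in the same loop.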

 

At this point, the passwords have been changed in the database and we encrypted them properly. The next step is therefore to update the password files for each repository with the encrypted password so that the repositories can start again:

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM/dba/config
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `ls -d *`; do echo "  ** dbpasswd.txt for ${i} **"; cat ./${i}/dbpasswd.txt; echo; done
  ** dbpasswd.txt for GR_DOCBASE **
DM_ENCR_TEXT_V2=AAAAEH7UNwFgncubfd1C82hc5l1cwqgdotwQ7212c8bz2cFZVVqgZub2zex8bz2cFWK92h+21EelDLmffl2/rc82c8bz2cFf0dSRazi5rr3h3::GR_DOCBASE

  ** dbpasswd.txt for DocBase1 **
DM_ENCR_TEXT_V2=AAAAQ10idQdFj+Gn2EGBPZy7e0niF9uQfAGBHLz+vv8KQ62fP98zE+02iFhhuBAmxY+FFxeMxIN2Phl1od5AoBCGNf61ZRifmGu2GAiOfavpa::GR_DOCBASE

[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `ls -d *`; do cp ./${i}/dbpasswd.txt ./${i}/dbpasswd.txt_bck_$(date +"%Y%m%d-%H%M%S"); done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo "DM_ENCR_TEXT_V2=AAAAEH7UNwFub2ubf92h+21/rc8HEc3rd1C82hc52c8bz2cFl1cQ721zex2nxWDEegwqgdotwncZVVqgZlDLmfflWK6+f8AGf0dSRzi5rr3h3::GR_DOCBASE" > ./GR_DOCBASE/dbpasswd.txt
[dmadmin@content_server_01 ~]$ echo "DM_ENCR_TEXT_V2=AAAAEGBQ6Zy7FxQ10idQdFj+Gn20nFlif02ieMx+AGBHLz+vQfmGu2GAiv8KeIN2PhPOf1oiF9u2fP98zEFhhuBAmxY+d5AoBCGNf61ZRavpa::GR_DOCBASE" > ./DocBase1/dbpasswd.txt
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `ls -d *`; do echo "  ** dbpasswd.txt for ${i} **"; cat ./${i}/dbpasswd.txt; echo; done
  ** dbpasswd.txt for GR_DOCBASE **
DM_ENCR_TEXT_V2=AAAAEH7UNwFub2ubf92h+21/rc8HEc3rd1C82hc52c8bz2cFl1cQ721zex2nxWDEegwqgdotwncZVVqgZlDLmfflWK6+f8AGf0dSRzi5rr3h3::GR_DOCBASE

  ** dbpasswd.txt for DocBase1 **
DM_ENCR_TEXT_V2=AAAAEGBQ6Zy7FxQ10idQdFj+Gn20nFlif02ieMx+AGBHLz+vQfmGu2GAiv8KeIN2PhPOf1oiF9u2fP98zEFhhuBAmxY+d5AoBCGNf61ZRavpa::GR_DOCBASE

[dmadmin@content_server_01 ~]$

 

Once the dbpasswd.txt files have been updated with the new encrypted password that has been generated at the beginning of this blog, then we can restart Documentum and verify that the docbases are up&running. If they are, then the password has been changed properly!

[dmadmin@content_server_01 ~]$ service documentum start
  ** Docbroker started
  ** GR_DOCBASE started
  ** DocBase1 started
  ** JMS started
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ ps -ef | grep "documentum.*docbase_name"
...
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ grep -C3 "DM_DOCBROKER_I_PROJECTING" $DOCUMENTUM/dba/log/GR_DOCBASE.log
2017-07-22T15:28:40.657360      9690[9690]      0000000000000000        [DM_SERVER_I_START]info:  "Sending Initial Docbroker check-point "

2017-07-22T15:28:40.671878      9690[9690]      0000000000000000        [DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 9870, session 010f123456000456) is started sucessfully."
2017-07-22T15:28:40.913699      9869[9869]      010f123456000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (content_server_01) with port (1490).  Information: (Config(GR_DOCBASE), Proximity(1), Status(Open), Dormancy Status(Active))."
Tue Jul 22 15:29:38 2017 [INFORMATION] [AGENTEXEC 10309] Detected during program initialization: Version: 7.2.0160.0297  Linux64
Tue Jul 22 15:29:44 2017 [INFORMATION] [AGENTEXEC 10309] Detected during program initialization: Agent Exec connected to server GR_DOCBASE:  [DM_SESSION_I_SESSION_START]info:  "Session 010f123456056d00 started for user dmadmin."

[dmadmin@content_server_01 ~]$

 

When the docbase has been registered to the Docbroker, you are sure that it was able to contact and log in to the database so that the new password is now used properly. To be sure that everything in Documentum is working properly however, I would still check the complete log file…
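
Checking the complete log file can be as simple as grepping for suspicious entries right after the restart (a quick example, not an exhaustive check; no output means nothing matched):

[dmadmin@content_server_01 ~]$ grep -iE "error|warn|fail" $DOCUMENTUM/dba/log/GR_DOCBASE.log
[dmadmin@content_server_01 ~]$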

 

 

Cet article Documentum – Change password – 7 – DB – Schema Owner est apparu en premier sur Blog dbi services.

Documentum – Change password – 6 – CS/FT – JKS

Sat, 2017-07-29 04:29

Just like for the JBoss Admin password (here), this blog will be for both Content Servers and Full Text Servers. I will provide commands below to change the passwords of the Java KeyStore for the Java Method Server (JMS) and Full Text Servers (Dsearch/IndexAgent). Again, JKS aren’t only used in the scope of Documentum so if you are here for the JKS and not for Documentum, that’s fine too ;).

 

The steps are exactly the same for all JKS files, it’s just a matter of integrating that inside Documentum. Therefore, I will continue to use the JMS for single JKS update and I will use the Dsearch/IndexAgent for multiple updates. The steps are pretty simple:

  1. Store the current and new password in variables
  2. Backup the old configuration and JKS files
  3. Update the JKS password
  4. Restart the components
  5. Verify that the components are running over https
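
Before touching anything, it can also help to list the keystore content from the directory containing it, in order to see which alias the key uses; the alias names used in the commands below (jms_alias, ft_alias) are only examples and depend on how your keystore was created:

[dmadmin@content_server_01 ~]$ keytool -list -keystore my.keystore
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

jms_alias, Jul 15, 2017, PrivateKeyEntry,
Certificate fingerprint (SHA1): ...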

 

I. JMS Java KeyStore

For the JMS Java KeyStore, you obviously need to connect to all Content Servers and then perform the steps. Below, I’m using a JKS named “my.keystore” which is placed right next to the standalone.xml file. So let’s do that:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT JKS password: " currentpw; echo
Please enter the CURRENT JKS password:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW JKS password: " newpw; echo
Please enter the NEW JKS password:
[dmadmin@content_server_01 ~]$ read -p "Please enter the name of the JKS file with extension: " jks_name
Please enter the name of the JKS file with extension: my.keystore
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/
[dmadmin@content_server_01 ~]$ cp ${jks_name} ${jks_name}_bck_$(date +"%Y%m%d-%H%M%S")
[dmadmin@content_server_01 ~]$ cp standalone.xml standalone.xml_bck_$(date +"%Y%m%d-%H%M%S")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ keytool -storepasswd -keystore ${jks_name} -storepass ${currentpw} -new ${newpw}
[dmadmin@content_server_01 ~]$ keytool -keypasswd -keystore ${jks_name} -storepass ${newpw} -alias jms_alias -keypass ${currentpw} -new ${newpw}
[dmadmin@content_server_01 ~]$

 

These last two commands are the ones updating the Java KeyStore and the key passwords. In case your JKS and its included key do not have the same password, you will have to use the real passwords at the end of the second line. If the last command (the 2nd keytool command) is working, it means that you changed the JKS password properly in the first keytool command because you are now able to change the key password using the new JKS password (-storepass ${newpw}). Still following me?

Once this has been done, the next step is simply to update the password in the JMS configuration file and restart the JMS:

[dmadmin@content_server_01 ~]$ grep "password=" standalone.xml
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i 's,\(<[sk][se][ly].*password="\)[^"]*,\1'${newpw}',' standalone.xml
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ grep "password=" standalone.xml
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server
[dmadmin@content_server_01 ~]$ ./stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup ./startMethodServer.sh >> nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sleep 30
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ curl -k -D - https://localhost:9082/DmMethods/servlet/DoMethod; echo
HTTP/1.1 200 OK
Content-Length: 144
Date: Sat, 22 Jul 2017 09:58:41 GMT
Server: MethodServer

<html><title>Documentum Java Method Server</title><body><font face="Verdana, Arial" size="-1"><p>Documentum Java Method Server</p></body></html>
[dmadmin@content_server_01 ~]$

 

If the password of the JKS has been changed properly for the JKS file as well as in the configuration file, then you should get an HTTP 200 OK return code.

 

II. Dsearch/IndexAgent Java KeyStores

For the Dsearch/IndexAgent Java KeyStores, you obviously need to connect to all Full Text Servers and then perform the steps again. Below, I’m using a JKS named “my.keystore”. It doesn’t matter where this file is placed since the commands below will find it anyway. However, by default this file will be placed right next to the standalone.xml file: this is the default setup if you used the “ConfigSSL.groovy” script to set up xPlore in SSL (see this blog for information about that and a lot of my other blogs for issues related to this script/setup in SSL). These commands are adapted in case you have several IndexAgents installed. Please note that the commands below will set the same JKS password for all JBoss instances (all Dsearch/IndexAgents). Therefore, if that’s not what you want (if you have Subject Alternative Names for example), you will have to execute the commands for each keystore, one by one.

[xplore@full_text_server_01 ~]$ read -s -p "Please enter the CURRENT JKS password: " currentpw; echo
Please enter the CURRENT JKS password:
[xplore@full_text_server_01 ~]$ read -s -p "Please enter the NEW JKS password: " newpw; echo
Please enter the NEW JKS password:
[xplore@full_text_server_01 ~]$ read -p "Please enter the name of the JKS file with extension: " jks_name
Please enter the name of the JKS file with extension: my.keystore
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name ${jks_name}`; do cp ${i} ${i}_bck_$(date +"%Y%m%d-%H%M%S"); done
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name standalone.xml`; do if [[ ${i} != */template/* ]]; then cp ${i} ${i}_bck_$(date +"%Y%m%d-%H%M%S"); fi; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name ${jks_name}`; do keytool -storepasswd -keystore ${i} -storepass ${currentpw} -new ${newpw}; done
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name ${jks_name}`; do keytool -keypasswd -keystore ${i} -storepass ${newpw} -alias ft_alias -keypass ${currentpw} -new ${newpw}; done
[xplore@full_text_server_01 ~]$

 

At this point, all the Java KeyStore have been backed up and updated and the related standalone.xml files have been backed up too. The only remaining step is to replace the passwords in the standalone.xml files, restart the components and test again the availability of the xPlore components:

[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name standalone.xml`; do if [[ ${i} != */template/* ]]; then grep "password=" ${i}; fi; done
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_PrimaryDsearch/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase1/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase2/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="Old_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase3/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name standalone.xml`; do if [[ ${i} != */template/* ]]; then sed -i 's,\(<[sk][se][ly].*password="\)[^"]*,\1'${newpw}',' ${i}; fi; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `find $XPLORE_HOME -name standalone.xml`; do if [[ ${i} != */template/* ]]; then grep "password=" ${i}; fi; done
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_PrimaryDsearch/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase1/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase2/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
                <ssl name="https" password="New_JKS_Password" certificate-key-file="$XPLORE_HOME/jboss7.1.1/server/DctmServer_Indexagent_DocBase3/configuration/my.keystore" cipher-suite="TLS_RSA_WITH_AES_128_CBC_SHA"/>
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server
[xplore@full_text_server_01 ~]$ for i in `ls stop*.sh`; do ./${i}; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ nohup ./startPrimaryDsearch.sh >> nohup-PrimaryDsearch.out 2>&1 &
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ curl -k -D - https://localhost:9302/dsearch/; echo
HTTP/1.1 259
Server: Apache-Coyote/1.1
Pragma: No-cache
Cache-Control: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 UTC
Content-Type: text/html;charset=UTF-8
Content-Length: 65
Date: Sat, 22 Jul 2017 11:33:38 GMT

The xPlore instance PrimaryDsearch [version=1.5.0020.0048] normal
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do port=`grep '<socket-binding .*name="https"' ./${i}/configuration/standalone.xml|sed 's,.*port="\([0-9]*\).*,\1,'`; echo; echo "  ** Accessing IndexAgent URL of '${i}' (${port})"; curl -k -D - https://localhost:${port}/IndexAgent; done

  ** Accessing IndexAgent URL of 'DctmServer_Indexagent_DocBase1' (9202)
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Location: https://localhost:9202/IndexAgent/
Transfer-Encoding: chunked
Date: Sat, 22 Jul 2017 11:36:57 GMT


  ** Accessing IndexAgent URL of 'DctmServer_Indexagent_DocBase2' (9222)
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Location: https://localhost:9222/IndexAgent/
Transfer-Encoding: chunked
Date: Sat, 22 Jul 2017 11:36:57 GMT


  ** Accessing IndexAgent URL of 'DctmServer_Indexagent_DocBase3' (9242)
HTTP/1.1 302 Moved Temporarily
Server: Apache-Coyote/1.1
Location: https://localhost:9242/IndexAgent/
Transfer-Encoding: chunked
Date: Sat, 22 Jul 2017 11:36:58 GMT
[xplore@full_text_server_01 ~]$

 

For the Dsearch, the proper answer is an HTTP 259 return code, while for the IndexAgent I didn’t put a trailing “/” in the URL so that we don’t load the full page of the IA but only some headers. Therefore, HTTP 302 Moved Temporarily is the proper return code there.

 

Please note that for the “sed” commands above, I deliberately used “<[sk][se][ly]” even if for all JBoss 7.1.1 instances, the SSL configuration will always start with “<ssl “. The reason for that is to make the steps compatible with WildFly 9.0.1 too (xPlore 1.6). There are a few differences between JBoss 7.1.1 and WildFly 9.0.1 and one of them is that the JKS password is now on a line starting with “<keystore ” so that’s why :).

 

 

Cet article Documentum – Change password – 6 – CS/FT – JKS est apparu en premier sur Blog dbi services.

Setting up default parameters for roles in PostgreSQL

Fri, 2017-07-28 08:31

As you might know you can set various parameters on the session level in PostgreSQL by using the “set” command:

postgres=# \h set
Command:     SET
Description: change a run-time parameter
Syntax:
SET [ SESSION | LOCAL ] configuration_parameter { TO | = } { value | 'value' | DEFAULT }
SET [ SESSION | LOCAL ] TIME ZONE { timezone | LOCAL | DEFAULT }

This allows a session to adjust parameters at runtime and can be a handy way of configuring things on the fly when you need special settings. Wouldn’t it be great if we could have a default set of parameters for a role or user? Maybe there is one user who needs a special setting for work_mem and another one who needs a special setting for search_path. Instead of setting these in the session each time after connecting, you can also do that on the server side.

Let’s create two users, user a and user b:

postgres=# create user a login password 'a';
CREATE ROLE
postgres=# create user b login password 'b';
CREATE ROLE
postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 a         |                                                            | {}
 b         |                                                            | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

When we want a special setting for work_mem every time user a creates a new connection and a special setting for search_path every time user b creates a connection, we can do it like this:

postgres=# alter user a set work_mem = '1MB';
ALTER ROLE
postgres=# alter user b set search_path='b';
ALTER ROLE
postgres=# 

When user a connects from now on:

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> show work_mem;
 work_mem 
----------
 1MB
(1 row)

When user b connects from now on:

postgres=> \c postgres b
You are now connected to database "postgres" as user "b".
postgres=> show search_path ;
 search_path 
-------------
 b
(1 row)

Notice that this does not prevent a user from overriding the setting:

postgres=> select current_user;
 current_user 
--------------
 b
(1 row)

postgres=> set search_path=c;
SET
postgres=> show search_path ;
 search_path 
-------------
 c
(1 row)

… this is rather meant for setting defaults that differ from the main server configuration where it makes sense. And how can you then know which settings are configured for a specific role? Easy, there is pg_roles:

postgres=> select rolname,rolconfig from pg_roles where rolname in ('a','b');
 rolname |    rolconfig    
---------+-----------------
 a       | {work_mem=1MB}
 b       | {search_path=b}
(2 rows)
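
In case a setting should only apply when the role connects to one specific database, it can additionally be scoped with ALTER ROLE … IN DATABASE (done as superuser here). Such database-specific settings take precedence over the role-wide ones and are stored in pg_db_role_setting:

postgres=# alter role a in database postgres set work_mem = '2MB';
ALTER ROLE
postgres=# select d.datname, r.rolname, s.setconfig from pg_db_role_setting s join pg_database d on d.oid = s.setdatabase join pg_roles r on r.oid = s.setrole;
 datname  | rolname |   setconfig    
----------+---------+----------------
 postgres | a       | {work_mem=2MB}
(1 row)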

Good to know…

 

Cet article Setting up default parameters for roles in PostgreSQL est apparu en premier sur Blog dbi services.

Alfresco 5.2 our DMS of choice!

Thu, 2017-07-27 06:46
Introduction

Nowadays companies have to deal with lots of electronic documents, some of them being mission critical. Insurance, banking and pharma industries are good candidates for ECM/DMS solutions since they produce and deal with lots of documentation, contracts and receipts. Usually the largest ECM/DMS infrastructures can be found at customers which initiate large digitalization processes. However, even for smaller businesses, managing e-documents like sales quotations, offers, answers to RFIs and RFPs becomes mission critical. Indeed, while creating such quotations and offers, collaboration is often required between salesmen and eventually with the technical department too. The ECM/DMS solution must offer the means to share and work together on the same document. Unfortunately these documents are, most of the time, simply lying around on a basic Windows Share, if the users even took the time to copy them there. As a result, there is no concurrency management to prevent data loss and the “locking strategy” is quite simple: “last write wins”. It is even incredible to see how many “larger” companies still work like that. All companies follow the digitalization trend but sometimes in a quite elementary way.

So basically, what prevents the usage of an ECM/DMS solution in all companies? From my point of view, most of the time ECM/DMS projects are wrongly sized and approached. Indeed, each customer has lots of good intentions at the beginning of the project. Therefore, instead of focusing on the essentials, the people responsible for the project want to implement almost everything, and maybe too much:

  • workflow management
  • complex user/group management and security rules
  • full text indexing
  • infrastructure redundancy
  • full integration in existing solutions
  • complex business processes (mixing up BPM and ECM/DMS)
  • aso … aso …

As a result the proposed ECM/DMS solutions can become quite complex to set up and quite expensive in terms of licenses. That’s exactly where those kinds of projects usually get stuck and die. We want to do too much, it gets too complex, so let’s do nothing! :-)

Is there an approach and a technology which allow a smooth start in the ECM/DMS area?

Why a DMS?

First of all, let’s summarize again which core functionalities we need from a DMS. In other words, what do we want to achieve with such a solution?

As a salesman, and in particular as a Chief Sales Officer, I need to keep a clear track of all changes. Indeed, while editing/changing documents, and in particular quotations, we should keep trace of each modification. Release management and traceability are a “must have” nowadays. Document validation (workflow processes) would be nice to have in a second step.

Of course, in the current context of cyber-attacks, I need a high security level. I also need to protect the documents against unauthorized users: we do not want/need all people in the company to know the sales engineering policy. Furthermore, we do not want viruses encrypting all our documents lying on a simple Windows Share. If the ECM/DMS solution requires an identification to the system to proceed with CheckOut/CheckIn procedures on documents, a virus has almost no chance to easily access all files.

If this CheckOut/CheckIn procedure is integrated in the Microsoft Office suite, it won’t even decrease the efficiency of the users or of the salesmen. Users are always afraid when they have to do more than simple double clicks :-)

Why Alfresco?

As explained in the introduction, the costs and the oversizing of ECM/DMS projects may sometimes kill them before they are even born.

Alfresco is an Open Source ECM/DMS solution allowing you to implement the needed core functions quite quickly and easily, without license costs. Of course, the Enterprise version offers some additional features like:

  • content encryption at rest and encrypted configuration files
  • clustering
  • synchronization of content between Cloud and On-Premises installations

At dbi services, since we are ECM/DMS experts, we decided to implement Alfresco on our own. However, the setup and documentation of such a solution can be limited to several days, not weeks or years. We do not need a bunch of senior technical experts and presales over several months to set it up, like for some unnamed ERP solutions :-)

Out of the box, and in particular with version 5.x, Alfresco really covers 100% of what I expect from an ECM/DMS as a salesman:

  • integrated release management
  • protection and management of concurrency between users
  • protection against viruses since some identification is needed and you can always revert to a previous version if needed
  • easy drag & drop functionality to copy documents/folders into Alfresco

Below is an example of the smooth integration of Alfresco in a Small and Medium Business environment using MS Office. With this integration it is now possible to directly work on a document and save it into Alfresco without having to “CheckOut/CheckIn” it, since this operation is handled by the Office connector. Once a so-called “SharePoint online location” (compatible with Alfresco) has been created, you can directly open the documents in Word from the Alfresco repository (checkin/checkout happens in the background):

[Screenshot: alfresco_5.2_open_MS_Office_4]

Another example of smooth integration in the MS or Mac world is the drag and drop feature from the file browser directly into Alfresco, using any Web browser:

[Screenshot: alfresco_5.2_drag_and_drop_4]

It is even possible to save a newly created MS Office document directly into Alfresco; the integration has really been maximized in the latest Alfresco release (5.x).

Another strong advantage of Alfresco basically comes from the Open Source culture. Even though some companies still have the feeling that they have to pay expensive software licenses, it may sometimes be possible to think about the “service only” model. This approach, used by Open Source software, allows the product to improve and grow through contributors offering their services around the product. That is the case for dbi services, providing support around Alfresco which allows a professional usage of the solution. In the same spirit, lots of contributors have developed Alfresco extensions improving the core functionalities and integrating the product in lots of other solutions or products (e.g. in ERP solutions like Odoo, SAP, Salesforce, aso…). Some of these community add-ons are even integrated directly into the next Alfresco releases to improve the product (HTML5 Previewer, Trashcan Cleaner, aso…).

Providing the complete set of required core features, easy to deploy, manage and administrate, cost efficient and extensible, Alfresco has become an optimum choice for our company development while ensuring the quality of our Sales activities.

Conclusion

As for each IT project, we strongly advise following a pragmatic approach, ideally proceeding with POCs (Proofs Of Concept), in order to validate the solution step by step. Furthermore, it is advised to focus on essential functionalities first, avoiding huge and complex specifications giving the impression that the project will never end.

Combining efficiency and security and providing the required features, Alfresco was the most competitive solution in terms of price/features and helped us to grow as we did over the last years. The latest version, which we just migrated to (version 5.2), even increased the user acceptance since the integration into the usual office tools has been seriously improved.

 

Cet article Alfresco 5.2 our DMS of choice! est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 17 – Identifying a blocking session

Tue, 2017-07-25 13:49

One single blocking session in a database can completely halt your application, so identifying which session is blocking other sessions is a task you must be able to perform quickly. In Oracle you can query v$session to get that information (blocking_session, final_blocking_session). Can you do the same in PostgreSQL? Yes, you definitely can, let’s go.

As usual we’ll start by creating a test table:

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta2 dbi services build)
Type "help" for help.

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# 

One way to force other sessions to wait is to start a new transaction, modify the table:

postgres=# begin;
BEGIN
postgres=# alter table t1 add column t2 text;
ALTER TABLE
postgres=#  

… and then try to insert data into the same table from another session:

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta2 dbi services build)
Type "help" for help.

postgres=# insert into t1 (a) values (1);

The insert statement will hang/wait because the modification of the table is still ongoing (the transaction neither committed nor rolled back; remember that DDLs in PostgreSQL are transactional). Now that we have a blocking session, how can we identify it?

What “v$session” is in Oracle, pg_stat_activity is in PostgreSQL (Note: I am using PostgreSQL 10Beta2 here):

postgres=# \d pg_stat_activity 
                      View "pg_catalog.pg_stat_activity"
      Column      |           Type           | Collation | Nullable | Default 
------------------+--------------------------+-----------+----------+---------
 datid            | oid                      |           |          | 
 datname          | name                     |           |          | 
 pid              | integer                  |           |          | 
 usesysid         | oid                      |           |          | 
 usename          | name                     |           |          | 
 application_name | text                     |           |          | 
 client_addr      | inet                     |           |          | 
 client_hostname  | text                     |           |          | 
 client_port      | integer                  |           |          | 
 backend_start    | timestamp with time zone |           |          | 
 xact_start       | timestamp with time zone |           |          | 
 query_start      | timestamp with time zone |           |          | 
 state_change     | timestamp with time zone |           |          | 
 wait_event_type  | text                     |           |          | 
 wait_event       | text                     |           |          | 
 state            | text                     |           |          | 
 backend_xid      | xid                      |           |          | 
 backend_xmin     | xid                      |           |          | 
 query            | text                     |           |          | 
 backend_type     | text         

There is no column which identifies a blocking session but there are other interesting columns:

postgres=# select datname,pid,usename,wait_event_type,wait_event,state,query from pg_stat_activity where backend_type = 'client backend' and pid != pg_backend_pid();
 datname  | pid  | usename  | wait_event_type | wait_event |        state        |               query                
----------+------+----------+-----------------+------------+---------------------+------------------------------------
 postgres | 2572 | postgres | Client          | ClientRead | idle in transaction | alter table t1 add column t2 text;
 postgres | 2992 | postgres | Lock            | relation   | active              | insert into t1 (a) values (1);
(2 rows)

This shows only client connections (excluding all the backend connections) and does not show the current session. In this case it is easy to identify the session which is blocking because we only have two sessions. When you have hundreds of sessions it becomes more tricky to identify the session which is blocking by looking at pg_stat_activity.

When you want to know which locks are currently being held/granted in PostgreSQL you can query pg_locks:

postgres=# \d pg_locks
                   View "pg_catalog.pg_locks"
       Column       |   Type   | Collation | Nullable | Default 
--------------------+----------+-----------+----------+---------
 locktype           | text     |           |          | 
 database           | oid      |           |          | 
 relation           | oid      |           |          | 
 page               | integer  |           |          | 
 tuple              | smallint |           |          | 
 virtualxid         | text     |           |          | 
 transactionid      | xid      |           |          | 
 classid            | oid      |           |          | 
 objid              | oid      |           |          | 
 objsubid           | smallint |           |          | 
 virtualtransaction | text     |           |          | 
 pid                | integer  |           |          | 
 mode               | text     |           |          | 
 granted            | boolean  |           |          | 
 fastpath           | boolean  |           |          | 

What can we see here:

postgres=# select locktype,database,relation,pid,mode,granted from pg_locks where pid != pg_backend_pid();
   locktype    | database | relation | pid  |        mode         | granted 
---------------+----------+----------+------+---------------------+---------
 virtualxid    |          |          | 2992 | ExclusiveLock       | t
 virtualxid    |          |          | 2572 | ExclusiveLock       | t
 relation      |    13212 |    24576 | 2992 | RowExclusiveLock    | f
 relation      |    13212 |    24581 | 2572 | AccessExclusiveLock | t
 transactionid |          |          | 2572 | ExclusiveLock       | t
 relation      |    13212 |    24579 | 2572 | ShareLock           | t
 relation      |    13212 |    24576 | 2572 | AccessExclusiveLock | t
(7 rows)

There is one lock for session 2992 which is not granted and that is the session which currently is trying to insert a row in the table (see above). We can get more information by joining pg_locks with pg_database and pg_class taking the pids from above:

select b.locktype,d.datname,c.relname,b.pid,b.mode 
  from pg_locks b 
     , pg_database d
     , pg_class c
 where b.pid in (2572,2992)
   and b.database = d.oid
   and b.relation = c.oid;

 locktype | datname  | relname | pid  |        mode         
----------+----------+---------+------+---------------------
 relation | postgres | t1      | 2992 | RowExclusiveLock
 relation | postgres | t1      | 2572 | AccessExclusiveLock
(2 rows)

Does that help us, besides now knowing that both sessions want to do some stuff with the t1 table? Not really. So how can we identify a blocking session? Easy: use the pg_blocking_pids system information function, passing in the PID of the session which is blocked:

postgres=# select pg_blocking_pids(2992);
 pg_blocking_pids 
------------------
 {2572}
(1 row)
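
Since pg_blocking_pids works for any PID, you can also combine it with pg_stat_activity to list all blocked sessions together with their blockers in one shot (pg_blocking_pids is available as of PostgreSQL 9.6):

postgres=# select pid,usename,state,query,pg_blocking_pids(pid) as blocked_by from pg_stat_activity where cardinality(pg_blocking_pids(pid)) > 0;
 pid  | usename  | state  |             query              | blocked_by 
------+----------+--------+--------------------------------+------------
 2992 | postgres | active | insert into t1 (a) values (1); | {2572}
(1 row)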

So now we know which session is blocking. Can we kill it? Yes, of course, PostgreSQL comes with a rich set of system administration functions:

postgres=# select pg_terminate_backend(2572);
 pg_terminate_backend 
----------------------
 t

… and the insert succeeds. Hope this helps …

PS: There is a great page on the PostgreSQL Wiki about locks.

 

Cet article Can I do it with PostgreSQL? – 17 – Identifying a blocking session est apparu en premier sur Blog dbi services.

Documentum – Change password – 5 – CS/FT – JBoss Admin

Sat, 2017-07-22 02:57

The next password I wanted to blog about is the JBoss Admin password. As you know, there are several JBoss Application Servers in Documentum, the most used being the ones for the Java Method Server (JMS) and for the Full Text Servers (Dsearch/IndexAgent). In this blog, I will only talk about the JBoss Admin password of the JMS and IndexAgents simply because I will include the Dsearch JBoss instance in another blog which will talk about the xDB.

 

The steps are exactly the same for all JBoss instances, it’s just a matter of checking/updating the right file. In this blog, I will still separate the steps for JMS and IndexAgents but that’s because I usually have more than one IndexAgent on the same FT and therefore I’m also providing a way to update all JBoss instances at the same time using the right commands.

 

As always, I will define an environment variable to store the password to avoid using clear text passwords in the shell. The generic steps to change a JBoss Admin password, in Documentum, are pretty simple:

  1. Store the password in a variable
  2. Encrypt the password
  3. Backup the old configuration file
  4. Replace the password file with the new encrypted password
  5. Restart the component
  6. Checking the connection with the new password

 

As you can see above, there is actually nothing in these steps to change the password… We are just replacing a string inside a file with another string and that’s done, the password is changed! That’s really simple but that’s also a security issue since you do NOT need to know the old password… That’s how Documentum works with JBoss…

 

I. JMS JBoss Admin

For the JMS JBoss Admin, you obviously need to connect to all Content Servers and then perform the steps. Below are the commands I use to set the variable, encrypt the password and the update the password file with the new encrypted password (I’m just overwriting it):

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW JBoss admin password: " jboss_admin_pw; echo
Please enter the NEW JBoss admin password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp "$DOCUMENTUM_SHARED/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${jboss_admin_pw}
AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/
[dmadmin@content_server_01 ~]$ mv dctm-users.properties dctm-users.properties_bck_$(date "+%Y%m%d")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo "# users.properties file to use with UsersRolesLoginModule" > dctm-users.properties
[dmadmin@content_server_01 ~]$ echo "admin=AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x" >> dctm-users.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cat dctm-users.properties
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x
[dmadmin@content_server_01 ~]$

 

At this point, the new password has been put in the file dctm-users.properties in its encrypted form so you can now restart the component and check the status of the JBoss Application Server. To check that, I will use below a small curl command which is really useful… If, just like me, you always restrict the JBoss Administration Console to 127.0.0.1 (localhost only) for security reasons, then this is really awesome since you don’t need to start an X server, a browser and all this stuff: simply enter the password when asked and voila!

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server
[dmadmin@content_server_01 ~]$ ./stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup ./startMethodServer.sh >> nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sleep 30
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ curl -g --user admin -D - http://localhost:9085/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Wed, 15 Jul 2017 11:16:51 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[dmadmin@content_server_01 ~]$

 

If everything has been done properly, you should get a “HTTP/1.1 200 OK” status meaning that the JBoss Application Server is up & running and the “result” should be “running”. This proves that the password provided in the command matches the encrypted one from the file dctm-users.properties because the JMS is able to answer your request.
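
If you need the same check non-interactively (for a monitoring script, for example), the password can also be passed inline with curl; just be aware that it will then show up in the shell history and in the process list:

[dmadmin@content_server_01 ~]$ curl -s --user admin:${jboss_admin_pw} http://localhost:9085/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state"}'
{"outcome" : "success", "result" : "running"}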

 

II. IndexAgent JBoss Admin

For the IndexAgent JBoss Admin, you obviously need to connect to all Full Text Servers and then perform the steps again. Below are the commands to do that. These commands are adapted in case you have several IndexAgents installed. Please note that the commands below will set the same Admin password for all JBoss instances (all IndexAgents JBoss Admin). Therefore, if that’s not what you want, you will have to take the commands from the JMS section but adapt the paths.

[xplore@full_text_server_01 ~]$ read -s -p "Please enter the NEW JBoss admin password: " jboss_admin_pw; echo
Please enter the NEW JBoss admin password:
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ $JAVA_HOME/bin/java -cp "$XPLORE_HOME/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${jboss_admin_pw}
AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server/
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do mv ./$i/configuration/dctm-users.properties ./$i/configuration/dctm-users.properties_bck_$(date "+%Y%m%d"); done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "# users.properties file to use with UsersRolesLoginModule" > ./$i/configuration/dctm-users.properties; done
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x" >> ./$i/configuration/dctm-users.properties; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "--$i:"; cat ./$i/configuration/dctm-users.properties; echo; done
--DctmServer_Indexagent_DocBase1:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

--DctmServer_Indexagent_DocBase2:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

--DctmServer_Indexagent_DocBase3:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

[xplore@full_text_server_01 ~]$

 

At this point, the new password has been put in its encrypted form in the file dctm-users.properties for each IndexAgent. So, the next step is to restart all the components and check the status of the JBoss instances. Just like for the JMS, I will use below the curl command to check the status of a specific IndexAgent:

[xplore@full_text_server_01 ~]$ for i in `ls stopIndexag*.sh`; do ./$i; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ curl -g --user admin -D - http://localhost:9205/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Wed, 15 Jul 2017 11:16:51 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

If you want to check all IndexAgents at once, you can use this command instead (it’s a long one I know…):

[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do port=`grep '<socket-binding .*name="management-http"' ./$i/configuration/standalone.xml|sed 's,.*http.port:\([0-9]*\).*,\1,'`; echo; echo "  ** Please enter below the password for '$i' ($port)"; curl -g --user admin -D - http://localhost:$port/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'; done

  ** Please enter below the password for 'DctmServer_Indexagent_DocBase1' (9205)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:35 GMT

{
    "outcome" : "success",
    "result" : "running"
}
  ** Please enter below the password for 'DctmServer_Indexagent_DocBase2' (9225)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:42 GMT

{
    "outcome" : "success",
    "result" : "running"
}
  ** Please enter below the password for 'DctmServer_Indexagent_DocBase3' (9245)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:45 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

If everything has been done properly, you should get a “HTTP/1.1 200 OK” status for all IndexAgents.
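
If you find that one-liner hard to read or maintain, the same check can be wrapped into a small script. This is just a sketch with a script name of my own choosing; the logic is exactly the one-liner above, only spread over several lines:

[xplore@full_text_server_01 ~]$ cat check_ia_status.sh
#!/bin/bash
# Same logic as the one-liner above: for each IndexAgent JBoss instance,
# find its management-http port in standalone.xml and query the
# server-state attribute via the management interface.
cd $XPLORE_HOME/jboss7.1.1/server/
for i in `ls -d DctmServer_Indexag*`; do
  port=`grep '<socket-binding .*name="management-http"' ./$i/configuration/standalone.xml | sed 's,.*http.port:\([0-9]*\).*,\1,'`
  echo
  echo "  ** Please enter below the password for '$i' ($port)"
  curl -g --user admin -D - http://localhost:$port/management \
    --header "Content-Type: application/json" \
    -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
done
[xplore@full_text_server_01 ~]$ chmod +x check_ia_status.sh
[xplore@full_text_server_01 ~]$ ./check_ia_status.sh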

 

 

Cet article Documentum – Change password – 5 – CS/FT – JBoss Admin est apparu en premier sur Blog dbi services.

Documentum – Change password – 4 – CS – Presets & Preferences

Sat, 2017-07-22 01:58

In a previous blog (see this one), I already provided the steps to change the BOF password and I mentioned that it was more or less the only important account in the Global Registry. Well, in this blog, I will show you how to change the passwords of the two other important accounts: the Presets and Preferences accounts.

 

These two accounts can actually be created in a dedicated repository for performance reasons but by default they will be taken from the Global Registry and they are used – as you can easily understand – to create Presets and Preferences…

 

As said above, these accounts are docbase accounts, so let’s start with setting up some environment variables containing the passwords and then updating these passwords on a Content Server:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW Preset password: " prespw; echo
Please enter the NEW Preset password:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW Preferences password: " prefpw; echo
Please enter the NEW Preferences password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dmc_wdk_presets_owner'
> set,c,l,user_password
> $prespw
> save,c,l
> retrieve,c,dm_user where user_login_name='dmc_wdk_preferences_owner'
> set,c,l,user_password
> $prefpw
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000907 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> ...
110f123456000145
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Again, to verify that the passwords have been set properly, you can try to login to the respective accounts:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udmc_wdk_presets_owner -P$prespw


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000908 started for user dmc_wdk_presets_owner."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udmc_wdk_preferences_owner -P$prefpw


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000909 started for user dmc_wdk_preferences_owner."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

When the docbase accounts have been updated, the first part is done. That’s good, but just like for the BOF account, you still need to update the references everywhere… Fortunately, for the Presets and Preferences accounts there are fewer references, so it’s less of a pain in the… ;)

 

There are references to these two accounts in the WDK-based Applications. Below I will use Documentum Administrator as an example which is deployed as a WAR file on a WebLogic Server, however the steps would be the same for other Application Servers, except that you might use exploded folders and not war files… Below I will use:

  • $WLS_APPLICATIONS as the directory where the DA WAR file is present.
  • $WLS_APPS_DATA as the directory where the Data are present (log files, dfc.keystore, cache, …).

 

These two folders might be the same depending on how you configured your Application Server. So, first of all, let’s encrypt the two passwords on the Application Server using the DA libraries:

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war wdk/app.xml WEB-INF/classes WEB-INF/lib/dfc.jar WEB-INF/lib
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW Preset password: " prespw; echo
Please enter the NEW Preset password:
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW Preferences password: " prefpw; echo
Please enter the NEW Preferences password:
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ java -Djava.security.egd=file:///dev/./urandom -classpath WEB-INF/classes:WEB-INF/lib/dfc.jar:WEB-INF/lib/commons-io-1.2.jar com.documentum.web.formext.session.TrustedAuthenticatorTool $prespw $prefpw
Encrypted: [jpQm5FfqdD3HWqP4mgoIIw==], Decrypted: [Pr3seTp4sSwoRd]
Encrypted: [YaGqNkj2FqfQDn3gfna8Nw==], Decrypted: [Pr3feRp4sSwoRd]
[weblogic@weblogic_server_01 ~]$

 

Once this has been done, let’s check the old passwords, updating them in the app.xml file for DA and then checking that the update has been done. The sed commands below are pretty simple: the first part will search for the parent XML tag (so either <presets>…</presets> or <preferencesrepository>…</preferencesrepository>) and the second part will replace the first occurrence of the <password>…</password> line INSIDE the XML tag mentioned in the command (presets or preferencesrepository) with the new password we encrypted before. So, again, just replace my encrypted password with what you got:

[weblogic@weblogic_server_01 ~]$ grep -C20 "<password>.*</password>" wdk/app.xml | grep -E "dmc_|</password>|presets>|preferencesrepository>"
         <presets>
            <!-- Encrypted password for default preset user "dmc_wdk_presets_owner" -->
            <password>tqQd5gfWGF3tVacfmgwL2w==</password>
         </presets>
         <preferencesrepository>
            <!-- Encrypted password for default preference user "dmc_wdk_preferences_owner" -->
            <password>LdFinAwf2F2fuB29cqfs2w==</password>
         </preferencesrepository>
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sed -i "/<presets>/,/<\/presets>/ s,<password>.*</password>,<password>jpQm5FfqdD3HWqP4mgoIIw==</password>," wdk/app.xml
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sed -i "/<preferencesrepository>/,/<\/preferencesrepository>/ s,<password>.*</password>,<password>YaGqNkj2FqfQDn3gfna8Nw==</password>," wdk/app.xml
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ grep -C20 "<password>.*</password>" wdk/app.xml | grep -E "dmc_|</password>|presets>|preferencesrepository>"
         <presets>
            <!-- Encrypted password for default preset user "dmc_wdk_presets_owner" -->
            <password>jpQm5FfqdD3HWqP4mgoIIw==</password>
         </presets>
         <preferencesrepository>
            <!-- Encrypted password for default preference user "dmc_wdk_preferences_owner" -->
            <password>YaGqNkj2FqfQDn3gfna8Nw==</password>
         </preferencesrepository>
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war wdk/app.xml
[weblogic@weblogic_server_01 ~]$ rm -rf WEB-INF/ wdk/
[weblogic@weblogic_server_01 ~]$

 

Normally the passwords returned by the second grep command should be different from the initial ones and should match the values returned by the Java command executed previously to encrypt the Presets and Preferences passwords. Once that is done, simply repack the war file (the jar -uvf command above) and redeploy it (if needed).
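
Regarding that redeployment, if your Application Server does not pick up the updated archive on its own, a manual redeploy with weblogic.Deployer could look like the sketch below. The $MW_HOME variable, the WebLogic version/path, the admin URL and the deployment name “da” are assumptions on my side — adapt them to your domain (omitting -password makes the tool prompt for it):

[weblogic@weblogic_server_01 ~]$ # source the WLS environment so that weblogic.Deployer is in the CLASSPATH
[weblogic@weblogic_server_01 ~]$ . $MW_HOME/wlserver_10.3/server/bin/setWLSEnv.sh
[weblogic@weblogic_server_01 ~]$ java weblogic.Deployer -adminurl t3://weblogic_server_01:7001 -username weblogic -redeploy -name da -source $WLS_APPLICATIONS/da.war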

 

To verify that the passwords are properly set you can simply stop DA, remove the cache containing the Presets’ jars and restart DA. If the jars are automatically re-created, then the passwords should be OK:

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPS_DATA/documentum.da/dfc.data/cache
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l
total 4
drwxr-x---. 4 weblogic weblogic 4096 Jul 15 20:58 7.3.0000.0205
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l ./7.3.*/bof/*/
...
[weblogic@weblogic_server_01 ~]$

 

This last ‘ls’ command will display a list of 10 or 15 jars (12 for me in the DA 7.3 GA release) as well as a few files (usually content.lck, content.xml and GR_DOCBASE.lck). If you don’t see any jar files before the restart, it means the old password was probably not correct… Ok, so now to verify that the new passwords have been put properly in the app.xml file, simply stop the Managed Server hosting DA using your preferred method (I will use “msDA-01” for the example below), then remove the cache folder and restart DA. Once DA is up&running again, it will re-create this cache folder in a few seconds and all the jars should be back:

[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop stop msDA-01
  ** Managed Server msDA-01 stopped
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ rm -rf ./7.3*/
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop start msDA-01
  ** Managed Server msDA-01 started
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sleep 30
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l ./7.3.*/bof/*/
...
[weblogic@weblogic_server_01 ~]$

 

If you did it properly, the jars will be back. If you want a list of the jars that should be present, take a look at the file “./7.3.*/bof/*/content.xml”. Obviously above I was using the DA 7.3 GA so my cache folder starts with 7.3.xxx. If you are using another version of DA, the name of this folder will change so just keep that in mind.

 

 

Cet article Documentum – Change password – 4 – CS – Presets & Preferences est apparu en premier sur Blog dbi services.

Documentum – Change password – 3 – CS – Installation Owner

Sat, 2017-07-22 00:36

In this blog, I will describe the few steps needed to change the Documentum Installation Owner’s password. As you all know, the Installation Owner is (one of) the most important password in Documentum and it is probably the first you define even before starting the installation.

 

As always, I will use a linux environment and in this case, I’m assuming the “dmadmin” account is a local account to each Content Server and therefore the change of the password must be done on all of them. In case you have an AD integration or something similar, you can just change the password at the AD level so that’s not funny, right?!

 

So, let’s start by logging in to all Content Servers using the Installation Owner’s account. In case you don’t remember the old password, you will have to use the root account instead. Changing the dmadmin password is pretty simple: you just have to change it at the OS level (again, this is the default… If you changed the dmadmin account type, then…):

[dmadmin@content_server_01 ~]$ passwd
    Changing password for user dmadmin.
    Changing password for dmadmin.
    (current) UNIX password:
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
[dmadmin@content_server_01 ~]$

 

To verify that the dmadmin’s password has been changed successfully, you can use the dm_check_password utility as follow (leave the extra #1 and #2 empty):

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
    Enter user name: dmadmin
    Enter user password:
    Enter user extra #1 (not used):
    Enter user extra #2 (not used):
    $DOCUMENTUM/dba/dm_check_password: Result = (0) = (DM_EXT_APP_SUCCESS)
[dmadmin@content_server_01 ~]$

 

Once you are sure that the password is set properly, one could think that it’s over but actually, it’s not… There is one additional place where this password must be set, and I’m not talking about new installations, which will obviously require you to enter the new password. For that, let’s first encrypt this password:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW dmadmin's password: " dmadmin_pw; echo
    Please enter the NEW dmadmin's password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${dmadmin_pw}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example (“AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0″) but this will be the encrypted password of dmadmin. I will use this value in the commands below so whenever you see this, just replace it with what your “java -cp ..” command returned.

 

Then where should this be used? On the Full Text Server! So log in to your FT server: inside the watchdog configuration, the dmadmin password is used for the IndexAgent connections. The commands below will take a backup of the configuration file and then update it to use the new encrypted password:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/watchdog/config/
[xplore@full_text_server_01 config]$ cp dsearch-watchdog-config.xml dsearch-watchdog-config.xml_bck_$(date "+%Y%m%d")
[xplore@full_text_server_01 config]$
[xplore@full_text_server_01 config]$ sed -i 's,<property name="docbase_password" value="[^"]*",<property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0",' dsearch-watchdog-config.xml
[xplore@full_text_server_01 config]$

 

Small (but important) note on the above commands: if you are using the same FT for different environments or if one of the IndexAgents is linked to a different dmadmin account (and therefore a different password), then you will need to open the file manually and replace the passwords inside the corresponding xml tags (or use a scoped sed command like the sketch shown after the XML below). Each IndexAgent will have the following lines for its configuration:

<application-config instance-name="<hostname>_9200_IndexAgent" name="IndexAgent">
        <properties>
                <property name="application_url" value="https://<hostname>:9202/IndexAgent"/>
                <property name="docbase_user" value="dmadmin"/>
                <property name="docbase_name" value="DocBase1"/>
                <property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0"/>
                <property name="servlet_wait_time" value="3000"/>
                <property name="servlet_max_retry" value="5"/>
                <property name="action_on_servlet_if_stopped" value="notify"/>
        </properties>
        <tasks>
                ...
        </tasks>
</application-config>
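
For reference, here is what such a scoped sed command could look like — a sketch: replace <hostname>_9200_IndexAgent with the instance-name of the IndexAgent you want to update, and the encrypted value with the one matching that dmadmin account. The range address restricts the substitution to the lines between that instance-name and its closing application-config tag:

[xplore@full_text_server_01 config]$ sed -i '/instance-name="<hostname>_9200_IndexAgent"/,/<\/application-config>/ s,<property name="docbase_password" value="[^"]*",<property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0",' dsearch-watchdog-config.xml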

 

Once the above modification has been done, simply restart the xPlore components.

 

Another thing that must be done is linked to D2 and D2-Config… If you are using these components, then you will need to update the D2 Lockbox on the Content Server side, and you probably defined the LoadOnStartup property, which will require you to put the dmadmin password in the D2 Lockbox on the Web Application side too. In this blog, I won’t discuss the full recreation of the D2 Lockbox with new password/passphrases since this is pretty simple and most likely known by everybody; I’m just going to update the dmadmin password inside the D2 Lockbox instead, for the different properties. If you would like a more complete blog about the lockbox, just let me know! This only applies to “not so old nor so recent” D2 versions, since the D2 Lockbox was introduced only a few years ago but is no longer present with D2 4.7, so…

 

On the Content Server – I’m just setting up the environment to contain the libraries needed to update the D2 Lockbox and then updating the D2-JMS properties inside the lockbox. I’m using $DOCUMENTUM/d2-lib as the root folder under which the D2 Installer put the libraries and initial lockbox:

[dmadmin@content_server_01 ~]$ export LD_LIBRARY_PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
[dmadmin@content_server_01 ~]$ export PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$PATH
[dmadmin@content_server_01 ~]$ export CLASSPATH=$DOCUMENTUM/d2-lib/D2.jar:$DOCUMENTUM/d2-lib/LB.jar:$DOCUMENTUM/d2-lib/LBJNI.jar:$CLASSPATH
[dmadmin@content_server_01 ~]$ cp -R $DOCUMENTUM/d2-lib/lockbox $DOCUMENTUM/d2-lib/lockbox-bck_$(date "+%Y%m%d-%H%M%S")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.password ${dmadmin_pw}; done
[dmadmin@content_server_01 ~]$ for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.${docbase}.password ${dmadmin_pw}; done

 

The last command above mentions “${docbase}.${docbase}”… Actually, the first one is indeed the name of the docbase but the second one is the name of the local dm_server_config. Therefore, for a single Content Server the above commands are probably enough (since by default dm_server_config name = docbase name) but if you have a HA setup, then you will also need to include the remote dm_server_config names for each docbase, as shown in the sketch below (alternatively you can also use wildcards…). Once that is done, just replace the old lockbox with the new one in the JMS.
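
For a HA setup, the additional commands could look like this — a sketch reusing the same environment as above; the dm_server_config names below (DocBase1 and DocBase1_CS2) are purely hypothetical, so check the actual names with a ‘select object_name from dm_server_config’ in each docbase first:

[dmadmin@content_server_01 ~]$ # hypothetical HA example: docbase DocBase1 with two dm_server_config
[dmadmin@content_server_01 ~]$ # objects named DocBase1 (local) and DocBase1_CS2 (remote)
[dmadmin@content_server_01 ~]$ for sconfig in DocBase1 DocBase1_CS2; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.DocBase1.${sconfig}.password ${dmadmin_pw}; done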

 

On the Web Application Server – the same environment variables are needed but of course the paths will change and you might need to include the C6-Common jar file too (which is known to cause issues with WebLogic if it is still in the CLASSPATH when you start it). So on the Web Application Server, I’m also setting up the environment variables with the dmadmin password and the D2 Lockbox passphrase, as well as another variable containing the list of docbases to loop over:

[weblogic@weblogic_server_01 ~]$ for docbase in $DOCBASES; do java com.emc.common.java.crypto.SetLockboxProperty $WLS_DOMAIN/D2/lockbox LoadOnStartup.${docbase}.password ${dmadmin_pw} ${d2method_pp}; done
[weblogic@weblogic_server_01 ~]$

 

With the D2 Lockbox, you will need to restart the components using it when you recreate it from scratch. However, when you just update a property inside it, like above, that’s usually not needed: the next time the password is needed, it will be picked up from the Lockbox.

 

Last comment on this: if you are using an ADTS and if you used the dmadmin account to manage it (I wouldn’t recommend this! Please use a dedicated user instead), then the password is also encrypted in a password file for each docbase under “%ADTS_HOME%/CTS/docbases/”.

 

 

Cet article Documentum – Change password – 3 – CS – Installation Owner est apparu en premier sur Blog dbi services.

OUD 11.1.2.3 – What are the minimum privileges for administering connect strings in OUD?

Thu, 2017-07-20 10:44

Security in OUD plays a very important role. Keeping your patch level up to date is just one of those topics, as explained in my recent blog:

https://blog.dbi-services.com/oud-11-1-2-3-patch-25840309-oud-bundle-patch-11-1-2-3-170718/

Another one is privileges. You don’t need to do all of your work with the cn=Directory Manager. The cn=Directory Manager is the most powerful user in an Oracle Unified Directory; you can see it as the root user. However, for just managing connect strings via the netmgr it is kinda oversized.
OK. You could create another root user in cn=Root DNs,cn=config with less privileges, but this would have the disadvantage that it is not replicated.

I would just like to have a user, e.g. cn=oud,ou=people,dc=dbi,dc=com, which I can use only for creating/deleting TNS connect strings. But what are the minimum privileges for doing this job?

First of all, we need to create the user. Specifying the Common Name, the Lastname and the password is enough.
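
Here is a command-line sketch of that user creation, in the same ldapmodify style as used below — the objectClass list and the userPassword value are assumptions on my side, adapt them to your needs:

[dbafmw@dbidg01 bin]$ ldapmodify --hostname dbidg01 --port 1389 --bindDN cn="Directory Manager" --bindPasswordFile ~/.oudpwd
dn: cn=oud,ou=people,dc=dbi,dc=com
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: oud
sn: oud
userPassword: MyNewOudPassword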

If we now try to add a new connect string via the netmgr, we will immediately get the famous “TNS-04411: Directory service: permission denied” error.

...
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
oracle.net.common.dataStore.DataStoreException: error writing ServiceAlias to: LDAPDataStore [svr: dbidg01:1389, type: OID, ctxt: cn=OracleContext,dc=dbi,dc=com, home: /u01/app/oracle/product/12.2.0/dbhome_1]
original exception message: TNS-04411: Directory service: permission denied
  caused by: oracle.net.config.DirectoryServiceException: TNS-04411: Directory service: permission denied
  caused by: oracle.net.ldap.NNFLException
original stack trace: oracle.net.config.ServiceAliasException: TNS-04411: Directory service: permission denied
  caused by: oracle.net.config.DirectoryServiceException: TNS-04411: Directory service: permission denied
  caused by: oracle.net.ldap.NNFLException
oracle.net.config.DirectoryServiceException: TNS-04411: Directory service: permission denied
  caused by: oracle.net.ldap.NNFLException
oracle.net.ldap.NNFLException

To resolve this error, we need to make sure that the user has the password-reset privilege and is listed via the uniqueMember attribute in the following groups:

  • cn=oraclenetadmins,dc=oraclecontext,dc=dbi,dc=com
  • cn=oraclecontextadmins,cn=groups,cn=oraclecontext,dc=dbi,dc=com

To create net service names or net service aliases with the netmgr, you need to add that user to the OracleNetAdmins group (cn=OracleNetAdmins,cn=OracleContext…).

The OracleContextAdmins group is a super-user group for the Oracle Context, and members of the OracleContextAdmins group can add all supported types of entries to the Oracle Context. The directory user (in most cases the cn=Directory Manager) that created the Oracle Context is automatically added to these groups. Other users can be added to these groups by the directory administrator.

To add the cn=oud,ou=people,dc=dbi,dc=com user to these groups and to grant it the password-reset privilege, run the following ldapmodify command:

[dbafmw@dbidg01 bin]$ ldapmodify --hostname dbidg01 --port 1389 --bindDN cn="Directory Manager" --bindPasswordFile ~/.oudpwd
dn: cn=oud,ou=people,dc=dbi,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: password-reset
Processing MODIFY request for cn=oud,ou=people,dc=dbi,dc=com
MODIFY operation successful for DN cn=oud,ou=people,dc=dbi,dc=com

dn: cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com
changetype: modify
add: uniquemember
uniquemember:  cn=oud,ou=people,dc=dbi,dc=com
Processing MODIFY request for cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com
MODIFY operation successful for DN cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com

dn: cn=oraclecontextadmins,cn=groups,cn=oraclecontext,dc=dbi,dc=com
changetype: modify
add: uniquemember
uniquemember:  cn=oud,ou=people,dc=dbi,dc=com
Processing MODIFY request for cn=oraclecontextadmins,cn=groups,cn=oraclecontext,dc=dbi,dc=com
MODIFY operation successful for DN cn=oraclecontextadmins,cn=groups,cn=oraclecontext,dc=dbi,dc=com

[dbafmw@dbidg01 bin]$
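
Before going back to the netmgr, you can double check the group membership with a quick ldapsearch — a sketch reusing the same connection options as above:

[dbafmw@dbidg01 bin]$ ldapsearch --hostname dbidg01 --port 1389 --bindDN cn="Directory Manager" --bindPasswordFile ~/.oudpwd --baseDN cn=oraclenetadmins,cn=oraclecontext,dc=dbi,dc=com --searchScope base "(objectClass=*)" uniqueMember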

Now, let’s connect again via netmgr and add a new entry.

That’s it. :-)

Conclusion

Not only patching, but also a minimum set of privileges makes your OUD more secure.

 

Cet article OUD 11.1.2.3 – What are the minimum privileges for administering connect strings in OUD? est apparu en premier sur Blog dbi services.

OUD 11.1.2.3 – Patch 25840309: OUD BUNDLE PATCH 11.1.2.3.170718

Wed, 2017-07-19 05:55

A new OUD bundle patch was released on the 18th of July 2017, and this gives me the opportunity to show how fast an OUD can be patched, if configured correctly. In case you have an OUD multi master replication with a load balancer virtual IP on top, then zero downtime patching is possible. This is the configuration that I usually recommend, not only for high availability, but also for maintenance.

A typical OUD installation consists not only of the OUD itself, but also of a few other components like the JDK, the WebLogic Server, the ODSM which is deployed in WebLogic, and last but not least the OPatch utility.

In the end, for successfully upgrading the OUD to bundle patch 11.1.2.3.170718 we need the following patches.

  • Patch 25840309: OUD BUNDLE PATCH 11.1.2.3.170718
  • Patch 13079846: Oracle JDK 1.7.0 (build 1.7.0_151-b15)
  • Patch 25869650: SU Patch [B25A]: WLS PATCH SET UPDATE 10.3.6.0.170718
  • Patch 6880880: OPatch patch of version 11.1.0.12.9 for Oracle software releases 11.1.0.x (OCT 2015)

A few steps can be done beforehand, like patching the OPatch itself and rolling out the new JDK. No downtime is required for those steps. A very important part is the Java environment. If you have set it up like in the following blog, you can change to the new JDK by simply changing a symbolic link.

https://blog.dbi-services.com/oud-oracle-unified-directory-11-1-2-3-how-to-change-the-java-version-of-oud-11-1-2-3-with-weblogic-and-odsm/

Ok. Let’s get started. Currently we do have the following patches in the staging area.

[dbafmw@dbidg01 20170718]$ ls -l
total 698560
-rw-r--r-- 1 dbafmw oinstall 467017155 Jul 19 07:37 p13079846_17000_Linux-x86-64.zip
-rw-r--r-- 1 dbafmw oinstall  98414546 Jul 19 07:33 p25840309_111230_Generic.zip
-rw-r--r-- 1 dbafmw oinstall  95019336 Jul 19 07:38 p25869650_1036_Generic.zip
-rw-r--r-- 1 dbafmw oinstall  54867592 Jul 19 08:34 p6880880_111000_Linux-x86-64.zip

And the following versions.

-- Current Java Version

[dbafmw@dbidg01 bin]$ ./java -version
java version "1.7.0_141"
Java(TM) SE Runtime Environment (build 1.7.0_141-b11)
Java HotSpot(TM) 64-Bit Server VM (build 24.141-b11, mixed mode)

-- Current WebLogic Version

. /u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3/server/bin/setWLSEnv.sh
java weblogic.version -verbose
	
WebLogic Server 10.3.6.0.170117 PSU Patch for BUG24667634 FRI NOV 25 18:34:42 IST 2016 ImplVersion: 10.3.6.0
WebLogic Server 10.3.6.0  Tue Nov 15 08:52:36 PST 2011 1441050  ImplVersion: 10.3.6.0

-- Current OUD Version and OPatch version

[dbafmw@dbidg01 OPatch]$ ./opatch lsinv
Oracle Interim Patch Installer version 11.1.0.11.0
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/oraInst.loc
OPatch version    : 11.1.0.11.0
OUI version       : 11.1.0.11.0
Log file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/opatch2017-07-19_08-53-32AM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/product/Middleware/11.1.2.3"

Lsinventory Output file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/lsinv/lsinventory2017-07-19_08-53-32AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Unified Directory                                             11.1.2.3.0
There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch  25383162     : applied on Fri Jun 02 11:39:44 CEST 2017
Unique Patch ID:  21007593
   Created on 8 Feb 2017, 02:26:53 hrs PST8PDT
   Bugs fixed:
     21110317, 24731779, 19596240, 20987332, 20786014, 19271756, 21024218
     24333856, 21184039, 17409345, 22555819, 21569267, 19987242, 17416498
     23703560, 22927723, 21835208, 21278352, 22308304, 24409239, 21620706
     21225831, 21634829, 23317872, 21126991, 20771322, 21189204, 19670854
     23327572, 22843404, 21097296, 23066020, 21298039, 21967348, 20688926
     20813995, 17488044, 18311877, 21249082, 20198154, 20415424, 23032811
     21207635, 21415093, 21831011, 21575993, 21882589, 17568241, 22611713
     20363721, 21498205, 21301617, 24577814, 19991366, 21098103, 21932111
     24390289, 21282258, 19636259, 24399724, 22916463, 20235234, 20630323
     20113230, 20554639, 24481116, 24681490, 21924319, 20979061, 20469049
     24490948, 22017920, 23195988, 22564810, 24565597, 22950438, 20881625
     21169776, 20736946, 21032490, 22641166, 21644986, 21337744, 23262289
     21509636, 21183856, 22369027, 22075202, 21464932, 20692543, 20805136
     21754188, 21889432, 20924952, 21683538, 20554436, 24372668, 20881496
     21527426, 23703466, 21605154, 24310538, 21959508, 22689164, 20875947
     22119339, 20778847, 23207713, 21503767, 21310463, 20554511, 21387378
     21075558, 22180844, 21665762, 20987758, 24405001, 20529805, 20470402
     20381973, 21881962, 20830152, 23090053, 17406282, 21461976, 20989144
     20180870, 20312191, 21111836, 22732230, 21387387, 19504096, 24544860, 18312432

Patch  21197325     : applied on Mon Mar 27 12:25:39 CEST 2017
Unique Patch ID:  18996262
   Created on 3 Jul 2015, 07:28:59 hrs PST8PDT
   Bugs fixed:
     21197325

--------------------------------------------------------------------------------

OPatch succeeded.

Patching the OPatch itself is an operation which requires no downtime. You simply unzip the new OPatch in the OUD home. Please be aware that the OPatch version 11.1.x is needed for patching the OUD. Any other version is not supported. In my case I patch the OPatch from version 11.1.0.11.0 to version 11.1.0.12.9.

[dbafmw@dbidg01 20170718]$ cp p6880880_111000_Linux-x86-64.zip /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/
[dbafmw@dbidg01 20170718]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/
[dbafmw@dbidg01 Oracle_OUD1]$ unzip p6880880_111000_Linux-x86-64.zip
Archive:  p6880880_111000_Linux-x86-64.zip
  inflating: OPatch/operr
   creating: OPatch/scripts/oms/
  inflating: OPatch/scripts/oms/opatchauto.bat
  inflating: OPatch/scripts/oms/opatchauto
  inflating: OPatch/scripts/oms/generateMultiOMSPatchingScripts.pl
   creating: OPatch/scripts/oms/oms_child_scripts/
  inflating: OPatch/scripts/oms/oms_child_scripts/opatchauto_fmw
  inflating: OPatch/scripts/oms/oms_child_scripts/opatchauto_fa
   creating: OPatch/scripts/oms/wlskeys/
  inflating: OPatch/scripts/oms/wlskeys/createkeys.sh
...
...

[dbafmw@dbidg01 OPatch]$ ./opatch lsinv | grep "OPatch version"
OPatch version    : 11.1.0.12.9

As a next step, we roll out the new JDK, which is 1.7.0_151-b15. This also requires no downtime, because we simply untar the new JDK in the appropriate directory. We are not changing the symbolic link at this point.

[dbafmw@dbidg01 20170718]$ cp jdk-7u151-linux-x64.tar.gz /u01/app/oracle/product/Middleware/11.1.2.3/
[dbafmw@dbidg01 20170718]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/
[dbafmw@dbidg01 11.1.2.3]$ tar -xzvf jdk-7u151-linux-x64.tar.gz
jdk1.7.0_151/
jdk1.7.0_151/LICENSE
jdk1.7.0_151/release
jdk1.7.0_151/jre/
jdk1.7.0_151/jre/LICENSE
jdk1.7.0_151/jre/Welcome.html
jdk1.7.0_151/jre/plugin/
jdk1.7.0_151/jre/plugin/desktop/
jdk1.7.0_151/jre/plugin/desktop/sun_java.png
jdk1.7.0_151/jre/plugin/desktop/sun_java.desktop
jdk1.7.0_151/jre/lib/
jdk1.7.0_151/jre/lib/fontconfig.SuSE.10.properties.src
jdk1.7.0_151/jre/lib/alt-rt.jar
jdk1.7.0_151/jre/lib/meta-index
jdk1.7.0_151/jre/lib/net.properties
jdk1.7.0_151/jre/lib/oblique-fonts/
jdk1.7.0_151/jre/lib/oblique-fonts/LucidaSansDemiOblique.ttf
jdk1.7.0_151/jre/lib/oblique-fonts/fonts.dir
jdk1.7.0_151/jre/lib/oblique-fonts/LucidaTypewriterOblique.ttf
...
jdk1.7.0_151/db/bin/sysinfo.bat
jdk1.7.0_151/db/bin/setNetworkClientCP
jdk1.7.0_151/db/bin/derby_common.bat
jdk1.7.0_151/db/bin/startNetworkServer.bat
jdk1.7.0_151/db/bin/startNetworkServer
jdk1.7.0_151/db/bin/setNetworkServerCP.bat
jdk1.7.0_151/db/bin/stopNetworkServer
jdk1.7.0_151/db/README-JDK.html
jdk1.7.0_151/db/NOTICE
jdk1.7.0_151/README.html
jdk1.7.0_151/THIRDPARTYLICENSEREADME.txt

The symbolic link is still pointing to the old JDK version, which is 1.7.0_141.

[dbafmw@dbidg01 11.1.2.3]$ rm jdk-7u151-linux-x64.tar.gz
rm: remove write-protected regular file ‘jdk-7u151-linux-x64.tar.gz’? y
[dbafmw@dbidg01 11.1.2.3]$ ls -l | grep jdk
lrwxrwxrwx  1 dbafmw oinstall     12 May 29 14:10 jdk -> jdk1.7.0_141
drwxr-x---  8 dbafmw oinstall   4096 Dec 12  2016 jdk1.7.0_131
drwxr-xr-x  8 dbafmw oinstall   4096 Mar 14 06:10 jdk1.7.0_141
drwxr-xr-x  8 dbafmw oinstall   4096 Jul 12 11:31 jdk1.7.0_151

Now the downtime for the WebLogic and OUD servers starts. From a downtime perspective, the WebLogic server is not so important, because it is only used for the ODSM. If the WebLogic server is down, it only means that you can’t use the graphical ODSM tool anymore. It is much more important to get the OUD up and running again.

[dbafmw@dbidg01 ~]$ ./stopOUD.sh
Stopping Weblogic Server...
...
Stopping Derby Server...
Stopping Server...
...
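
Note that stopOUD.sh is a custom wrapper script, not something shipped with the product. A minimal sketch of what it could contain, assuming the default domain and instance paths used throughout this post:

#!/bin/bash
# stop the WebLogic AdminServer (hosting the ODSM) first
/u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/bin/stopWebLogic.sh
# then stop the OUD instance itself
/u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/stop-ds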

After the WebLogic and OUD servers are stopped, it is time to switch the symbolic link for the JDK to the new version.

[dbafmw@dbidg01 11.1.2.3]$ rm jdk
[dbafmw@dbidg01 11.1.2.3]$ ln -s jdk1.7.0_151 jdk
[dbafmw@dbidg01 11.1.2.3]$ ls -l | grep jdk
lrwxrwxrwx  1 dbafmw oinstall     12 Jul 19 09:02 jdk -> jdk1.7.0_151
drwxr-x---  8 dbafmw oinstall   4096 Dec 12  2016 jdk1.7.0_131
drwxr-xr-x  8 dbafmw oinstall   4096 Mar 14 06:10 jdk1.7.0_141
drwxr-xr-x  8 dbafmw oinstall   4096 Jul 12 11:31 jdk1.7.0_151

Now we can patch the OUD to 11.1.2.3.170718.

[dbafmw@dbidg01 OPatch]$ cdh
[dbafmw@dbidg01 11.1.2.3]$ cd Oracle_OUD1/OPatch/
[dbafmw@dbidg01 OPatch]$ ./opatch apply /u01/app/oracle/product/Middleware/stage/PSU/20170718/25840309
Oracle Interim Patch Installer version 11.1.0.12.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/oraInst.loc
OPatch version    : 11.1.0.12.9
OUI version       : 11.1.0.11.0
Log file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/25840309_Jul_19_2017_09_05_01/apply2017-07-19_09-05-01AM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/product/Middleware/11.1.2.3"

Applying interim patch '25840309' to OH '/u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1'
Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:

Patch : 25840309

        Bug Superset of 25383162
        Super set bugs are:
        21110317,  24731779,  19596240,  20987332,  20786014,  19271756,  21024218,  24333856,  21184039,  17409345,  22555819,  21569267,  19987242,  17416498,  23703560,  22927723,  21835208,  21278352,  22308304,  24409239,  21620706,  21225831,  21634829,  23317872,  21126991,  20771322,  21189204,  19670854,  23327572,  22843404,  21097296,  23066020,  21298039,  21967348,  20688926,  20813995,  17488044,  18311877,  21249082,  20198154,  20415424,  23032811,  21207635,  21415093,  21831011,  21575993,  21882589,  17568241,  22611713,  20363721,  21498205,  21301617,  24577814,  19991366,  21098103,  21932111,  24390289,  21282258,  19636259,  24399724,  22916463,  20235234,  20630323,  20113230,  20554639,  24481116,  24681490,  21924319,  20979061,  20469049,  24490948,  22017920,  23195988,  22564810,  24565597,  22950438,  20881625,  21169776,  20736946,  21032490,  22641166,  21644986,  21337744,  23262289,  21509636,  21183856,  22369027,  22075202,  21464932,  20692543,  20805136,  21754188,  21889432,  20924952,  21683538,  20554436,  24372668,  20881496,  21527426,  23703466,  21605154,  24310538,  21959508,  22689164,  20875947,  22119339,  20778847,  23207713,  21503767,  21310463,  20554511,  21387378,  21075558,  22180844,  21665762,  20987758,  24405001,  20529805,  20470402,  20381973,  21881962,  20830152,  23090053,  17406282,  21461976,  20989144,  20180870,  20312191,  21111836,  22732230,  21387387,  19504096,  24544860,  18312432
Interim patch 25840309 is a superset of the patch(es) [  25383162 ] in the Oracle Home
OPatch will roll back the subset patches and apply the given patch.
All checks passed.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  Y

Backing up files...
Rolling back interim patch '25383162' from OH '/u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1'

Patching component oracle.idm.oud, 11.1.2.3.0...
RollbackSession removing interim patch '25383162' from inventory

OPatch back to application of the patch '25840309' after auto-rollback.

Patching component oracle.idm.oud, 11.1.2.3.0...

Verifying the update...
Patch 25840309 successfully applied
Log file location: /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/25840309_Jul_19_2017_09_05_01/apply2017-07-19_09-05-01AM_1.log

OPatch succeeded.
[dbafmw@dbidg01 OPatch]$

[dbafmw@dbidg01 OPatch]$ ./opatch lsinv
Oracle Interim Patch Installer version 11.1.0.12.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/oraInst.loc
OPatch version    : 11.1.0.12.9
OUI version       : 11.1.0.11.0
Log file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/opatch2017-07-19_09-05-45AM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/product/Middleware/11.1.2.3"

Lsinventory Output file location : /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/cfgtoollogs/opatch/lsinv/lsinventory2017-07-19_09-05-45AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Unified Directory                                             11.1.2.3.0
There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch  25840309     : applied on Wed Jul 19 09:05:10 CEST 2017
Unique Patch ID:  21265541
   Created on 14 Jul 2017, 00:55:06 hrs PST8PDT
   Bugs fixed:
     21310463, 21503767, 23531972, 25396981, 25201795, 25411721, 20554639
     24372668, 20415424, 17406282, 22180844, 24409239, 17416498, 20987332
     21298039, 21075558, 21575993, 24577814, 21110317, 22927723, 20235234
     22308304, 17409345, 21301617, 21846477, 20989144, 22641166, 21098103
     24390289, 24753992, 22564810, 21415093, 20554511, 20469049, 22732230
     21569267, 23607708, 21889432, 22963292, 21882589, 25022506, 25326997
     19596240, 21387378, 21278352, 21683538, 21387387, 18325609, 20786014
     20692543, 21665762, 21184039, 20778847, 20805136, 21932111, 20881625
     20736946, 22555819, 25176210, 19504096, 23703466, 23032811, 20771322
     20830152, 24677562, 21461976, 22017920, 19670854, 23090053, 21527426
     20813995, 21169776, 20113230, 20875947, 21282258, 24490948, 24731779
     18312432, 20529805, 21097296, 25173266, 24681490, 21464932, 21881962
     21498205, 21605154, 24793179, 21249082, 21509636, 22611713, 23703560
     22369027, 21126991, 20470402, 22075202, 21207635, 24902148, 24763428
     25451319, 25167070, 21111836, 22950438, 19991366, 20312191, 19987242
     20198154, 25166847, 21967348, 23066020, 24544860, 21032490, 24481116
     24701083, 25951621, 24405001, 21835208, 23327572, 24399724, 20979061
     20630323, 21959508, 19636259, 17568241, 25437857, 23319809, 19947883
     20987758, 20881496, 20924952, 20554436, 21754188, 20381973, 21183856
     24333856, 21620706, 24310538, 22916463, 21634829, 21337744, 22843404
     21924319, 20180870, 17488044, 20688926, 21811333, 21189204, 21644986
     23195988, 24565597, 22119339, 22689164, 23207713, 25686699, 25166924
     21831011, 18311877, 19271756, 21024218, 21680391, 21225831, 20979247
     20363721, 23262289, 23317872

Patch  21197325     : applied on Mon Mar 27 12:25:39 CEST 2017
Unique Patch ID:  18996262
   Created on 3 Jul 2015, 07:28:59 hrs PST8PDT
   Bugs fixed:
     21197325

--------------------------------------------------------------------------------

OPatch succeeded.

After the patching has completed successfully, we can start the OUD instance.

[dbafmw@dbidg01 ~]$ /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/start-ds
[19/Jul/2017:09:06:48 +0200] category=CORE severity=INFORMATION msgID=132 msg=The Directory Server is beginning the configuration bootstrapping process
[19/Jul/2017:09:06:48 +0200] category=CORE severity=NOTICE msgID=458886 msg=Oracle Unified Directory 11.1.2.3.170718 (build 20170621135318Z, R1706210545) starting up
[19/Jul/2017:09:06:52 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381717 msg=Installation Directory:  /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1
[19/Jul/2017:09:06:52 +0200] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381719 msg=Instance Directory:      /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD
...
...
[19/Jul/2017:09:06:57 +0200] category=CORE severity=INFORMATION msgID=720 msg=No worker queue thread pool size specified: sizing automatically to use 24 threads
[19/Jul/2017:09:06:57 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on Administration Connector 0.0.0.0 port 4444
[19/Jul/2017:09:06:57 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on LDAP Connection Handler 192.168.56.201 port 1389
[19/Jul/2017:09:06:57 +0200] category=PROTOCOL severity=NOTICE msgID=2556180 msg=Started listening for new connections on LDAP Connection Handler 192.168.56.201 port 1636
[19/Jul/2017:09:06:57 +0200] category=CORE severity=NOTICE msgID=458887 msg=The Directory Server has started successfully
[19/Jul/2017:09:06:57 +0200] category=CORE severity=NOTICE msgID=458891 msg=The Directory Server has sent an alert notification generated by class org.opends.server.core.DirectoryServer (alert type org.opends.server.DirectoryServerStarted, alert ID 458887):  The Directory Server has started successfully
[dbafmw@dbidg01 ~]$

Ok. OUD is up and running again and ready to use. The whole downtime for patching it, in case you don’t have a multi master replication, was only a few minutes. That is very fast from my point of view.
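
If you want an extra confirmation besides the startup messages, you can query the instance with the status tool — a quick sketch, reusing the bind password file ~/.oudpwd from earlier (adapt the bind options to your setup):

[dbafmw@dbidg01 ~]$ /u01/app/oracle/product/Middleware/11.1.2.3/asinst_1/OUD/bin/status --bindDN "cn=Directory Manager" --bindPasswordFile ~/.oudpwd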

Now let’s take care of the WebLogic Server. First of all, we need to unzip the new WebLogic patch into the bsu/cache_dir directory, or any other directory.

[dbafmw@dbidg01 ~]$ echo $JAVA_HOME
/u01/app/oracle/product/Middleware/11.1.2.3/jdk
[dbafmw@dbidg01 ~]$ /u01/app/oracle/product/Middleware/11.1.2.3/jdk/bin/java -version
java version "1.7.0_151"
Java(TM) SE Runtime Environment (build 1.7.0_151-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.151-b15, mixed mode)

[dbafmw@dbidg01 20170718]$ cp p25869650_1036_Generic.zip /u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir/
[dbafmw@dbidg01 20170718]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir/
[dbafmw@dbidg01 cache_dir]$ unzip p25869650_1036_Generic.zip
Archive:  p25869650_1036_Generic.zip
  inflating: README.txt
  inflating: B25A.jar
  inflating: patch-catalog_25250.xml
[dbafmw@dbidg01 cache_dir]$

Patching the WebLogic server is a little different from patching the OUD. It does not come with the OPatch utility, and first and foremost it is not cumulative. That means you can’t apply patch B over patch A; you have to remove patch A beforehand. In case you have not removed it beforehand, you will end up with a conflict like the one shown below.

[dbafmw@dbidg01 cache_dir]$ . /u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3/server/bin/setWLSEnv.sh

[dbafmw@dbidg01 cache_dir]$ cd /u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/

[dbafmw@dbidg01 bsu]$ ./bsu.sh -install -patch_download_dir=/u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir -patchlist=XIDD -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3
Result: Failure
Unable to locate patch archive: XIDD
[dbafmw@dbidg01 bsu]$ ./bsu.sh -install -patch_download_dir=/u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir -patchlist=B25A -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3
Checking for conflicts.....
Conflict(s) detected - resolve conflict condition and execute patch installation again
Conflict condition details follow:
Patch B25A is mutually exclusive and cannot coexist with patch(es): XIDD

[dbafmw@dbidg01 bsu]$ ./bsu.sh -remove -patchlist=XIDD -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3
Checking for conflicts....
No conflict(s) detected

Removing Patch ID: XIDD..
Result: Success

[dbafmw@dbidg01 bsu]$

[dbafmw@dbidg01 bsu]$ ./bsu.sh -install -patch_download_dir=/u01/app/oracle/product/Middleware/11.1.2.3/utils/bsu/cache_dir -patchlist=B25A -prod_dir=/u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3
Checking for conflicts.....
No conflict(s) detected

Installing Patch ID: B25A..
Result: Success

[dbafmw@dbidg01 bsu]$

[dbafmw@dbidg01 ~]$ nohup /u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/bin/startWebLogic.sh &
[1] 20001
[dbafmw@dbidg01 ~]$ nohup: ignoring input and appending output to ‘nohup.out’

Ready. We have now patched the WebLogic server to 10.3.6.0.170718. In case you have issues with the bsu utility, you might want to use the -log option for debugging: bsu.sh -log=bsu_debug.trc. Or take a quick look at the BSU command line interface reference guide: http://docs.oracle.com/cd/E14759_01/doc.32/e14143/commands.htm

Ok. OUD and WebLogic are fully patched and running. But what about the ODSM? Do I have to redeploy it? No, you don’t. The odsm.ear is redeployed automatically. The odsm.ear file is shipped along with every OUD bundle patch (bppatchnumber/files/odsm/odsm.ear). A new odsm.ear file is included with Patch 11.1.2.3.170718 at the following location:

[dbafmw@dbidg01 odsm]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/odsm
[dbafmw@dbidg01 odsm]$ ls -l
total 36660
-rw-r----- 1 dbafmw oinstall 37535804 Jul 14 09:55 odsm.ear

When you restart the Server for the first time after applying the patch, it will automatically redeploy the new odsm.ear file. Therefore, you might experience a slower restart of the WebLogic server. If you want, you can take a look at the logs related to the odsm.ear file deployment.

[dbafmw@dbidg01 logs]$ pwd
/u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/servers/AdminServer/logs
[dbafmw@dbidg01 logs]$ cat AdminServer.log | grep odsm.ear
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <J2EE> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602820> <BEA-160166> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602830> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <J2EE> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602830> <BEA-160166> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602832> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602833> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602863> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602863> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602863> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/application.xml'. The plan was merged.>
####<Jul 19, 2017 10:23:22 AM CEST> <Info> <Munger> <dbidg01> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1500452602863> <BEA-2156202> <References were found in deployment plan for module 'odsm.ear' with uri, 'META-INF/weblogic-application.xml'. The plan was merged.>

If for any reason the odsm.ear file deployment fails during the first restart of the Server, then you need to redeploy the file manually (a sketch is shown below). That’s it. We have now successfully upgraded the JDK, OPatch, OUD, and WebLogic/ODSM to the latest release.
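
Should that manual redeployment be necessary, a sketch with weblogic.Deployer could look like the following — the admin URL and the deployment name “odsm” are assumptions, so verify the actual name on the deployments screen of the WebLogic console first (omitting -password makes the tool prompt for it):

[dbafmw@dbidg01 ~]$ # source the WLS environment so that weblogic.Deployer is in the CLASSPATH
[dbafmw@dbidg01 ~]$ . /u01/app/oracle/product/Middleware/11.1.2.3/wlserver_10.3/server/bin/setWLSEnv.sh
[dbafmw@dbidg01 ~]$ java weblogic.Deployer -adminurl t3://dbidg01:7001 -username weblogic -redeploy -name odsm -source /u01/app/oracle/product/Middleware/11.1.2.3/Oracle_OUD1/odsm/odsm.ear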

At the end, repeat the whole procedure on the second OUD, if you have one.

Conclusion

Patching an OUD requires a few steps; however, it can be done very fast if it is well prepared. And in case you have multi master replication with a load balancer VIP on top, you can do it with no downtime at all.

 

Cet article OUD 11.1.2.3 – Patch 25840309: OUD BUNDLE PATCH 11.1.2.3.170718 est apparu en premier sur Blog dbi services.

Recommended DB Parameters on the Oracle Database Appliance (ODA)

Tue, 2017-07-18 10:06

When creating a DB on the ODA using


# oakcli create database -db <db-name>

a template is used to set a couple of DB parameters like e.g.


_datafile_write_errors_crash_instance=FALSE
_db_writer_coalesce_area_size=16777216
_disable_interface_checking=TRUE
_ENABLE_NUMA_SUPPORT=FALSE
_FILE_SIZE_INCREASE_INCREMENT=2143289344
_gc_policy_time=0
_gc_undo_affinity=FALSE
db_block_checking='FULL'
db_block_checksum='FULL'
db_lost_write_protect='TYPICAL'
filesystemio_options='setall'
parallel_adaptive_multi_user=FALSE
parallel_execution_message_size=16384
parallel_threads_per_cpu=2
use_large_pages='ONLY'

In recent projects I saw a couple of DBs running on ODA, which did not have (all) those parameters set, because the DBs were migrated from a non-ODA-platform and the customer took over the previous settings.

The questions are: are the above parameters mandatory on ODA, and where are they defined?

Actually Oracle writes in the documentation

http://docs.oracle.com/cd/E83239_01/doc.121/e83201/database-templates-oracle-database-appliance.htm#CMTAR269

“Oracle strongly recommends that you use the Oracle Database Appliance templates, because they implement best practices and are configured specifically for Oracle Database Appliance.”

So it’s not mandatory, but it’s “strongly recommended” by Oracle to set those parameters on ODA.

The parameters are actually defined in the XML-files

/opt/oracle/oak/onecmd/templates/OAK_oltp.dbt
/opt/oracle/oak/onecmd/templates/OAK_dss.dbt

E.g. on a virtualized X5-2 HA with 12.1.2.8.0 installed:


# grep "initParam name" /opt/oracle/oak/onecmd/templates/OAK_oltp.dbt
<initParam name="AUDIT_SYS_OPERATIONS" value="TRUE"/>
<initParam name="AUDIT_TRAIL" value="DB"/>
<initParam name="GLOBAL_NAMES" value="TRUE"/>
<initParam name="OS_AUTHENT_PREFIX" value=""/>
<initParam name="SQL92_SECURITY" value="TRUE"/>
<initParam name="PARALLEL_ADAPTIVE_MULTI_USER" value="FALSE"/>
<initParam name="PARALLEL_EXECUTION_MESSAGE_SIZE" value="16384"/>
<initParam name="PARALLEL_THREADS_PER_CPU" value="2"/>
<initParam name="_disable_interface_checking" value="TRUE"/>
<initParam name="_gc_undo_affinity" value="FALSE"/>
<initParam name="_gc_policy_time" value="0"/>
<initParam name="SESSION_CACHED_CURSORS" value="100"/>
<initParam name="OPEN_CURSORS" value="1000"/>
<initParam name="CURSOR_SHARING" value="EXACT"/>
<initParam name="_ENABLE_NUMA_SUPPORT" value="FALSE"/>
<initParam name="DB_LOST_WRITE_PROTECT" value="TYPICAL"/>
<initParam name="DB_BLOCK_CHECKSUM" value="FULL"/>
<initParam name="DB_BLOCK_CHECKING" value="FULL"/>
<initParam name="FAST_START_MTTR_TARGET" value="300"/>
<initParam name="UNDO_RETENTION" value="900"/>
<initParam name="_FILE_SIZE_INCREASE_INCREMENT" value="2143289344"/>
<initParam name="FILESYSTEMIO_OPTIONS" value="setall"/>
<initParam name="use_large_pages" value="only"/>
<initParam name="DB_FILES" value="1024"/>
<initParam name="processes" value="4800"/>
<initParam name="pga_aggregate_target" value="49152" unit="MB"/>
<initParam name="sga_target" value="98304" unit="MB"/>
<initParam name="db_create_file_dest" value="+DATA"/>
<initParam name="log_buffer" value="64000000" />
<initParam name="cpu_count" value="48"/>
<initParam name="pga_aggregate_limit" value="49152" unit="MB"/>
<initParam name="_datafile_write_errors_crash_instance" value="false"/>
<initParam name="_fix_control" value="18960760:on"/>
<initParam name="db_block_size" value="8" unit="KB"/>
<initParam name="compatible" value="11.2.0.x.0"/>
<initParam name="undo_tablespace" value="UNDOTBS1"/>
<initParam name="control_files" value="("{ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/control01.ctl")"/>
<initParam name="audit_file_dest" value="{ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump"/>
<initParam name="audit_trail" value="db"/>
<initParam name="diagnostic_dest" value="{ORACLE_BASE}"/>
<initParam name="remote_login_passwordfile" value="EXCLUSIVE"/>
<initParam name="dispatchers" value="(PROTOCOL=TCP) (SERVICE={SID}XDB)"/>
<initParam name="db_recovery_file_dest" value="+RECO"/>
<initParam name="db_recovery_file_dest_size" value="1843200" unit="MB"/>
<initParam name="db_create_online_log_dest_1" value="+REDO" />
<initParam name="_db_writer_coalesce_area_size" value="16777216"/>

Oracle does not take those parameters blindly when creating a DB with oakcli, but adjusts some of them, as described e.g. in the blog

https://blog.dbi-services.com/oda-32gb-template-but-got-a-database-with-16gb-sga/

I.e. the SGA_TARGET and PGA_AGGREGATE_TARGET parameters are adjusted based on the chosen database class. Also the COMPATIBLE-parameter is set to the current release (e.g. to 12.1.0.2.0).

So if you’re not able to create the DB on ODA using

# oakcli create database -db <db-name>

then I recommend checking the XML file OAK_oltp.dbt (or, in case of a Decision Support System/Warehouse DB, the file OAK_dss.dbt) and setting the parameters in your database accordingly. Alternatively (and probably even better) you may create a dummy DB using oakcli, check Oracle’s settings (which can then be used in your migrated DB), and drop the dummy DB afterwards.
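
A sketch of that dummy-DB approach — the DB name DUMMY and the pfile location are my own choices, and oakcli will ask its usual interactive questions (pick the database class matching your real DB):

# oakcli create database -db DUMMY
# su - oracle
$ sqlplus / as sysdba
SQL> create pfile='/tmp/DUMMY_init.ora' from spfile;
SQL> exit
$ exit
# oakcli delete database -db DUMMY

The generated /tmp/DUMMY_init.ora then contains all the parameters Oracle chose for that database class and can be compared with the settings of your migrated DB.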

Here are the parameters of a 12c DB created on a virtualized ODA X6-2 HA 12.1.2.11.0 with oakcli create database, using the smallest setting with 1 core and the DSS template:


*._datafile_write_errors_crash_instance=false
*._db_writer_coalesce_area_size=16777216
*._disable_interface_checking=TRUE
*._ENABLE_NUMA_SUPPORT=FALSE
*._FILE_SIZE_INCREASE_INCREMENT=2143289344
*._fix_control='18960760:on'
*._gc_policy_time=0
*._gc_undo_affinity=FALSE
*.audit_file_dest='/u01/app/oracle/admin/C12TEST/adump'
*.audit_sys_operations=TRUE
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='/u01/app/oracle/oradata/datastore/C12TEST/C12TEST/controlfile/o1_mf_dpw4ljnv_.ctl'
*.cpu_count=2
*.cursor_sharing='EXACT'
*.db_block_checking='FULL'
*.db_block_checksum='FULL'
*.db_block_size=8192
*.db_create_file_dest='/u02/app/oracle/oradata/datastore/.ACFS/snaps/C12TEST'
*.db_create_online_log_dest_1='/u01/app/oracle/oradata/datastore/C12TEST'
*.db_domain=''
*.db_files=1024
*.db_lost_write_protect='TYPICAL'
*.db_name='C12TEST'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area/datastore/C12TEST'
*.db_recovery_file_dest_size=476160m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=C12TESTXDB)'
*.fast_start_mttr_target=300
*.filesystemio_options='setall'
*.global_names=TRUE
*.inmemory_size=0m
*.log_archive_format='%t_%s_%r.dbf'
*.log_buffer=16000000
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.os_authent_prefix=''
*.parallel_adaptive_multi_user=TRUE
*.parallel_degree_policy='MANUAL'
*.parallel_execution_message_size=16384
*.parallel_force_local=FALSE
*.parallel_max_servers=80
*.parallel_min_servers=8
*.parallel_threads_per_cpu=2
*.pga_aggregate_limit=2048m
*.pga_aggregate_target=2048m
*.processes=200
*.remote_login_passwordfile='exclusive'
*.session_cached_cursors=100
*.sga_target=1024m
*.sql92_security=TRUE
*.undo_retention=900
*.use_large_pages='ONLY'

 

Cet article Recommended DB Parameters on the Oracle Database Appliance (ODA) est apparu en premier sur Blog dbi services.

Documentum – Change password – 2 – CS – dm_bof_registry

Sat, 2017-07-15 03:30

When installing a Global Registry on a Content Server, you will be asked to set up the BOF username and password. The name of this user is “dm_bof_registry” by default; even if you can change it, I will use this value in this blog. This is one of the important accounts created inside the Global Registry. So, what are the steps needed to change the password of this account?

 

Let’s start with the simple part: changing the password of the account in the Global Registry. For this, I will use iapi below, but you can do the same thing using Documentum Administrator, idql, dqMan or anything else that works. First, let’s log in to the Content Server, switch to the Installation Owner’s account and define an environment variable that will contain the NEW password to be used:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the dm_bof_registry password: " bof_pwd; echo
Please enter the dm_bof_registry password:
[dmadmin@content_server_01 ~]$

 

Once that is done, we can execute the iapi commands below to update the password of the dm_bof_registry account. As there is a local trust on the Content Server with the Installation Owner, I don’t need to enter the password, so I use “xxx” instead to log in to the Global Registry (GR_DOCBASE). Don’t include the “> ” characters: just paste the iapi commands, and after pasting the final EOF, an iapi session will be opened and all commands will be executed, like that:

[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dm_bof_registry'
> set,c,l,user_password
> $bof_pwd
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000905 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Then, to verify that the password has been set properly in the Global Registry, we can try to log in with the dm_bof_registry account:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udm_bof_registry -P$bof_pwd


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000906 started for user dm_bof_registry."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

If the password has been changed properly, the output will be similar to the one above: a session will be opened and the only command executed will be “quit”, which closes the iapi session automatically. That was pretty easy, right? Well, that’s clearly not all there is to do to change the BOF password, unfortunately…

 

The “problem” with the dm_bof_registry account is that it is used on all DFC clients to register them, to establish trust, and so on. Therefore, if you change the password of this account, you will need to reflect this change on all clients that connect to your Content Servers. In the steps below, I will provide some commands that can be used to do that on the typical DFC clients (JMS, xPlore, DA, D2, …). If I’m not covering one of your DFC clients, then the steps are basically always the same, only the commands differ (a generic sketch follows right after the password encryption below):

  • Listing all dfc.keystore files
  • Updating the dfc.properties
  • Removing/renaming the dfc.keystore files
  • Restarting the DFC clients
  • Checking that the dfc.keystore files have been recreated

 

Before going through the different DFC clients, you first need to encrypt the BOF user’s password because it is always used in its encrypted form, so let’s encrypt it on a Content Server:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${bof_pwd}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example (“AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0”) but this will be the encrypted password of our user. I will use this value in the commands below, so whenever you see it, just replace it with what your “java -cp …” command returned.
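Putting the five generic steps together for a DFC client not covered in the sections below, a minimal sketch could look like this (BASE_DIR, the dfc.properties locations and the restart command are placeholders to adapt; the encrypted password is the example value from above):

#!/bin/bash
# generic sketch: refresh the BOF registry password for one DFC client
BASE_DIR=/app/my_dfc_client   # placeholder: base directory of the client
ENC_PWD=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
# 1. list all dfc.keystore files
find ${BASE_DIR} -type f -name "dfc.keystore" -exec ls -l {} \;
# 2. update the dfc.properties files with the new encrypted password
for f in $(find ${BASE_DIR} -type f -name "dfc.properties"); do
  sed -i "s,dfc.globalregistry.password=.*,dfc.globalregistry.password=${ENC_PWD}," ${f}
done
# 3. rename the dfc.keystore files so that they will be recreated
for k in $(find ${BASE_DIR} -type f -name "dfc.keystore"); do
  mv "${k}" "${k}_bck_$(date "+%Y%m%d")"
done
# 4. restart the DFC client (placeholder command, depends on the client)
# service my_dfc_client restart
# 5. check that the dfc.keystore files have been recreated
find ${BASE_DIR} -type f -name "dfc.keystore" -exec ls -l {} \;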

 

I. Content Server

On the Content Server, the main DFC client is the JMS. You will have one dfc.properties for each JMS application, one global one for the CS, and so on, and we will update all of them with a few commands only. Normally the dfc.globalregistry.password should only be defined in the file $DOCUMENTUM_SHARED/config/dfc.properties. If you have this definition elsewhere, you should maybe consider using the “#include” statement to avoid duplicating the definitions, for example as sketched below.
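For illustration, a client-specific dfc.properties delegating to the global one could look like this (a minimal sketch; the path and the overridden property are just examples):

# client-specific dfc.properties: inherit everything from the global file...
#include /app/dctm/shared/config/dfc.properties
# ...and only override what differs for this client
dfc.session.max_count=200

This way, the dfc.globalregistry.password only has to be changed in one place. Now, back to the JMS: the commands below update the global dfc.properties, rename the keystores, restart the JMS and check that the keystores have been recreated: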

[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' $DOCUMENTUM_SHARED/config/dfc.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

If you do it properly, all the dfc.keystore files will be recreated with the restart, and you can verify that by comparing the output of the first and the last commands.

 

II. WebLogic Server

In this part, I will assume a WebLogic Server is used for the D2, D2-Config and DA applications. If you are using Tomcat instead, then just adapt the path. Below I will use:

  • $WLS_APPLICATIONS as the directory where all the Application WAR files are present. If you are using exploded applications (just a folder, not a WAR file) OR if you are using an external dfc.properties file (this is possible even with a WAR file, by extracting the dfc.properties from it), then the “jar -xvf” and “jar -uvf” commands aren’t needed.
  • $WLS_APPS_DATA as the directory where the Application Data are present (Application log files, dfc.keystore, cache, …)

 

These two folders might be the same depending on how you configured your Application Server. All I’m doing below is just updating the dfc.properties files for D2, D2-Config and DA in order to use the new encrypted password.

[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; done
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
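To double-check that the new encrypted password really ended up inside a WAR file, you can print the embedded dfc.properties without extracting it to disk (assuming unzip is available on the server):

[weblogic@weblogic_server_01 ~]$ unzip -p D2.war WEB-INF/classes/dfc.properties | grep dfc.globalregistry.password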

 

Once done, the next steps depend, again, on how you configured your Application Server. If you are using WAR files, you will need to redeploy them. If not, you might have to restart your Application Server for the change to be taken into account and for the keystore file to be re-created.
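If you need to script the redeployment of the WAR files as well, a minimal sketch using the standard weblogic.Deployer tool could be (admin URL, user and application names are placeholders; you will be prompted for the password):

[weblogic@weblogic_server_01 ~]$ . $WL_HOME/server/bin/setWLSEnv.sh
[weblogic@weblogic_server_01 ~]$ for app in D2 D2-Config da; do
>   java weblogic.Deployer -adminurl t3://weblogic_server_01:7001 -username weblogic -redeploy -name ${app}
> done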

 

III. Full Text Server

On the Full Text Server, it’s again the same procedure, this time for all Index Agents.

[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `ls $XPLORE_HOME/jboss7.1.1/server/DctmServer_*/deployments/IndexAgent.war/WEB-INF/classes/dfc.properties`; do sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ service xplore stop
[xplore@xplore_server_01 ~]$ service xplore start
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

Again, if you do it properly, all the dfc.keystore files will be recreated with the restart.

 

When everything has been done, just leave the environment up and running for some time and check the logs for authentication failures regarding the dm_bof_registry user. As you saw above, changing the dm_bof_registry password isn’t really complicated, but it’s quite repetitive and time consuming, so better script all this! :)

 

 

 

Cet article Documentum – Change password – 2 – CS – dm_bof_registry est apparu en premier sur Blog dbi services.
