Yann Neuhaus

dbi services technical blog

Oracle 12cR2: Pluggable database relocation

Thu, 2016-11-10 23:19

Here is, in my opinion, the most beautiful feature of the multitenant architecture. You know how I love Transportable Tablespaces. But here:

  • No need to put the source in read/only
  • No need to export/import the metadata logically
  • No need for any option: available even in Standard Edition

Standard Edition

I am in Standard Edition here in both source and target, no option required for this:

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 13:40:05 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

Source: PDB1 on CDB1

On server opc1 I have a container database CDB1 with one pluggable database PDB1 where I create a new table:

23:40:20 (opc1)CDB1 SQL>alter session set container=PDB1;
Session altered.
23:40:20 (opc1)CDB1 SQL>create table DEMO as select current_timestamp insert_timestamp,instance_name from v$instance;
Table created.
23:40:21 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:21 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1

Export encryption key

I’m in Oracle Public Cloud where tablespaces are encrypted. To ship a pluggable database I must export the keys. Here is the query to get them:

23:40:23 (opc1)CDB1 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';
 
KEY_ID
------------------------------------------------------------------------------
AWlnBaUXG0/gv4evS9Ywu8EAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

And I can filter with this query to export it:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

I can’t do that with auto-login wallet.

23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ AUTOLOGIN

Let’s open the wallet with password:

23:40:23 (opc1)CDB1 SQL>administer key management set keystore close;
keystore altered.
23:40:23 (opc1)CDB1 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ PASSWORD

and re-try my export:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

This file must be copied to the destination server. I did it with scp. You can also use dbms_file_transfer as you will need a database link anyway for the remote clone.
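As a side note, a dbms_file_transfer sketch could look like the following, pulled from the destination over the database link created below. The WALLET_EXP directory objects are assumptions for this example, and keep in mind that dbms_file_transfer only copies files whose size is a multiple of 512 bytes, so scp is often the simpler choice for a wallet export:

-- on the source (opc1): directory object exposing /tmp
SQL> create directory WALLET_EXP as '/tmp';
-- on the destination (opc2): directory object, then pull the file over the DB link
SQL> create directory WALLET_EXP as '/tmp';
SQL> exec dbms_file_transfer.get_file('WALLET_EXP','cdb2pdb1.p12','CDB1','WALLET_EXP','cdb2pdb1.p12');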

Import encryption key

On the destination server, where I have no user PDB yet in CDB2 (I’m limited to one PDB here without the multitenant option):

23:40:31 (opc2)CDB2 SQL>show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO

I have to import the encryption key:

23:40:31 (opc2)CDB2 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
 
23:40:31 (opc2)CDB2 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

I’m now ready to relocate my PDB as I’m sure I’ll be ready to open it.

Database link

The remote clone is done through a DB link. I’ve a TNS entry named CDB1:

23:40:31 (opc2)CDB2 SQL>select dbms_tns.resolve_tnsname('CDB1') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('CDB1')
--------------------------------------------------------------------------------
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=opc1)(PORT=1521))(CONNECT_DAT
A=(SERVER=DEDICATED)(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM
=oracle)(HOST=SE222.compute-opcoct.oraclecloud.internal)(USER=oracle))))
 
23:40:31 (opc2)CDB2 SQL>create database link CDB1 connect to C##DBA identified by oracle using 'CDB1';
Database link created.

DML on source

In order to show that the source doesn’t have to be read-only as in previous releases, I’m running the following inserts every 5 seconds:

23:40:44 (opc1)CDB1 SQL>commit;
Commit complete.
23:40:44 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:44 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
6 rows selected.

PDB remote clone

Here is the syntax.
I need to provide the master key of the source wallet.
RELOCATE is the new clause: the source PDB is relocated to the destination when the clone is opened.

23:40:48 (opc2)CDB2 SQL>create pluggable database PDB1 from PDB1@CDB1 keystore identified by "Ach1z0#d" relocate;
Pluggable database created.
23:41:08 (opc2)CDB2 SQL>

It took some time, shipping the datafiles through the DB link, but this is online.
I was still inserting during this time:

23:41:04 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

Note that the source needs to be in ARCHIVELOG mode and LOCAL UNDO mode to be able to do this, because synchronisation will be done by media recovery when we open the clone.
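A quick way to double-check both prerequisites on the source (the output shown is what you expect when they are met):

SQL> select log_mode from v$database;
 
LOG_MODE
------------
ARCHIVELOG
 
SQL> select property_value from database_properties where property_name='LOCAL_UNDO_ENABLED';
 
PROPERTY_VALUE
--------------
TRUE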

Open the clone

Now, the theory is that when we open the clone, DML is quiesced on the source during the recovery of the target, and sessions can continue on the target once it is opened.

23:41:09 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
alter pluggable database PDB1 open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
23:41:26 (opc2)CDB2 SQL>

Bad luck. Every time I tested this scenario, the first open after the relocate fails with a deadlock and the session on the source crashes:

23:41:09 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
10-NOV-16 11.41.09.773300 PM +00:00 CDB1
 
11 rows selected.
 
23:41:14 (opc1)CDB1 SQL> commit;
ERROR:
ORA-03114: not connected to ORACLE

It’s a good occasion to look at the traces.
We can see some messages about the recovery:

*** 2016-11-10T23:41:12.660402+00:00 (PDB1(3))
Media Recovery Log /u03/app/oracle/fast_recovery_area/CDB1/foreign_archivelog/PDB1/2016_11_10/o1_mf_1_24_2025109931_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

These foreign archived logs are a new type of file that you will see in the FRA in 12.2.

So I lost my session on the source, and now if I try again it works:

23:42:20 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
Pluggable database altered.
23:42:24 (opc2)CDB2 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

All the inserts that were committed on the source are there.
Even with this deadlock bug (SR 3-13618219421), it’s the easiest and fastest way to migrate a database, with minimum downtime. This is especially true in Standard Edition, where transportable tablespace import is not enabled.
Without the deadlock bug, the sessions on the source are supposed to keep running, only paused during the recovery, and then continue on the destination.

 


Documentum story – Disable PDF automatic generation but allow jpeg

Thu, 2016-11-10 00:00

We had a request from a customer who wanted the automatic PDF rendition to be disabled while still allowing the jpeg renditions to be used as thumbnails for D2. The PDF is generated by an external tool so it wasn’t needed here. Here is what we did:

Log in to the ADTS server and edit the following file:
<CTS_HOME>\config\CTSServerService.xml

For the repository named “DOCBASE1”, comment out the QueueProcessorContext tag as follows, and only if it includes the dm_autorender_win31 value:

<!-- Start of comment to be added
<QueueProcessorContext DocbaseName="DOCBASE1">
    <CTSServer AttributeName="queueItemName" AttributeValue="dm_autorender_win31"/>
    ...
</QueueProcessorContext>
       End of comment to be added -->

Then restart the services.

After the restart of the Content Transformation Services, wait for a few minutes and open the CTS log file <CTS_HOME>\logs\CTS_log.txt.

On the CTS log file:
– Ensure that there are no important errors and/or exceptions
– Search for the message “Product ADTS version <CTS_VERSION> is installed on repository: DOCBASE1”

Open the file <CTS_HOME>\logs\Polling_log.txt in order to verify that the changes have been applied successfully.

In this file, find all “DOCBASE1” related CTSLegacyQueueProcessor log entries by searching for the following text:

DOCBASE1 CTSLegacyQueueProcessor

There should not be any matching text in the log file.
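On the Windows host, a quick way to run this search from the command line is something like the following sketch, reusing the placeholder path from above:

findstr /C:"DOCBASE1 CTSLegacyQueueProcessor" "<CTS_HOME>\logs\Polling_log.txt"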

 

Note that you can use the jpeg renditions in the D2 4.5 Preview widget in order to display a kind of preview directly in the D2 interface.

 


Oracle 12c – DB_UNKNOWN in ASM

Wed, 2016-11-09 04:06

Have you ever noticed a DB_UNKNOWN directory in your ASM structure? It usually happens in combination with spfile creation in ASM, or with RMAN spfile restores to ASM.

The correct location is +DATA/<SID>/PARAMETERFILE/SPFILE.<#>.<#>, and an ASM alias from +DATA/<SID>/ pointing to it.

But sometimes, the spfile ends up in +DATA/DB_UNKNOWN/PARAMETERFILE/SPFILE.<#>.<#>.

Technically this is not an issue: the spfile in the DB_UNKNOWN directory is perfectly fine and can be used. However, you might need to adjust your init<SID>.ora in case you have a config like the following:

oracle@oel001:/u00/app/oracle/product/12.1.0.2/dbs/ [OCM121] cat initOCM121.ora
SPFILE='+data/DB_UNKNOWN/PARAMETERFILE/SPFILE.293.927371209'

Maybe you have a 4-node RAC; then you need to adjust it on every node. Maybe you have a cluster resource with an spfile entry; then you need to adjust that one as well. And besides that, which database does the DB_UNKNOWN directory belong to? Imagine you have 20 databases running and you need to find out which one has something in the DB_UNKNOWN directory, in case there are more entries.
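If you really decided to keep the spfile where it landed, re-pointing the cluster resource would look something like the following sketch (the point of this post, though, is that you should not have to live with that):

srvctl modify database -d OCM121 -p '+data/DB_UNKNOWN/PARAMETERFILE/SPFILE.293.927371209'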

No … it is not a good situation. It has to be corrected. But how?

First of all, let’s create a situation that ends up with a DB_UNKNOWN directory.

It is quite easy to do, typically with spfile restores or with a “create spfile from pfile”:

  1. Shutdown the DB
  2. Startup RMAN dummy instance
  3. Restore the spfile to pfile
  4. Shutdown the Instance
  5. Adjust the pfile
  6. Create the spfile from pfile while the DB is shutdown

Here is an example with 12cR1 (12.1.0.2). I am jumping directly to the RMAN restore, because the RMAN dummy instance was already explained in http://blog.dbi-services.com/oracle-12c-when-the-rman-dummy-instance-does-not-start-up/

Ok. Let’s check the current location of the spfile of the cluster resource.

oracle@oel001:/home/oracle/ [OCM121] srvctl config database -d OCM121 | grep -i spfile
Spfile: +DATA/OCM121/spfileOCM121.ora

Now we can run the RMAN restore of the spfile to a pfile. Restoring it to a pfile first has the advantage that we can take a look at all settings and maybe adjust them before we put it back into production.

run {
restore spfile to pfile '/tmp/initOCM121.ora' for db_unique_name='OCM121' from
'+fra/OCM121/AUTOBACKUP/2016_10_29/s_926511850.517.926511853';
}

Starting restore at 08-NOV-2016 11:01:04
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=364 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=12 device type=DISK
channel ORA_DISK_2: skipped, AUTOBACKUP already found
channel ORA_DISK_1: restoring spfile from AUTOBACKUP +fra/OCM121/AUTOBACKUP/2016_10_29/s_926511850.517.926511853
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 08-NOV-2016 11:01:14

The pfile was successfully created. Now we can correct some settings in the pfile if we want and then create a spfile again.

oracle@oel001:/home/oracle/ [OCM121] ls -l /tmp/initOCM121.ora
-rw-r--r-- 1 oracle asmadmin 1777 Nov  8 11:01 /tmp/initOCM121.ora

Ok. Let’s create the new spfile while the DB is shut down.

oracle@oel001:/home/oracle/ [OCM121] sqh

SQL*Plus: Release 12.1.0.2.0 Production on Tue Nov 8 11:03:56 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile='+DATA' from pfile='/tmp/initOCM121.ora';

File created.

Oops … and now it happened: the DB_UNKNOWN directory is created. While the database is shut down, Oracle does not know the DB_NAME, and so it has to create a placeholder directory to save the spfile.

ASMCMD> pwd
+data
ASMCMD> ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    CDB121/
                                        Y    DB_UNKNOWN/
                                        Y    OCM121/

ASMCMD> pwd
+data/DB_UNKNOWN/PARAMETERFILE
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   NOV 08 11:00:00  Y    SPFILE.293.927371209

However, this is not the configuration that we want. To correct it, clean up the DB_UNKNOWN entries, start your DB in nomount state and then execute the create spfile from pfile command again.
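The cleanup itself can be done from asmcmd, for example like this (a sketch; double-check the path before removing anything):

ASMCMD> rm -rf +data/DB_UNKNOWN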

SQL> startup nomount pfile=/tmp/initOCM121.ora
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2924928 bytes
Variable Size             436211328 bytes
Database Buffers         1157627904 bytes
Redo Buffers               13848576 bytes

SQL> create spfile='+DATA' from pfile='/tmp/initOCM121.ora';

File created.

And here we go: the spfile is in the correct location.

ASMCMD> pwd
+data/OCM121/PARAMETERFILE
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   NOV 08 11:00:00  Y    spfile.291.927372029

The only thing missing is the ASM alias. That one has to be created manually afterwards.

ASMCMD> cd +data/OCM121
ASMCMD> mkalias +data/OCM121/PARAMETERFILE/spfile.291.927372029 spfileOCM121.ora
ASMCMD>

Conclusion

It makes a big difference whether you create your spfile in the nomount state or while the database is shut down: you might end up with a totally different directory structure in ASM. With 12.1.0.2 and 11.2.0.4 the nomount state is enough to end up in the correct location. In earlier versions you might need to start up in mount state to have the same effect.

Cheers,
William

 

 

 


Oracle 12cR2 multitenant containers in SQL_TRACE

Tue, 2016-11-08 14:21

In multitenant, your session can switch between containers. For example, since 12.1, a common user can switch explicitly between CDB$ROOT and any PDB with ‘ALTER SESSION SET CONTAINER’. Any user connected to a PDB will also have its session switching implicitly when querying through metadata links and data links (the new name for object links). In 12.1 there was no way to trace this. This is fixed in 12.2.

I set sql_trace and get the tracefile name:

SQL> select value tracefile from v$diag_info where name='Default Trace File';
 
TRACEFILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_6307.trc
 
SQL> alter session set sql_trace=true;
Session altered.

The container ID is CON_ID=1 because I’m connected to the root:


SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:08:17.968360+02:00 (CDB$ROOT(1))
*** SESSION ID:(14.25101) 2016-09-04T16:08:17.968399+02:00
*** CLIENT ID:() 2016-09-04T16:08:17.968405+02:00
*** SERVICE NAME:(SYS$USERS) 2016-09-04T16:08:17.968410+02:00
*** MODULE NAME:(sqlplus@VM115 (TNS V1-V3)) 2016-09-04T16:08:17.968415+02:00
*** ACTION NAME:() 2016-09-04T16:08:17.968420+02:00
*** CLIENT DRIVER:(SQL*PLUS) 2016-09-04T16:08:17.968425+02:00
*** CONTAINER ID:(1) 2016-09-04T16:08:17.968430+02:00

In 12.1 you had no further information about the container in the trace file. This is improved in 12.2.

Explicit ALTER SESSION SET CONTAINER

I’ll run a simple query, then change to the container PDB (which is CON_ID=3 here) and run a query again:

SQL> select * from dual;
 
D
-
X
 
SQL> alter session set container=PDB;
Session altered.
 
SQL> select * from dual;
 
D
-
X

The lines starting with ‘***’ followed by a timestamp are not new. But now we also have the container name (here CON_NAME=PDB) and container ID (CON_ID=3):

SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:09:54.397448+02:00 (PDB(3))
*** CONTAINER ID:(3) 2016-09-04T16:09:54.397527+02:00

You get those lines for each ALTER SESSION SET CONTAINER, and you have the CON_NAME and CON_ID of the PDB: (PDB(3))

Implicit switch through data link

I’m still in PDB and I’ll query a data link view: DBA_PDBS. Data link views (previously called ‘object link’ views) query data from CDB$ROOT even when you are in a PDB. DBA_PDBS shows information about pluggable databases, which is stored in CDB$ROOT (because it must be available before the PDB is opened).

SQL> select count(*) from dba_pdbs;
 
COUNT(*)
----------
1
 

The execution of the query had to switch to CDB$ROOT (CON_ID=1) to get the rows and switch back to PDB (CON_ID=3):


SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:09:54.406379+02:00 (CDB$ROOT(1))
*** 2016-09-04T16:09:54.406676+02:00 (PDB(3))

If you look at the detail you will see that my query is parsed in my container:

=====================
PARSING IN CURSOR #139807307349184 len=29 dep=0 uid=0 oct=3 lid=0 tim=203051393258 hv=2380449338 ad='896cae38' sqlid='3cngtnf6y5jju'
select count(*) from dba_pdbs
END OF STMT
PARSE #139807307349184:c=0,e=53,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,plh=1333657383,tim=203051393256

I think the following is to check that the tables behind the data link view are valid in the PDB even if we don’t want to query them. This is only a parse call:

=====================
PARSING IN CURSOR #139807307295488 len=46 dep=1 uid=0 oct=3 lid=0 tim=203051393450 hv=1756598280 ad='7b5dfd58' sqlid='5ucyn75nb7408'
SELECT * FROM NO_OBJECT_LINK("SYS"."DBA_PDBS")
END OF STMT
PARSE #139807307295488:c=0,e=26,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=810534000,tim=203051393449
CLOSE #139807307295488:c=0,e=7,dep=1,type=1,tim=203051393490

Then when I execute my query:

EXEC #139807307349184:c=0,e=246,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,plh=1333657383,tim=203051393539

my session switches to root:

*** 2016-09-04T16:09:54.406379+02:00 (CDB$ROOT(1))

and the recursive query is parsed and executed in CDB$ROOT:
=====================
PARSING IN CURSOR #139807307379504 len=170 dep=1 uid=0 oct=3 lid=0 tim=203051393687 hv=1291428476 ad='895c6940' sqlid='g34kja56gm8mw'
SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) */ CON_ID FROM NO_OBJECT_LINK("SYS"."DBA_PDBS") "DBA_PDBS" WHERE "DBA_PDBS"."CON_ID"=0 OR "DBA_PDBS"."CON_ID"=3
END OF STMT
PARSE #139807307379504:c=0,e=44,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2042216988,tim=203051393685
EXEC #139807307379504:c=0,e=48,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2042216988,tim=203051393790
FETCH #139807307379504:c=0,e=20,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,plh=2042216988,tim=203051393826
STAT #139807307379504 id=1 cnt=1 pid=0 pos=1 obj=0 op='RESULT CACHE 8p3h095ufc042f32tf05b23qf3 (cr=0 pr=0 pw=0 str=1 time=18 us)'
STAT #139807307379504 id=2 cnt=0 pid=1 pos=1 obj=0 op='NESTED LOOPS (cr=0 pr=0 pw=0 str=0 time=0 us cost=2 size=16 card=1)'
STAT #139807307379504 id=3 cnt=0 pid=2 pos=1 obj=161 op='TABLE ACCESS BY INDEX ROWID CONTAINER$ (cr=0 pr=0 pw=0 str=0 time=0 us cost=1 size=11 card=1)'
STAT #139807307379504 id=4 cnt=0 pid=3 pos=1 obj=163 op='INDEX UNIQUE SCAN I_CONTAINER2 (cr=0 pr=0 pw=0 str=0 time=0 us cost=0 size=0 card=1)'
STAT #139807307379504 id=5 cnt=0 pid=2 pos=2 obj=36 op='INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 str=0 time=0 us cost=1 size=5 card=1)'
CLOSE #139807307379504:c=0,e=4,dep=1,type=1,tim=203051393959

You can note that the result cache is used for optimization, and that the query is run with NO_OBJECT_LINK() to prevent further data links, if any.

Then, my session switches back to my PDB:

*** 2016-09-04T16:09:54.406676+02:00 (PDB(3))

and execution of my query finishes:

FETCH #139807307349184:c=0,e=375,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=2,plh=1333657383,tim=203051393981
STAT #139807307349184 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=0 pr=0 pw=0 str=1 time=544 us)'
STAT #139807307349184 id=2 cnt=1 pid=1 pos=1 obj=0 op='DATA LINK FULL DBA_PDBS (cr=0 pr=0 pw=0 str=1 time=525 us cost=1 size=1300 card=100)'
FETCH #139807307349184:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1333657383,tim=203051394259
CLOSE #139807307349184:c=0,e=10,dep=0,type=1,tim=203051397922

You can see that the execution plan is explicit: ‘DATA LINK FULL’ in 12.2 (it was FIXED TABLE FULL X$OBLNK$ in 12.1).

_diag_cdb_logging

This new behaviour is controlled by an underscore parameter:

SQL> alter system set "_diag_cdb_logging"=thisIsMyWayToGetHelp;
alter system set "_diag_cdb_logging"=thisIsMyWayToGetHelp
*
ERROR at line 1:
ORA-00096: invalid value THISISMYWAYTOGETHELP for parameter _diag_cdb_logging,
must be from among long, short, off

By default in 12.2 the parameter is set to SHORT, which writes the traces as shown above:
SQL> alter system set "_diag_cdb_logging"=SHORT;

If you set it to OFF, you have the same behavior as in 12.1: a ‘*** CONTAINER ID:’ line is displayed for an explicit SET CONTAINER but no more information.

When set to LONG you get the CON_UID which may be useful for traces that cover plug/unplug operations:

SQL> select con_id,name,dbid,con_uid,guid from v$containers;

CON_ID NAME DBID CON_UID GUID
---------- -------- ---------- ---------- --------------------------------
1 CDB$ROOT 893728006 1 3817ED090B9766FDE0534440E40ABD67
2 PDB$SEED 1943618461 1943618461 3A29D20830E760B7E053734EA8C047BB
3 PDB 4128224117 4128224117 3A2C965DE81E15A8E053734EA8C023AC
 
SQL> host grep "^\*\*\*" &tracefile
*** 2016-09-04T16:50:43.462870+02:00 (PDB(3/4128224117))
*** CONTAINER ID:(3) 2016-09-04T16:50:43.463067+02:00
*** 2016-09-04T16:50:43.493035+02:00 (CDB$ROOT(1/1))
*** 2016-09-04T16:50:43.495053+02:00 (PDB(3/4128224117))

If you want more information about CON_ID, CON_UID, GUID, and a lot more about multitenant, the Oracle Database 12c Release 2 Multitenant (Oracle Press) book by Anton Els, Vit Spinka and Franck Pachot goes into all the details.

 


Oracle 12cR2 multitenant: Local UNDO

Tue, 2016-11-08 12:58

Pluggable Databases are supposed to be isolated, containing the whole of user data and metadata. This is the definition of dictionary separation coming with multitenant architecture: only system data and metadata are at CDB level. User data and metadata are in separate tablespaces belonging to the PDB. And this is what makes the unplug/plug available: because PDB tablespaces contain everything, you can transport their datafiles from one CDB to another.
However, if they are so isolated, can you explain why

  • You cannot flashback a PDB?
  • You need an auxiliary instance for PDB Point-In-Time recovery?
  • You need to put the PDB read-only before cloning it?


There is something that is not contained in your PDB but is at CDB level, and which contains user data. The UNDO tablespace is shared:

[Image: the UNDO tablespace shared at CDB level]

You cannot flashback a PDB because doing so requires rolling back the transactions that were ongoing at the point in time you flash back to. That information was in the UNDO tablespace at that time, but is not there anymore.

It’s the same idea with Point-In-Time recovery of a PDB. You need to restore the UNDO tablespace to get the UNDO records from that point in time. But you cannot restore it in place because it’s shared with other PDBs that need the current information. This is why you need an auxiliary instance for PDB PITR in 12.1.

Cloning a PDB cannot be done with ongoing transactions because their UNDO is not in the PDB. This is why it can be done only when the PDB is read-only.

12.2 Local UNDO

In 12.2 you can choose to have one UNDO tablespace per PDB, in local undo mode, which is the default in DBCA:

[Image: local undo mode is the default in DBCA]

With local undo, PDBs are truly isolated even when opened with ongoing transactions:

[Image: one UNDO tablespace per PDB with local undo]

Look at the ‘RB segs’ column from RMAN report schema:

[oracle@OPC122 ~]$ rman target /
 
Recovery Manager: Release 12.2.0.1.0 - Production on Tue Nov 8 18:53:46 2016
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: CDB1 (DBID=901060295)
 
RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 880 SYSTEM YES /u02/app/oracle/oradata/CDB1/system01.dbf
3 710 SYSAUX NO /u02/app/oracle/oradata/CDB1/sysaux01.dbf
4 215 UNDOTBS1 YES /u02/app/oracle/oradata/CDB1/undotbs01.dbf
5 270 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/CDB1/pdbseed/system01.dbf
6 560 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/CDB1/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/app/oracle/oradata/CDB1/users01.dbf
8 180 PDB$SEED:UNDOTBS1 NO /u02/app/oracle/oradata/CDB1/pdbseed/undotbs01.dbf
9 270 PDB1:SYSTEM YES /u02/app/oracle/oradata/CDB1/PDB1/system01.dbf
10 590 PDB1:SYSAUX NO /u02/app/oracle/oradata/CDB1/PDB1/sysaux01.dbf
11 180 PDB1:UNDOTBS1 YES /u02/app/oracle/oradata/CDB1/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/app/oracle/oradata/CDB1/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /u04/app/oracle/oradata/temp/temp01.dbf
2 64 PDB$SEED:TEMP 32767 /u04/app/oracle/oradata/temp/temp012016-10-04_11-34-07-330-AM.dbf
3 100 PDB1:TEMP 100 /u04/app/oracle/oradata/CDB1/PDB1/temp012016-10-04_11-34-07-330-AM.dbf

You have an UNDO tablespace in ROOT, in PDB$SEED and in each user PDB.
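The same can be checked with a quick query from CDB$ROOT (note that the CDB views may exclude PDB$SEED by default):

SQL> select con_id, tablespace_name from cdb_tablespaces where contents='UNDO' order by con_id;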

If you have a database in shared undo mode, you can move to local undo mode while in ‘startup upgrade’ mode. PDBs will then have an UNDO tablespace created when they are opened. You can also create an UNDO tablespace in PDB$SEED.
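As a minimal sketch of that switch, run from CDB$ROOT (the PDB undo tablespaces are then created automatically at their next open):

SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup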

Yes, in 12.2, you can open the PDB$SEED read/write for this purpose:


18:55:59 SQL> alter pluggable database PDB$SEED open read write force;
 
Pluggable database altered.
 
18:56:18 SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ WRITE NO
3 PDB1 READ WRITE NO
18:56:23 SQL> alter pluggable database PDB$SEED open read only force;
 
Pluggable database altered.

But remember this is only allowed for local undo migration.

The recommendation is to run in local undo mode, even in single-tenant.

More about it in the 12cR2 Multitenant book.

 


Oracle 12cR2 PL/SQL new feature: TNSPING from the database

Tue, 2016-11-08 11:55

Database links are resolved with the server TNS_ADMIN configuration (sqlnet.ora and tnsnames.ora). You can use tnsping to check the resolution, but that supposes that you are on the server and have set the same environment as the one that started the database.
In 12.2 you have a new package to check that: DBMS_TNS. It’s the kind of little new feature that makes our life easier.

The easy way to verify a connection string is to use tnsping. Here is an example with an EZCONNECT resolution:

[oracle@SE122 ~]$ tnsping //10.196.234.38/CDB1.opcoct.oraclecloud.internal
TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 08-NOV-2016 17:45:34
Copyright (c) 1997, 2016, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
Used EZCONNECT adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))
OK (0 msec)

The full connection description is displayed here before contacting the listener.

This resolution is valid only with a specific TNS configuration (which is here /u01/app/oracle/product/12.2.0/dbhome_1/network/admin). However, you may have different configurations (using the TNS_ADMIN environment variable) and if it’s not set consistently, you may have different results.
Basically:

  • When you connect locally to the server (no SQL*Net, no listener), the Oracle session inherits the client environment
  • When you connect remotely to a service statically registered on the listener, the Oracle session inherits the environment which started the listener
  • When you connect remotely to a service dynamically registered on the listener, the Oracle session inherits the environment which started the database

DBMS_TNS

So here is this new package:

SQL> desc dbms_tns
FUNCTION RESOLVE_TNSNAME RETURNS VARCHAR2
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
TNS_NAME VARCHAR2 IN

And you can run it when connected to the database to see how the name is resolved:

SQL> select dbms_tns.resolve_tnsname('&_connect_identifier') from dual;
old 1: select dbms_tns.resolve_tnsname('&_connect_identifier') from dual
new 1: select dbms_tns.resolve_tnsname('//10.196.234.38/CDB1.opcoct.oraclecloud.internal') from dual
 
DBMS_TNS.RESOLVE_TNSNAME('//10.196.234.38/CDB1.OPCOCT.ORACLECLOUD.INTERNAL')
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))

The resolution is done without attempting to contact the listener. This IP address does not exist on my network:

select dbms_tns.resolve_tnsname('//10.1.1.1/XX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('//10.1.1.1/XX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XX)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.1.1.1)(PORT=1521)))

As you can see, the client identification is sent here (PROGRAM and HOST).

Demo

I’ll use this new feature to prove my assumption above about which environment is used when connecting locally or through dynamic or static service.

I create 3 directories with different names for the SERVICE_NAME in order to see which one is used:


mkdir -p /tmp/tns_lsnr ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_lsnr/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_lsnr/tnsnames.ora
mkdir -p /tmp/tns_sess ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_sess/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_sess/tnsnames.ora
mkdir -p /tmp/tns_inst; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_inst/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_inst/tnsnames.ora

In addition, I’ll need a listener configuration with a static service, let’s call it STATIC:


cat > /tmp/tns_lsnr/listener.ora <<END
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$HOSTNAME)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=$ORACLE_HOME)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
END

Here’s a summary of the different configurations:


$ tail /tmp/tns*/*
 
==> /tmp/tns_inst/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_inst/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
==> /tmp/tns_lsnr/listener.ora <==
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=SE122.compute-opcoct.oraclecloud.internal)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=/u01/app/oracle/product/122EE)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
==> /tmp/tns_lsnr/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_lsnr/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
==> /tmp/tns_sess/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_sess/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

I start the listener and the instance each with their own environment, and set the session environment to yet another one:


export TNS_ADMIN=/tmp/tns_lsnr ; lsnrctl start
export TNS_ADMIN=/tmp/tns_inst ; sqlplus / as sysdba <<< startup
export TNS_ADMIN=/tmp/tns_sess

Now it’s time to use this new DBMS_TNS when connecting locally, through the dynamic service (CDB1) and through the static service (STATIC):


SQL> connect system/oracle
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected locally, the TNS_ADMIN from the shell environment running sqlplus is used.


SQL> connect system/oracle@//localhost/CDB1
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected through the dynamic service, the TNS_ADMIN used to start the instance is used.


SQL> connect system/oracle@//localhost/STATIC
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected through the static service, the TNS_ADMIN used to start the listener is used.

So what?

You should use a consistent environment setting in order to be sure that all sessions will use the same name resolution. But if you have a doubt about it, DBMS_TNS can help to troubleshoot. It’s better than DBMS_SYSTEM.GET_ENV as it does the name resolution rather than just showing the environment variables.
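For comparison, the DBMS_SYSTEM.GET_ENV approach (provided you have execute privilege on that package) only returns the raw environment variable of the server process, along these lines:

SQL> variable tns_admin varchar2(200)
SQL> exec dbms_system.get_env('TNS_ADMIN',:tns_admin);
SQL> print tns_admin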

Want to know quickly where all database links are going? Here it is:

SQL> select username,dbms_tns.resolve_tnsname(host) from cdb_db_links;

 


Oracle 12cR2 : Partitioning improvements – auto-list partitioning

Tue, 2016-11-08 09:16

The second release of Oracle Database 12c (12cR2) comes with new improvements regarding partitioning.
In this blog post I’ll show one of them: auto-list partitioning.
Auto-list partitioning is an extension of list partitioning. It enables the automatic creation of partitions for new values inserted into the partitioned table.
[Image: “Oracle Partitioning in Oracle Database 12c Release 2” – Hermann Bär]

Let’s have a look at the syntax. As you can see below, you only have to specify the new “Automatic” keyword after the “Partition by list” clause:
SQL> CREATE TABLE city(
city_zip NUMBER(5),
city_name VARCHAR2(30)
)
PARTITION BY LIST (city_name) AUTOMATIC
(PARTITION p_delemont VALUES ('Delemont'));


Table created.
SQL>

The auto-list partitioned table is created with only one partition, explicitly created at table creation:
SQL> SELECT partition_name, high_value, partition_position FROM dba_tab_partitions WHERE table_name = 'CITY';

PARTITION_NAME HIGH_VALUE PARTITION_POSITION
-------------------- ------------------------------ ------------------
P_DELEMONT 'Delemont' 1

Let’s insert new cities into the table:
SQL> INSERT INTO city VALUES (2800, 'Delemont');
1 row created.

SQL> INSERT INTO city VALUES (4001, 'Basel');
1 row created.

SQL> INSERT INTO city VALUES (8001, 'Zurich');
1 row created.

SQL> INSERT INTO city VALUES (1000, null);
1 row created.

Great! Every insertion succeeds without the famous partitioning error message: “ORA-14400: inserted partition key does not map to any partition”.
And if we check the partitions…
SQL> SELECT partition_name, high_value, partition_position FROM dba_tab_partitions WHERE table_name = 'CITY';

PARTITION_NAME HIGH_VALUE PARTITION_POSITION
-------------------- -------------------- ------------------
P_DELEMONT 'Delemont' 1
SYS_P5004 'Basel' 2
SYS_P5005 'Zurich' 3
SYS_P5006 '' 4

…one partition has been automatically created for each value. Note that a partition has also been created to store the “null” values.
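If the system-generated names (SYS_Pxxxx) bother you, they can be renamed afterwards like any other partition, for example:

SQL> ALTER TABLE city RENAME PARTITION SYS_P5004 TO p_basel;
 
Table altered.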

Now, if you want to evolve a list-partitioned table to an auto-list partitioned table, you can do it easily and in one shot:
SQL> ALTER TABLE cars SET AUTOMATIC;
ALTER TABLE cars SET AUTOMATIC
*
ERROR at line 1:
ORA-14852: SET [SUB]PARTITIONING AUTOMATIC is not legal on this table.

Oops, this error occurs because the table contains a DEFAULT partition. As you can imagine, it’s now not necessary to have one, so you MUST drop it before moving to auto-list partitioning (take care of the data stored inside it! ;-) ):
SQL> ALTER TABLE cars DROP PARTITION (P_DEFAULT);

Table altered.

SQL> ALTER TABLE cars SET AUTOMATIC;

Table altered.

The table is now auto-list partitioned:
SQL> SELECT partitioning_type, autolist, partition_count FROM dba_part_tables WHERE table_name = 'CITY';

PARTITIONING_TYPE AUT PARTITION_COUNT
------------------------------ --- ---------------
LIST YES 4

Conclusion
From my point of view, this partitioning improvement is a really good one. Indeed, if your list-partitioned table has a large number of distinct values, creating and managing partitions for each of them could become onerous work. But now with 12cR2 you can forget this task: partition creation for new values is automated!

 


EDB Postgres Advanced Server 9.6 BETA released

Tue, 2016-11-08 05:25

Yesterday EnterpriseDB released the beta of the next version of Postgres Advanced Server. You can sign up for the beta program here and download the software here.

As usual this release includes all the features of the community release of PostgreSQL (version 9.6) plus the EDB additions such as Oracle compatibility. The release notes are here.

 


Oracle Public Cloud: create a database from command line

Mon, 2016-11-07 15:11

You love the Oracle Public Cloud with its simple web interface? Great. But what if you want to automate a database creation from the command line?
Easy with curl and the REST API.

JSON

First, you need to create a JSON file with all the information for your service.
Everything is documented: https://apicatalog.oraclecloud.com/ui/views/apicollection/oracle-public/database/1.1/serviceinstances

$ cat createrequestbody.json
 
{
"description": "Mon Nov 7 21:03:39 CET 2016",
"edition": "EE_HP",
"level": "PAAS",
"serviceName": "CDB122",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"version": "12.2.0.1",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"parameters": [
{
"type": "db",
"usableStorage": "15",
"adminPassword": "P4ss#ord",
"sid": "CDB122",
"pdbName": "PDB1",
"failoverDatabase": "no",
"backupDestination": "NONE"
}
] }

You can see that you have exactly the same information as from the GUI.

Create Instance

Then, you run the following curl command (with the cacert.pem certificate in the current directory):

$ curl --include --request POST --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" --header "Content-Type:application/json" --data @createrequestbody.json https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
x-oracle-dms-ecid: 005GBi63mCP3n315RvWByd0003Ri0004Zg
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:03:59 GMT

Here “opcoct” is my identity domain ID. You find it in the X-ID-TENANT-NAME header and in the URL.
myuser@oracle.com:P4ss#ord is the user and password in the domain.

From the GUI you can see that the creation has started:

[Image: the Database Cloud Service console showing the creation in progress]

DBaaS instance information

Here is the information for the database service


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
x-oracle-dms-ecid: 005GBiK7U4I3z015Rvl3id00071a0000yo
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:07:52 GMT
content-length: 1244
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"num_nodes": 1,
"level": "PAAS",
"edition": "EE_HP",
"shape": "oc3",
"subscriptionType": "MONTHLY",
"creation_job_id": "2738110",
"num_ip_reservations": 1,
"backup_destination": "NONE",
"failover_database": false,
"rac_database": false,
"sid": "CDB122",
"pdbName": "PDB1",
"demoPdb": "",
"listenerPort": 1521,
"timezone": "UTC",
"is_clone": false,
"clone_supported_version": "16.3.1",
"active_jobs": [
{
"active_job_operation": "start-db-compute-resources",
"active_job_id": 2738113,
"active_job_messages": [] },
{
"active_job_operation": "create-dbaas-service",
"active_job_id": 2738110,
"active_job_messages": [] }
],
"compute_site_name": "EM003_Z19",
"jaas_instances_using_service": ""
}

The status is ‘in progress’. Let’s look at the compute service.

Compute instance information

From the compute service, you can see if there is already an IP address assigned here:


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122/servers
 
HTTP/2 200
server: Oracle-Application-Server-11g
content-language: en
access-control-allow-headers: Content-Type, api_key, Authorization
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
x-oracle-dms-ecid: 005GBiMizXo3z015Rvl3id00071a0004p_
access-control-allow-origin: *
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:08:35 GMT
content-length: 430
 
[{
"status": "Running",
"creation_job_id": "2738110",
"creation_time": "2016-11-07T20:03:59.524+0000",
"created_by": "myuser@oracle.com",
"shape": "oc3",
"sid": "CDB122",
"pdbName": "PDB1",
"listenerPort": 1521,
"connect_descriptor": "CDB122:1521\/PDB1",
"connect_descriptor_with_public_ip": "null:1521\/PDB1",
"initialPrimary": true,
"storageAllocated": 142336,
"reservedIP": "",
"hostname": "CDB122"
}]

No IP address yet. I have the job id (2738110) so that I can check it later.
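If you want to script the wait, a simple sketch is to poll the same instance endpoint until the status changes (assuming the JSON stays formatted as shown above):

while curl --silent --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122 | grep --quiet '"status": "In Progress"'
do
  sleep 60
done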

Job information


$ curl --include --request GET --cacert ./cacert.pem --user myuser@oracle.com:P4ss#ord --header "X-ID-TENANT-NAME:opcoct" https://dbcs.emea.oraclecloud.com/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
 
HTTP/2 202
server: Oracle-Application-Server-11g
location: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/status/create/job/2738110
content-language: en
access-control-allow-origin: *
access-control-allow-headers: Content-Type, api_key, Authorization
retry-after: 60
access-control-allow-methods: GET, POST, DELETE, PUT, OPTIONS, HEAD
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
x-oracle-dms-ecid: 005GBiOeMbz3n315RvWByd0003Ri00048d
service-uri: https://dbcs.emea.oraclecloud.com:443/paas/service/dbcs/api/v1.1/instances/opcoct/CDB122
x-frame-options: DENY
content-type: application/json
vary: user-agent
date: Mon, 07 Nov 2016 20:09:08 GMT
 
{
"service_name": "CDB122",
"version": "12.2.0.1",
"status": "In Progress",
"description": "Mon Nov 7 21:03:39 CET 2016",
"identity_domain": "opcoct",
"creation_time": "2016-11-07T20:03:59.524+0000",
"last_modified_time": "2016-11-07T20:03:59.505+0000",
"created_by": "myuser@oracle.com",
"sm_plugin_version": "16.4.3-541",
"service_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/opcoct\/CDB122",
"message": ["Starting Compute resources..."],
"job_start_date": "Mon Nov 07 20:04:01 GMT 2016",
"job_status": "InProgress",
"job_operation": "create-dbaas-service",
"job_request_params": {
"edition": "EE_HP",
"vmPublicKeyText": "ssh-dss AAAAB3NzaC1kc3MAAACBAMrw5Au0hHP1BT/W3gcSg+Fwq36LdfzroubjS6g8RSvcaeltk1O/uQwJV73MCsBDgs4PaAuDekZTW5w6kN8ESd6r6BGLm/sETHNiRzOWWap3ds18iiaLJWcMbKRlZUWLdfhGemryWZaQIFrSNkfE5YkFz4V4m5d4EwKpLzIthKh3AAAAFQDtjTsaF7nydePPJPDqYERu8QlcMQAAAIBjl8NxEmE7nud7b4xuLkuJKnwlf2urHrOZGcQapNUZAjuehe6/8VhPB4GebZb52GlyYOuELDP6e9PXyFRxTfTPff22JE5tPM8vTjCmFEKhBspl43YurJxwvDtvgTNKk5Zp5MBXMDjQ8KNHXlpnRrfh45acHI8gs0KlH51+e7j+6AAAAIA/Q8rVC4g+MBepJGKed2ar0JzralZo7Q8vsZfQ889Y3wkaBJl2/SRaaW1JNmkB20eZIEbRkh9e/ex07ryKg65dgUzU4/2dE2CSxplG0vSf/xp7hYr/bJzR1SZXMKbAdZ2wg+SGaTlKWAAS9xhvKGw1jVWdVgacYJOPl343bMKkuw==",
"count": "2",
"provisioningTimeout": "180",
"subscriptionType": "MONTHLY",
"createStorageContainerIfMissing": "false",
"dbConsolePort": "1158",
"listenerPort": "1521",
"serviceName": "CDB122",
"namespace": "dbaas",
"version": "12.2.0.1",
"timezone": "UTC",
"pdbName": "PDB1",
"level": "PAAS",
"tenant": "opcoct",
"serviceInstance": "CDB122",
"description": "Mon Nov 7 21:03:39 CET 2016",
"failoverDatabase": "false",
"emExpressPort": "5500",
"ncharset": "AL16UTF16",
"trial": "false",
"sid": "CDB122",
"noRollback": "false",
"operationName": "create-dbaas-service",
"goldenGate": "false",
"backupDestination": "NONE",
"ibkup": "false",
"charset": "AL32UTF8",
"serviceVersion": "12.2.0.1",
"shape": "oc3",
"identity_domain_id": "opcoct",
"serviceType": "dbaas",
"usableStorage": "15",
"disasterRecovery": "false",
"server_base_uri": "https:\/\/dbcs.emea.oraclecloud.com:443\/paas\/service\/dbcs\/",
"computeSiteName": "EM003_Z19",
"isRac": "false"
}
}

REST Endpoint

Here, my test is on the EMEA datacenter, which is why the URL starts with https://dbcs.emea.oraclecloud.com.
If you don’t know yours, you can check on My Cloud Services where you have all the information:
[Image: the REST endpoint shown in My Cloud Services]

 


Documentum story – Migrate repository cryptography from crypto key to the crypto keystore

Fri, 2016-11-04 11:45

In this blog post I will explain the migration from the crypto key (aek.key) to the crypto keystore (lockbox) for an upgraded repository. During customer activities, I was involved in the migration of a repository hosted on a Content Server 6.7 SP2 to a new infrastructure. A Content Server 7.2 P05 was on the new infrastructure, where the lockbox was already used by other repositories. The prerequisite to achieve this migration is to have an already configured repository using the lockbox, which will help to re-encrypt the password of the database owner stored later in the dbpasswd.txt file. The repository for which we wanted to migrate the crypto key was properly upgraded to Content Server 7.2 P05 using the Content Server Configuration Program, keeping the existing aek.key from the old system. Our goal was to align the cryptography mechanism to use the lockbox.

Please note that the above instructions cannot be used if the file store is encrypted with the Trusted Content Services.

We also recommend removing the LDAP objects and re-creating them later so the passwords can be properly re-encrypted and the DER and PEM files re-created under $DOCUMENTUM/dba/secure/ldapdb.

First of all we need to check which encryption algorithm is currently used, by retrieving the r_crypto_mode from the docbase config:

1> select r_crypto_mode from dm_docbase_config
2> GO
r_crypto_mode
----------------------------------------------------------------
3DES_RSA1024_SHA256
(1 row affected)

Before shutting down the repository we will retrieve a list of all users that are configured with an inline password. We will have to reset their passwords as the encryption mechanism will be changed.

Query to retrieve inline password users:

select user_login_name from dm_user where user_source = 'inline password';
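Once the repository is back up with the lockbox, each of these users will need a new password. One common way is to set the user_password attribute from IAPI as a superuser; a minimal sketch where 'jdoe' and the new password are placeholders:

API> retrieve,c,dm_user where user_login_name='jdoe'
API> set,c,l,user_password
SET> newpassword
API> save,c,l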

We can now shutdown the repository:

$DOCUMENTUM/dba/dm_shutdown_repo1

It’s a best practice to backup the aek.key and the lockbox files located under the $DOCUMENTUM/dba/secure folder:

cd $DOCUMENTUM/dba
tar -czvf secure-bck-MigAek-$(date +%Y%m%d).tgz secure

It’s also a good practice to backup all files that will be updated by this procedure:

cd $DOCUMENTUM/dba/config/repo1
cp server.ini server.ini.bck-MigAek-$(date +%Y%m%d)
cp dbpasswd.txt dbpasswd.txt.bck-MigAek-$(date +%Y%m%d)

We will now connect to the Database schema to remove all encryption values. During the next startup of the repository, encryption values will be properly regenerated automatically.

sqlplus repo1/passwd@db1

Once connected you can issue the following SQL instructions :

update dm_docbase_config_s set i_crypto_key = ' ', i_ticket_crypto_key = ' ';

delete from dmi_vstamp_s where i_application = 'dm_docbase_config_crypto_key_init';
delete from dmi_vstamp_s where i_application = 'dm_docbase_config_ticket_crypto_key_init';

delete dm_sysobject_s where r_object_id = (select r_object_id from dm_public_key_certificate_s where key_type = 1);
delete dm_sysobject_r where r_object_id = (select r_object_id from dm_public_key_certificate_s where key_type = 1);
delete dm_public_key_certificate_s where key_type = 1;

delete dm_sysobject_s where r_object_id = (select r_object_id from dm_cryptographic_key_s where key_type = 1);
delete dm_sysobject_r where r_object_id = (select r_object_id from dm_cryptographic_key_s where key_type = 1);
delete dm_cryptographic_key_s where key_type = 1;
commit;

The repository initialization file can now be updated to move from the crypto key (aek.key) to the crypto keystore (lockbox):

vi $DOCUMENTUM/dba/config/repo1/server.ini

Inside this file, you should comment the following lines:

crypto_keystore=Local
crypto_mode=3DES_RSA1024_SHA256
crypto_keyname=aek.key

Then uncomment/add the following lines:

crypto_mode = AES256_RSA1024_SHA256
crypto_keystore = Local
crypto_lockbox = lockbox.lb
crypto_keyname = CSaek

Please make sure that the crypto_mode in the first section is the same as the one retrieved before stopping the repository. The crypto_mode of the second (new) section can contain the value that you want. The strongest mode available is “AES256_RSA1024_SHA256”, so that’s probably the value that you will want to set. In our case, we faced an issue because of the old repository and we had to keep the same crypto_mode as the original, which was “3DES_RSA1024_SHA256”. Therefore, at the end, our server.ini file contained the following:

crypto_mode = 3DES_RSA1024_SHA256
crypto_keystore = Local
crypto_lockbox = lockbox.lb
crypto_keyname = CSaek

Once this is done, we now need to encrypt the database owner password of the repository repo1. But how can we do this since the repository isn’t started? Simple: we need an already available repository on this content server (it’s a prerequisite actually) which is already configured to use the lockbox. We used the global registry for that purpose:

> iapi gr_dbi

Please enter a user (dmadmin):
Please enter password for dmadmin:


EMC Documentum iapi - Interactive API interface
(c) Copyright EMC Corp., 1992 - 2015
All rights reserved.
Client Library Release 7.2.0050.0084


Connecting to Server using docbase gr_dbi
[DM_SESSION_I_SESSION_START]info: "Session 010f583d80095f8a started for user dmadmin."


Connected to Documentum Server running Release 7.2.0050.0214 Linux64.Oracle
Session id is s0
API> initcrypto,c
...
OK
API> encrypttext,c,xxxx
...
DM_ENCR_TEXT_V2=AAAAEJpJA5bVkJGghYFqrik3kxJ0gaWIRNvhVmZA586sBuFx7NqKnbKSpdUYf/BvJgn10OQpoZXL1T7Y2L+RmwZRUPkWqsv139zbU7u2vw7UYKX3

We can now set the encrypted password in the dbpasswd.txt file as follows:

vi $DOCUMENTUM/dba/config/repo1/dbpasswd.txt

Add the following content inside this file, using the DM_ENCR_TEXT_V2 value returned by the encrypttext command above:

DM_ENCR_TEXT_V2=AAAAEPAcQcFYDlYJ26cqrQ17oC0yXek7E+1g83e069TNkYXjwoRd5zXpZmCm4NdrNYAJDLJ4iEKEzrzFlAuGUWw25KXs/98L0AI7VJx2DLUZNkjX

Once this has been done, the repository is ready to start:

$DOCUMENTUM/dba/dm_start_repo1

Check the log files and validate that the repository has been properly started. Something similar should be displayed:

2016-09-05T09:11:28.907641 21003[21003] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase repo1 opened"

2016-09-05T09:11:28.907790 21003[21003] 0000000000000000 [DM_SERVER_I_SERVER]info: "Setting exception handlers to catch all interrupts"

2016-09-05T09:11:28.907820 21003[21003] 0000000000000000 [DM_SERVER_I_START]info: "Starting server using service name: repo1"

2016-09-05T09:11:29.402450 21003[21003] 0000000000000000 [DM_SERVER_I_LAUNCH_MTHDSVR]info: "Launching Method Server succeeded."

2016-09-05T09:11:29.420505 21003[21003] 0000000000000000 [DM_SERVER_I_LISTENING]info: "The server is listening on network address (Service Name: repo1, Host Name: content_server_01 :V4 IP)"

2016-09-05T09:11:29.420579 21003[21003] 0000000000000000 [DM_SERVER_I_IPV6_DISABLED]info: "The server can not listen on IPv6 address because the operating system does not support IPv6"

2016-09-05T09:11:29.426518 21003[21003] 0000000000000000 [DM_SERVER_I_LISTENING]info: "The server is listening on network address (Service Name: repo1_s, Host Name: content_server_01:V4 IP)"

2016-09-05T09:11:29.429969 21003[21003] 0000000000000000 [DM_SERVER_I_IPV6_DISABLED]info: "The server can not listen on IPv6 address because the operating system does not support IPv6"

2016-09-05T09:11:31.162134 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent master (pid : 21060, session 010095b980000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:31.163095 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21061, session 010095b98000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:32.165584 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21074, session 010095b98000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2016-09-05T09:11:33.167476 21003[21003] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 21087, session 010095b98000000c) is started sucessfully."
2016-09-05T09:11:34.167673 21003[21003] 0000000000000000 [DM_SERVER_I_START]info: "Sending Initial Docbroker check-point "

2016-09-05T09:11:34.188560 21003[21003] 0000000000000000 [DM_MQ_I_DAEMON_START]info: "Message queue daemon (pid : 21105, session 010095b980000456) is started sucessfully."
2016-09-05T09:11:34.488373 21102[21102] 010095b980000003 [DM_DOCBROKER_I_PROJECTING]info: "Sending information to Docbroker located on host (content_server_01) with port (1490). Information: (Config(repo1), Proximity(1), Status(Open), Dormancy Status(Active))."
Mon Sep 05 09:12:28 2016 [INFORMATION] [AGENTEXEC 21546] Detected during program initialization: Version: 7.2.0050.0214 Linux64
Mon Sep 05 09:12:32 2016 [INFORMATION] [AGENTEXEC 21546] Detected during program initialization: Agent Exec connected to server repo1: [DM_SESSION_I_SESSION_START]info: "Session 010095b980502d02 started for user dmadmin."

 

The migration of the encryption key to the encryption keystore is nearly completed. Two tasks remain: the password reset of the internal Documentum users such as dm_bof_registry, dmc_wdk_preferences_owner and dmc_wdk_presets_owner, and the password reset of the application-specific users (the list created just before stopping the repository), but these will not be described in this blog.

As said before, if an LDAP configuration is required, it is better to remove the LDAP objects before the migration and recreate them now that the repository is running properly with the new encryption mechanisms.

All the steps and information needed to migrate the encryption mechanism from the crypto key to the lockbox have been provided and I hope this can be useful for you.

 

Cet article Documentum story – Migrate repository cryptography from crypto key to the crypto keystore est apparu en premier sur Blog dbi services.

SQL Server AlwaysOn availability groups and statistic issues on secondaries

Fri, 2016-11-04 11:23

I would like to share with you an interesting issue you may face while using SQL Server AlwaysOn availability groups and secondary read-only replicas. For those who use secondary read-only replicas as reporting servers, keep reading this blog post because it is about update statistics behavior on the secondary replicas, and as you may know, cardinality estimation accuracy is an important part of query performance in this case.

So a couple of days ago, I had an interesting discussion with a French MVP friend of mine about an availability group issue he faced at a customer site, related to the above topic. Without going into details now, he was surprised to see differences between the primary replica and one secondary in the last update statistic dates as well as in the rows sampled values for specific statistics. The environment concerned runs SQL Server 2014 SP2.

First of all, let’s say that having different update statistic dates between a primary and a secondary is part of a normal process. Indeed, changes related to statistic objects are replicated over time from the primary to the secondaries, but it is not uncommon to encounter situations where data on the primary is modified until the update statistic threshold is reached, and then to notice auto update statistics triggered by SQL Server on the secondary replica by reporting queries. But what is more surprising is that the behavior noticed by my friend was exacerbated in his specific context with large tables.

Let’s demonstrate with an example. I was able to easily reproduce the issue on my environment, but I was also interested in testing the behavior of different versions of SQL Server in response to this specific issue. In fact, I tried to figure out whether the problem concerns only a specific build of SQL Server – SQL Server 2014 SP2 in this case – or all SQL Server versions.

 

[Figure: demo architecture]

 

Let’s use the AdventureWorks2012 database with the bigTransactionHistory table, which contains roughly 344 million rows (343,910,073 rows).

The statistic information you will see later in this blog post comes from each secondary, each running a specific SQL Server version (2012, 2014 and 2016 respectively), using the following T-SQL script on each secondary replica.

use AdventureWorks2012;

select 
	object_name(s.object_id) as table_name,
	s.name as stat_name,
	s.is_temporary,
	ds.last_updated,
	ds.modification_counter,
	ds.rows,
	ds.rows_sampled,
	CAST(ds.modification_counter * 100. / ds.rows AS DECIMAL(5,2)) AS modif_ratio,
	ds.steps
from sys.stats as s (nolock)
cross apply sys.dm_db_stats_properties(s.object_id, s.stats_id) as ds
where s.object_id = object_id('dbo.bigTransactionHistory');
go

 

Let’s begin with the starting scenario where I inserted approximately 20% of the initial data into the bigTransactionHistory table, as you may notice below. During the test we will focus only on the idx_cl_big_TransactionHistoryTransactionDate statistic related to the clustered index on the bigTransactionHistory table.

[Figure: statistics after data modification, before the FULLSCAN update]

Let’s then update the idx_cl_big_TransactionHistoryTransactionDate statistic with FULLSCAN from the primary replica. This operation may be part of a maintenance plan on a daily or monthly basis depending on your scenario. Here is a picture of the statistic information from each secondary:

[Figure: statistics after data modification and after the FULLSCAN update]
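
For reference, the FULLSCAN update executed on the primary in this step is simply of the following form (a minimal sketch using the table and statistic names from this demo):

use AdventureWorks2012;

UPDATE STATISTICS dbo.bigTransactionHistory (idx_cl_big_TransactionHistoryTransactionDate)
WITH FULLSCAN;
go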

Regarding this first output, we may notice that the modification counter from the sys.dm_db_stats_properties DMF did not drop to zero. To be honest, I expected a value equal to 0 here. In addition, executing a reporting query from each secondary did not have any effect in this case. The reporting query is as follows and it is designed to specifically use the idx_cl_big_TransactionHistoryTransactionDate statistic.

use AdventureWorks2012;


select count(*)
from dbo.bigTransactionHistory
where TransactionDate between '20060101' and '20080101';
go

 

Let’s keep going and continue by inserting a new bunch of data (approximately 10% more). After running a new update statistics operation with FULLSCAN from the primary (let’s say we are again in the situation where a maintenance plan comes into play), here is the corresponding statistic information output from each secondary:

[Figure: statistics after data insert and after the FULLSCAN update]

As expected, the row modification counter value increased up to 24%, but once again we may notice that running update statistics with FULLSCAN on the primary does not correctly reset the row modification counter on the secondary, regardless of the SQL Server version. Let’s run the reporting query from each secondary and have a look at the statistic information output:

[Figure: statistics after data insert, FULLSCAN update, and reading data from the secondary]

Well, it seems that some update statistics mechanism came into play, but surprisingly we get different results between versions. Let’s focus first on SQL Server 2012 (first line), where an auto update statistics operation was triggered by SQL Server. Thus the last_updated column value was updated, the concerned statistic is now defined as temporary on the concerned secondary replica, and the rows sampled value is different from the previous step (423928 vs 45774517). This is because SQL Server used a default sampling algorithm in this case, which does not correspond to the one used in the previous step with the FULLSCAN method.

Then if we move quickly to the last two lines (SQL Server 2014 and 2016 respectively), we may notice only one change, which concerns the is_temporary column, and no changes concerning either the last update statistic date, the modification counter value or the sampled rows. At this point, I’m not sure I understand the reason. Is it a metadata issue? Is it normal behavior? Well, I will come back and update this section if I get further information.

Let’s continue by performing the same previous tasks (insert a bunch of data and then update statistics with FULLSCAN from the primary). The statistic output from each secondary is as follows:

 

[Figure: statistics after the FULLSCAN update and two data inserts]

I’m sure you are beginning to understand what’s happening here. The update statistics with FULLSCAN from the primary replica seems to never reset the modification counter on the secondaries. This is a big problem, because if we execute the reporting query on each secondary we now have a good chance of quickly invalidating what has been done by the update statistics operation with FULLSCAN from the primary. In our context, the main concern is the sampled rows value, which can lead to inaccurate cardinality estimations. Let’s run the reporting query again from each secondary and have a look at the corresponding statistics information output:

[Figure: statistics after the FULLSCAN update, two data inserts, and reading data from the secondary]

Got it! This time, each concerned statistic switched to temporary and the modification counter was reset to 0. Note also that a default sampling was used in this case, in contrast to the previous picture that concerned the update statistics with FULLSCAN. And if I keep going this way, I will face the same behavior every time, for all versions of SQL Server since 2012.

Let’s summarize the situation: in common scenarios, we would probably not pay attention to this specific issue because cardinality estimation will not be affected and the statistics concerned by reporting queries on the secondaries are smaller by orders of magnitude. But my friend encountered a specific scenario with large tables where the issue is magnified.

As a workaround, you may consider excluding the concerned statistic(s) from the auto update statistics algorithm (NORECOMPUTE option), as sketched below. You may also vote for the connect item opened by my friend if it makes sense for you!
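
A minimal sketch of that workaround, reusing the statistic from this demo (run on the primary; NORECOMPUTE remains in effect until a subsequent update statistics without it):

UPDATE STATISTICS dbo.bigTransactionHistory (idx_cl_big_TransactionHistoryTransactionDate)
WITH FULLSCAN, NORECOMPUTE;
go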

Hope it helps!


Cet article SQL Server AlwaysOn availability groups and statistic issues on secondaries est apparu en premier sur Blog dbi services.

Dbvisit replicate when you flashback the target

Thu, 2016-11-03 16:00

I had a quick question today about Dbvisit replicate as a solution to feed an Operational Data Store. The need is to run some end-of-month jobs on data from a specific state. The best way is to stop the APPLY at that fixed state for the duration of the job: then there is no need for flashback query and no risk of ORA-1555. And what if we know this state only afterwards? Easy if the target is in Enterprise Edition, running with FLASHBACK ON. Then how do we continue the APPLY? It’s easy with Dbvisit replicate. Here is an example.

One solution would be to create a restore point before doing the flashback, in order to be able to return to the state before the flashback and restart the APPLY where it was stopped. But if we do that, we need to open the database READ ONLY to run the job on it.
The other solution is to be able to restart the APPLY at the point where the database has been flashed back. Do you think we have to get the right SCN, reset some configuration with it, etc.? No. There’s something very simple and practical with Dbvisit replicate:

  • The MINE configuration is stored in the source database
  • The APPLY configuration is stored in the destination database

When you install Dbvisit for the first time, you may wonder why there is not only one repository. But this architecture makes it easy as the metadata is stored with the related data.

In my case, if I flashback the target database, the APPLY repository is flashed back as well, so the APPLY continues exactly at the right point.

But just in case let’s test it.
I use my old #repattack VMs, which explains why the dbvrep version is old and why the time is in the New Zealand time zone. Here is a running replication:

/ Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6648759 (11/04/2016 09:14:08).
APPLY IS running. Currently at plog 392 and SCN 6648742 (11/04/2016 09:14:06).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 100% Mine:210/210 Unrecov:0/0 Applied:210/210 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ADDRESSES: 100% Mine:210/210 Unrecov:0/0 Applied:210/210 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.CARD_DETAILS: 100% Mine:195/195 Unrecov:0/0 Applied:195/195 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ORDER_ITEMS: 99% Mine:1553/1553 Unrecov:0/0 Applied:1550/1550 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.ORDERS: 99% Mine:1245/1245 Unrecov:0/0 Applied:1243/1243 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.INVENTORIES: 99% Mine:1523/1523 Unrecov:0/0 Applied:1521/1521 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
REPOE.LOGON: 99% Mine:1493/1493 Unrecov:0/0 Applied:1491/1491 Conflicts:0/0 Last:04/11/2016 09:09:13/OK
--------------------------------------------------------------------------------------------------------------------------------------------

I want to flash back to a quarter of an hour earlier, at 09:00, and because I’m on Oracle XE I’ll restore rather than flashback:

[oracle@target ~]$ rman target /
 
Recovery Manager: Release 11.2.0.2.0 - Production on Fri Nov 4 09:10:40 2016
 
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: XE (DBID=2736105154)
 
RMAN> startup force mount;
 
Oracle instance started
database mounted
...

RMAN> restore database until time "timestamp'2016-11-04 09:00:00'";
 
Starting restore at 04-NOV-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=10 device type=DISK

RMAN> recover database until time "timestamp'2016-11-04 09:00:00'";
 
Starting recover at 04-NOV-16
using channel ORA_DISK_1
 
starting media recovery
...
media recovery complete, elapsed time: 00:02:17
Finished recover at 04-NOV-16
 
RMAN> sql "alter database open resetlogs";
sql statement: alter database open resetlogs

So the APPLY stopped because the destination was down, but the MINE continues:

- Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6654476 (11/04/2016 09:27:12).
Could not connect to APPLY process. Process not started or connection refused.
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: ---% Mine:300/300 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ADDRESSES: ---% Mine:300/300 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.CARD_DETAILS: ---% Mine:277/277 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ORDER_ITEMS: ---% Mine:2178/2178 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.ORDERS: ---% Mine:1735/1735 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.INVENTORIES: ---% Mine:2129/2129 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
REPOE.LOGON: ---% Mine:2109/2109 Unrecov:0/0 Applied:0/0 Conflicts:0/0 Last:--/--
--------------------------------------------------------------------------------------------------------------------------------------------

My database has been flashed back to its state at 09:00 and I can do what I want. Then I just re-start the APPLY:

[oracle@target dbvrep_XE]$ sh dbvrep_XE-run-target.sh
Initializing......done
DDC loaded from database (352 variables).
Dbvisit Replicate version 2.7.06.4485
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /u01/app/oracle/dbvrep_XE/dbvrep_XE-APPLY.ddc loaded.
Starting process APPLY...started

and there is nothing to do manually: it restarts from where it needs to, applying the changes from 09:00:

/ Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6655130 (11/04/2016 09:28:33).
APPLY IS running. Currently at plog 392 and SCN 6645037 (11/04/2016 09:05:53).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 46% Mine:312/312 Unrecov:0/0 Applied:145/12 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ADDRESSES: 46% Mine:312/312 Unrecov:0/0 Applied:145/12 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.CARD_DETAILS: 46% Mine:289/289 Unrecov:0/0 Applied:135/11 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ORDER_ITEMS: 49% Mine:2247/2247 Unrecov:0/0 Applied:1105/80 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.ORDERS: 49% Mine:1793/1793 Unrecov:0/0 Applied:890/60 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.INVENTORIES: 49% Mine:2199/2199 Unrecov:0/0 Applied:1083/77 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
REPOE.LOGON: 48% Mine:2183/2183 Unrecov:0/0 Applied:1055/75 Conflicts:0/0 Last:04/11/2016 09:22:39/OK
--------------------------------------------------------------------------------------------------------------------------------------------

This is quick as it has already been mined. The APPLY quickly resolved the gap:

- Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 30 days
MINE IS running. Currently at plog 392 and SCN 6655589 (11/04/2016 09:29:36).
APPLY IS running. Currently at plog 392 and SCN 6655567 (11/04/2016 09:29:34).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS: 99% Mine:319/319 Unrecov:0/0 Applied:317/184 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ADDRESSES: 99% Mine:319/319 Unrecov:0/0 Applied:317/184 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.CARD_DETAILS: 99% Mine:296/296 Unrecov:0/0 Applied:294/170 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ORDER_ITEMS: 99% Mine:2291/2291 Unrecov:0/0 Applied:2289/1264 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.ORDERS: 100% Mine:1828/1828 Unrecov:0/0 Applied:1828/998 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.INVENTORIES: 100% Mine:2238/2238 Unrecov:0/0 Applied:2238/1232 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
REPOE.LOGON: 99% Mine:2235/2235 Unrecov:0/0 Applied:2232/1252 Conflicts:0/0 Last:04/11/2016 09:23:32/OK
--------------------------------------------------------------------------------------------------------------------------------------------

Things are simple when they are well designed. The APPLY has to synchronize with the destination, so they have put the APPLY repository into the destination. No problem if the replication went further than the point in time you need to freeze. Any solution is possible as long as the whole database is consistent: flashback database, PITR, snapshots, etc.
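
For completeness, here is a minimal sketch of what the same step would look like on an Enterprise Edition target with FLASHBACK ON (an illustration only, using the 09:00 timestamp from this example; it was not run in the XE test above):

shutdown immediate
startup mount
flashback database to timestamp to_timestamp('2016-11-04 09:00:00','YYYY-MM-DD HH24:MI:SS');
alter database open resetlogs;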

 

Cet article Dbvisit replicate when you flashback the target est apparu en premier sur Blog dbi services.

Momentum16 – Day 3 – Documentum with Docker

Thu, 2016-11-03 13:42

The last day of the Momentum 16 in Barcelona has been a short one… Indeed there were presentations only in the morning and the event ended at 13:00 with the farewell lunch. So this will be a short blog too!

 

So first thing in the morning, I had the opportunity to attend a very good presentation about containerizing your on-premise environment with Docker. This was actually the main purpose of this third day for me: learn more about Documentum with Docker. Dennis Buis (ING) and Mike Mohen (Dell EMC) presented what has been done at ING and what is coming soon with the Documentum 7.3 release around Docker. Dell EMC will soon provide Docker images, each containing one of the numerous components that will be available: Content Server + DocBroker, JMS, D2 Client, D2 Config, DA, BPM, aso… The purpose of Documentum with Docker is to accelerate your deployment/upgrade processes by separating all components into simple containers. This is really a huge challenge because there are a lot of open questions, BUT once it is mature enough and once the processes and documentation are complete enough, it will be really great. We will be able to build complete environments within a few days instead of weeks, with the assurance that everything will always be set up in the same way, because images are images! You can use them as many times as you want and you will always get the same result.

[Photo: Mike Mohen presenting Docker @ Momentum 16]

 

After this first session, I wanted to practice/play a little bit with Docker and therefore I participated in a hackathon simply named Using Docker with Documentum. In this hackathon, I was indeed able to play with deployment and upgrade processes of Documentum with Docker as well as creating a custom DA image from a Dockerfile. In addition to that, I also had to manage the deployment models, the security and data management/storage. For the latter, it is simply because the data should not be stored in containers… Indeed, if a container is upgraded, then all data will be lost and that’s not what you want ;).

 

The morning ended with a closing keynote by Beau Lotto (neuroscientist and founder of Lotto Lab), which was actually very funny. He explained and showed us, among other things, why the brain is an extraordinary thing and why information is meaningless! I will let you think about that ;).

 

So in conclusion, the first day of the Momentum 16 was a little bit boring, the second one was a little bit better and finally the third one was too short but really interesting.

 

Cet article Momentum16 – Day 3 – Documentum with Docker est apparu en premier sur Blog dbi services.

12c online move datafile in same filesystem.

Wed, 2016-11-02 16:03

On Linux, when you move a datafile with “mv” within the same filesystem, it’s just a rename: there is no copy. In 12c you can move a datafile online, where Oracle takes care of the move at OS level. But be careful: even if you stay in the same filesystem, moving a datafile online does a copy of the file.

Linux mv

I have a file, /u01/oradata/test1.txt and I move it to /u01/oradata/test2.txt within the same filesystem:

mv /u01/oradata/test1.txt /u01/oradata/test2.txt

Actually, I’m running it with strace, tracing file operations for these files:

strace -e trace=file mv /u01/oradata/test1.txt /u01/oradata/test2.txt

Then I can see clearly that there is no open() call but just a rename():

execve("/usr/bin/mv", ["mv", "/u01/oradata/test1.txt", "/u01/oradata/test2.txt"], [/* 29 vars */]) = 0
stat("/u01/oradata/test2.txt", 0x7ffcfa624270) = -1 ENOENT (No such file or directory)
lstat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
lstat("/u01/oradata/test2.txt", 0x7ffcfa623f20) = -1 ENOENT (No such file or directory)
rename("/u01/oradata/test1.txt", "/u01/oradata/test2.txt") = 0

If I do the same to another filesystem:

strace -e trace=file mv /u01/oradata/test2.txt /u02/oradata/test2.txt

a rename() is attempted:

execve("/usr/bin/mv", ["mv", "/u01/oradata/test2.txt", "/u02/oradata/test2.txt"], [/* 29 vars */]) = 0
stat("/u02/oradata/test2.txt", 0x7fff1e2b3340) = -1 ENOENT (No such file or directory)
lstat("/u01/oradata/test2.txt", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
lstat("/u02/oradata/test2.txt", 0x7fff1e2b2ff0) = -1 ENOENT (No such file or directory)
rename("/u01/oradata/test2.txt", "/u02/oradata/test2.txt") = -1 EXDEV (Invalid cross-device link)

but because it’s another filesystem, the “mv” command behaves like a “cp”: it opens the source for read and the target for write, creating it if it does not exist:

unlink("/u02/oradata/test2.txt") = -1 ENOENT (No such file or directory)
lgetxattr("/u01/oradata/test2.txt", "security.selinux", "unconfined_u:object_r:unlabeled_t:s0", 255) = 37
open("/u01/oradata/test2.txt", O_RDONLY|O_NOFOLLOW) = 3
open("/u02/oradata/test2.txt", O_WRONLY|O_CREAT|O_EXCL, 0600) = 4
newfstatat(AT_FDCWD, "/u01/oradata/test2.txt", {st_mode=S_IFREG|0644, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
unlinkat(AT_FDCWD, "/u01/oradata/test2.txt", 0) = 0

Move datafile

Let’s do the same from the database where I’ve created the same datafile:

SQL> create tablespace DEMO datafile '/u01/oradata/test1.txt' size 5M;
Tablespace created.

I get my shadow process PID:

SQL> set define %
SQL> column spid new_value spid
SQL> select spid from v$process join v$session on paddr=addr where sid=sys_context('userenv','sid');
 
SPID
------------------------
7257
 
SQL> host ps -fp %spid
UID PID PPID C STIME TTY TIME CMD
oracle 7257 7256 93 21:35 ? 00:00:03 oracleCDB1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))

and run strace on it:

SQL> host strace -o /tmp/tmp.log -e trace=file -p %spid &

I move the file online to the same filesystem:

SQL> alter database move datafile '/u01/oradata/test1.txt' to '/u01/oradata/test2.txt';
Database altered.

and read the trace:
grep /test /tmp/tmp.log


stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
statfs("/u01/oradata/test1.txt", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=3063586, f_bfree=1014477, f_bavail=853095, f_files=786432, f_ffree=672545, f_fsid={1366374386, -1275721753}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11
open("/u01/oradata/test1.txt", O_RDWR|O_DSYNC) = 11
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
statfs("/u01/oradata/test1.txt", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=3063586, f_bfree=1014477, f_bavail=853095, f_files=786432, f_ffree=672545, f_fsid={1366374386, -1275721753}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
statfs("/u01/oradata/test1.txt", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=3063586, f_bfree=1014477, f_bavail=853095, f_files=786432, f_ffree=672545, f_fsid={1366374386, -1275721753}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/oradata/test1.txt", O_RDONLY) = 11

At this point the source datafile is opened for read. It continues with:

open("/u01/oradata/test2.txt", O_RDONLY) = -1 ENOENT (No such file or directory)
stat("/u01/oradata/test2.txt", 0x7ffd0201e5d8) = -1 ENOENT (No such file or directory)
open("/u01/oradata/test2.txt", O_RDWR|O_CREAT|O_EXCL|O_SYNC, 0660) = 11

and now the destination datafile is opened for write, created if it does not exist (existence had been tested just before). Then it starts the copy:

stat("/u01/oradata/test2.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
stat("/u01/oradata/test2.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
open("/u01/oradata/test2.txt", O_RDONLY) = 11
stat("/u01/oradata/test2.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
open("/u01/oradata/test2.txt", O_RDONLY) = 11
statfs("/u01/oradata/test2.txt", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=3063586, f_bfree=1014475, f_bavail=853093, f_files=786432, f_ffree=672544, f_fsid={1366374386, -1275721753}, f_namelen=255, f_frsize=4096}) = 0
open("/u01/oradata/test2.txt", O_RDONLY) = 11
open("/u01/oradata/test2.txt", O_RDWR) = 11
open("/u01/oradata/test2.txt", O_RDWR|O_DSYNC) = 11

At the end, because I didn’t use the KEEP option, the source file is deleted:

stat("/u01/oradata/test1.txt", {st_mode=S_IFREG|0640, st_size=5251072, ...}) = 0
unlink("/u01/oradata/test1.txt") = 0
stat("/u01/app/oracle/diag/rdbms/cdb1/CDB1/log/test", {st_mode=S_IFDIR|0750, st_size=4096, ...}) = 0
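
As a side note, here is a minimal sketch of the same move keeping the source file, with the KEEP clause mentioned above (not run in this test):

SQL> alter database move datafile '/u01/oradata/test1.txt' to '/u01/oradata/test2.txt' keep;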

So what?

As with any online move operation, you need twice the space during the copy. Here, the source file is still used for read and write until the move finishes, so this cannot be a rename. A rename would have to put the datafile offline, even for a short time. If you can afford a small downtime, then you may prefer the offline rename (“mv” at OS level, ALTER DATABASE RENAME FILE at database level).
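
For comparison, a minimal sketch of that offline rename, assuming the DEMO tablespace created earlier in this post (a short outage for that tablespace is required):

SQL> alter tablespace DEMO offline normal;
SQL> host mv /u01/oradata/test2.txt /u01/oradata/test3.txt
SQL> alter database rename file '/u01/oradata/test2.txt' to '/u01/oradata/test3.txt';
SQL> alter tablespace DEMO online;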

 

Cet article 12c online move datafile in same filesystem. est apparu en premier sur Blog dbi services.

Momentum16 – Day 2 – C6, Load Testing and REST API

Wed, 2016-11-02 13:27

So today is the second day of the Momentum 16 in Barcelona and as expected, this day contained many more technical presentations and discussions than the first day. I attended several interesting presentations and hackathon/hands on sessions as well so I haven’t really had time to rest today.

 

So first thing in the morning, I started with a two-hour hackathon session related to how it is possible to augment Documentum reliability and performance. This hackathon was led by Henrik Rexed from Neotys. This company is providing Neoload, a load and performance testing tool that realistically simulates user activity and monitors infrastructure behavior so you can eliminate bottlenecks in all your web and mobile applications. This load test solution isn’t tied to Documentum, so you can actually load test everything, but they do have some experience with Documentum, which makes it even better. In this session, Henrik presented the methodology that should be used to load test and monitor each and every application, starting during the project phase with unit testing and unit load, to end up with complete SLA verifications by load testing the whole application. One thing that is important to note and remember is that load testing each and every component one by one is good, but that’s not enough. Sometimes, your system will break only when all components are solicited together, and that’s what you should test in the end!

 

After that, I attended a session presented by Mirko Weller (Volkswagen) and Christian Meier (fme) related to how 60k users are accessing the Documentum Platform at Volkswagen. This wasn’t actually a technical presentation but I wanted to follow it because we, at dbi services, are also managing a similar Documentum Platform at one of our customers with so many users and environments/instances that I thought it would be good to compare a little bit with what others are doing and to exchange feelings and ideas about how to improve our Platform.

 

Finally, at the end of the morning, I attended a session about how to provide the right interface to your users at the right time. Boris Carbonneill (C6) and Hugo Simon-Chautemps (Sanofi) presented the “C6 Mobile” application, the “C6 Office” interface as well as their “any” solution. The purpose of these three products is to help your users access their documents whenever they want with the best possible user experience. I actually found these applications/UIs quite good. There is a good balance in the sizing, colors, features displayed, aso…

  • The C6 Mobile application can be used to access documents while traveling directly using your phone and it can be used to quickly review/approve your tasks.
  • The C6 Office is actually an add-in for Microsoft Word, PowerPoint and Excel that lets you create/update documents based on templates defined in D2 and rules/permissions/ACLs defined in D2-Config, without having to launch an internet browser. What you have to do is just start Office, connect to the repository and work directly inside Microsoft’s application. For the moment it apparently does not work for Outlook. Maybe in a future version?
  • The any solution can be used in the Cloud to merge all kinds of repositories coming from Documentum, SharePoint, OpenText (and more) and store all that in one place. This solution actually makes me think of InfoArchive, which provides similar things and some other/different features… You can synchronize documents from D2 to the Cloud to share them with external people, for example, or just do that so that you have a duplicate copy in case it is needed (e.g. copying IQs/OQs/DR/IT stuff to the Cloud might be a good idea: in case there is a disaster and your DMS isn’t there anymore, you would still have enough to actually execute and document your Disaster Recovery!)

 

At the beginning of the afternoon, I followed another two-hour hackathon on the following subject: Developing Custom Applications with Documentum APIs. I’m not a Documentum developer, but I wanted to do this hackathon so I could learn some things related to the Documentum REST API, and it was a good session. There were three different levels for this hackathon: beginner, intermediate and advanced. I chose the intermediate one and I had to build an application using Angular and the Documentum REST API. That was actually pretty fun and I was able to play with the API calls, aso… Really cool.

 

This concludes the summary of the second day. See you tomorrow for the third and last one!

 

Cet article Momentum16 – Day 2 – C6, Load Testing and REST API est apparu en premier sur Blog dbi services.

Momentum16 – Day 2 – What’s new in Documentum 7.3 ?

Wed, 2016-11-02 13:24

As described in my last post related to Momentum 16, I was excited to see the session “What’s new in Documentum 7.3”. Do not worry, I could still sleep last night. But are my expectations fulfilled?

 

So yesterday I wrote about PostgreSQL, which will decrease the cost of DB licensing and therefore your TCO. It seems that, based on benchmarks, there is still some room for small improvements to reach the same performance as with traditional DBs. Dell EMC is working on it with the PostgreSQL experts. Let’s see with real cases, based on the applications’ usage.

 

So a lot of new features have been described. In this post, I will list some of them which will help me and my team deliver an even better service. Of course this has to be tested to see if these new features can really be applied to our customers’ platforms.

  • Docker and Documentum: starting with Documentum 7.3, Docker is now supported. Not only the Content Server but also D2, xCP, aso… I guess this could help us when we have to build a platform, using Docker images as templates for the Content Server, D2 and their web servers for instance.
  • Renaming a repository: this can be helpful in case you build a Docker image which can then be used as a template for your Content Server.
  • Changing the Docbase ID: same as the previous point. Build a Docker image, use it for a container, then change the docbase name and ID to have the Content Server and repositories installed as you need.
  • Changing the installation owner: can this be used in case you have to move one repository to another Content Server installation where the installation owner is not the same? This has to be studied and tested…
  • Dormant state for non-admins: with this new feature you can work on your repository while normal users cannot access it. So for maintenance tasks, I think this can be helpful too.
  • Upgrade process without touching the database: this should decrease the upgrade time.
  • JDK update tool: with this tool, JDK patches and security updates can be applied. It cannot be used to upgrade to a new major version (like from Java 7 to Java 8).
  • xPlore 1.6: fast search within folders and non-admin reports are some of the enhancements in the fulltext area.
  • UCF will no longer be used; it is replaced by an applet-free transfer with no more dependencies on the JRE plugin or applet.

 

In addition to that, SAML 2.0 Single Sign-On, Cross Forest Kerberos, more REST services and CMIS 1.1 are also some new features coming with Documentum 7.3! So I hope that we can download this new version in a few weeks so we can play with it, do tests and see how this can fit with our daily work and future visions.

 

Now two questions:

  •  Are my expectations fulfilled?
    I would say yes, at 90%. Regarding the remaining 10%, I talked to the Content Server and D2 product managers and they were able to give me some answers which reassure me for the future.
  • Am I more excited now compared to yesterday, waiting for the session? I guess yes :-)

 

New tasks and challenges are coming…

 

Cet article Momentum16 – Day 2 – What’s new in Documentum 7.3 ? est apparu en premier sur Blog dbi services.

Documentum story – Documentum JMS Log Configuration

Wed, 2016-11-02 01:00

The aim of this blog is to provide you with a way to configure the JMS logs in order to align the logging of all applications regarding date information, log rotation and retention. Some changes have to be done on the JBoss container as well as on the log4j configuration of each deployed JMS application (acs.ear, ServerApps.ear and bpm.ear).

General configuration

First, go to the JMS configuration folder at $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/
The JBoss version may vary depending on your Content Server version.
Back up the standalone.xml file as follows:

cp standalone.xml standalone.xml_$(date +%Y%m%d).log

Then edit the file standalone.xml by replacing each pattern-formatter with the following configuration:

<pattern-formatter pattern="%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %s%E%n"/>

Note that you can change this setting, which will change the way the log files look, but try to be consistent with other environments and components.
Now go to the application deployments: $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
Once again, depending on your Content Server version, you may have to go into deploy instead of deployments.

For ServerApps.ear

Back up the current log4j.properties file:

cp ./ServerApps.ear/APP-INF/classes/log4j.properties ./ServerApps.ear/APP-INF/classes/log4j.properties_$(date +%Y%m%d).log

Then edit ./ServerApps.ear/APP-INF/classes/log4j.properties and set it like this:

log4j.rootCategory=WARN, A1, F1
 log4j.category.MUTE=OFF
 log4j.additivity.tracing=false
 log4j.category.tracing=DEBUG, FILE_TRACE
#------------------- CONSOLE --------------------------
 log4j.appender.A1=org.apache.log4j.ConsoleAppender
 log4j.appender.A1.threshold=ERROR
 log4j.appender.A1.layout=org.apache.log4j.PatternLayout
 log4j.appender.A1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE --------------------------
 log4j.appender.F1=org.apache.log4j.RollingFileAppender
 log4j.appender.F1.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/ServerApps.log
 log4j.appender.F1.MaxFileSize=10MB
 log4j.appender.F1.layout=org.apache.log4j.PatternLayout
 log4j.appender.F1.MaxBackupIndex=10
 log4j.appender.F1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- ACS --------------------------
 log4j.category.acs=WARN, ACS_LOG
 log4j.appender.ACS_LOG=org.apache.log4j.RollingFileAppender
 log4j.appender.ACS_LOG.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/AcsServer.log
 log4j.appender.ACS_LOG.MaxFileSize=100KB
 log4j.appender.ACS_LOG.layout=org.apache.log4j.PatternLayout
 log4j.appender.ACS_LOG.MaxBackupIndex=10
 log4j.appender.ACS_LOG.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE_TRACE --------------------------
 log4j.appender.FILE_TRACE=org.apache.log4j.RollingFileAppender
 log4j.appender.FILE_TRACE.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/ServerApps_trace.log
 log4j.appender.FILE_TRACE.MaxFileSize=100MB
 log4j.appender.FILE_TRACE.layout=org.apache.log4j.PatternLayout
 log4j.appender.FILE_TRACE.MaxBackupIndex=10
 log4j.appender.FILE_TRACE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n

We changed the conversion pattern in order to add more information when logging. For example, here we added “z” in order to show the time zone. With such a configuration it is easier to compare several logs generated from different servers in different time zones.

We also added MaxFileSize and MaxBackupIndex in order to manage the retention. With the settings above, the trace appender for example keeps the active file plus up to 10 backups of 100MB each, so it will never exceed roughly 1.1GB on the file system. The drawback is that if a lot of logs are generated, the files rotate fast and the older ones are replaced by the new ones.

For acs.ear

You can do the same as before; first back up the configuration file:

cp ./acs.ear/lib/configs.jar/log4j.properties ./acs.ear/lib/configs.jar/log4j.properties_$(date +%Y%m%d).log

Then edit ./acs.ear/lib/configs.jar/log4j.properties and set it like this:

log4j.rootCategory=WARN, A1, F1
 log4j.category.MUTE=OFF
 log4j.additivity.tracing=false
 log4j.category.tracing=DEBUG, FILE_TRACE
#------------------- CONSOLE --------------------------
 log4j.appender.A1=org.apache.log4j.ConsoleAppender
 log4j.appender.A1.threshold=ERROR
 log4j.appender.A1.layout=org.apache.log4j.PatternLayout
 log4j.appender.A1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE --------------------------
 log4j.appender.F1=org.apache.log4j.RollingFileAppender
 log4j.appender.F1.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/acs.log
 log4j.appender.F1.MaxFileSize=10MB
 log4j.appender.F1.layout=org.apache.log4j.PatternLayout
 log4j.appender.F1.MaxBackupIndex=10
 log4j.appender.F1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- ACS --------------------------
 log4j.category.acs=WARN, ACS_LOG
 log4j.appender.ACS_LOG=org.apache.log4j.RollingFileAppender
 log4j.appender.ACS_LOG.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/AcsServer.log
 log4j.appender.ACS_LOG.MaxFileSize=100KB
 log4j.appender.ACS_LOG.layout=org.apache.log4j.PatternLayout
 log4j.appender.ACS_LOG.MaxBackupIndex=10
 log4j.appender.ACS_LOG.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE_TRACE --------------------------
 log4j.appender.FILE_TRACE=org.apache.log4j.RollingFileAppender
 log4j.appender.FILE_TRACE.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/acs_trace.log
 log4j.appender.FILE_TRACE.MaxFileSize=100MB
 log4j.appender.FILE_TRACE.layout=org.apache.log4j.PatternLayout
 log4j.appender.FILE_TRACE.MaxBackupIndex=10
 log4j.appender.FILE_TRACE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#-------------------- ATMOS LOGGING ---------------------
 log4j.logger.com.documentum.content.store.plugin.atmos=DEBUG,ACS_LOG
 log4j.logger.com.emc.esu=WARN,ACS_LOG

 

For bpm.ear

You can do the same as before; first back up the configuration file:

cp ./bpm.ear/APP-INF/classes/log4j.properties ./bpm.ear/APP-INF/classes/log4j.properties_$(date +%Y%m%d).log

Then edit ./bpm.ear/APP-INF/classes/log4j.properties and set it like this:

log4j.rootCategory=WARN, A1, F1
 log4j.category.MUTE=OFF
 log4j.additivity.tracing=false
 log4j.category.tracing=DEBUG, FILE_TRACE
#------------------- CONSOLE --------------------------
 log4j.appender.A1=org.apache.log4j.ConsoleAppender
 log4j.appender.A1.threshold=ERROR
 log4j.appender.A1.layout=org.apache.log4j.PatternLayout
 log4j.appender.A1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE --------------------------
 log4j.appender.F1=org.apache.log4j.RollingFileAppender
 log4j.appender.F1.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/bpm.log
 log4j.appender.F1.MaxFileSize=10MB
 log4j.appender.F1.layout=org.apache.log4j.PatternLayout
 log4j.appender.F1.MaxBackupIndex=10
 log4j.appender.F1.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- BPM --------------------------
 log4j.logger.com.documentum.bpm=WARN, bpmappender
 log4j.logger.com.documentum.bps=WARN, bpmappender
 log4j.additivity.com.documentum.bpm=false
 log4j.additivity.com.documentum.bps=false
 log4j.appender.bpmappender=org.apache.log4j.RollingFileAppender
 log4j.appender.bpmappender.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/bpm-runtime.log
 log4j.appender.bpmappender.MaxFileSize=1MB
 log4j.appender.bpmappender.layout=org.apache.log4j.PatternLayout
 log4j.appender.bpmappender.MaxBackupIndex=10
 log4j.appender.bpmappender.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n
#------------------- FILE_TRACE --------------------------
 log4j.appender.FILE_TRACE=org.apache.log4j.RollingFileAppender
 log4j.appender.FILE_TRACE.File=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/bpm_trace.log
 log4j.appender.FILE_TRACE.MaxFileSize=100MB
 log4j.appender.FILE_TRACE.layout=org.apache.log4j.PatternLayout
 log4j.appender.FILE_TRACE.MaxBackupIndex=10
 log4j.appender.FILE_TRACE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS z} %-5p [%c] (%t) %m%n

 

When everything is set up, you can restart the JMS and verify that all logs are properly written in $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs.

Your JMS log system is now set up consistently.

 

Cet article Documentum story – Documentum JMS Log Configuration est apparu en premier sur Blog dbi services.

Momentum16 – Day 1 – InfoArchive first approach

Tue, 2016-11-01 12:04

As Gérard explained in his first blog, today was the first day not specific to the partners. I had the opportunity to attend some business-centric (not really) interesting sessions in the morning. Then the morning ended and the afternoon began with two keynotes: “Dell EMC Opening Keynote” and “Digital Transformation Keynote”. Finally I was able to attend a hands-on session on InfoArchive, and that’s what I will talk about in this blog since that’s the only piece of technical information I was able to get today.

 

Like every other event, there are exhibitions and exhibitors showing in their booths what they are doing around EMC. Of course there is also a booth dedicated to the InfoArchive solution if you want to talk to some EMC experts, and I think that’s a pretty good way to see and understand what this solution does.

 

EMC InfoArchive is a unified enterprise archiving platform that stores related structured data and unstructured content in a single consolidated repository. This product enables corporations to preserve the value of enterprise information in a single, compliant, and easily accessible unified archive. Basically, that’s a place where you can store your content to be archived on low-cost storage, because this kind of information is usually kept only for legal constraints (read only) and doesn’t need to be accessed very often.

 

InfoArchive is composed of three components: an included Web Server, a Server (core of the application) and finally a Database (it is using an Xhive Database (XML), just like xPlore). Therefore you can very easily provide an XML file that will be used as an import file and that contains content to be archived by InfoArchive. Basically everything that can be transformed to an XML format (metadata/content) can be put inside InfoArchive. This solution provides some default connectors like:

  • Documentum
  • SharePoint (can archive documents and/or complete sites)
  • SAP

 

These default connectors are great, but if that’s not enough, you can just define your own with the information that you want to store and how you want to index it, transform it, aso… And of course this is defined in XML files. At the moment, this configuration can be a little bit scary since it is all done manually, but I heard that a GUI configuration might be coming soon, if it’s not in version 4.2 already? InfoArchive is apparently fully web-based and therefore, based on a discussion I had with an EMC colleague, it should technically be possible to archive all the content of a SharePoint site for example and then access this content from Documentum or any other location, as long as it uses web-based requests to query InfoArchive.

 

During the hands-on session (my first time working with InfoArchive), I had to create a new application/holding that can be used to archive Tweets. At the end of the hour and a half, I had successfully created my application and I was able to search for Tweets based on their creationDate, userName, hashTags, retweetCount, aso… That was actually done pretty easily by following the help guide provided by EMC (specific to this use case), but if you don’t have this help guide, you’d better be an InfoArchive expert, because you need to know each and every one of the XML tags that need to be added, and where to add them, to get something working properly.

 

See you tomorrow for the next blog with hopefully more technical stuff to share.

 

Cet article Momentum16 – Day 1 – InfoArchive first approach est apparu en premier sur Blog dbi services.

Momentum16 – Day1 – Feelings

Tue, 2016-11-01 12:00

This first day at Momentum 2016

Normally I should write “the second one”, as we started yesterday with a partner session where we got some information. One piece of news was that EMC had more than 400 partners a few years ago and today this has been reduced to fewer than 80, and dbi services is still one of them. For us it is good news, and I hope it is also good news for our current and future customers.

 

Today the different sessions, apart from the keynotes held by Rohit Ghai, were more related to customer experience, solutions ECD partners can propose, business presentations, and descriptions of particular challenges that companies had to face and how they dealt with them, without presenting technical details.
As I am more on the technical side, this was more for my general culture, I would say.

 

In the keynote we learned that with Documentum 7.3 cost savings will increase. For instance, PostgreSQL can be used with Documentum 7.3, the upgrade will be faster, aso… Since time is money…
PostgreSQL can be an interesting subject as dbi services is also active in this database, and I will have to work with our DB experts to see what we have to test, how, and find out the pros and cons of using PostgreSQL from a technical point of view, as for sure the license cost will decrease. I planned (no, I have) to go to the technical session tomorrow about “What’s new in Documentum 7.3”.

 

I also took the opportunity to discuss with some Dell EMC partners to learn more about the solutions they propose. For instance I was able to talk with Neotys people to understand what their product can bring us compared to JMeter or LoadRunner, which we or our customers are using to do load tests. Having a better view of possible solutions in this area can help me in case some customers have specific requirements, to help them choose the best tool.
I also had a chat with Aerow and they showed me how ARender4Documentum works and how fast “big” documents can be displayed in their HTML5 viewer. So even if the first day cannot be viewed as a technical day, I actually learned a lot.
In this kind of event, what I also find cool is that you can meet people, for instance at lunch time around a table, and start talking about your/their experiences, your/their concerns, solutions, aso… So today we had a talk about cloud (private, public) and what this means in case you have a validated system.

 

So let’s see what will happen tomorrow, the day where more technical information will be shared.

Note: Read Morgan’s blog where you can find technical stuff. You know, I felt Morgan was frustrated today as he could not “eat” technical food :-)

 

Cet article Momentum16 – Day1 – Feelings est apparu en premier sur Blog dbi services.

SQL Server 2016 – R Services Tips: How to find installed packages using T-SQL?

Tue, 2016-11-01 09:14

If you have restricted access to the server and you do not know which packages are installed on the SQL Server with R Services, you have the possibility to check it with T-SQL.
The R command/function to use is “installed.packages()”.
As you can read in the R documentation for installed.packages(), this function scans the DESCRIPTION file of each package.
The output is a table with 16 columns containing basically this information:

  • Package
  • LibPath
  • Version
  • Priority
  • Depends
  • Imports
  • LinkingTo
  • Suggests
  • Enhances
  • OS_type
  • License
  • Built

To illustrate, I propose an example with the installed.packages() function, with the result written to a table of 16 columns:

EXECUTE sp_execute_external_script @language = N'R',
@script=N'x <- data.frame(installed.packages()) 
			OutputDataSet <- x[,c(1:16)]'

[Screenshot: installed.packages() output returned as 16 columns]

Just for your information, if you change the number of columns to 17, you get the following error message:
Msg 39004, Level 16, State 20

[Screenshot: error message Msg 39004, Level 16, State 20]

To find just the information that I need, I create a temporary table with the package name, the path and the version. This information is in the first 3 columns:

CREATE TABLE #packages_list
(
[Package] sysname
,[Package_Path] sysname
,[Version] NVARCHAR(20)
)
INSERT INTO #packages_list
EXECUTE sp_execute_external_script @language = N'R' ,
@script=N'x <- data.frame(installed.packages())
OutputDataSet <- x[,c(1:3)]'

SELECT COUNT(*) as NumberOfPackages FROM #packages_list

SELECT * FROM #packages_list

[Screenshot: package count and content of #packages_list]

As you can see, 47 packages are installed by default with the R Services.
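
If you only need to check whether one specific package is available, a simple filter on the temporary table is enough (the package name below is just an example):

SELECT [Package], [Version]
FROM #packages_list
WHERE [Package] = N'rpart';
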
I hope that my little tips will help you to begin with the R language in SQL Server ;-)

 

Cet article SQL Server 2016 – R Services Tips: How to find installed packages using T-SQL? est apparu en premier sur Blog dbi services.
