Yann Neuhaus

dbi services technical blog

Enterprise Manager does not display correct values for memory

Thu, 2016-10-20 08:03

I recently had problems with Enterprise Manager, receiving alerts such as:

EM Event Critical hostname Memory Utilization is 93,205 % crossed warning (80%) or critical (90%)

When we have a look at the EM 13c console for the host:


On the system the free -m command displays:

oracle@host:~/24437699/ [agent13c] free -m
             total       used       free     shared    buffers     cached
Mem:         48275      44762       3512          0        205      37483
-/+ buffers/cache:       7073      41201
Swap:         8189       2397       5791

EM 13c does not take the buffers/cached component into account.

In fact, the memory utilization calculation has changed between EM versions. According to MOS Note 2144976.1:

“While the total Memory available in the host target is displayed correctly after applying the latest PSU # 23030165 (Agent-Side), the formula used for Memory Utilization is (100.0 * (realMem-freeMem) / realMem) and does not consider the Buffers / Cached component for the calculation.”
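
To make this concrete, here is a quick illustration (my own sketch, not taken from the note) that applies both calculations to the free -m output shown above: the formula without buffers/cached matches the ~93% alert, while taking them into account gives a much lower value.

# Values taken from the "free -m" output above (in MB)
real_mem=48275
free_mem=3512
buffers=205
cached=37483

# Formula mentioned in the note: 100 * (realMem - freeMem) / realMem
awk -v t=$real_mem -v f=$free_mem 'BEGIN {printf "Without buffers/cached: %.2f %%\n", 100*(t-f)/t}'

# Same calculation, but counting the buffers/cached component as free memory
awk -v t=$real_mem -v f=$free_mem -v b=$buffers -v c=$cached \
    'BEGIN {printf "With buffers/cached   : %.2f %%\n", 100*(t-f-b-c)/t}'

# Output: roughly 92.72 % versus 14.66 %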

To solve the problem we have to patch the OMS and the different agents:

For the OMS: apply patch 23134365

For the agents: apply patch 24437699

Watch out: before applying the 23134365 patch on the OMS, we have to install the latest version of OMSPatcher. We download Patch 19999993 from MOS.

We back up the OMSPatcher directory in the $ORACLE_HOME of the oms13c environment:

oracle:OMS_HOME:/ [oms13c] mv OMSPatcher/ OMSPatcher_save

Then we copy and unzip p19999993_131000_Generic.zip in the $ORACLE_HOME directory:

oracle:$OMS_HOME/ [oms13c] unzip p19999993_131000_Generic.zip
Archive:  p19999993_131000_Generic.zip
   creating: OMSPatcher/
   creating: OMSPatcher/oms/
  inflating: OMSPatcher/oms/generateMultiOMSPatchingScripts.pl
   creating: OMSPatcher/jlib/
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.jar
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.unix.jar
  inflating: OMSPatcher/jlib/omspatcher.jar
  inflating: OMSPatcher/jlib/oracle.omspatcher.classpath.windows.jar
   creating: OMSPatcher/scripts/
   creating: OMSPatcher/scripts/oms/
   creating: OMSPatcher/scripts/oms/oms_child_scripts/
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_wls.bat
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_jvm_discovery
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_jvm_discovery.bat
  inflating: OMSPatcher/scripts/oms/oms_child_scripts/omspatcher_wls
  inflating: OMSPatcher/scripts/oms/omspatcher
  inflating: OMSPatcher/scripts/oms/omspatcher.bat
  inflating: OMSPatcher/omspatcher
   creating: OMSPatcher/wlskeys/
  inflating: OMSPatcher/wlskeys/createkeys.cmd
  inflating: OMSPatcher/wlskeys/createkeys.sh
  inflating: OMSPatcher/omspatcher.bat
  inflating: readme.txt
  inflating: PatchSearch.xml

We check the OMSPatcher version:

oracle:/ [oms13c] ./omspatcher version
OMSPatcher Version:
OPlan Version:
OsysModel build: Wed Oct 14 06:21:23 PDT 2015
OMSPatcher succeeded.

We download the p23134365_131000_Generic.zip file from MOS, and we run:

oracle@host:/home/oracle/23134365/ [oms13c] omspatcher apply -analyze
OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation.  All rights reserved.
OMSPatcher version :
OUI version        :
Running from       : /u00/app/oracle/product/
Log file location  : /u00/app/oracle/product/
OMSPatcher log file: /u00/app/oracle/product/
Please enter OMS weblogic admin server URL(t3s://hostname:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>
Configuration Validation: Success
Running apply prerequisite checks for sub-patch(es) "23134365" 
and Oracle Home "/u00/app/oracle/product/"...
Sub-patch(es) "23134365" are successfully analyzed for Oracle Home 

Complete Summary

OMSPatcher succeeded.

We stop the OMS and we run:

oracle@hostname:/home/oracle/23134365/ [oms13c] omspatcher apply
OMSPatcher Automation Tool
Copyright (c) 2015, Oracle Corporation.  All rights reserved.
OMSPatcher version :
OUI version        :
Running from       : /u00/app/oracle/product/
Please enter OMS weblogic admin server URL(t3s://hostname:7102):>
Please enter OMS weblogic admin server username(weblogic):>
Please enter OMS weblogic admin server password:>
Configuration Validation: Success

OMSPatcher succeeded.

We finally restart the OMS:

oracle@hostname:/home/oracle/ [oms13c] emctl start oms

Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation.  All rights reserved.
Starting Oracle Management Server...
WebTier Successfully Started
Oracle Management Server Successfully Started
Oracle Management Server is Up
JVMD Engine is Up
Starting BI Publisher Server ...
BI Publisher Server Already Started
BI Publisher Server is Up


Now we apply the patch to the agents:

After downloading and unzipping p24437699_131000_Generic.zip, we stop the Management Agent and we run:

oracle@hostname:/home/oracle/24437699/ [agent13c] opatch apply
Oracle Interim Patch Installer version
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
Oracle Home       : /u00/app/oracle/product/
Central Inventory : /u00/app/oraInventory
OPatch version    :
OUI version       :

OPatch detects the Middleware Home as "/u00/app/oracle/product/"
Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   24437699
Do you want to proceed? [y|n]
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '24437699' to 
OH '/u00/app/oracle/product/'
Patching component oracle.sysman.top.agent,
Patch 24437699 successfully applied.
OPatch succeeded.

Finally we restart the agent with the emctl start agent command.

After the patches have been applied, the memory utilization displayed is correct:



And we do not receive critical alerts anymore :=)




Documentum story – Replicate an Embedded LDAP manually in WebLogic

Thu, 2016-10-20 02:00

In this blog, I will talk about the WebLogic Embedded LDAP. This LDAP is created by default on all AdminServers and Managed Servers of any WebLogic installation. The AdminServer always contains the Primary Embedded LDAP and all other Servers are synchronized with this one. This Embedded LDAP is the default security provider database for the WebLogic Authentication, Authorization, Credential Mapping and Role Mapping providers: it usually contains the WebLogic users, groups, and some other stuff like the SAML2 setup, aso… So basically a lot of stuff configured under the “security realms” in the WebLogic Administration Console. This LDAP is based on files that are stored under “$DOMAIN_HOME/servers/<SERVER_NAME>/data/ldap/”.


Normally the Embedded LDAP is automatically replicated from the AdminServer to the Managed Servers during startup but this can fail for a few reasons:

  • AdminServer not running
  • Problems in the communications between the AdminServer and Managed Servers
  • aso…


Oracle usually recommends using an external RDBMS Security Store instead of the Embedded LDAP, but not all information is stored in the RDBMS and therefore the Embedded LDAP is always used, at least for a few things. More information on this page: Oracle WebLogic Server Documentation.


So now, in case the automatic replication isn’t working properly for any reason, or if a manual replication is needed, how can it be done? Well, that’s pretty simple and I will explain it below. I will also use a home-made script in order to quickly and efficiently start/stop one, several or all WebLogic components. If you don’t have such a script available, then please adapt the steps below to manually stop and start all WebLogic components.


So first you need to stop all components:

[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop stopAll
  ** Managed Server msD2-01 stopped
  ** Managed Server msD2Conf-01 stopped
  ** Managed Server msDA-01 stopped
  ** Administration Server AdminServer stopped
  ** Node Manager NodeManager stopped
[weblogic@weblogic_server_01 ~]$ ps -ef | grep weblogic
[weblogic@weblogic_server_01 ~]$


Once this is done, you need to retrieve the list of all Managed Servers installed/configured in this WebLogic Domain for which a manual replication is needed. For me, it is pretty simple since they are printed above by the start/stop command, but otherwise you can find them like this:

[weblogic@weblogic_server_01 ~]$ cd $DOMAIN_HOME/servers
[weblogic@weblogic_server_01 servers]$ ls | grep -v "domain_bak"


Now that you have the list, you can proceed with the manual replication for each and every Managed Server. First back up the Embedded LDAP and then replicate it from the Primary one (in the AdminServer, as explained above):

[weblogic@weblogic_server_01 servers]$ current_date=$(date "+%Y%m%d")
[weblogic@weblogic_server_01 servers]$ 
[weblogic@weblogic_server_01 servers]$ mv msD2-01/data/ldap msD2-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ mv msD2Conf-01/data/ldap msD2Conf-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ mv msDA-01/data/ldap msDA-01/data/ldap_bck_$current_date
[weblogic@weblogic_server_01 servers]$ 
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msD2-01/data/
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msD2Conf-01/data/
[weblogic@weblogic_server_01 servers]$ cp -R AdminServer/data/ldap msDA-01/data/
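
If you have a lot of Managed Servers, the backup and copy above can also be looped over instead of being typed for each server. This is just a sketch assuming that all Managed Servers live under $DOMAIN_HOME/servers and that only the AdminServer and the backup folder must be excluded:

[weblogic@weblogic_server_01 servers]$ current_date=$(date "+%Y%m%d")
[weblogic@weblogic_server_01 servers]$ for srv in $(ls | grep -v "AdminServer" | grep -v "domain_bak"); do
>   # Backup the local Embedded LDAP of the Managed Server
>   mv ${srv}/data/ldap ${srv}/data/ldap_bck_${current_date}
>   # Replicate the Primary Embedded LDAP coming from the AdminServer
>   cp -R AdminServer/data/ldap ${srv}/data/
> done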


When this is done, just start all WebLogic components again:

[weblogic@weblogic_server_01 servers]$ $DOMAIN_HOME/bin/startstop startAll
  ** Node Manager NodeManager started
  ** Administration Server AdminServer started
  ** Managed Server msDA-01 started
  ** Managed Server msD2Conf-01 started
  ** Managed Server msD2-01 started


And if you followed these steps properly, the Managed Servers will now be able to start normally with a replicated Embedded LDAP containing all recent changes coming from the Primary Embedded LDAP.



Datawarehouse ODS load is fast and easy in Enterprise Edition

Wed, 2016-10-19 14:56

In a previous post, a tribute to transportable tablespaces (TTS), I said that TTS is also used to move data quickly from an operational database to a datawarehouse ODS. For sure, you don’t transport directly from the production database because TTS requires that the tablespace is read only. But you can transport from a snapshot standby. Both features (transportable tablespaces and Data Guard snapshot standby) are free in Enterprise Edition without any option. Here is an example to show that it’s not difficult to automate.

I have a configuration with the primary database “db1a”

DGMGRL> show configuration
Configuration - db1
Protection Mode: MaxPerformance
db1a - Primary database
db1b - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 56 seconds ago)
DGMGRL> show database db1b
Database - db1b
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 0 Byte/s
Real Time Query: ON
Database Status:

I’ve a few tables in the tablespace USERS and this is what I want to transport to the ODS database:

SQL> select segment_name,segment_type,tablespace_name from user_segments;
------------ ---------- ----------

Snapshot standby

With Data Guard it is easy to open temporarily the standby database. Just convert it to a snapshot standby with a simple command:

DGMGRL> connect system/oracle@//db1b
DGMGRL> convert database db1b to snapshot standby;
Converting database "db1b" to a Snapshot Standby database, please wait...
Database "db1b" converted successfully


Here you can start to do some Extraction/Load, but it is better to reduce this window where the standby is not in sync. The only thing we will do is export the tablespace in the fastest way: TTS.

First, we put the USERS tablespace in read only:

SQL> connect system/oracle@//db1b
SQL> alter tablespace users read only;
Tablespace altered.

and create a directory to export metadata:

SQL> create directory TMP_DIR as '/tmp';
Directory created.

Then the export is easy:

SQL> host expdp system/oracle@db1b transport_tablespaces=USERS directory=TMP_DIR
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/********@db1b transport_tablespaces=USERS directory=TMP_DIR
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Datafiles required for transportable tablespace USERS:
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Wed Oct 19 21:03:36 2016 elapsed 0 00:00:52

I’ve the metadata in /tmp/expdat.dmp and the data in /u02/oradata/db1/users01.dbf. I copy this datafile directly to its destination for the ODS database:

[oracle@VM118 ~]$ cp /u02/oradata/db1/users01.dbf /u02/oradata/ODS/users01.dbf

This is a physical copy, which is the fastest data movement we can do.

I’m ready to import it into my ODS database, but I can already re-sync the standby database because I extracted everything I wanted.

Re-sync the physical standby

DGMGRL> convert database db1b to physical standby;
Converting database "db1b" to a Physical Standby database, please wait...
Operation requires shut down of instance "db1" on database "db1b"
Shutting down instance "db1"...
Connected to "db1B"
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "db1" on database "db1b"
Starting instance "db1"...
ORACLE instance started.
Database mounted.
Connected to "db1B"
Continuing to convert database "db1b" ...
Database "db1b" converted successfully

The duration depends on the time to flashback the changes (and we did no change here as we only exported) and the time to apply the redo stream generated since the convert to snapshot standby (whose duration has been kept to a minimum).

This whole process can be automated. We did that at several customers and it works well. No need to change anything unless you have new tablespaces.


Here is the import to the ODS database and I rename the USERS tablespace to ODS_USERS:

SQL> host impdp system/oracle transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** transport_datafiles=/u02/oradata/ODS/users01.dbf directory=TMP_DIR remap_tablespace=USERS:ODS_USERS
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 3 error(s) at Wed Oct 19 21:06:18 2016 elapsed 0 00:00:10

Everything is there. You have all your data in ODS_USERS. You can have other data/code in this database. Only the ODS_USERS tablespace has to be dropped before being re-imported. You can have your staging tables here and even permanent tables.
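
As mentioned above, the whole process can be automated. Here is a minimal sketch of what such a script could look like, reusing the commands from this post (connection strings, paths and passwords are examples and would be externalized in a real implementation):

#!/bin/bash
# 1. Open the standby as a snapshot standby
dgmgrl system/oracle@//db1b "convert database db1b to snapshot standby;"

# 2. Set the tablespace read only and export it with TTS
sqlplus -s system/oracle@//db1b <<'SQL'
alter tablespace users read only;
SQL
expdp system/oracle@db1b transport_tablespaces=USERS directory=TMP_DIR

# 3. Copy the datafile to its destination for the ODS database
cp /u02/oradata/db1/users01.dbf /u02/oradata/ODS/users01.dbf

# 4. Re-sync the standby as soon as the export is done
dgmgrl system/oracle@//db1b "convert database db1b to physical standby;"

# 5. Import into the ODS database (on repeated runs, drop ODS_USERS including contents first)
impdp system/oracle transport_datafiles=/u02/oradata/ODS/users01.dbf \
      directory=TMP_DIR remap_tablespace=USERS:ODS_USERS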

12c pluggable databases

In 12.1 it is even easier because the multitenant architecture gives the possibility to transport the pluggable databases in one command, through file copy or database links. It is even faster because metadata are transported physically with the PDB SYSTEM tablespace. I said multitenant architecture here, and didn’t mention any option. Multitenant option is needed only if you want multiple PDBs managed by the same instance. But if you want the ODS database to be an exact copy of the operational database, then you don’t need any option to unplug/plug.

In 12.1 you need to put the source in read only, so you still need a snapshot standby. And from my test, there’s no problem converting it back to a physical standby after a PDB has been unplugged. In the next release, we may not need a standby because it has been announced that PDBs can be cloned online.

I’ll explain the multitenant features available without any option (in 12c current and next release) at Oracle Geneva office on 23rd of November:

Do not hesitate to register by e-mail.



Feedback on my session at Oracle Open World 2016

Wed, 2016-10-19 11:00

I was a speaker at Oracle Open World and received the feedback and demographic data. This session took place on the Sunday, which is the User Group Forum day. It was about Multitenant, defining what the multitenant architecture is and which features it brings to us even when we don’t have the multitenant option. Unfortunately, I cannot upload the slides before the next 12c release is available. If you missed the session or want to hear it in my native language, I’ll give it in Geneva on 23rd of November at the Oracle Switzerland office.

Here is the room when I was setting up the demo on my laptop, but according to the demographic statistics below, 84 people attended (or planned to attend) my session.


Feedback survey

Depending on the conference, the percentage of people that fill in the feedback form goes from low to very low. Here 6 people gave feedback, which is 7% of the attendees:

Number of Respondents: 6
Q1: How would you rate the content of the session? (select a rating of 1 to 3, 3 being the best): 2.67
Q2: How would you rate the speaker(s) of the session? (select a rating of 1 to 3, 3 being the best): 2.83
Q3: Overall, based on content and speakers I would rate this session as (select a rating of 1 to 3, 3 being the best): 2.67

Thanks for this. But quality matters more than quantity. I received only one comment and this one is very important for me because it can help me to improve my slides:

Heavy accent was difficult to understand at times where I lost interest/concentration. Ran through slides too quick (understanding time constraints). Did not allow image capturing (respected). Did provide examples which was nice. Advised slides will be downloadable…to be seen.

The accent is not a surprise. It’s an international event and a lot of people coming from all around the world may have an accent that is difficult to understand. I would love to speak English more clearly but I know that my French accent is there, and my lack of vocabulary as well. That will be hard to change. But the remark about the slides is very pertinent. I usually put a lot of material in my presentations: lots of slides, lots of text, lots of demos. My idea is that you don’t need to read all that is written. It is there to read later when you download the slides (I expected 12cR2 to be fully available for OOW when I prepared the slides). And it’s also there in case my live demos fail, so that I’ve got the info on the slides, but I usually skip them quickly when everything was seen in the demo.

But thanks to this comment, I understand that reading the slides is important when you don’t get what I say, and having too much text makes it difficult to follow. For future presentations, I’ll remove text from slides and put it as powerpoint presenter notes, made available in the pdf.

So thanks to the one who wrote this comment. I’ll improve that. And don’t hesitate to ping me to know when the slides can be downloaded, and maybe I can already share a few of them.

Demographic data

Open World gives some demographic data about attendees. As you don’t have to scan your badge for the Sunday sessions, I suppose it’s about people that registered and may not have been there. But intention counts ;)

About countries: we were in the US so that’s the main country represented here. Next come 6 people from Switzerland, the country where I live and work:


When we register to OOW we fill in the industry we are working in. The most represented in the room were Financial, Healthcare and High Tech:


And the job title, which is free text, has several values, which makes it difficult to aggregate:


It’s no surprise that there were a lot of DBAs. I’m happy to see some managers/supervisors interested in technical sessions.
My goal for future events is to get more attention from developers because a database is not a black box storage for data, but a full software where data is processed.

I don’t think that 84 people were in that room actually, there were several good sessions at the same time, as well as the bridge run.


This is the kind of slide where there’s a lot of text but I go fast. Actually I initially had 3 slides about this point (feature usage detection, multitenant and CON_IDs). I removed some, and kept one with too much text. When I remove slides, I usually post a blog about what I’ll not have time to detail.

Here are those posts:



My session was part of the stream which was selected by the EMEA Oracle Usergroup Community. Thanks a lot to EOUC. They have good articles in their newly created magazine www.oraworld.org. Similar name but nothing to do with the team of worldwide OCMs and ACEs publishing for years as the Oraworld Team.



Documentum xPlore – ftintegrity tool not working

Wed, 2016-10-19 08:38

I’ve been patching some xPlore servers for a while and recently I ran into an issue regarding the ftintegrity tool. Maybe you figured it out as well: for xPlore 1.5 Patch 15, the ftintegrity tool available in $DSEARCH_HOME/setup/indexagent/tools was corrupted by the patch.
I think for some reason the libs were changed and the tool wasn’t able to load anymore. I asked EMC and they said it was a known bug which would be fixed in the next release.

So I came to patch it again when Patch 17 went out (our customer processes don’t allow us to patch every month, so I skipped Patch 16). After patching, I directly started the ftintegrity tool in order to check that everything was fixed, and…. no.

In fact yes, but you have something to do before making it work. The error you get is something like ‘Could not load because config is NULL’, or ‘dfc.properties not found’. I found these errors kind of strange, so I wondered if the ftintegrity tool script was patched as well. And the answer is no: the script is still the same but the jar libraries have been changed, which means that the script is pointing to the wrong libraries and it can’t load properly.

Thus the solution is simple: I uninstalled the Index Agent and installed it again right after. The ftintegrity tool script was re-created with the right pointers to the new libraries. Little tip: if you have several Index Agents and don’t want to re-install them all, you may want to copy the content of the updated ftintegrity tool script and paste it into the other instances (do not forget to adapt it because it may point to different docbases).

To summarize: if you have issues regarding the execution of the ftintegrity tool, check the library references in the script and try to re-install the Index Agent in order to get the latest version of the script.
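
As a quick illustration of that check, the library references can be listed directly from the scripts before deciding to re-install. The exact script name and jar locations depend on your xPlore installation, so adapt the path below:

# List the jar libraries referenced by the ftintegrity script(s) shipped with the Index Agent
grep -nE '\.jar' $DSEARCH_HOME/setup/indexagent/tools/*

# Then compare these references with the libraries actually present after the patch
# (any reference pointing to a jar that no longer exists explains the loading error)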



Documentum story – Jobs in a high availability installation

Wed, 2016-10-19 04:55

When you have an installation with one Content Server (CS), you do not have to care about where a job is running: it’s always on your single CS.
But how should you configure the jobs in case you have several CS? Which jobs have to be executed and which ones not? Let’s see that in this post.

When you have to run your jobs in a high availability installation you have to configure some files and objects.

Update the method_verb of the dm_agent_exec method:

API> retrieve,c,dm_method where object_name = 'agent_exec_method'
API> get,c,l,method_verb
API> set,c,l,method_verb
SET> ./dm_agent_exec -enable_ha_setup 1
API> get,c,l,method_verb
API> save,c,l
API> reinit,c


The java methods have been updated to be restartable:

update dm_method object set is_restartable=1 where method_type='java';


On our installation we use jms_max_wait_time_on_failures = 300 instead of the default value (3000).
In server.ini (Primary Content Server) and server_HOSTNAME2_REPO01.ini (Remote Content Server), we have:

jms_max_wait_time_on_failures = 300

Based on some issues we faced, for instance with the dce_clean job that ran twice when we had both JMS projected to each CS, EMC advised us to project each JMS to its local CS only. With this configuration, in case the JMS is down on the primary CS, the job (using a java method) is started on the remote JMS via the remote CS.

Regarding which jobs have to be executed – I am describing only the ones which are used for the housekeeping.
So the question to answer is which job does what and what is “touched”, either metadata and/or content.

To verify that, check how many CS are used and where they are installed:

select object_name, r_host_name from dm_server_config
REPO1               HOSTNAME1.DOMAIN


Verify on which CS the jobs will run and “classify” them.
Check the job settings:

select object_name, target_server, is_inactive from dm_job

The following jobs work only on metadata; they can run anywhere, so the target_server has to be empty:

object_name             target_server   is_inactive
dm_ConsistencyChecker                   False
dm_DBWarning                            False
dm_FileReport                           False
dm_QueueMgt                             False
dm_StateOfDocbase                       False



The following jobs work only on content.


As we are using a NAS for the Data directory, which is shared between both servers, only one of the two jobs has to run. Per default the target_server is defined, so for the one which has to run, target_server has to be empty.

object_name                        target_server                            is_inactive
dm_ContentWarning                                                           False
dm_ContentWarningHOSTNAME2_REPO1   REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True
dm_DMClean                                                                  False
dm_DMCleanHOSTNAME2_REPO1          REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True

Metadata and Content

The following jobs work on both metadata and content.


Filescan scans the NAS content storage. As said above, it is shared and therefore the job only needs to be executed once: the target_server has to be empty so it can run anywhere.

LogPurge also cleans files under $DOCUMENTUM/dba/log and its subfolders, which are obviously not shared, and therefore both dm_LogPurge jobs have to run. You just have to use another start time to avoid an overlap when objects are removed from the repository.

object_name                    target_server                            is_inactive
dm_DMFilescan                                                           False
dm_DMFilescanHOSTNAME2_REPO1   REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   True
dm_LogPurge                    REPO1.REPO1@HOSTNAME1.DOMAIN             False
dm_LogPurgeHOSTNAME2_REPO1     REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN   False

Normally with this configuration your housekeeping jobs should be well configured.
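
For illustration only, clearing the target_server of a job (so that it can run on any available Content Server) can also be scripted with a simple DQL update through idql; the repository name, credentials and job name below are just placeholders:

idql REPO1 -Udmadmin -Pxxxx <<'EOF'
update dm_job object set target_server=' ' where object_name='dm_DMFilescan';
go
EOF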

One point you have to take care of is when you use DA to configure your jobs. Once you open the job properties, the “Designated Server” is set to one of your servers and not to “Any Running Server”, which means target_server = ‘ ‘. If you click the OK button, you will set the target server and, in case this CS is down, the job will fail because it cannot use the second CS.



Documentum story – How to display correct client IP address in the log file when a WebLogic Domain is fronted by a load Balancer

Wed, 2016-10-19 04:32

Load Balancers do not forward the client IP address by default: the WebLogic HTTP log file (access_log) records the Load Balancer IP address instead of the client one.
This is sometimes a problem when diagnosing issues, and the Single Sign On configuration does not provide the user name in the HTTP log either.

In most cases, the Load Balancer can provide an additional header named “X-Forwarded-For”, but it needs to be configured by the Load Balancer administrators.
If the “X-Forwarded-For” header is provided, it can be fetched using the WebLogic Server HTTP extended logging.

To enable the WebLogic Server HTTP logging to fetch the “X-Forwarded-For” Header follow the steps below for each WebLogic Server in the WebLogic Domain:

  1. Browse to the WebLogic Domain administration console and sign in as an administrator user
  2. Open the servers list and select the first managed server
  3. Select the logging TAB and the HTTP sub-tab
  4. Open the advanced folder and change the format to “extended” and the Extended Logging Format Fields to:
    "cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes"
  5. Save
  6. Browse back to the servers list and repeat the steps for each WebLogic Server from the domain placed behind the load balancer.
  7. Activate the changes.
  8. Stop and restart the complete WebLogic domain.

After this, the WebLogic Servers HTTP Logging (access_log) should display the client IP address and not the Load Balancer one.

When using the WebLogic Server extended HTTP logging, the username field is not available anymore.
This limitation is described in the following Oracle MOS article:
Missing Username In Extended Http Logs (Doc ID 1240135.1)

To get the authenticated username displayed, an additional custom field provided by a custom Java class needs to be used.

Here is an example of such Java class:

import weblogic.servlet.logging.CustomELFLogger;
import weblogic.servlet.logging.FormatStringBuffer;
import weblogic.servlet.logging.HttpAccountingInfo;

/* This example outputs the authenticated username into a
   custom field called x-MyCustomUserNameField */

public class MyCustomUserNameField implements CustomELFLogger {

  public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff) {
    // Assumption: getRemoteUser() returns the authenticated user name
    buff.appendValueOrDash(metrics.getRemoteUser());
  }
}

The next step is to compile and create a jar library.

Set the environment by running the WebLogic setWLSEnv.sh script.

javac MyCustomUserNameField.java

jar cvf MyCustomUserNameField.jar MyCustomUserNameField.class

Once done, copy the jar library file under the WebLogic Domain lib directory. This way, it will be made available in the class path of each WebLogic Server of this WebLogic Domain.

The WebLogic Server HTTP Extended log format can now be modified to include a custom field named “x-MyCustomUserNameField”.
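
Assuming the class name used above, the Extended Logging Format Fields from step 4 can then be completed so that both the client IP address and the authenticated user appear in the access_log:

"x-MyCustomUserNameField cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes"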



Manage GitHub behind a proxy

Wed, 2016-10-19 02:00

I’m quite used to GitHub since I’m using it pretty often but I actually never tried to use it behind a proxy. In the last months, I was working on a project and I had to use GitHub to version control the repository that contained scripts, monitoring configurations, aso… When setting up my local workstation (Windows) using the GUI, I faced an error showing that GitHub wasn’t able to connect to the repository while I was able to access it using my Web Browser… This is the problem I faced some time ago and I just wanted to share this experience because even if I’m writing a lot of blogs related to Documentum, it is sometimes good to change your mind you know… Therefore today is GitHub Day!


After some research and analysis (and you already understood it if you read the first paragraph of this blog), I thought that maybe a proxy automatically set up in the Web Browser was preventing the GitHub process from accessing the GitHub repository, and I was right! So GitHub behind a proxy, how can you manage that? Actually that’s pretty simple because everything is there, you just need to configure it. Unfortunately, I didn’t find any option in the GUI that would allow you to do that and therefore I had to use the Command Line Interface for that purpose. If there is a way to do that using the GUI, you are welcome to share!


Ok so let’s define some parameters:

  • PROXY_USER = The user’s name to be used for the Proxy Server
  • PROXY_PASSWORD = The password of the proxy_user
  • PROXY.SERVER.COM = The hostname of your Proxy Server
  • PORT = The port used by your Proxy Server in HTTP
  • PORT_S = The port used by your Proxy Server in HTTPS


With this information, you can execute the following commands to configure Git using the Command Line Interface (Git Shell on Windows). These two lines will simply tell Git that it needs to use a proxy server in order to access the Internet properly:

git config --global http.proxy http://PROXY_USER:PROXY_PASSWORD@PROXY.SERVER.COM:PORT
git config --global https.proxy https://PROXY_USER:PROXY_PASSWORD@PROXY.SERVER.COM:PORT_S


If your Proxy Server is public (no authentication needed), then you can simplify these commands as follow:

git config --global http.proxy http://PROXY.SERVER.COM:PORT
git config --global https.proxy https://PROXY.SERVER.COM:PORT_S


With this simple configuration, you should be good to go. Now you can decide, whenever you want, to just remove this configuration. That’s also pretty simple since you just have to unset this configuration with the same kind of commands:

git config --global --unset http.proxy
git config --global --unset https.proxy


The last thing I wanted to show you is that if it is still not working, then you can check what you entered previously and what is currently configured by executing the following commands:

git config --global --get http.proxy
git config --global --get https.proxy


This concludes this pretty small blog but I really wanted to share it because I think it can help a lot of people!



Using JMeter to run load test on a ADF application protected by Oracle Access Manager Single Sign On

Tue, 2016-10-18 10:50


In one of my missions, I was requested to run performance and load tests on an ADF application running in an Oracle Fusion Middleware environment protected by Oracle Access Manager. For this task we decided to use Apache JMeter because it provides the control needed on the tests and uses multiple threads to emulate multiple users. It can also be used to do distributed testing, which uses multiple systems to run a stress test. Additionally, the GUI provides an easy way to manage the load test scenarios, which can be easily recorded using the HTTP(s) Test Script Recorder.

Prepare a JMeter test plan

A good starting point is to review the following blog: My Shot on Using JMeter to Load Test Oracle ADF Applications

The blog above explains how to record and use a test plan in JMeter.
It provides a SimplifiedADFJMeterPlan.jmx  JMeter test plan that can be used as a base for the JMeter test plan creation.
But this ADF starter test plan has to be reviewed for the jsessionId and afrLoop extractors, as the regular expressions associated with them might need to be adapted: they can change depending on the version of the ADF software.

In this environment, Oracle Fusion Middleware ADF WebLogic Server 10.3.6 and Oracle Access Manager 11.2.3 were used.
The regular expressions for afrLoop and jsessionid needed to be updated as shown below:

reference name   regular expression
afrLoop          _afrLoop\', \';([0-9]{13,16})
jsessionId       ;jsessionid=([-_0-9A-Za-z!]{62,63})

Coming to the single Sign On layer, it appears that the Oracle Access Manager compatible login screen requires three parameters:

  • username
  • password
  • request_id

First, the username and password values will be provided by the recording of the test scenario. To run the same scenario with multiple users, a CSV file is used to store test users and passwords. This will be detailed later in this blog.
The request_id is provided by the Oracle Access Manager Single Sign On layer and needs to be fetched and re-injected to the authentication URL.
To resolve this, a new variable needed to be created and the regular expression below is used.

reference name   regular expression
requestId        name=\'request_id\' value=\'([&#;0-9]{18,25})\';

Once the test plan scenario is recorded, look for the OAM standard “/oam/server/auth_cred_submit” URL and change the request_id parameter to use the defined requestId variable.

OAM Authentication URL
name: request_id   value: ${requestId}

After those changes, the new JMeter test plan can be run.

Steps to run the test plan with multiple users

In JMeter,
Right click on the “Thread Group” on the tree.
Select “Add” – “Config Element” – “CSV Data Set Config”.
Add CSV config in JMeter

Create a CSV file which contains USERNAME,PASSWORD and save it in a folder on your JMeter server. Make sure the users exist in OAM/OID:
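
As an example, the content of the CSV file could simply look like this (hypothetical test users, one USERNAME,PASSWORD pair per line):

testuser1,Welcome1
testuser2,Welcome2
testuser3,Welcome3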


Adapt the path in the “CSV Data Set Config” and define the variable names (USERNAME and PASSWORD) in “Variable Names comma-delimited”.
Look for the URL that is submitting the authentication (/oam/server/auth_cred_submit) and click on it. In the right frame, replace the username and password recorded earlier with ${USERNAME} and ${PASSWORD} respectively, as shown below:
Finally, you can adapt the thread group of your test plan to the number of users (Number of Threads) and loops (Loop Count) you want to run, and execute it. The Ramp-Up Period in seconds is the time between thread starts.
The test plan can now be executed and the results visualised in tree, graph or table views.
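
Once the test plan is validated in the GUI, the real load test is usually executed in non-GUI mode to reduce the JMeter overhead; for example, assuming the plan was saved as ADFLoadTest.jmx:

jmeter -n -t ADFLoadTest.jmx -l results.jtl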




Documentum story – Change the location of the Xhive Database for the DSearch (xPlore)

Tue, 2016-10-18 02:00

When using xPlore with Documentum, you need to set up a DSearch which will be used to perform the searches, and this DSearch uses an Xhive Database in the background. This is a native XML database that persists XML DOMs and provides access to them using XPath and XQuery. In this blog, I will share the steps needed to change the location of the Xhive Database used by the DSearch. You usually don’t want to move this XML database every day but it might be useful as a one-time action. In this customer case, one of the DSearch in a Sandbox/Dev environment had been installed using a wrong path for the Xhive Database (not following our installation conventions) and therefore we had to correct that, just to keep the alignment between all environments and to avoid a complete uninstall/reinstall of the IndexAgents + DSearch.


In the steps below, I will suppose that xPlore has been installed under “/app/xPlore” and that the Xhive Database has been created under “/app/xPlore/data”. This is the default value and, when installing an IndexAgent, it will create, under the data folder, a sub-folder with a name equal to the DSearch Domain’s name (usually the name of the docbase/repository). In this blog I will show you how to move this Xhive Database to “/app/xPlore/test-data” without having to reinstall everything. This means that the Xhive Database will NOT be deleted/recreated from scratch (this is also possible) and therefore you will NOT have to perform a full reindex, which would have taken a looong time.


So let’s start with stopping all components first:

[xplore@xplore_server_01 ~]$ sh -c "/app/xPlore/jboss7.1.1/server/stopIndexagent.sh"
[xplore@xplore_server_01 ~]$ sh -c "/app/xPlore/jboss7.1.1/server/stopPrimaryDsearch.sh"


Once this is done, we need to backup the data and config files, just in case…

[xplore@xplore_server_01 ~]$ current_date=$(date "+%Y%m%d")
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/data/ /app/xPlore/data_bck_$current_date
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/config/ /app/xPlore/config_bck_$current_date
[xplore@xplore_server_01 ~]$ mv /app/xPlore/data/ /app/xPlore/test-data/


Ok now everything in the background is prepared and we can start the actual steps to move the Xhive Database. The first step is to change the data location in the files stored in the config folder. There are actually two files that need to be updated: indexserverconfig.xml and XhiveDatabase.bootstrap. In the first file, you need to update the “storage-location” path that defines where the data are kept, and in the second file you need to update all paths pointing to the Database files. Here are some simple commands to replace the old path with the new one and check that it has been done properly:

[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/test-data," /app/xPlore/config/indexserverconfig.xml
[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/test-data," /app/xPlore/config/XhiveDatabase.bootstrap
[xplore@xplore_server_01 ~]$ 
[xplore@xplore_server_01 ~]$ grep -A2 "<storage-locations>" /app/xPlore/config/indexserverconfig.xml
        <storage-location path="/app/xPlore/test-data" quota_in_MB="10" status="not_full" name="default"/>
[xplore@xplore_server_01 ~]$ 
[xplore@xplore_server_01 ~]$ grep "/app/xPlore/test-data" /app/xPlore/config/XhiveDatabase.bootstrap | grep 'id="[0-4]"'
        <file path="/app/xPlore/test-data/xhivedb-default-0.XhiveDatabase.DB" id="0"/>
        <file path="/app/xPlore/test-data/SystemData/xhivedb-SystemData-0.XhiveDatabase.DB" id="2"/>
        <file path="/app/xPlore/test-data/SystemData/MetricsDB/xhivedb-SystemData#MetricsDB-0.XhiveDatabase.DB" id="3"/>
        <file path="/app/xPlore/test-data/SystemData/MetricsDB/PrimaryDsearch/xhivedb-SystemData#MetricsDB#PrimaryDsearch-0.XhiveDatabase.DB" id="4"/>
        <file path="/app/xPlore/test-data/xhivedb-temporary-0.XhiveDatabase.DB" id="1"/>


The next step is to announce the new location of the data folder to the DSearch so it can create future Xhive Databases at the right location and this is done inside the file indexserver-bootstrap.properties. After the update, this file should look like the following:

[xplore@xplore_server_01 ~]$ cat /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties
# (c) 1994-2009, EMC Corporation. All Rights Reserved.
#Wed May 20 10:40:49 PDT 2009
#Note: Do not change the values of the properties in this file except xhive-pagesize and force-restart-xdb.
# xhive-cache-pages=40960
isPrimary = true


In this file:

  • indexserver.config.file => defines the location of the indexserverconfig.xml file that must be used to recreate the DSearch Xhive Database.
  • xhive-bootstrapfile-name => defines the location and name of the Xhive bootstrap file that will be generated during bootstrap and will be used to create the empty DSearch Xhive Database.
  • xhive-data-directory => defines the path of the data folder that will be used by the Xhive bootstrap file. This will therefore be the future location of the DSearch Xhive Database.


As you probably understood, to change the data folder, you just have to adjust the value of the parameter “xhive-data-directory” to point to the new location: /app/xPlore/test-data.
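
Since the cat output above does not show the property line itself, here is, for illustration, how the value could be adjusted with the same sed approach as before, assuming the usual key=value syntax of this properties file:

[xplore@xplore_server_01 ~]$ sed -i "s,^xhive-data-directory=.*,xhive-data-directory=/app/xPlore/test-data," /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties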


When this is done, the third step is to change the Lucene temp path:

[xplore@xplore_server_01 ~]$ cat /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/xdb.properties


In this file, xdb.lucene.temp.path defines the path for temporary uncommitted indexes. Therefore it will only be used for temporary indexes, but it is still a good practice to change this location since it also belongs to the data of the DSearch and it helps to keep everything consistent.


Then the next step is to clean the cache and restart the DSearch. You can use your custom start/stop script if you have one or use something like this:

[xplore@xplore_server_01 ~]$ rm -rf /app/xPlore/jboss7.1.1/server/DctmServer_*/tmp/work/*
[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startPrimaryDsearch.sh & sleep 5;mv nohup.out nohup-PrimaryDsearch.out"


Once done, just verify in the log file generated by the start command (for me: /app/xPlore/jboss7.1.1/server/nohup-PrimaryDsearch.out) that the DSearch has been started successfully. If that’s true, then you can also start the IndexAgent:

[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startIndexagent.sh & sleep 5;mv nohup.out nohup-Indexagent.out"


And here we are, the Xhive Database is now located under the “test-data” folder!



Additional note: As said at the beginning of this blog, it is also possible to recreate an empty Xhive Database and change its location at the same time. Recreating an empty DB will result in the same thing as the steps above BUT you will have to perform a full reindexing, which will take a lot of time if this isn’t a new installation (the more documents are indexed, the more time it will take)… To perform this operation, the steps are mostly the same and are summarized below:

  1. Backup the data and config folders
  2. Remove all files inside the config folder except the indexserverconfig.xml
  3. Create a new (empty) data folder with a different name like “test-data” or “new-data” or…
  4. Update the file indexserver-bootstrap.properties with the reference to the new path
  5. Update the file xdb.properties with the reference to the new path
  6. Clean the cache and restart the DSearch+IndexAgents

Basically, the steps are exactly the same except that you don’t need to update the files indexserverconfig.xml and XhiveDatabase.bootstrap. The first one is normally updated by the DSearch automatically and the second file will be recreated from scratch using the right data path thanks to the update of the file indexserver-bootstrap.properties.


Have fun :)



CDB resource plan: shares and utilization_limit

Mon, 2016-10-17 16:00

I’m preparing some slides about PDB security (lockdown) and isolation (resources) for DOAG and, as usual, I have more info to share than what can fit in 45 minutes. In order to avoid the frustration of removing slides, I usually share them in blog posts. Here are the basic concepts of CDB resource plans in multitenant: shares and utilization_limit.

The CDB resource plan is mainly about CPU. It also governs the parallel query degree and the I/O on Exadata, but the main resource is the CPU: sessions that are not allowed to use more CPU will wait on ‘resmgr: cpu quantum’. In a cloud environment where you provision a PDB, like in the new Exadata Express Cloud Service, you need to ensure that one PDB does not take all CDB resources, but you also have to ensure that resources are fairly shared.


Let’s start with the resource limit. This does not depend on the number of PDBs: it is defined as a percentage of the CDB resources. Here I have a CDB with two PDBs and I’ll run a workload on one PDB only. I run 8 sessions, all CPU bound, on PDB1.

I’ve defined a CDB resource plan that sets the resource_limit to 50% for PDB1:

------------------------------------ ------------ ------------------------- ------------------------------ ---------- ----------
14-OCT-16 PM +00:00 MY_CDB_PLAN PDB1 PDB 1 50
14-OCT-16 PM +00:00 MY_CDB_PLAN PDB2 PDB 1 100
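
For reference, a plan like this one can be created with DBMS_RESOURCE_MANAGER. This is only a sketch matching the values displayed above (to be run as SYSDBA in the root container):

sqlplus / as sysdba <<'SQL'
exec dbms_resource_manager.create_pending_area;
exec dbms_resource_manager.create_cdb_plan(plan=>'MY_CDB_PLAN',comment=>'shares and utilization_limit demo');
exec dbms_resource_manager.create_cdb_plan_directive(plan=>'MY_CDB_PLAN',pluggable_database=>'PDB1',shares=>1,utilization_limit=>50);
exec dbms_resource_manager.create_cdb_plan_directive(plan=>'MY_CDB_PLAN',pluggable_database=>'PDB2',shares=>1,utilization_limit=>100);
exec dbms_resource_manager.validate_pending_area;
exec dbms_resource_manager.submit_pending_area;
alter system set resource_manager_plan='MY_CDB_PLAN';
SQL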

This is an upper limit. I’ve 8 CPUs so PDB1 will be allowed to run only 4 sessions in CPU at a time. Here is the result:


What you see here is that when more than the allowed percentage has been used the sessions are scheduled out of CPU and wait on ‘resmgr: cpu quantum’. And the interesting thing is that they seem to be stopped all at the same time:


This makes sense because the suspended sessions may hold resources that are used by others. However, this pattern does not reproduce for every workload. More work and future blog posts are probably required about that.

Well, the goal here is to explain that resource_limit is there to define a maximum resource usage. Even if there is no other activity, you will not be able to use all CDB resources if you have a resource limit lower than 100%.


Shares are there for the opposite reason: to guarantee a minimum of resources to a PDB.
However, the unit is not the same. It cannot be the same. You cannot guarantee a percentage of CDB resources to one PDB because you don’t know how many other PDBs you have. Let’s say you have 4 PDBs and you want to have them equal. You want to define a minimum of 25% for each. But then, what happens when a new PDB is created? You need to change all 25% to 20%. To avoid that, the minimum resources are allocated by shares. You give shares to each PDB and they will get a percentage of resources calculated from their share divided by the total number of shares.

The result is that when there are not enough resources in the CDB to run all the sessions, the PDBs that use more than their share will wait. Here is an example where PDB1 has 2 shares and PDB2 has 1 share, which means that PDB1 will get at least 66% of the resources and PDB2 at least 33%:

------------------------------------ ------------ ------------------------- ------------------------------ ---------- ----------
14-OCT-16 PM +00:00 MY_CDB_PLAN PDB1 PDB 2 100
14-OCT-16 PM +00:00 MY_CDB_PLAN PDB2 PDB 1 100

Here is the ASH on each PDB when I run 8 CPU-bound sessions on each. The system is saturated because I have only 8 CPUs.



Because of the shares difference (2 shares for PDB1 and 1 share for PDB2), PDB1 has been able to use more CPU than PDB2 when the system was saturated:
PDB1 was 72% in CPU and 22% waiting, PDB2 was 50% in CPU and 50% waiting.


In order to illustrate what changes when the system is saturated, I’ve run 16 sessions on PDB1 and then, after 60 seconds, 4 sessions on PDB2.

Here is the activity of PDB1:


and PDB2:


At 22:14 PDB1 was able to use all available CPU because there is no utilization_limit and no other PDB has activity. The system is saturated, but by PDB1 only.
At 22:15 PDB2 also has activity, so the resource manager must limit PDB1 in order to give resources to PDB2 proportionally to its share. PDB1 with 2 shares is guaranteed to be able to use 2/3 of the CPU. PDB2 with 1 share is guaranteed to use 1/3 of it.
At 22:16 PDB1 activity has completed, so PDB2 can use more resources. The 4 sessions need less than the available CPU, so the system is not saturated and there is no wait.

What to remember?

Shares are there to guarantee a minimum of resource utilization when the system is saturated.
Resource_limit is there to set a maximum of resource utilization, whether the system is saturated or not.



Documentum story – Documentum installers fail with various errors

Mon, 2016-10-17 02:00

Some months ago when installing/removing/upgrading several Documentum components, we ended up facing a strange issue (yes I know, another one!). We were able to see these specific errors during the installation or removal of a Docbase, during the installation of a patch for the Content Server, the installation of the Thumbnail Server, aso… The errors we faced change for different installers but in the end, all of these errors were linked to the same issue. The only error that wasn’t completely useless was the one faced during the installation of a new docbase: “Content is not allowed in trailing section”. Yes I know this might not be really meaningful for everybody but this kind of error usually appears when an XML file isn’t formatted properly: some content isn’t allowed at this location in the file…


The strange thing is that these installers were working fine a few days before, so what changed in the meantime exactly? After some research and analysis, I finally found the culprit! One thing that had been added in these few days was D2, which had been installed a few hours before the first error. Now what can be the link between D2 and these errors when running some installers? The first thing to do when there is an issue with D2 on the Content Server is to check the Java Method Server. The first time I saw this error, it was during the installation of a new docbase. As said before, I checked the logs of the Java Method Server and I found the following WARNING which confirmed what I suspected:

2015-10-24 09:39:59,948 UTC WARNING [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-3) JSF1078: Unable to process deployment descriptor for context ''{0}''.: org.xml.sax.SAXParseException; lineNumber: 40; columnNumber: 1; Content is not allowed in trailing section.
        at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:196) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:175) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:394) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:322) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:281) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLScanner.reportFatalError(XMLScanner.java:1459) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLDocumentScannerImpl$TrailingMiscDispatcher.dispatch(XMLDocumentScannerImpl.java:1302) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:845) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:768) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1196) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:555) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.jaxp.SAXParserImpl.parse(SAXParserImpl.java:289) [xercesImpl-2.9.1-jbossas-1.jar:]
        at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) [rt.jar:1.7.0_72]
        at com.sun.faces.config.ConfigureListener$WebXmlProcessor.scanForFacesServlet(ConfigureListener.java:815) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at com.sun.faces.config.ConfigureListener$WebXmlProcessor.<init>(ConfigureListener.java:768) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:178) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3392) [jbossweb-7.0.13.Final.jar:]
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:3850) [jbossweb-7.0.13.Final.jar:]
        at org.jboss.as.web.deployment.WebDeploymentService.start(WebDeploymentService.java:90) [jboss-as-web-7.1.1.Final.jar:7.1.1.Final]
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_72]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_72]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_72]


So the error “Content is not allowed in trailing section” comes from the JMS which isn’t able to properly read the first character of line 40 of an XML “deployment descriptor” file. So which file is that? That’s where the fun begins! There are several deployment descriptors in JBoss like web.xml, jboss-app.xml, jboss-deployment-structure.xml, jboss-web.xml, aso…


The D2 installer updates some configuration files like the server.ini. This is a text file, pretty simple to update, and indeed the file was properly formatted so no issue on this side. Apart from this file, the D2 installer mainly updates XML files like the following ones:
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/META-INF/jboss-deployment-structure.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/META-INF/jboss-deployment-structure.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/modules/emc/d2/lockbox/main/module.xml
  • aso…


At this point, it was pretty simple to figure out the issue: I just checked all these files until I found the wrongly updated/corrupted XML file. And the winner was… the web.xml of the DmMethods application inside ServerApps. The D2 installer usually updates/reads this file but, in the process of doing so, it also corrupts it… It is not a big corruption but it is still annoying since it will prevent some installers from working properly and it will display the error shown above in the Java Method Server. Basically, whenever you have some parsing errors, I would suggest taking a look at the web.xml files across the JMS. In our case the D2 installer added the word “ap” at the end of this file. As you know, an XML file must be well-formed to be readable and “ap” isn’t a correct XML ending tag:

[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>
    <display-name>Documentum Method Invocation Servlet</display-name>
    <description>This servlet is for Java method invocation using the DO_METHOD apply call.</description>
        <description>Documentum Method Invocation Servlet</description>
[dmadmin@content_server_01 ~]$


So to correct this issue, you just have to remove the word “ap” from the end of this file, restart the JMS and finally re-run the failing installer: the issue should be gone. That’s pretty simple but it is still annoying that installers provided by EMC can cause such trouble on their own products.


The errors mentioned above are related to these XML files being wrongly updated by the D2 installer but that’s actually not the only installer that wrongly updates XML files. As far as I remember, the BPM installer and the Thumbnail Server installer can also produce the exact same issue and the reason behind that is probably that the XML files of the Java Method Server on Linux boxes have a wrong FileFormat… We faced this issue with all versions that we installed so far on our different environments: CS 7.2 P02, P05, P16… Each and every time we install a new Documentum Content Server, all XML files of the JMS are using the DOS FileFormat and this prevents the D2/Thumbnail/BPM installers from doing their job.


As a sub-note, I have also seen some issues with the file “jboss-deployment-structure.xml”. Just like the “web.xml” above, this one is also present for all applications deployed under the Java Method Server. Some installers will try to update this file (including D2, in order to configure the Lockbox in it) but again the same issue is happening, mostly because of the wrong FileFormat: I have already seen the whole content of this file simply being removed by a Documentum installer… So before doing anything, I would suggest taking a backup of the JMS as soon as it is installed and running and before installing all additional components like D2, bpm, Thumbnail Server, aso… On Linux, it is pretty easy to see and change the FileFormat of a file. Just open it using “vi” for example and then type “:set ff?”. This will display the current FileFormat and you can then change it using “:set ff=unix”, if needed.
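If you prefer to check and fix all JMS deployment descriptors in one go instead of opening them one by one in “vi”, a small shell loop can do it. This is only a minimal sketch, assuming the deployments live below the DctmServer_MethodServer directory shown above; the sed command simply strips the carriage returns that characterize the DOS FileFormat:

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
[dmadmin@content_server_01 deployments]$ # list the XML files still using the DOS FileFormat (CRLF line terminators)
[dmadmin@content_server_01 deployments]$ find . -name "*.xml" -exec file {} \; | grep -i "CRLF"
[dmadmin@content_server_01 deployments]$ # convert them to the UNIX FileFormat by removing the trailing carriage returns
[dmadmin@content_server_01 deployments]$ find . -name "*.xml" -exec file {} \; | grep -i "CRLF" | cut -d: -f1 | while read f; do sed -i 's/\r$//' "$f"; done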


I don’t remember seeing this kind of behavior before CS 7.2 so maybe it is just linked to this specific version… If you have already seen something similar with a previous version, don’t hesitate to share!


Cet article Documentum story – Documentum installers fail with various errors est apparu en premier sur Blog dbi services.

Documentum story – Monitoring of WebLogic Servers

Sat, 2016-10-15 02:00

As you already know if you are following our Documentum Story, we have been building and managing, for some time now, a huge Documentum Platform with more than 115 servers so far (still growing). To manage this platform properly, we need an efficient monitoring tool. In this blog, I will not talk about Documentum but rather a little bit about the monitoring solution we integrated with Nagios to be able to support all of our WebLogic Servers. For those of you who don’t know it, Nagios is a very popular Open Source monitoring tool launched in 1999. By default Nagios doesn’t provide any interface to monitor WebLogic or Documentum and therefore we chose to build our own script package to properly monitor our Platform.


At the beginning of the project when we were installing the first WebLogic Servers, we used the monitoring scripts coming from the old Platform (a Documentum 6.7 Platform not managed by us). The idea behind these monitoring scripts was the following one:

  • The Nagios Server needs to perform a check of a service
  • The Nagios Server contacts the Nagios Agent which executes the check
  • The Check is starting its own WLST script to retrieve only the value needed for this check (each check calls a different WLST script)
  • The Nagios Agent returns the value to the Nagios Server which is then happy with it


This pretty simple approach was working fine at the beginning when we only had a few WebLogic Servers with not so much to monitor on them… The problem is that the Platform was growing very fast and we quickly started to see a few timeouts on the different checks because Nagios was trying to execute a lot of checks at the same time on the same host. For example on a specific environment, we had two WebLogic Domains running with 4 or 5 Managed Servers for each domain that were hosting a Documentum Application (DA, D2, D2-Config, …). We were monitoring the heapSize, the number of threads, the server state, the number of sessions, the different URLs with and without Load Balancer, aso… for each Managed Server and for the AdminServers too. Therefore we quickly reached a point where 5 or 10 WLST scripts were running at the same time for the monitoring and only the monitoring.


The problem with the WLST script is that it takes a lot of time to initialize itself and start (several seconds) and during that time, 1 or 2 CPUs are fully used only for that. Now correlate this figure with the fact that there are dozens of checks running every 5 minutes for each domain, all starting their own WLST script. In the end, you get a WebLogic Server that is heavily loaded, with a huge CPU consumption only for the monitoring… That might be sufficient for a small installation but it is definitely not the right thing to do for a huge Platform.


Therefore we needed to do something else. To solve this particular problem, I developed a new set of scripts that I integrated with Nagios to replace the old ones. The idea behind these new scripts was that they should provide us at least the same information as the old ones but without starting so many WLST scripts, and they should be easily extensible. I worked on this small development and this is what I came up with:

  • The Nagios Server needs to perform a check of a service
  • The Nagios Server contacts the Nagios Agent which executes the check
  • The Check is reading a log file to find the value needed for this check
  • The Nagios Agent returns the value to the Nagios Server which is then happy with it


Pretty similar, isn’t it? Indeed… And yet so different! The main idea behind this new version is that instead of starting a WLST script for each check, which will fully use 1 or 2 CPUs and last for 2 to 10 seconds (depending on the type of check and on the load), this new version will only read a very short log file (1 log file per check) that contains one line: the result of the check. Reading a log file like that takes a few milliseconds and it doesn’t consume 2 CPUs to do so… Now the remaining question is: how can we handle the process that will populate the log files? Because yes, checking a log file is fast, but how can we ensure that this log file contains the correct data?


To manage that, this is what I did:

  • Creation of a shell script that will:
    • Be executed by the Nagios Agent for each check
    • Check if the WebLogic Domain is running and exit if not
    • Check if the WLST script is running and start it if not
    • Ensure the log file has been updated in the last 5 minutes (meaning the monitoring is running and the value that will be read is correct)
    • Read the log file
    • Analyze the information coming from the log file and return that to the Nagios Agent
  • Creation of a WLST script that will:
    • Be started once, do its job, sleep for 2 minutes and then do it again
    • Retrieve the monitoring values and store that in log files
    • Store error messages in the log files if there is any issue


I will not describe the shell script in more detail because it is just basic shell commands, but I will show you instead an example of a WLST script that can be used to monitor a few things (ThreadPool of all Servers, HeapFree of all Servers, Sessions of all Applications deployed on all Servers):

[nagios@weblogic_server_01 scripts]$ cat DOMAIN_check_weblogic.wls
import sys
from java.lang import Thread
from java.io import File
from java.io import FileOutputStream

# NOTE: directory (location of the monitoring log files), address and port (t3s listen
#       address/port of the AdminServer) are defined at the top of the real script

userConfig=directory + '/DOMAIN_configfile.secure'
userKey=directory + '/DOMAIN_keyfile.secure'

connect(userConfigFile=userConfig, userKeyFile=userKey, url='t3s://' + address + ':' + port)

def setOutputToFile(fileName):
  # redirect the output of the print statements to the given log file (one line per file)
  if sys.stdout != sys.__stdout__:
    sys.stdout.close()
  sys.stdout = open(fileName, 'w')

def setOutputToNull():
  # discard any output that must not end up in a monitoring log file
  if sys.stdout != sys.__stdout__:
    sys.stdout.close()
  sys.stdout = open('/dev/null', 'w')

while 1:
  domainRuntime()
  for server in domainRuntimeService.getServerRuntimes():
    setOutputToFile(directory + '/threadpool_' + domainName + '_' + server.getName() + '.out')
    cd('/ServerRuntimes/' + server.getName() + '/ThreadPoolRuntime/ThreadPoolRuntime')
    print 'threadpool_' + domainName + '_' + server.getName() + '_OUT',get('ExecuteThreadTotalCount'),get('HoggingThreadCount'),get('PendingUserRequestCount'),get('CompletedRequestCount'),get('Throughput'),get('HealthState')
    setOutputToFile(directory + '/heapfree_' + domainName + '_' + server.getName() + '.out')
    cd('/ServerRuntimes/' + server.getName() + '/JVMRuntime/' + server.getName())
    print 'heapfree_' + domainName + '_' + server.getName() + '_OUT',get('HeapFreeCurrent'),get('HeapSizeCurrent'),get('HeapFreePercent')

  try:
    setOutputToFile(directory + '/sessions_' + domainName + '_console.out')
    # sessions of the Administration Console on the AdminServer (adapt the path if needed)
    cd('/ServerRuntimes/AdminServer/ApplicationRuntimes/consoleapp/ComponentRuntimes/AdminServer_/console')
    print 'sessions_' + domainName + '_console_OUT',get('OpenSessionsCurrentCount'),get('SessionsOpenedTotalCount')
  except WLSTException,e:
    setOutputToFile(directory + '/sessions_' + domainName + '_console.out')
    print 'CRITICAL - The Server AdminServer or the Administrator Console is not started'

  serverConfig()
  cd('/')
  for app in cmo.getAppDeployments():
    serverConfig()
    cd('/AppDeployments/' + app.getName())
    for appServer in cmo.getTargets():
      try:
        domainRuntime()
        setOutputToFile(directory + '/sessions_' + domainName + '_' + app.getName() + '.out')
        cd('/ServerRuntimes/' + appServer.getName() + '/ApplicationRuntimes/' + app.getName() + '/ComponentRuntimes/' + appServer.getName() + '_/' + app.getName())
        print 'sessions_' + domainName + '_' + app.getName() + '_OUT',get('OpenSessionsCurrentCount'),get('SessionsOpenedTotalCount')
      except WLSTException,e:
        setOutputToFile(directory + '/sessions_' + domainName + '_' + app.getName() + '.out')
        print 'CRITICAL - The Managed Server ' + appServer.getName() + ' or the Application ' + app.getName() + ' is not started'

  # stop writing to the last log file and wait 2 minutes before the next round of checks
  setOutputToNull()
  Thread.sleep(120000)

[nagios@weblogic_server_01 scripts]$


A few notes related to the above WLST script:

  • userConfig and userKey are two files created previously in WLST that contain the username/password of the current user (at the time of creation of these files) in an encrypted way. This allows you to login to WLST without having to type your username and password and more importantly, without having to put a clear text password in this file…
  • To ensure the security of this environment we are always using t3s to perform the monitoring checks and this requires you to configure the AdminServer to HTTPS.
  • In the script, I’m using the “setOutputToFile” and “setOutputToNull” functions. The first one is to redirect the output to the file mentioned in parameter while the second one is to remove all output. That’s basically to ensure that the log files generated ONLY contain the needed lines and nothing else.
  • There is an infinite loop (while 1) that executes all checks, creates/updates all log files and then sleeps for 120 000 ms (so that’s 2 minutes) before repeating it.


As said above, this is easily extendable and therefore you can just add a new paragraph with the new values to retrieve. So have fun with that! :)


Comparison between the two methods: I will use real figures below, coming from one of our WebLogic Servers:

  • Old:
    • 40 monitoring checks running every 5 minutes => 40 WLST scripts started
    • each one for a duration of 6 seconds (average)
    • each one using 200% CPU during that time (2 CPUs)
  • New:
    • Shell script:
      • 40 monitoring checks running every 5 minutes => 40 log files read
      • each one for a duration of 0.1s (average)
      • each one using 100% CPU during that time (1 CPU)
    • WLST script:
      • One loop every 2 minutes (so 2.5 loops in 5 minutes)
      • each one for a duration of 0.5s (average)
      • each one using 100% CPU during that time (1 CPU)


Period      CPU Time (Old)                 CPU Time (New)
5 minutes   40*6*2 <~> 480 s               40*0.1*1 + 2.5*0.5*1 <~> 5.25 s
1 day       480*(1440/5) <~> 138 240 s     5.25*(1440/5) <~> 1 512 s
            <~> 2 304 min                  <~> 25.2 min
            <~> 38.4 h                     <~> 0.42 h

Based on these figures, we can see that our new monitoring solution is almost 100 times more efficient than the old one, so that’s a success: instead of spending 38.4 hours of CPU time over a 24-hour period (so that’s 1.6 CPUs used the whole day), we are now using 1 CPU for only 25 minutes! Here I’m just talking about the CPU Time but of course you can do the same comparison for the memory, processes, aso…


Note: Starting with WebLogic 12c, Oracle introduced the RESTful Management Services which can now be used to monitor WebLogic too… They have been improved in 12.2 and can become a pretty good alternative to WLST scripting, but for now we are still using this WLST approach with one single execution every 2 minutes and Nagios reading the log files when need be.
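As a quick, hedged illustration of that alternative (the exact URLs depend on the WebLogic version, the RESTful Management Services must first be enabled on the domain, and the host, port and credentials below are placeholders), the server states can be retrieved with a simple HTTP call:

[nagios@weblogic_server_01 scripts]$ curl -s -u monitoring_user:password \
  -H "Accept: application/json" \
  "http://weblogic_server_01:7001/management/tenant-monitoring/servers"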


Cet article Documentum story – Monitoring of WebLogic Servers est apparu en premier sur Blog dbi services.

Using Apache JMeter to run load tests on a Web Application protected by Microsoft Active Directory Federation Services

Fri, 2016-10-14 09:37

One of my last missions was to configure Apache JMeter for performance and load tests on a Web Application. The access to this Web Application requires authentication provided by a Microsoft Active Directory Federation Services (ADFS) Single Sign-On environment.
This Single Sign-On communication is based on SAML (Security Assertion Markup Language). SAML is an XML-based open standard data format for exchanging authentication and authorization data between parties, in particular between an identity provider and a service provider. The ADFS login steps rely on several parameters that need to be fetched and re-injected into the following steps, like ‘SAMLRequest’, ‘RelayState’ and ‘SAMLResponse’.
This step-by-step tutorial shows the SAML JMeter scenario part to perform those ADFS login steps.

Record a first scenario

After installing the Apache JMeter tool, you are ready to record a first scenario. Have a look on the JMeter user manual to configure JMeter for recording scenario.

1. Adapt the HTTP(s) Test Script Recorder

For this task we need to record all HTTP(S) requests: those from the Application and those from the Single Sign-On Server. We therefore need to change the HTTP(S) Test Script Recorder parameters as below.

Open the “WorkBench” on the tree and click on the “HTTP(S) Test Script Recorder”.

The scenario recording requires some changes onto the “HTTP(S) Test Script Recorder”.
Change the:
Port:  this is the port on the server running JMeter that will act as proxy. Default value is 8080.
URL Patterns to Include: Add “.*” to include all requests (you may exclude some later, if you desire).

2.    Configure the Browser to use the Test Script recorder as proxy

Go to your favourite browser (Firefox, Internet Explorer, Chrome, etc.) and configure the proxy as follows:
The example below is for Internet Explorer 11 (it may differ from version to version):

  1. Go to Tools > Internet Options.
  2. Select the “Connection” tab.
  3. Click the “LAN settings” button.
  4. Check the  “Use a proxy server for your LAN” check-box. The address and port fields should be enabled now.
  5. In the Address type the server name or the IP address of the server running JMeter HTTP(S) Test Script Recorder and in the Port, enter the port entered in Step 1.

From now, the JMeter is proxying the requests.

3.    Record your first scenario

Connect to the Web Application using the browser you have configured in the previous step. Run a simple scenario including the authentication steps. Once done, stop the HTTP(S) Test Script Recorder in JMeter.

4.    Analyse the recorded entries

Analyse the recorded entries to find out the entry that redirects to the login page. In this specific case, it was the first request because the Web Application automatically displays the login page for all users not yet authenticated. From this request, we need to fetch two values, ‘SAMLRequest’ and ‘RelayState’, included in the page response data and submit them to the ADFS login URL. After a successful login, ADFS provides a SAMLResponse that needs to be submitted back to the callback URL. This can be done by using Regular Expression Extractors, as summarized in the table below.


Extractor Name           Associated variable   Regular Expression
SAMLRequest Extractor    SAMLRequest           name="SAMLRequest" value="([0-9A-Za-z;.: \/=+]*)"
RelayState Extractor     RelayState            name="RelayState" value="([&#;._a-zA-Z0-9]*)"
SAMLResponse Extractor   SAMLResponse          name="SAMLResponse" value="([&#;._+=a-zA-Z0-9]*)"

In the recorded scenario, look for the entries having SAMLRequest, RelayState and SAMLResponse as parameters and replace their values with the corresponding variables set by the Regular Expression Extractors created in the previous step.


Once this is done, the login test scenario can be executed.

This JMeter test plan can be cleaned from the URL requests and be used as a base plan to record more complex test plans.


Cet article Using Apache JMeter to run load tests on a Web Application protected by Microsoft Active Directory Federation Services est apparu en premier sur Blog dbi services.

Manage Azure in PowerShell (RM)

Fri, 2016-10-14 02:49

Azure offers two deployment models for cloud components: Resource Manager (RM) and the Classic deployment model. Newer and easier to manage, Resource Manager is the model Microsoft recommends to use.
Even if these two models can coexist in Azure, they are different and managed differently: in PowerShell, the cmdlets are specific to RM.

In order to be able to communicate with Azure from On-Premises in PowerShell, you need to download and install the Azure PowerShell from WebPI. For more details, please refer to this Microsoft Azure post “How to install and configure Azure PowerShell“.
Azure PowerShell installs many modules located in C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell:
Get-module -ListAvailable -Name *AzureRm*
ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Manifest 1.1.3 AzureRM.ApiManagement {Add-AzureRmApiManagementRegion, Get-AzureRmApiManagementSsoToken, New-AzureRmApiManagementHostnam...
Manifest 1.0.11 AzureRM.Automation {Get-AzureRMAutomationHybridWorkerGroup, Get-AzureRmAutomationJobOutputRecord, Import-AzureRmAutom...
Binary 0.9.8 AzureRM.AzureStackAdmin {Get-AzureRMManagedLocation, New-AzureRMManagedLocation, Remove-AzureRMManagedLocation, Set-AzureR...
Manifest 0.9.9 AzureRM.AzureStackStorage {Add-ACSFarm, Get-ACSEvent, Get-ACSEventQuery, Get-ACSFarm...}
Manifest 1.0.11 AzureRM.Backup {Backup-AzureRmBackupItem, Enable-AzureRmBackupContainerReregistration, Get-AzureRmBackupContainer...
Manifest 1.1.3 AzureRM.Batch {Remove-AzureRmBatchAccount, Get-AzureRmBatchAccount, Get-AzureRmBatchAccountKeys, New-AzureRmBatc...
Manifest 1.0.5 AzureRM.Cdn {Get-AzureRmCdnCustomDomain, New-AzureRmCdnCustomDomain, Remove-AzureRmCdnCustomDomain, Get-AzureR...
Manifest 0.1.2 AzureRM.CognitiveServices {Get-AzureRmCognitiveServicesAccount, Get-AzureRmCognitiveServicesAccountKey, Get-AzureRmCognitive...
Manifest 1.3.3 AzureRM.Compute {Remove-AzureRmAvailabilitySet, Get-AzureRmAvailabilitySet, New-AzureRmAvailabilitySet, Get-AzureR...
Manifest 1.0.11 AzureRM.DataFactories {Remove-AzureRmDataFactory, Get-AzureRmDataFactoryRun, Get-AzureRmDataFactorySlice, Save-AzureRmDa...
Manifest 1.1.3 AzureRM.DataLakeAnalytics {Get-AzureRmDataLakeAnalyticsDataSource, Remove-AzureRmDataLakeAnalyticsCatalogSecret, Set-AzureRm...
Manifest 1.0.11 AzureRM.DataLakeStore {Add-AzureRmDataLakeStoreItemContent, Export-AzureRmDataLakeStoreItem, Get-AzureRmDataLakeStoreChi...
Manifest 1.0.2 AzureRM.DevTestLabs {Get-AzureRmDtlAllowedVMSizesPolicy, Get-AzureRmDtlAutoShutdownPolicy, Get-AzureRmDtlAutoStartPoli...
Manifest 1.0.11 AzureRM.Dns {Get-AzureRmDnsRecordSet, New-AzureRmDnsRecordConfig, Remove-AzureRmDnsRecordSet, Set-AzureRmDnsRe...
Manifest 1.1.3 AzureRM.HDInsight {Get-AzureRmHDInsightJob, New-AzureRmHDInsightSqoopJobDefinition, Wait-AzureRmHDInsightJob, New-Az...
Manifest 1.0.11 AzureRM.Insights {Add-AzureRmMetricAlertRule, Add-AzureRmLogAlertRule, Add-AzureRmWebtestAlertRule, Get-AzureRmAler...
Manifest 1.1.10 AzureRM.KeyVault {Get-AzureRmKeyVault, New-AzureRmKeyVault, Remove-AzureRmKeyVault, Remove-AzureRmKeyVaultAccessPol...
Manifest 1.0.7 AzureRM.LogicApp {Get-AzureRmIntegrationAccountAgreement, Get-AzureRmIntegrationAccountCallbackUrl, Get-AzureRmInte...
Manifest 0.9.2 AzureRM.MachineLearning {Export-AzureRmMlWebService, Get-AzureRmMlWebServiceKeys, Import-AzureRmMlWebService, Remove-Azure...
Manifest 1.0.12 AzureRM.Network {Add-AzureRmApplicationGatewayBackendAddressPool, Get-AzureRmApplicationGatewayBackendAddressPool,...
Manifest 1.0.11 AzureRM.NotificationHubs {Get-AzureRmNotificationHubsNamespaceAuthorizationRules, Get-AzureRmNotificationHubsNamespaceListK...
Manifest 1.0.11 AzureRM.OperationalInsights {Get-AzureRmOperationalInsightsSavedSearch, Get-AzureRmOperationalInsightsSavedSearchResults, Get-...
Manifest 1.0.0 AzureRM.PowerBIEmbedded {Remove-AzureRmPowerBIWorkspaceCollection, Get-AzureRmPowerBIWorkspaceCollection, Get-AzureRmPower...
Manifest 1.0.11 AzureRM.Profile {Enable-AzureRmDataCollection, Disable-AzureRmDataCollection, Remove-AzureRmEnvironment, Get-Azure...
Manifest 1.1.3 AzureRM.RecoveryServices {Get-AzureRmRecoveryServicesBackupProperties, Get-AzureRmRecoveryServicesVault, Get-AzureRmRecover...
Manifest 1.0.3 AzureRM.RecoveryServices.Backup {Backup-AzureRmRecoveryServicesBackupItem, Get-AzureRmRecoveryServicesBackupManagementServer, Get-...
Manifest 1.1.9 AzureRM.RedisCache {Reset-AzureRmRedisCache, Export-AzureRmRedisCache, Import-AzureRmRedisCache, Remove-AzureRmRedisC...
Manifest 2.0.2 AzureRM.Resources {Get-AzureRmADApplication, Get-AzureRmADGroupMember, Get-AzureRmADGroup, Get-AzureRmADServicePrinc...
Manifest 1.0.2 AzureRM.ServerManagement {Install-AzureRmServerManagementGatewayProfile, Reset-AzureRmServerManagementGatewayProfile, Save-...
Manifest 1.1.10 AzureRM.SiteRecovery {Stop-AzureRmSiteRecoveryJob, Get-AzureRmSiteRecoveryNetwork, Get-AzureRmSiteRecoveryNetworkMappin...
Manifest 1.0.11 AzureRM.Sql {Get-AzureRmSqlDatabaseImportExportStatus, New-AzureRmSqlDatabaseExport, New-AzureRmSqlDatabaseImp...
Manifest 1.1.3 AzureRM.Storage {Get-AzureRmStorageAccount, Get-AzureRmStorageAccountKey, Get-AzureRmStorageAccountNameAvailabilit...
Manifest 1.0.11 AzureRM.StreamAnalytics {Get-AzureRmStreamAnalyticsFunction, Get-AzureRmStreamAnalyticsDefaultFunctionDefinition, New-Azur...
Manifest 1.0.11 AzureRM.Tags {Remove-AzureRmTag, Get-AzureRmTag, New-AzureRmTag}
Manifest 1.0.11 AzureRM.TrafficManager {Disable-AzureRmTrafficManagerEndpoint, Enable-AzureRmTrafficManagerEndpoint, Set-AzureRmTrafficMa...
Manifest 1.0.11 AzureRM.UsageAggregates Get-UsageAggregates
Manifest 1.1.3 AzureRM.Websites {Get-AzureRmAppServicePlanMetrics, New-AzureRmWebAppDatabaseBackupSetting, Restore-AzureRmWebAppBa...

The basic cmdlets to connect and navigate between your different Accounts or Subscriptions are located in “AzureRM.Profile” module:
PS C:\> Get-Command -Module AzureRM.Profile
CommandType Name Version Source
----------- ---- ------- ------
Alias Login-AzureRmAccount 1.0.11 AzureRM.Profile
Alias Select-AzureRmSubscription 1.0.11 AzureRM.Profile
Cmdlet Add-AzureRmAccount 1.0.11 AzureRM.Profile
Cmdlet Add-AzureRmEnvironment 1.0.11 AzureRM.Profile
Cmdlet Disable-AzureRmDataCollection 1.0.11 AzureRM.Profile
Cmdlet Enable-AzureRmDataCollection 1.0.11 AzureRM.Profile
Cmdlet Get-AzureRmContext 1.0.11 AzureRM.Profile
Cmdlet Get-AzureRmEnvironment 1.0.11 AzureRM.Profile
Cmdlet Get-AzureRmSubscription 1.0.11 AzureRM.Profile
Cmdlet Get-AzureRmTenant 1.0.11 AzureRM.Profile
Cmdlet Remove-AzureRmEnvironment 1.0.11 AzureRM.Profile
Cmdlet Save-AzureRmProfile 1.0.11 AzureRM.Profile
Cmdlet Select-AzureRmProfile 1.0.11 AzureRM.Profile
Cmdlet Set-AzureRmContext 1.0.11 AzureRM.Profile
Cmdlet Set-AzureRmEnvironment 1.0.11 AzureRM.Profile

With the cmdlets of the “AzureRM.Profile” module, you can connect to your Azure Account (you will be prompted for your credentials):
PS C:\> Login-AzureRmAccount
Environment : AzureCloud
Account : n.courtine@xxxxxx.com
TenantId : a123456b-789b-123c-4de5-67890fg123h4
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
SubscriptionName : ** Subscription Name **
CurrentStorageAccount :

You can list your associated Azure Subscriptions:
PS C:\> Get-AzureRmSubscription
SubscriptionName : ** Subscription Name **
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
TenantId : a123456b-789b-123c-4de5-67890fg123h4

To switch your Subscription, do as follows:
Select-AzureRmSubscription -SubscriptionId z123456y-789x-123w-4vu5-67890ts123r4
Environment : AzureCloud
Account : n.courtine@xxxxxx.com
TenantId : a123456b-789b-123c-4de5-67890fg123h4
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
SubscriptionName : ** Subscription Name **
CurrentStorageAccount :

Or you can take a specific “snapshot” of your current location in Azure. It will help you to easily return to a specific context at the moment you ran the command:
PS C:\> $context = Get-AzureRmContext
Environment : AzureCloud
Account : n.courtine@xxxxxx.com
TenantId : a123456b-789b-123c-4de5-67890fg123h4
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
SubscriptionName : ** Subscription Name **
CurrentStorageAccount :
PS C:\> Set-AzureRmContext -Context $context
Environment : AzureCloud
Account : n.courtine@xxxxxx.com
TenantId : a123456b-789b-123c-4de5-67890fg123h4
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
SubscriptionName : ** Subscription Name **
CurrentStorageAccount :
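If you do not want to re-enter your credentials for every new PowerShell session, the same module also provides Save-AzureRmProfile and Select-AzureRmProfile (listed in the cmdlets above). A minimal sketch, assuming C:\temp is a suitable location for the profile file:
PS C:\> Save-AzureRmProfile -Path C:\temp\azureprofile.json
PS C:\> Select-AzureRmProfile -Path C:\temp\azureprofile.json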

It is also possible to list all the available Storage Accounts associated with your current subscription:
PS C:\> Get-AzureRmStorageAccount | Select StorageAccountName, Location
StorageAccountName Location
------------------ --------
semicroustillants259 westeurope
semicroustillants4007 westeurope
semicroustillants8802 westeurope

To see the existing blob containers in each Storage Account, one possible way is to pipe each account’s storage context into Get-AzureStorageContainer (output shortened):
PS C:\> Get-AzureRmStorageAccount | ForEach-Object { "Blob End Point: " + $_.Context.BlobEndPoint; Get-AzureStorageContainer -Context $_.Context }
Blob End Point: https://dbimssql.blob.core.windows.net/
Name Uri LastModified
---- --- ------------
bootdiagnostics-t... https://dbimssql.blob.core.windows.net/bootdiagnostics-ta... 30.09.2016 12:36:12 +00:00
demo https://dbimssql.blob.core.windows.net/demo 05.10.2016 14:16:01 +00:00
vhds https://dbimssql.blob.core.windows.net/vhds 30.09.2016 12:36:12 +00:00
Blob End Point: https://semicroustillants259.blob.core.windows.net/
Name Uri LastModified
---- --- ------------
mastervhds https://semicroustillants259.blob.core.windows.net/master... 28.09.2016 13:41:19 +00:00
uploads https://semicroustillants259.blob.core.windows.net/uploads 28.09.2016 13:41:19 +00:00
vhds https://semicroustillants259.blob.core.windows.net/vhds 28.09.2016 13:55:57 +00:00
Blob End Point: https://semicroustillants4007.blob.core.windows.net/
Name Uri LastModified
---- --- ------------
artifacts https://semicroustillants4007.blob.core.windows.net/artif... 28.09.2016 13:59:47 +00:00

Azure infrastructure can easily be managed from On-Premises in PowerShell. In a previous post, I explained how to deploy a Virtual Machine from an Image in Azure PowerShell.
If you have remarks or advice, do not hesitate to share ;-)


Cet article Manage Azure in PowerShell (RM) est apparu en premier sur Blog dbi services.

Documentum story – IAPI login with a DM_TICKET for a specific user

Fri, 2016-10-14 02:00

During our last project, one of the Application Teams requested our help because they needed to execute some commands in IAPI with a specific user for which they didn’t know the password. They tried to use a DM_TICKET as I suggested but they weren’t able to do so. Therefore I gave them detailed explanations of how to do that and I thought I would do the same in this blog because a lot of people might not know how to do it.


So let’s begin! The first thing to do is obviously to obtain a DM_TICKET… For that purpose, you can log in to the Content Server and use the local trust to log in to the docbase with the Installation Owner (I will use “dmadmin” below). Thanks to this local trust, you can put any password: the login will always work for the Installation Owner (if the docbroker and docbase are up, of course…):

[dmadmin@content_server_01 ~]$ iapi DOCBASE -Udmadmin -Pxxx
        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2015
        All rights reserved.
        Client Library Release 7.2.0050.0084
Connecting to Server using docbase DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 013f245a8000b7ff started for user dmadmin."
Connected to Documentum Server running Release 7.2.0050.0214  Linux64.Oracle
Session id is s0


Ok so now we have a session with the Installation Owner and we can therefore get a DM_TICKET for the specific user I was talking about before. In this blog, I will use “adtsuser” as the “specific user” (ADTS user used for renditions). Getting a DM_TICKET is really simple in IAPI:

API> getlogin,c,adtsuser
API> exit


Now we do have a DM_TICKET for the user “adtsuser” so let’s try to use it. You could try to log in the “common” way as I did above but that will just not work because what we got is a DM_TICKET and not a valid password. Therefore you will need to use something else:

[dmadmin@content_server_01 ~]$ iapi -Sapi
Running with non-standard init level: api


Pretty simple, right? So let’s try to use our session like we always do:

API> retrieve,c,dm_server_config
API> dump,c,l

  object_name                                   : DOCBASE
  title                                         : 


And that’s it: you have a working session with a specific user without needing to know any password, you just have to obtain a DM_TICKET for this user using the local trust of the Installation Owner!


Cet article Documentum story – IAPI login with a DM_TICKET for a specific user est apparu en premier sur Blog dbi services.

How to destroy your performance: PL/SQL vs SQL

Fri, 2016-10-14 00:11

Disclaimer: This is in no way a recommendation to avoid PL/SQL. This post just describes a case I faced at a customer with a specific implementation in PL/SQL which the customer (and me) believed was the most efficient way of doing it in PL/SQL. This was a very good reminder for myself to check the documentation and to verify that what I believe a feature does is really what the feature is actually doing. When I was doing PL/SQL full time in one of my previous jobs I used the feature heavily without really thinking about what happened in the background. Always keep learning …

Let's start by building the test case. The issue was on Linux but I think this will be reproducible on any release (although, never be sure :) ).

SQL> create table t1 as select * from dba_objects;
SQL> insert into t1 select * from t1;
SQL> /
SQL> /
SQL> /
SQL> /
SQL> /
SQL> commit;
SQL> select count(*) from t1;


SQL> create table t2 as select object_id from t1 where mod(object_id,33)=0;
SQL> select count(*) from t2;


These are my two tables used for the test: t1 contains around 5.5 million rows and t2 contains 168896 rows. Coming to the issue: there is a procedure which does this:

create or replace procedure test_update
is
  cursor c1 is select object_id from t2;
  type tab is table of t2.object_id%type index by pls_integer;
  ltab tab;
begin
  open c1;
  fetch c1 bulk collect into ltab;
  close c1;
  forall indx in 1..ltab.count
    update t1 set owner = 'AAA' where object_id = ltab(indx);
end test_update;
/

The procedure uses “bulk collect” and “forall” to fetch the keys from t2 in a first step and then uses these keys to update t1 in a second step. It seemed pretty well done: no loop over each single row, just one bulk fetch and then one bulk update for the matching rows. I really couldn’t see an issue here. But when you execute this procedure you’ll wait for ages (at least if you are in a VM running on a notebook and not on super fast hardware).

The situation at the customer was that I was told that the update, when executed as plain SQL in sqlplus, took less than a second. And really, when you execute this on the test case from above:

SQL> update t1 set owner = 'AAA' where object_id in ( select object_id from t2 );

168896 rows updated.

Elapsed: 00:00:05.30
SQL> rollback;

Rollback complete.

Elapsed: 00:00:02.44
SQL> update t1 set owner = 'AAA' where object_id in ( select object_id from t2 );

168896 rows updated.

Elapsed: 00:00:06.34
SQL> rollback;

Rollback complete.

Elapsed: 00:00:02.70

It is quite fast (between 5 and 6 seconds in my environment). So why is the PL/SQL version so much slower? Aren’t “bulk collect” and “forall” the right methods to boost performance? Let’s take a look at the execution plan for the plain SQL version:

| Id  | Operation             | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |    A-Time     | Buffers | Reads  |  OMem |  1Mem |  O/1/M|
|   0 | UPDATE STATEMENT      |          |      1 |       |       | 24303 (100)|          |       0 |00:00:04.52    |     259K|   9325 |       |       |       |
|   1 |  UPDATE               | T1       |      1 |       |       |            |          |       0 |00:00:04.52    |     259K|   9325 |       |       |       |
|*  2 |   HASH JOIN           |          |      1 |    48 |  4416 | 24303   (1)| 00:00:01 |     168K|00:00:01.76    |   86719 |   9325 |  2293K|  2293K|  1/0/0|
|   3 |    VIEW               | VW_NSO_1 |      1 |   161K|  2044K|    72   (2)| 00:00:01 |    2639 |00:00:00.05    |     261 |     78 |       |       |       |
|   4 |     SORT UNIQUE       |          |      1 |     1 |  2044K|            |          |    2639 |00:00:00.04    |     261 |     78 |   142K|   142K|  1/0/0|
|   5 |      TABLE ACCESS FULL| T2       |      1 |   161K|  2044K|    72   (2)| 00:00:01 |     168K|00:00:00.01    |     261 |     78 |       |       |       |
|   6 |    TABLE ACCESS FULL  | T1       |      1 |  5700K|   429M| 23453   (1)| 00:00:01 |    5566K|00:00:05.88    |   86458 |   9247 |       |       |       |

It is doing a hash join as expected. What about the PL/SQL version? It is doing this:

SQL_ID  4hh65t1u4basp, child number 0

Plan hash value: 2927627013

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | UPDATE STATEMENT   |      |       |       | 23459 (100)|          |
|   1 |  UPDATE            | T1   |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   951 | 75129 | 23459   (1)| 00:00:01 |

Uh! Why that? This is what I wasn’t aware of. I always thought that when you use “forall” to send PL/SQL’s SQL to the SQL engine, Oracle would rewrite the statement to expand the list in the where clause or do other optimizations. But this does not happen. The only optimization that takes place when you use “forall” is that the statements are sent in batches to the SQL engine rather than being sent one after another. What happens here is that you execute 168896 full table scans because the same statement (with another bind variable value) is executed 168896 times. That can’t be really fast compared to the SQL version.

Of course you could rewrite the procedure to do the same as the SQL (see the sketch below) but this is not the point here. The point is: when you think what you have implemented in PL/SQL is the same as what you compare it to when you run it in SQL, better think twice and, even better, read the f* manuals, even when you think you are sure what a feature really does :)
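For completeness, a minimal sketch of such a set-based rewrite (the procedure name is just illustrative, the update statement is the plain SQL one used above):

create or replace procedure test_update_set_based
is
begin
  -- one set-based statement: the optimizer can use the hash join shown above
  -- instead of running 168896 separate full table scans against t1
  update t1 set owner = 'AAA'
   where object_id in ( select object_id from t2 );
end test_update_set_based;
/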


Cet article How to destroy your performance: PL/SQL vs SQL est apparu en premier sur Blog dbi services.

MariaDB: audit plugin

Thu, 2016-10-13 11:19
Why should you Audit your MySQL Instances?

  • First, to provide a way to track users accessing sensitive data
  • Secondly, to investigate suspicious queries in all your critical databases
  • Thirdly, to comply with laws and industry standards

The MariaDB Audit Plugin can help you to log all or part of the server activity, such as:
– who was connected and at which time
– which databases and tables were accessed
– which action/event (CONNECT, TABLE,…)
– which queries were run
All of them can be stored in a dedicated audit log file

This Plugin provides auditing not only for MariaDB where it is included by default, but also for Percona Server and
even for Oracle MySQL when using the community version

Installation on MariaDB

The MariaDB Audit Plugin library is shipped with the MariaDB server
Once the server is installed, you still need to locate your plugin directory and install/load the plugin
mysqld7-[MariaDB]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/ |
mysqld7-[MariaDB]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';

Check then if it has been loaded
mysqld7-[MariaDB]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

Configuration of the important audit system variables

You can either set them manually with SET GLOBAL or, as I recommend, define them in the option file (my.cnf)
## Audit Plugin MariaDB
server_audit_events = CONNECT,QUERY,TABLE # specifies the types of events to log
server_audit_logging = ON # Enable logging
server_audit = FORCE_PLUS_PERMANENT # Load the plugin at startup & prevent it from being removed
server_audit_file_path = /u00/app/mysql/admin/mysqld8/log/mysqld8-audit.log # Path & log name
server_audit_output_type = FILE # separate log file
server_audit_file_rotate_size = 100000 # Limit of the log size in Bytes before rotation
server_audit_file_rotations = 9 # Number of audit files before the first will be overwritten
server_audit_excl_users = root # User(s) not audited

Restart the server and check the audit system variables
mysqld7-[MariaDB]>show global variables like "server_audit%";
| Variable_name | Value |
| server_audit_events | CONNECT,QUERY,TABLE |
| server_audit_excl_users | root |
| server_audit_file_path | /u00/app/mysql/admin/mysqld7/log/mysqld7-audit.log |
| server_audit_file_rotate_now | OFF |
| server_audit_file_rotate_size | 10000 |
| server_audit_file_rotations | 9 |
| server_audit_incl_users | sme |
| server_audit_loc_info | |
| server_audit_logging | ON |
| server_audit_mode | 0 |
| server_audit_output_type | file |
| server_audit_query_log_limit | 1024 |
| server_audit_syslog_facility | LOG_USER |
| server_audit_syslog_ident | mysql-server_auditing |
| server_audit_syslog_info | |
| server_audit_syslog_priority | LOG_INFO |
16 rows in set (0.00 sec)
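As mentioned above, most of these variables are dynamic, so they can also be adjusted at runtime with SET GLOBAL if you do not want to wait for a restart (a quick example, assuming the plugin is already loaded):
mysqld7-[MariaDB]>SET GLOBAL server_audit_logging = ON;
mysqld7-[MariaDB]>SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';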

Audit log file

In the audit log file directory, you will find one current audit file and 9 archived audit log files, as defined by the parameter server_audit_file_rotations
mysql@MariaDB:/u00/app/mysql/admin/mysqld7/log/ [mysqld7] ll
total 344
-rw-rw----. 1 mysql mysql 242 Oct 13 10:35
-rw-rw----. 1 mysql mysql 556 Oct 13 10:35 mysqld7-audit.log
-rw-rw----. 1 mysql mysql 10009 Oct 13 10:35 mysqld7-audit.log.1
-rw-rw----. 1 mysql mysql 10001 Oct 12 14:22 mysqld7-audit.log.2
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:47 mysqld7-audit.log.3
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:41 mysqld7-audit.log.4
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:34 mysqld7-audit.log.5
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:28 mysqld7-audit.log.6
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:22 mysqld7-audit.log.7
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:15 mysqld7-audit.log.8
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:09 mysqld7-audit.log.9

You can check the latest records in the current one
mysqld7-[MariaDB]>tail -f mysqld7-audit.log
20161013 14:22:57,MYSQL,sme,localhost,31,179,QUERY,employees,'create tables tst(name varchar(20))',1064
20161013 14:23:05,MYSQL,sme,localhost,31,180,CREATE,employees,tst,
20161013 14:23:05,MYSQL,sme,localhost,31,180,QUERY,employees,'create table tst(name varchar(20))',0
20161013 14:23:41,MYSQL,sme,localhost,31,181,WRITE,employees,tst,
20161013 14:23:41,MYSQL,sme,localhost,31,181,QUERY,employees,'insert into tst values(\'toto\')',0
20161013 14:24:26,MYSQL,sme,localhost,31,182,WRITE,employees,tst,
20161013 14:24:26,MYSQL,sme,localhost,31,182,QUERY,employees,'update tst set name=\'titi\'',0
20161013 14:24:53,MYSQL,sme,localhost,31,183,QUERY,employees,'delete from tst',1142
20161013 14:48:34,MYSQL,sme,localhost,33,207,READ,employees,tst,
20161013 14:48:34,MYSQL,sme,localhost,33,207,QUERY,employees,'select * from tst',0
20161013 15:35:16,MYSQL,sme,localhost,34,0,FAILED_CONNECT,,,1045
20161013 15:35:16,MYSQL,sme,localhost,34,0,DISCONNECT,,,0
20161013 15:35:56,MYSQL,sme,localhost,35,0,CONNECT,,,0
20161013 15:35:56,MYSQL,sme,localhost,35,210,QUERY,,'select @@version_comment limit 1',0
20161013 15:36:03,MYSQL,sme,localhost,35,0,DISCONNECT,,,0

The audit log file, which is in plain-text format, contains the following comma-separated fields
timestamp : 20161012 10:33:58
serverhost : MYSQL
user : sme
host : localhost
connection id: 9
query id : 77
operation : QUERY
database : employees
object : ‘show databases’ # Executed query if QUERY or table name if TABLE operation.
return code : 0

Installation on Percona or MySQL

It is quite the same as on MariaDB
Once your server is installed, you have to locate your plugin directory
mysqld8-[Percona]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/Percona-Server-5.7.14-7-Linux.x86_64.ssl101/lib/mysql/plugin/ |

mysqld10-[MySQL]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
| Variable_name | Value |
| plugin_dir | /u00/app/mysql/product/mysql-5.6.14-linux-glibc2.5-x86_64/lib/plugin/ |

then copy the MariaDB plugin server_audit.so into the Percona/MySQL plugin directory
mysql@Percona:[mysqld8] cp /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/server_audit.so /u00/app/mysql/product/Percona-Server-5.7.14-7-Linux.x86_64.ssl101/lib/mysql/plugin/

mysql@MySQL:[mysqld10] cp /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/server_audit.so /u00/app/mysql/product/mysql-5.6.14-linux-glibc2.5-x86_64/lib/plugin/

Install/load the plugin & check
mysqld8-[(Percona)]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';
mysqld8-[Percona]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

mysqld10-[MySQL]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';
mysqld10-[MySQL]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
**************************** 1. row ***************************
PLUGIN_LIBRARY: server_audit.so
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity

Configuration of the important audit system variables

You can take exactly the same audit system variables defined for MariaDB


The MariaDB Audit Plugin is really quick and easy to install
It is a good and cheap auditing solution and can be installed on different distributions
It lets you see exactly which SQL queries are being processed
Auditing information can really help you to track suspicious queries, detect mistakes and, overall, troubleshoot abnormal activity


Cet article MariaDB: audit plugin est apparu en premier sur Blog dbi services.

Partitioning – When data movement is not performed as expected

Thu, 2016-10-13 07:16

This blog is about an interesting partitioning story and curious data movements during a merge operation. I was at one of my customers that uses partitioning intensively for various reasons, including archiving and manageability. A couple of days ago, we decided to test the freshly developed script that will carry out the automatic archiving against the concerned database in the quality environment.

Let’s describe a little bit the context.

We used a range-based data distribution model. Boundaries are number-based and increase monotonically with identity values.

We defined a partition function with a RANGE RIGHT strategy as follows:

AS RANGE RIGHT FOR VALUES(@boundary_archive, @boundary_current)


The first implementation of the partition scheme consisted in storing all partitioned data inside the same filegroup.
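As a hedged sketch of what this initial setup looks like (function name, scheme name, column type and boundary values below are assumptions for illustration, not the customer's real objects):

-- illustrative boundary values
DECLARE @boundary_archive BIGINT = 1000000;
DECLARE @boundary_current BIGINT = 2000000;

CREATE PARTITION FUNCTION pf_archive (BIGINT)
AS RANGE RIGHT FOR VALUES (@boundary_archive, @boundary_current);
GO

-- first implementation: all partitions mapped to the same filegroup
CREATE PARTITION SCHEME ps_archive
AS PARTITION pf_archive
ALL TO ([PRIMARY]);
GO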


Well, the initial configuration was as follows. We applied page compression to the archive partition because it contained cold data which is not supposed to be updated very frequently.

blog 106 - 0 - partitioned table starting point

blog 106 - 1 - partitioned table starting point config

As you may expect, the next step consists in merging the ARCHIVE and CURRENT partitions by using the MERGE RANGE command as follows:

MERGE RANGE (@b_archive);
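For reference, the fragment above is part of an ALTER PARTITION FUNCTION statement; with the hypothetical function name used in the sketch earlier, the complete call would look like this:

ALTER PARTITION FUNCTION pf_archive()
MERGE RANGE (@b_archive);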


Basically, we achieved a merge operation which, according to the Microsoft documentation, behaves as follows:

The filegroup that originally held boundary_value is removed from the partition scheme unless it is used by a remaining partition, or is marked with the NEXT USED property.


blog 106 - 2 - partitioned table after merge

At this point we may expect SQL Server to have moved data from the middle partition to the left partition and, according to this other Microsoft pointer:

When two partitions are merged, the resultant partition inherits the data compression attribute of the destination partition.

But the result came as a little surprise because I expected to see a page compression value for my archive partition.

blog 106 - 3 - partitioned table after merge config
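The compression state per partition can be verified with a simple query against sys.partitions; a minimal sketch (the table name is hypothetical):

SELECT
    p.partition_number,
    p.rows,
    p.data_compression_desc
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID(N'dbo.my_partitioned_table')   -- hypothetical table name
  AND p.index_id IN (0, 1);                                  -- heap or clustered index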

At this point, my assumption was that either the compression value doesn’t inherit correctly as mentioned in the Microsoft documentation, or the data movement is not performed as expected. The latter may be checked very quickly by looking at the corresponding records inside the transaction log file.

SELECT
	[AllocUnitName],
	Operation,
	COUNT(*) as nb_ops
FROM ::fn_dblog(NULL,NULL)
WHERE [Transaction ID] = (
     SELECT TOP 1 [Transaction ID]
     FROM ::fn_dblog(NULL,NULL)
     WHERE [Xact ID] = 164670)
GROUP BY [AllocUnitName], Operation

I put only the relevant sample here

blog 106 - 5 - log file after merge config

Referring to what I saw above, I noticed that the data movement was not performed as expected but rather as shown below:

blog 106 - 4 - partitioned table after merge config 2

So, this strange behavior seems to explain why the compression state switched from PAGE to NONE in my case. Another strange thing is that when we changed the partition scheme to include a dedicated archive filegroup, we returned to normal behavior.



I double-checked the Microsoft documentation to see if there is a section explaining the behavior I experienced in this case, but without success. After further investigation I found an interesting blog post from Sunil Agarwal (Microsoft) about partition merging and some performance improvements shipped with SQL Server 2008 R2. In short, I was in the specific context described in the blog post (same filegroup) and the merge optimization came into action transparently because the number of rows in the archive partition was lower than the number of rows in the current partition at the moment of the MERGE operation.

Let me add another relevant piece of information – the number of rows in each partition – to the following picture:

blog 106 - 6 - partitioned table optimization stuff

Just to be clear, this is an interesting and smart mechanism provided by Microsoft but it may be a little bit confusing when you’re not aware of this optimization. In my case, we finally decided with my customer to dedicate an archiving partition to store cold data that will rarely be accessed in this context, and to keep the possibility of storing this cold data on cheaper storage when the archiving partition grows to a critical size. But in other cases, if storing data in the same filegroup is still a relevant scenario, keep this optimization in mind to avoid experiencing unexpected behaviors.

Happy partitioning!



Cet article Partitioning – When data movement is not performed as expected est apparu en premier sur Blog dbi services.

Documentum story – ADTS not working anymore?

Thu, 2016-10-13 02:00

A few weeks ago, on one of our Documentum environments, we found out thanks to our monitoring that the renditions weren’t generated anymore by our CTS/ADTS Server… This happened in a Sandbox environment where a lot of dev/testing was done in parallel between EMC, the different Application Teams and the Platform/Architecture Team (us). A lot of changes at the same time means that it might not be easy to find out what caused this issue…


After a few checks on our monitoring scripts, just to ensure that the issue wasn’t the monitoring itself, it appeared that this part was working properly and that the renditions indeed weren’t generated anymore. We then checked the configuration of the docbase/rendition server but didn’t find anything suspicious on the configuration side, so we checked the logs of the Rendition Server. The CTS/ADTS Server often prints a lot of different errors that are all linked but which appear to have a different root cause. Therefore, to know which error is really relevant, I cleaned up the log file (stopped the CTS/ADTS, backed up the log file and removed it) and then I launched our monitoring script that basically removes all existing renditions for a test document, if any, and then requests a new set of renditions to be generated by the CTS/ADTS Server.


After doing that, it was clear that the following error was the real one I needed to take a look at:

11:14:58,562  INFO [ Thread-61] CTSThreadPoolManagerImpl -       Added ICTSTask to the ICTSThreadPoolManager: dm_transcode_content
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Start. About to get Next ICTSTask from pool manager...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       ICTSThreadPoolManager: removing first item from the list for processing...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Removing a task to execute it. Number in waiting list: 1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Next CTSTask received...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       CTSThreadPoolManager has threadlimit -1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Processing next CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get notifier from CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get session from CTSTask...
11:14:59,203  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get RUN CTSTask...
11:14:59,859  WARN [Thread-153] CTSOperationsUtils -       [BOCS/ACS] exportContentFiles error - javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: java.security.cert.CertPathBuilderException: Could not build a validated path.


11:14:59,859 ERROR [Thread-153] CTSThreadPoolManagerImpl -       Exception in CTSThreadPoolManagerImpl, notification :
com.documentum.cts.exceptions.internal.CTSServerTaskException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
com.documentum.cts.exceptions.CTSException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
DfException:: THREAD: Thread-153; MSG: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (090f446e800a303f, msw12, null); ERRORCODE: ff; NEXT: null
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx5(CTSOperationsUtils.java:626)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx4(CTSOperationsUtils.java:332)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx3(CTSOperationsUtils.java:276)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx2(CTSOperationsUtils.java:256)
                at com.documentum.cts.impl.services.task.CTSTask.exportInputContent(CTSTask.java:4716)
                at com.documentum.cts.impl.services.task.CTSTask.retrieveInputURL(CTSTask.java:4594)
                at com.documentum.cts.impl.services.task.CTSTask.initializeFromCommand(CTSTask.java:2523)
                at com.documentum.cts.impl.services.task.CTSTask.execute(CTSTask.java:922)
                at com.documentum.cts.impl.services.task.CTSTaskBase.doExecute(CTSTaskBase.java:514)
                at com.documentum.cts.impl.services.task.CTSTaskBase.run(CTSTaskBase.java:460)
                at com.documentum.cts.impl.services.thread.CTSTaskRunnable.run(CTSTaskRunnable.java:207)
                at java.lang.Thread.run(Thread.java:745)
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotificationAdmin : false
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotification : true


Ok so now it is clear that the error is actually the following one: “java.security.cert.CertPathBuilderException: Could not build a validated path”. This always means that a specific SSL Certificate Chain isn’t trusted. As you can see above, “BOCS/ACS” is mentioned on the same line and the line just below actually contains the URL of the ACS… I thought about it and yes indeed, one of the changes planned for that day was that the D2-BOCS had been installed and enabled on this environment. So what is the link between the ACS URL and the D2-BOCS installation? Well, when installing the D2-BOCS, if you want to keep your environment secured, you need the ACS to be switched to HTTPS, because the D2-BOCS will force D2 to use the ACS URLs to download the documents to the client’s workstation, while D2 doesn’t use the ACS at all when there is no D2-BOCS installed. Therefore the installation of the D2-BOCS isn’t linked to the CTS/ADTS at all, but one of our prerequisites to install it was to set up the ACS in HTTPS, and that is linked to the CTS/ADTS Server because it actually uses the ACS to download the documents, as you can see in the error above.


Now that we knew what the error was and just to confirm it, I switched the ACS URL back to HTTP (using DA: Administration > Distributed Content Configuration > ACS Servers > Right-click on ACS objects > Properties > ACS Server Connections) and re-initialized the Content Server (using DA: Administration > Basic Configuration > Content Servers > Right-click on CS objects > Properties > Check “Re-Initialize Server” and click OK).


Right after doing that, the monitoring switched back to green, meaning that renditions were created again and therefore this was indeed the one and only issue. So what can we do if we want to use the ACS in HTTPS in correlation with the Rendition Server? Well, we just have to explain to the CTS/ADTS Server that it can trust the ACS SSL Certificate and this is done by updating the cacerts file of the Java used by the Rendition Server. This is done pretty easily using the following commands, for which I will suppose that the Rendition Server has been installed on a D: drive under “D:\CTS”.


So the first thing to do is to upload your Certificate Chain to the Rendition Server and put the certificates under “D:\certs” (I will suppose there are two SSL Certificates in the chain: a Root and a Gold). Then simply start a command prompt as Administrator and execute the following commands to update the cacerts file of Java:

D:\> copy D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts.bck_YYYYMMDD
        1 file(s) copied.

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias root_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Root_CA.cer
Enter keystore password:
Trust this certificate? [no]:  yes
Certificate was added to keystore
D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias gold_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Gold_CA1.cer
Enter keystore password:
Certificate was added to keystore
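To double check that both certificates have properly been added to the cacerts file, you can list them by alias (same keystore and default keystore password, unless it has been changed on your side):

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -list -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -alias root_ca
D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -list -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -alias gold_ca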


Now just switch the ACS to use HTTPS again, restart the Rendition services using the Windows Services console or the command line and the next time you request a rendition, it will work without errors, even in HTTPS. It is actually a very common mistake: setting up SSL on the Content Server side is great, but you must not forget that some other components might use what you just switched to HTTPS and therefore these additional components need to trust your SSL Certificates too!


Note 1: in our case, the WebLogic Server hosting D2 was already in HTTPS and therefore it was already trusting the Internal Root & Gold SSL Certificates, reason why we could use the ACS in HTTPS from D2 without issue.

Note 2: in case you didn’t know about it, I think it is now clear that the CTS/ADTS Server is using the ACS to download the files… Therefore if you want a secured environment even without D2-BOCS, you absolutely need to switch your ACS to HTTPS!


Cet article Documentum story – ADTS not working anymore? est apparu en premier sur Blog dbi services.