Feed aggregator

Documentum – Change password – 5 – CS/FT – JBoss Admin

Yann Neuhaus - Sat, 2017-07-22 02:57

The next password I wanted to blog about is the JBoss Admin password. As you know, there are several JBoss Application Servers in Documentum, the most commonly used being the ones for the Java Method Server (JMS) and for the Full Text Servers (Dsearch/IndexAgent). In this blog, I will only talk about the JBoss Admin password of the JMS and IndexAgents, simply because I will cover the Dsearch JBoss instance in another blog dedicated to the xDB.

 

The steps are exactly the same for all JBoss instances, it’s just a matter of checking/updating the right file. In this blog, I will still separate the steps for the JMS and the IndexAgents because I usually have more than one IndexAgent on the same FT; for that case I’m also providing a way to update all JBoss instances at the same time with the right commands.

 

As always, I will define an environment variable to store the password to avoid using clear text passwords in the shell. The generic steps to change a JBoss Admin password, in Documentum, are pretty simple:

  1. Store the password in a variable
  2. Encrypt the password
  3. Backup the old configuration file
  4. Replace the password file with the new encrypted password
  5. Restart the component
  6. Check the connection with the new password

 

As you can see above, there is actually nothing in these steps to change the password… We are just replacing a string inside a file with another string and that’s done, the password is changed! That’s really simple but that’s also a security issue since you do NOT need to know the old password… That’s how Documentum works with JBoss…

 

I. JMS JBoss Admin

For the JMS JBoss Admin, you obviously need to connect to all Content Servers and then perform the steps. Below are the commands I use to set the variable, encrypt the password and then update the password file with the new encrypted password (I’m just overwriting it):

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW JBoss admin password: " jboss_admin_pw; echo
Please enter the NEW JBoss admin password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp "$DOCUMENTUM_SHARED/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${jboss_admin_pw}
AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/configuration/
[dmadmin@content_server_01 ~]$ mv dctm-users.properties dctm-users.properties_bck_$(date "+%Y%m%d")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo "# users.properties file to use with UsersRolesLoginModule" > dctm-users.properties
[dmadmin@content_server_01 ~]$ echo "admin=AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x" >> dctm-users.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ cat dctm-users.properties
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2fF92dfRajKzaARvrfnIG29fnqf8Kgnd2fWfYKmMd9x
[dmadmin@content_server_01 ~]$

 

At this point, the new password has been put in the file dctm-users.properties in its encrypted form, so you can now restart the component and check the status of the JBoss Application Server. To check that, I will use below a small curl command which is really useful: if, just like me, you always restrict the JBoss Administration Console to 127.0.0.1 (localhost only) for security reasons, then this is really handy since you don’t need to start an X server or a browser; simply enter the password when asked and voilà!

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server
[dmadmin@content_server_01 ~]$ ./stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup ./startMethodServer.sh >> nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sleep 30
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ curl -g --user admin -D - http://localhost:9085/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Wed, 15 Jul 2017 11:16:51 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[dmadmin@content_server_01 ~]$

 

If everything has been done properly, you should get an “HTTP/1.1 200 OK” status meaning that the JBoss Application Server is up & running, and the “result” should be “running”. This proves that the password provided in the command matches the encrypted one from the file dctm-users.properties, because the JMS is able to answer your request.

 

II. IndexAgent JBoss Admin

For the IndexAgent JBoss Admin, you obviously need to connect to all Full Text Servers and then perform the steps again. Below are the commands to do that. These commands are adapted in case you have several IndexAgents installed. Please note that the commands below will set the same Admin password for all JBoss instances (all IndexAgent JBoss Admins). Therefore, if that’s not what you want, you will have to take the commands from the JMS section and adapt the paths.

[xplore@full_text_server_01 ~]$ read -s -p "Please enter the NEW JBoss admin password: " jboss_admin_pw; echo
Please enter the NEW JBoss admin password:
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ $JAVA_HOME/bin/java -cp "$XPLORE_HOME/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${jboss_admin_pw}
AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/jboss7.1.1/server/
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do mv ./$i/configuration/dctm-users.properties ./$i/configuration/dctm-users.properties_bck_$(date "+%Y%m%d"); done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "# users.properties file to use with UsersRolesLoginModule" > ./$i/configuration/dctm-users.properties; done
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x" >> ./$i/configuration/dctm-users.properties; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do echo "--$i:"; cat ./$i/configuration/dctm-users.properties; echo; done
--DctmServer_Indexagent_DocBase1:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

--DctmServer_Indexagent_DocBase2:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

--DctmServer_Indexagent_DocBase3:
# users.properties file to use with UsersRolesLoginModule
admin=AAAAENwH4N2cI25WmDdgRzaARvcIvF3g5gR8Kgnd2fWfYKmMd9x

[xplore@full_text_server_01 ~]$

 

At this point, the new password has been put in its encrypted form in the file dctm-users.properties for each IndexAgent. So, the next step is to restart all the components and check the status of the JBoss instances. Just like for the JMS, I will use below the curl command to check the status of a specific IndexAgent:

[xplore@full_text_server_01 ~]$ for i in `ls stopIndexag*.sh`; do ./$i; done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ for i in `ls startIndexag*.sh`; do ia=`echo $i|sed 's,start\(.*\).sh,\1,'`; nohup ./$i >> nohup-$ia.out 2>&1 & done
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ sleep 30
[xplore@full_text_server_01 ~]$
[xplore@full_text_server_01 ~]$ curl -g --user admin -D - http://localhost:9205/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
Enter host password for user 'admin':
HTTP/1.1 200 OK
Transfer-encoding: chunked
Content-type: application/json
Date: Wed, 15 Jul 2017 11:16:51 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

If you want to check all IndexAgents at once, you can use this command instead (it’s a long one I know…):

[xplore@full_text_server_01 ~]$ for i in `ls -d DctmServer_Indexag*`; do port=`grep '<socket-binding .*name="management-http"' ./$i/configuration/standalone.xml|sed 's,.*http.port:\([0-9]*\).*,\1,'`; echo; echo "  ** Please enter below the password for '$i' ($port)"; curl -g --user admin -D - http://localhost:$port/management --header "Content-Type: application/json" -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'; done

  ** Please enter below the password for 'DctmServer_Indexagent_DocBase1' (9205)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:35 GMT

{
    "outcome" : "success",
    "result" : "running"
}
  ** Please enter below the password for 'DctmServer_Indexagent_DocBase2' (9225)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:42 GMT

{
    "outcome" : "success",
    "result" : "running"
}
  ** Please enter below the password for 'DctmServer_Indexagent_DocBase3' (9245)
Enter host password for user 'admin':
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Content-Length: 55
Date: Wed, 15 Jul 2017 12:37:45 GMT

{
    "outcome" : "success",
    "result" : "running"
}
[xplore@full_text_server_01 ~]$

 

If everything has been done properly, you should get an “HTTP/1.1 200 OK” status for all IndexAgents.
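
For readability, here is the same check split over several lines. It is strictly the same loop as the one-liner above (same assumptions on the directory layout and on the ${jboss.management.http.port:xxxx} entry in standalone.xml), just easier to read and adapt:

# Same as the one-liner above, split for readability.
# Run it from $XPLORE_HOME/jboss7.1.1/server.
for i in `ls -d DctmServer_Indexag*`; do
  port=`grep '<socket-binding .*name="management-http"' ./$i/configuration/standalone.xml | sed 's,.*http.port:\([0-9]*\).*,\1,'`
  echo
  echo "  ** Please enter below the password for '$i' ($port)"
  curl -g --user admin -D - http://localhost:$port/management \
       --header "Content-Type: application/json" \
       -d '{"operation":"read-attribute","name":"server-state","json.pretty":1}'
done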

 

 

The article Documentum – Change password – 5 – CS/FT – JBoss Admin appeared first on Blog dbi services.

Documentum – Change password – 4 – CS – Presets & Preferences

Yann Neuhaus - Sat, 2017-07-22 01:58

In a previous blog (see this one), I already provided the steps to change the BOF password and I mentioned that it was more or less the only important account in the Global Registry. Well, in this blog, I will show you how to change the passwords for the two other important accounts: the Presets and Preferences accounts.

 

These two accounts can actually be created in a dedicated repository for performance reasons but by default they will be taken from the Global Registry and they are used – as you can easily understand – to create Presets and Preferences…

 

As said above, these accounts are docbase accounts, so let’s start with setting up some environment variables containing the passwords and then updating their passwords on a Content Server:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW Preset password: " prespw; echo
Please enter the NEW Preset password:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW Preferences password: " prefpw; echo
Please enter the NEW Preferences password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dmc_wdk_presets_owner'
> set,c,l,user_password
> $prespw
> save,c,l
> retrieve,c,dm_user where user_login_name='dmc_wdk_preferences_owner'
> set,c,l,user_password
> $prefpw
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000907 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> ...
110f123456000145
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Again, to verify that the passwords have been set properly, you can try to log in to the respective accounts:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udmc_wdk_presets_owner -P$prespw


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000908 started for user dmc_wdk_presets_owner."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udmc_wdk_preferences_owner -P$prefpw


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000909 started for user dmc_wdk_preferences_owner."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

When the docbase accounts have been updated, the first part is done. That’s good but, just like for the BOF account, you still need to update the references everywhere… Fortunately for the Presets and Preferences accounts there are fewer references, so it’s less of a pain in the… ;)

 

There are references to these two accounts in the WDK-based Applications. Below I will use Documentum Administrator as an example, which is deployed as a WAR file on a WebLogic Server; however, the steps would be the same for other Application Servers, except that you might use exploded folders and not war files… In the commands below I will use:

  • $WLS_APPLICATIONS as the directory where the DA WAR file is present.
  • $WLS_APPS_DATA as the directory where the Data are present (log files, dfc.keystore, cache, …).

 

These two folders might be the same depending on how you configured your Application Server. So, first of all, let’s encrypt the two passwords on the Application Server using the DA libraries:

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war wdk/app.xml WEB-INF/classes WEB-INF/lib/dfc.jar WEB-INF/lib
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW Preset password: " prespw; echo
Please enter the NEW Preset password:
[weblogic@weblogic_server_01 ~]$ read -s -p "Please enter the NEW Preferences password: " prefpw; echo
Please enter the NEW Preferences password:
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ java -Djava.security.egd=file:///dev/./urandom -classpath WEB-INF/classes:WEB-INF/lib/dfc.jar:WEB-INF/lib/commons-io-1.2.jar com.documentum.web.formext.session.TrustedAuthenticatorTool $prespw $prefpw
Encrypted: [jpQm5FfqdD3HWqP4mgoIIw==], Decrypted: [Pr3seTp4sSwoRd]
Encrypted: [YaGqNkj2FqfQDn3gfna8Nw==], Decrypted: [Pr3feRp4sSwoRd]
[weblogic@weblogic_server_01 ~]$

 

Once this has been done, let’s check the old passwords, update them in the app.xml file for DA and then check that the update has been done. The sed commands below are pretty simple: the first part will search for the parent XML tag (so either <presets>…</presets> or <preferencesrepository>…</preferencesrepository>) and the second part will replace the first occurrence of the <password>…</password> line INSIDE the XML tag mentioned in the command (presets or preferencesrepository) with the new password we encrypted before. So, again, just replace my encrypted password with what you got:

[weblogic@weblogic_server_01 ~]$ grep -C20 "<password>.*</password>" wdk/app.xml | grep -E "dmc_|</password>|presets>|preferencesrepository>"
         <presets>
            <!-- Encrypted password for default preset user "dmc_wdk_presets_owner" -->
            <password>tqQd5gfWGF3tVacfmgwL2w==</password>
         </presets>
         <preferencesrepository>
            <!-- Encrypted password for default preference user "dmc_wdk_preferences_owner" -->
            <password>LdFinAwf2F2fuB29cqfs2w==</password>
         </preferencesrepository>
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sed -i "/<presets>/,/<\/presets>/ s,<password>.*</password>,<password>jpQm5FfqdD3HWqP4mgoIIw==</password>," wdk/app.xml
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sed -i "/<preferencesrepository>/,/<\/preferencesrepository>/ s,<password>.*</password>,<password>YaGqNkj2FqfQDn3gfna8Nw==</password>," wdk/app.xml
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ grep -C20 "<password>.*</password>" wdk/app.xml | grep -E "dmc_|</password>|presets>|preferencesrepository>"
         <presets>
            <!-- Encrypted password for default preset user "dmc_wdk_presets_owner" -->
            <password>jpQm5FfqdD3HWqP4mgoIIw==</password>
         </presets>
         <preferencesrepository>
            <!-- Encrypted password for default preference user "dmc_wdk_preferences_owner" -->
            <password>YaGqNkj2FqfQDn3gfna8Nw==</password>
         </preferencesrepository>
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war wdk/app.xml
[weblogic@weblogic_server_01 ~]$ rm -rf WEB-INF/ wdk/
[weblogic@weblogic_server_01 ~]$

 

Normally the passwords returned by the second grep command should be different from the old ones and should match the ones returned by the Java command executed previously to encrypt the Presets and Preferences passwords. Once that is done, simply repack the war file and redeploy it (if needed).

 

To verify that the passwords are properly set you can simply stop DA, remove the cache containing the Presets’ jars and restart DA. If the jars are automatically re-created, then the passwords should be OK:

[weblogic@weblogic_server_01 ~]$ cd $WLS_APPS_DATA/documentum.da/dfc.data/cache
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l
total 4
drwxr-x---. 4 weblogic weblogic 4096 Jul 15 20:58 7.3.0000.0205
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l ./7.3.*/bof/*/
...
[weblogic@weblogic_server_01 ~]$

 

This last ‘ls’ command will display a list of 10 or 15 jars (12 for me with the DA 7.3 GA release) as well as a few files (content.lck, content.xml and GR_DOCBASE.lck usually). If you don’t see any jar files before the restart, it means the old password was probably not correct… Ok so now, to verify that the new passwords have been put properly in the app.xml file, simply stop the Managed Server hosting DA using your preferred method (I will use “msDA-01” for the example below), then remove the cache folder and restart DA. Once DA is up & running again, it will re-create this cache folder in a few seconds and all the jars should be back:

[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop stop msDA-01
  ** Managed Server msDA-01 stopped
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ rm -rf ./7.3*/
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ $DOMAIN_HOME/bin/startstop start msDA-01
  ** Managed Server msDA-01 started
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ sleep 30
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ ls -l ./7.3.*/bof/*/
...
[weblogic@weblogic_server_01 ~]$

 

If you did it properly, the jars will be back. If you want a list of the jars that should be present, take a look at the file “./7.3.*/bof/*/content.xml”. Obviously above I was using the DA 7.3 GA so my cache folder starts with 7.3.xxx. If you are using another version of DA, the name of this folder will change so just keep that in mind.
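
If you only need the list of jar names without opening content.xml, the following sketch can help. It is illustrative only: the grep pattern simply extracts anything ending in “.jar”, because the exact XML layout of content.xml can differ between DA versions, so adjust it if needed:

# Count the jars actually present in the cache (same paths as used above)
ls ./7.3.*/bof/*/*.jar 2>/dev/null | wc -l
# Extract the jar names referenced in content.xml (pattern is deliberately loose)
grep -o '[A-Za-z0-9._-]*\.jar' ./7.3.*/bof/*/content.xml | sort -u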

 

 

The article Documentum – Change password – 4 – CS – Presets & Preferences appeared first on Blog dbi services.

Documentum – Change password – 3 – CS – Installation Owner

Yann Neuhaus - Sat, 2017-07-22 00:36

In this blog, I will describe the few steps needed to change the Documentum Installation Owner’s password. As you all know, the Installation Owner’s password is (one of) the most important passwords in Documentum and it is probably the first one you define, even before starting the installation.

 

As always, I will use a Linux environment and in this case, I’m assuming the “dmadmin” account is a local account on each Content Server and therefore the password change must be done on all of them. In case you have an AD integration or something similar, you can just change the password at the AD level, so that’s not funny, right?!

 

So, let’s start by logging in to all Content Servers using the Installation Owner’s account. In case you don’t remember the old password, you will have to use the root account instead. Changing dmadmin’s password is pretty simple: you just have to change it at the OS level (again this is the default… If you changed the dmadmin account type, then…):

[dmadmin@content_server_01 ~]$ passwd
    Changing password for user dmadmin.
    Changing password for dmadmin.
    (current) UNIX password:
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
[dmadmin@content_server_01 ~]$

 

To verify that the dmadmin’s password has been changed successfully, you can use the dm_check_password utility as follows (leave the extra #1 and #2 empty):

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_check_password
    Enter user name: dmadmin
    Enter user password:
    Enter user extra #1 (not used):
    Enter user extra #2 (not used):
    $DOCUMENTUM/dba/dm_check_password: Result = (0) = (DM_EXT_APP_SUCCESS)
[dmadmin@content_server_01 ~]$

 

Once you are sure that the password is set properly, one could think that it’s over but actually, it’s not… There is one additional place where this password must be set, and I’m not talking about new installations which obviously will require you to enter the new password. For that, let’s first encrypt this password:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW dmadmin's password: " dmadmin_pw; echo
    Please enter the NEW dmadmin's password:
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${dmadmin_pw}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example (“AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0”) but this will be the encrypted password of dmadmin. I will use this value in the commands below, so whenever you see this, just replace it with what your “java -cp ..” command returned.

 

Then where should this be used? On the Full Text Server! So log in to your FT: inside the watchdog configuration, the dmadmin’s password is used for the IndexAgent connections. The commands below will take a backup of the configuration file and then update it to use the new encrypted password:

[xplore@full_text_server_01 ~]$ cd $XPLORE_HOME/watchdog/config/
[xplore@full_text_server_01 config]$ cp dsearch-watchdog-config.xml dsearch-watchdog-config.xml_bck_$(date "+%Y%m%d")
[xplore@full_text_server_01 config]$
[xplore@full_text_server_01 config]$ sed -i 's,<property name="docbase_password" value="[^"]*",<property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0",' dsearch-watchdog-config.xml
[xplore@full_text_server_01 config]$

 

Small (but important) note on the above commands: if you are using the same FT for different environments or if one of the IndexAgents is linked to a different dmadmin account (and therefore a different password), then you will need to open the file manually and replace the passwords in the corresponding xml tags (or use a more targeted sed command; see the sketch after the configuration excerpt below). Each IndexAgent will have the following lines in its configuration:

<application-config instance-name="<hostname>_9200_IndexAgent" name="IndexAgent">
        <properties>
                <property name="application_url" value="https://<hostname>:9202/IndexAgent"/>
                <property name="docbase_user" value="dmadmin"/>
                <property name="docbase_name" value="DocBase1"/>
                <property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0"/>
                <property name="servlet_wait_time" value="3000"/>
                <property name="servlet_max_retry" value="5"/>
                <property name="action_on_servlet_if_stopped" value="notify"/>
        </properties>
        <tasks>
                ...
        </tasks>
</application-config>
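
If you prefer to keep using sed for that case, a more targeted command can restrict the replacement to the block of a single IndexAgent. This is only a sketch: the instance name “myhost_9200_IndexAgent” and the encrypted value are placeholders to replace with your own values:

# Sketch: update docbase_password only inside the application-config block of one IndexAgent
sed -i '/<application-config instance-name="myhost_9200_IndexAgent"/,/<\/application-config>/ s,<property name="docbase_password" value="[^"]*",<property name="docbase_password" value="AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0",' dsearch-watchdog-config.xml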

 

Once the above modification has been done, simply restart the xPlore components.

 

Another thing that must be done is linked to D2 and D2-Config… If you are using these components, then you will need to update the D2 Lockbox on the Content Server side, and you probably defined the LoadOnStartup property which will require you to put the dmadmin’s password in the D2 Lockbox on the Web Application side too. In this blog, I won’t discuss the full recreation of the D2 Lockbox with new password/passphrases since this is pretty simple and most likely known by everybody, so I’m just going to update the dmadmin’s password inside the D2 Lockbox for the different properties instead. If you would like a more complete blog about the lockbox, just let me know! This only applies to “not so old nor so recent” D2 versions, since the D2 Lockbox was introduced only a few years ago and is no longer present as of D2 4.7, so…

 

On the Content Server – I’m just setting up the environment to contain the libraries needed to update the D2 Lockbox and then updating the D2-JMS properties inside the lockbox. I’m using $DOCUMENTUM/d2-lib as the root folder under which the D2 Installer put the libraries and initial lockbox:

[dmadmin@content_server_01 ~]$ export LD_LIBRARY_PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
[dmadmin@content_server_01 ~]$ export PATH=$DOCUMENTUM/d2-lib/lockbox/lib/native/linux_gcc34_x64:$PATH
[dmadmin@content_server_01 ~]$ export CLASSPATH=$DOCUMENTUM/d2-lib/D2.jar:$DOCUMENTUM/d2-lib/LB.jar:$DOCUMENTUM/d2-lib/LBJNI.jar:$CLASSPATH
[dmadmin@content_server_01 ~]$ cp -R $DOCUMENTUM/d2-lib/lockbox $DOCUMENTUM/d2-lib/lockbox-bck_$(date "+%Y%m%d-%H%M%S")
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.password ${dmadmin_pw}; done
[dmadmin@content_server_01 ~]$ for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.${docbase}.password ${dmadmin_pw}; done

 

The last command above mentions “${docbase}.${docbase}”… Actually, the first one is indeed the name of the docbase but the second one is the name of the local dm_server_config. Therefore, for a single Content Server the above commands are probably enough (since by default the dm_server_config name equals the docbase name) but if you have an HA setup, then you will need to also include the remote dm_server_config names for each docbase (alternatively you can also use wildcards; see the sketch below). Once that is done, just replace the old lockbox with the new one in the JMS.
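
As an illustration for such an HA setup, the loop can be extended to cover all dm_server_config names of each docbase. This is only a sketch: the list "${docbase} ${docbase}_remote" is a placeholder, so replace it with the real dm_server_config names of your installation:

# Sketch for an HA setup: set the D2-JMS property for every dm_server_config name of each docbase.
# "${docbase}_remote" is a placeholder for your remote dm_server_config names.
for docbase in `cd $DOCUMENTUM/dba/config/; ls`; do
  for srv_config in ${docbase} ${docbase}_remote; do
    java com.emc.common.java.crypto.SetLockboxProperty $DOCUMENTUM/d2-lib/lockbox D2-JMS.${docbase}.${srv_config}.password ${dmadmin_pw}
  done
done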

 

On the Web Application Server – the same environment variables are needed but of course the paths will change and you might need to include the C6-Common jar file too (which is known to cause issues with WebLogic if it is still in the CLASSPATH when you start it). So on the Web Application Server, I’m also setting up the environment variables with the dmadmin’s password and the D2 Lockbox passphrase, as well as another variable with the list of docbases to loop over:

[weblogic@weblogic_server_01 ~]$ for docbase in $DOCBASES; do java com.emc.common.java.crypto.SetLockboxProperty $WLS_DOMAIN/D2/lockbox LoadOnStartup.${docbase}.password ${dmadmin_pw} ${d2method_pp}; done
[weblogic@weblogic_server_01 ~]$
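
For completeness, here is a sketch of the environment setup assumed by the command above. All paths, jar locations and docbase names are placeholders to adapt to your own installation (and keep in mind the remark above about the C6-Common jar and WebLogic):

# Sketch only: library path, CLASSPATH and variables used by the loop above (placeholder values)
export LD_LIBRARY_PATH=$WLS_DOMAIN/D2/lockbox/lib/native/linux_gcc34_x64:$LD_LIBRARY_PATH
export CLASSPATH=$WLS_DOMAIN/D2/lockbox/D2.jar:$WLS_DOMAIN/D2/lockbox/LB.jar:$WLS_DOMAIN/D2/lockbox/LBJNI.jar:$CLASSPATH
DOCBASES="DocBase1 DocBase2 DocBase3"
read -s -p "Please enter the NEW dmadmin's password: " dmadmin_pw; echo
read -s -p "Please enter the D2 Lockbox passphrase: " d2method_pp; echo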

 

With the D2 Lockbox, you will need to restart the components using it when you recreate it from scratch. However, when you only update a property inside it, like above, a restart is usually not needed: the next time the password is needed, it will be picked up from the Lockbox.

 

Last comment on this: if you are using an ADTS and if you used the dmadmin account to manage it (I wouldn’t recommend this! Please use a dedicated user for this instead), then the password is also encrypted in a password file for each docbase under “%ADTS_HOME%/CTS/docbases/”.

 

 

The article Documentum – Change password – 3 – CS – Installation Owner appeared first on Blog dbi services.

Updated PythonDBAGraphs to work from IDLE

Bobby Durrett's DBA Blog - Fri, 2017-07-21 18:19

I switched from Enthought Canopy to IDLE for Python development when I got my new corporate laptop a few weeks back. Yesterday I realized that I was unable to run a Matplotlib graph from IDLE in my current configuration. Also, I could not find a way to pass command line arguments into my PythonDBAGraphs scripts from IDLE. So, I put in a couple of fixes, including an update to the README explaining how to pass arguments into my scripts when using IDLE. This describes the problem I was having running Matplotlib graphs in IDLE: stackoverflow article.

I only use IDLE for development. I run my PythonDBAGraphs from the Windows command prompt when I am using them for my database work. Also, I use TextPad and the command line version of Python for development as well as graphical tools like IDLE or Canopy. But, I wanted to use IDLE for development of graphs so I came up with these fixes.

Bobby

Categories: DBA Blogs

Oracle Scheduler Fail To Send Email Notifications

Pythian Group - Fri, 2017-07-21 13:32

In this blog post I would like to share an interesting issue we encountered a couple of months ago related to scheduler job email notifications. As some of you may know, starting with Oracle 11.2 you can subscribe to receive notification emails from a scheduler job. You can define an email to be sent on various job events (job_started, job_completed, job_failed etc.). The job email notification is defined with the DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION procedure.

I am assuming you already have a configured and working SMTP server. If not, that can be done with DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE (attributes: email_server and email_sender).

The issue we encountered was on database version 12.1. After configuring the scheduler jobs and email notification lists, emails were not sent out.

This blog post should give you some guidance on how you can troubleshoot and properly define email job notifications.

The problem:

In our case, we used one “system” account to manage the job email notification subscription. Some of you might use the same approach, having a single and separate account used to manage job email notifications for all other accounts.

Let’s assume we have a job called JOB1 defined in schema IARSOV.

exec dbms_scheduler.create_job(job_name => 'JOB1', job_type => 'PLSQL_BLOCK', job_action => 'begin null; end;');

PL/SQL procedure successfully completed.

If we now try to add a job email notification for IARSOV.JOB1 with the SYSTEM user, we should receive an error: “ORA-24093: AQ agent SCHED$_AGT2$_xx not granted privileges of database user”.

exec dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

BEGIN dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com'); END;

*
ERROR at line 1:
ORA-24093: AQ agent SCHED$_AGT2$_101 not granted privileges of database user
SYSTEM
ORA-06512: at "SYS.DBMS_ISCHED", line 7847
ORA-06512: at "SYS.DBMS_SCHEDULER", line 4063
ORA-06512: at line 1

As a workaround we can grant the necessary privileges with the ENABLE_DB_ACCESS procedure of the DBMS_AQADM package, which is used for managing Oracle Database Advanced Queuing (AQ).

exec DBMS_AQADM.ENABLE_DB_ACCESS(agent_name  => 'SCHED$_AGT2$_101', db_username => 'SYSTEM');

PL/SQL procedure successfully completed.

We can confirm the granted privileges via the DBA_AQ_AGENT_PRIVS dictionary view (see the SCHED$_AGT2$_101 / SYSTEM row):

set lines 120
col agent_name for a40
col db_username for a40

select * from dba_aq_agent_privs;

AGENT_NAME                     DB_USERNAME                    HTTP SMTP
------------------------------ ------------------------------ ---- ----
DB12C_3938_ORCL11G             DBSNMP                         NO   NO
SCHED$_AGT2$_101               IARSOV                         NO   NO
SCHED$_AGT2$_101               SYSTEM                         NO   NO
SCHEDULER$_EVENT_AGENT         SYS                            NO   NO
SCHEDULER$_REMDB_AGENT         SYS                            NO   NO
SERVER_ALERT                   SYS                            NO   NO
HAE_SUB                                                       NO   NO

7 rows selected.

Let’s now try to define job email notification for IARSOV.JOB1:

exec dbms_scheduler.add_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

PL/SQL procedure successfully completed.

set pages 200
set lines 200
col owner for a40
col job_name for a40
col recipient for a20
select owner, job_name, recipient, event from dba_scheduler_notifications where job_name = 'JOB1';

OWNER                          JOB_NAME                       RECIPIENT            EVENT
------------------------------ ------------------------------ -------------------- -------------------
IARSOV                         JOB1                           arsov@pythian.com    JOB_FAILED
IARSOV                         JOB1                           arsov@pythian.com    JOB_BROKEN
IARSOV                         JOB1                           arsov@pythian.com    JOB_SCH_LIM_REACHED
IARSOV                         JOB1                           arsov@pythian.com    JOB_CHAIN_STALLED
IARSOV                         JOB1                           arsov@pythian.com    JOB_OVER_MAX_DUR

The notification has been successfully defined; however, upon testing the events, no email was sent. In our case, the events were ending up in the exception queue AQ$_SCHEDULER$_EVENT_QTAB_E and there was not much information we could derive from the AQ$ scheduler-related tables.

Troubleshooting:

The DBA_SUBSCR_REGISTRATIONS view contains mapping definitions for each schema showing which event_queue:consumer_group it is subscribed to. If we check the subscription definition for the IARSOV user, we can see the event_queue:consumer_group is linked to USER# 5, which is the SYSTEM user. In this case IARSOV’s AQ agent SCHED$_AGT2$_101 is linked to the wrong user.

What we’re interested in is the association between SUBSCRIPTION_NAME and USER# columns.

SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                                                     USER# STATUS
-------------------------------------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"SCHED$_AGT2$_101"                                         5 DB REG


SQL> select username, user_id from dba_users where user_id = 5;

USERNAME                          USER_ID
------------------------------ ----------
SYSTEM                                  5

In this case the emails won’t be sent out because the subscription registration is not properly initialized (linked) with the correct user (schema). In order for the notifications to work we need the proper link between the agent and the agent’s owner. In this case “SYS”.”SCHEDULER$_EVENT_QUEUE”:”SCHED$_AGT2$_101” and the IARSOV schema should be properly linked – notice that the user’s ID is also part of the agent name.

What we now need to do is to drop all job email notifications (in this case only one) for IARSOV’s jobs. When dropping the last job email notification, the subscription registration will be removed from DBA_SUBSCR_REGISTRATIONS.
However, note that you have to drop the job email notifications as the user under which the subscription registration is defined, in this case the SYSTEM user.

Hint: If you don’t know the password for the schema you need to connect to, you can use the Proxy Authenticated Connection feature as documented in the blog article The power of the Oracle Database “proxy authenticated” connections.

SQL> show user;

USER is "SYSTEM"

SQL>
SQL> exec dbms_scheduler.remove_job_email_notification('IARSOV.JOB1','arsov@pythian.com');

PL/SQL procedure successfully completed.

SQL> select subscription_name, user#, status from dba_subscr_registrations;

no rows selected

Once we clear the subscription, we can properly initialize the link by adding the first job notification with the job schema’s owner. This will properly initialize the event_queue:consumer_group with the correct user. After that we can add multiple job notifications from other users as long as we have appropriate privileges granted.

--as SYSTEM user.

SQL> show user;
USER is "SYSTEM"
SQL>
SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                       USER# STATUS
-------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"ILM_AGENT"                  0 DB REG

SQL>

--as IARSOV user.

SQL> show user;
USER is "IARSOV"
SQL>
SQL> exec DBMS_SCHEDULER.add_job_email_notification (job_name => 'IARSOV.JOB1', recipients => 'arsov@pythian.com');

PL/SQL procedure successfully completed.

SQL>
SQL>

--as SYSTEM user.

SQL> show user;
USER is "SYSTEM"
SQL>
SQL> select subscription_name, user#, status from dba_subscr_registrations;

SUBSCRIPTION_NAME                                       USER# STATUS
-------------------------------------------------- ---------- --------
"SYS"."SCHEDULER$_EVENT_QUEUE":"ILM_AGENT"                  0 DB REG
"SYS"."SCHEDULER$_EVENT_QUEUE":"SCHED$_AGT2$_101"         101 DB REG

SQL>
SQL>
SQL> select username from dba_users where user_id = 101;

USERNAME
----------------
IARSOV

SQL>

Conclusion:

If you decide to use scheduler job email notifications, and also prefer that the notifications management is done by a single user (such as “system”) I would advise you to create a “dummy” job notification (with the job owner’s schema) as soon as you create the first scheduler job. This will link the event_queue:consumer_group to the proper user. Afterwards, once you define the rest of the scheduler job notifications (under the common “system” user), you can clean-up that initial “dummy” job notification.

This behavior (bug) is fixed in 12.2 so that the notifications always go through the user’s AQ agent which defines the notifications.

Categories: DBA Blogs

Words I Don’t Use, Part 1: “Methodology”

Cary Millsap - Fri, 2017-07-21 12:58
Today, I will begin a brief sequence of blog posts that I’ll call “Words I Don’t Use.” I hope you’ll have some fun with it, and maybe find a thought or two worth considering.

The first word I’ll discuss is methodology. Yes. I made a shirt about it.

Approximately 100% of the time in the [mostly non-scientific] Oracle world that I live in, when people say “methodology,” they’re using it in the form that American Heritage describes as a pretentious substitute for “method.” But methodology is not the same as method. Methods are processes. Sequences of steps. Methodology is the scientific study of methods.

I like this article called “Method versus Methodology” by Peter Klein, which cites the same American Heritage Dictionary passage that I quoted on page 358 of Optimizing Oracle Performance.

Log Buffer #517: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2017-07-21 11:47

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

Protecting Financial Data with Oracle WebCenter and Adobe LiveCycle

Oracle Forms 12c oracle.security.jps.JpsException Error after Database change

The Future of Content Management: Oracle Content & Experience Cloud

Today Oracle released a very large “monster” Upgrade. This July 2017 Update includes for the first time the new RU “Release Update” and RUR “Release Update Revision” Patches.

Cloud Ward: Who Will Win the Battle for IT’s Future?

SQL Server:

SQL Server Management Studio add-ins

Resolve Network Binding Order Warning in failover cluster

Queries to inventory your SQL Server Agent Jobs

SQL Server 2016 ColumnStore Index String Predicate Pushdown

The Fast Route from Raw Data to Analysis and Display

MySQL:

Group-Replication, sweet & sour

You QA Most of Your System — What About Your Database?

Multi-Threaded Slave Statistics

Protecting your data! Fail-safe enhancements to Group Replication.

InnoDB Basics – Compaction: when and when not

Categories: DBA Blogs

JRE 1.7.0_151 Certified with Oracle E-Business Suite 12.1 and 12.2

Steven Chan - Fri, 2017-07-21 10:58


Java Runtime Environment 1.7.0_151 (a.k.a. JRE 7u151-b15) and later updates on the JRE 7 codeline are now certified with Oracle E-Business Suite Release 12.1 and 12.2 for Windows-based desktop clients.

All JRE 6, 7, and 8 releases are certified with EBS upon release

Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops:

  • From JRE 1.6.0_03 and later updates on the JRE 6 codeline
  • From JRE 1.7.0_10 and later updates on the JRE 7 codeline 
  • From JRE 1.8.0_25 and later updates on the JRE 8 codeline
We test all new JRE releases in parallel with the JRE development process, so all new JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. 

You do not need to wait for a certification announcement before applying new JRE 6, 7, or 8 releases to your EBS users' desktops.

Effects of new support dates on Java upgrades for EBS environments

Support dates for the E-Business Suite and Java have changed.  Please review the sections below for more details:

  • What does this mean for Oracle E-Business Suite users?
  • Will EBS users be forced to upgrade to JRE 7 for Windows desktop clients?
  • Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

32-bit and 64-bit versions certified

This certification includes both the 32-bit and 64-bit JRE versions for various Windows operating systems. See the respective Recommended Browser documentation for your EBS release for details.

Where are the official patch requirements documented?

How can EBS customers obtain Java 7?

EBS customers can download Java 7 patches from My Oracle Support.  For a complete list of all Java SE patch numbers, see:

Both JDK and JRE packages are now contained in a single combined download.  Download the "JDK" package for both the desktop client JRE and the server-side JDK package. 

Coexistence of multiple JRE releases on Windows desktops

The upgrade to JRE 8 is recommended for EBS users, but some users may need to run older versions of JRE 6 or 7 on their Windows desktops for reasons unrelated to the E-Business Suite.

Most EBS configurations with IE and Firefox use non-static versioning by default. JRE 8 will be invoked instead of earlier JRE releases if both are installed on a Windows desktop. For more details, see "Appendix B: Static vs. Non-static Versioning and Set Up Options" in Notes 290807.1 and 393931.1.

Java Auto-Update Mechanism

With the release of the January 2015 Critical Patch Updates, the Java Auto-Update Mechanism will automatically update JRE 7 plug-ins to JRE 8.

What do Mac users need?

Mac users running Mac OS X 10.7 (Lion), 10.8 (Mountain Lion), 10.9 (Mavericks), and 10.10 (Yosemite) can run JRE 7 or 8 plug-ins.  See:

Will EBS users be forced to upgrade to JDK 7 for EBS application tier servers?

JRE ("Deployment Technology") is used for desktop clients.  JDK is used for application tier servers.

JDK upgrades for E-Business Suite application tier servers are highly recommended but currently remain optional while Java 6 is covered by Extended Support. Updates will be delivered via My Oracle Support, where you can continue to receive critical bug fixes and security fixes as well as general maintenance for JDK 6 for application tier servers. 

Java SE 6 (excluding Deployment Technology) is covered by Extended Support until December 2018.  All EBS customers with application tier servers on Windows, Solaris, and Linux must upgrade to JDK 7 (excluding Deployment Technology) by December 2018. EBS customers running their application tier servers on other operating systems should check with their respective vendors for the support dates for those platforms.

JDK 7 is certified with E-Business Suite 12.  See:

Known Issues

When using Internet Explorer, JRE 1.7.0_01 had a delay of around 20 seconds before the applet started to load. This issue is fixed in JRE 1.7.0_95.

Categories: APPS Blogs

TWO WAY SSL

Amis Blog - Fri, 2017-07-21 09:51

How it works in a simple view

Several implementations are done with two-way SSL certificates, but are you still wondering how it works?

Two-way SSL means that a client and a server communicate with each other over a verified connection. The verification is done with certificates that identify each party. Both the server and the client have a private key certificate and a public key certificate installed.

In short and simple terms.

A server has a private certificate which will be accepted by the client. The client also has a private certificate which will be accepted by the server. This is called the handshake, and once it is done it is safe to send messages to each other. The process looks like a cash withdrawal: putting in your credit card corresponds to sending a hello to the server, and your card will be accepted if it is valid for that machine. You will then be asked for your code; with two-way SSL, the server sends a code and the client accepts it. Back at the withdrawal machine, the display asks for your code and, by entering the right code, you send it to the server, which accepts the connection. Back in the two-way SSL process, the client sends a thumbprint which should be accepted by the server. When this process is finished, at the withdrawal machine you can enter the amount you want to receive; over the two-way SSL connection, a message can be sent. The withdrawal machine responds with cash and probably a receipt; the two-way SSL connection responds with a response message.

In detail.

These are the basic components necessary to communicate with two-way SSL over HTTPS.

Sending information to an HTTP address is done in plain text; if the communication is intercepted, the information is available to the attacker as clear text, which is unacceptable for most internet traffic. You do not want to send passwords in plain text over the internet, so HTTPS and a certificate are necessary.

The first part to describe is the public key.

A public key consists of a root certificate with one or more intermediate certificates. A certificate authority generates a root certificate, an intermediate certificate on top of it, and possibly another intermediate on top of that. This is done to narrow the set of clients that can communicate with you: a root certificate is used by several intermediates, and an intermediate certificate may in turn be used by other intermediates, so trusting the root certificate means accepting connections from all of its intermediates. A public key is not protected by a password and can be shared.
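
To make the chain structure concrete, here is a minimal Java sketch that reads a public chain and prints the subject and issuer of each certificate, so you can follow the leaf-to-intermediate-to-root relationship. The file name chain.pem is a hypothetical example, not something from the article.

import java.io.FileInputStream;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class PrintCertificateChain {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        // "chain.pem" is a hypothetical PEM file holding the public chain (leaf, intermediates, root)
        try (FileInputStream in = new FileInputStream("chain.pem")) {
            for (Certificate cert : cf.generateCertificates(in)) {
                X509Certificate x509 = (X509Certificate) cert;
                System.out.println("Subject: " + x509.getSubjectX500Principal());
                System.out.println("Issuer : " + x509.getIssuerX500Principal());
                System.out.println();
            }
        }
    }
}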

The second part is the private key.

A private key is built like a public key, but on top of the chain a private key is installed. This key is specific to the client and protected by a password. The private key represents you as a company or as a person, so you do not want to share it with anyone.

What happens when setting up a two-way SSL connection?

The first step in the communication is that the client sends a hello to the server, after which information is exchanged. The server sends the client a request containing an encoded string (the thumbprint) of its private key, together with the public chain underneath it, asking whether the client will accept the communication. When the public key in the request matches a public key on the client, an OK is sent back. The server also asks for the client's encoded string, so the client sends the encoded thumbprint of its own key to the server. When the server accepts it, because it matches a public key on the server, the connection between client and server is established and messages can be sent.
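
To illustrate the two sides of the handshake, here is a minimal Java client sketch, assuming a keystore holding the client's private key (what the client presents) and a truststore holding the public keys it is willing to accept. The file names, passwords and URL are hypothetical examples.

import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TwoWaySslClient {
    public static void main(String[] args) throws Exception {
        char[] keyPass = "changeit".toCharArray();  // hypothetical password

        // The client's private key and certificate chain (what it presents to the server)
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client.p12")) {  // hypothetical path
            keyStore.load(in, keyPass);
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyPass);

        // The public keys (root/intermediate certificates) of servers the client is willing to trust
        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("truststore.jks")) {  // hypothetical path
            trustStore.load(in, "changeit".toCharArray());
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        // The SSLContext combines both sides of the mutual handshake
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://example.com/service").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}

The server side mirrors the same idea: it presents its own private key and only accepts clients whose public keys are present in its truststore.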

A certificate has an expiration date, so a certificate (public and private) only works until that date is reached. It normally takes some time to receive a new certificate, so request the replacement well in advance.

A certificate also carries a version; version 3 is currently the standard. The term SHA will come up as well: certificates started with SHA-1, but that is no longer considered safe enough, so nowadays SHA-2 certificates are used (shown as SHA256).
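
If you want to check the version, signature algorithm and expiration date of a certificate yourself, a small Java sketch like the following will do; the file name server.cer is a hypothetical example.

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class InspectCertificate {
    public static void main(String[] args) throws Exception {
        // "server.cer" is a hypothetical exported certificate file
        try (FileInputStream in = new FileInputStream("server.cer")) {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);

            System.out.println("Version            : " + cert.getVersion());      // typically 3
            System.out.println("Signature algorithm: " + cert.getSigAlgName());   // e.g. SHA256withRSA
            System.out.println("Valid until        : " + cert.getNotAfter());     // expiration date
        }
    }
}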

The post TWO WAY SSL appeared first on AMIS Oracle and Java Blog.

Video: Basic Help for Docker Noobs

OTN TechBlog - Fri, 2017-07-21 09:12

Mike Raab, senior principal product manager for the Oracle Container Cloud Service, kicked off his Oracle Code Atlanta session "Introduction to Docker Containers" by asking the standing-room-only audience how many had hands-on experience with Docker. Only four people held up their hands. "People have heard about  this thing called containerization, and they're hungry to really understand this new paradigm of being able to run an application within a container," Mike says.

If you share that hunger, this appetizer of an interview will provide some basic background into why Docker is such a hot topic, along with a brief overview of a couple of illustrative use cases. Watch the video!

Additional Resources

Building a Batch Level of Service Algorithm

Anthony Shorten - Fri, 2017-07-21 00:45

One of the features of the Oracle Utilities Application Framework is the Batch Level of Service. This is an optional feature where the Oracle Utilities Application Framework can assess the current execution metrics against some target metrics and return whether the batch job met its targets or failed to meet them (including the reason).

This facility is optional and requires some configuration on the Batch Control using a Batch Level Of Service algorithm. This algorithm takes the BATCH_CD as an input and performs the necessary processing to check the level of service (in any way you wish).

The algorithm passes in a Batch Code (batchControlId) and it passes back the following:

  • The Level Of Service, levelOfService (as expressed by the system lookup F1_BATCH_LEVEL_SERVICE_FLG):
    • DISA (Disabled) - The Batch Level Of Service is disabled as the algorithm is not configured on the Batch Control record. This is the default.
    • NORM (Normal) - The execution of the batch job is within the service level you are checking.
    • ERRO (Error) - The execution of the batch job exceeds the service level you are checking.
    • WARN (Warning) - This can be used to detect that the job is close to the service level (if you require this functionality).
  • The reason for the Level Of Service, expressed as a message (via Message Category, Message Number and Message Parameters). This allows you to customize the information passed to express why the target was within limits or exceeded.

So it is possible to use any metric in your algorithm to measure the target performance of your batch controls. This information will be displayed on the Batch Control or via the F1-BatchLevelOfService Business Service (for query portals).

Now, I will illustrate the process for building a Batch Level Of Service with an example algorithm. This sample will just take a target value and assess the latest completed execution. The requirements for the sample algorithm are as follows:

  • A target will be set on the parameters of the algorithm, which is the target value in seconds. Seconds was chosen as that is the lowest common denominator for all types of jobs.
  • The algorithm will determine the latest batch number or batch rerun number (to support reruns) for completed jobs only. We have an internal business service, F1-BatchRunStatistics, that returns the relevant statistics when given the batch code, batch number and batch rerun number.
  • The duration returned will be compared to the target and the relevant levelOfService set with the appropriate message, as sketched after this list.
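
To make the comparison concrete before diving into the product configuration, here is a minimal sketch of the decision logic in plain Java. In the product this logic lives in the plug-in script shown later, not in Java, and the 10% warning threshold is purely an assumption of this sketch.

public class LevelOfServiceSketch {

    enum LevelOfService { NORM, WARN, ERRO }

    // Compare the duration of the latest completed run against the target (both in seconds)
    static LevelOfService assess(long lastRunSeconds, long targetSeconds) {
        if (lastRunSeconds > targetSeconds) {
            return LevelOfService.ERRO;                 // target exceeded
        }
        if (lastRunSeconds >= targetSeconds * 0.9) {
            return LevelOfService.WARN;                 // within 10% of the target (optional state)
        }
        return LevelOfService.NORM;                     // comfortably within the target
    }

    public static void main(String[] args) {
        System.out.println(assess(80, 100));   // NORM
        System.out.println(assess(95, 100));   // WARN
        System.out.println(assess(130, 100));  // ERRO
    }
}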

Here is the process I used to build my algorithm:

  • I created three custom messages that hold the reason for the NORM, ERRO and WARN states. I do not use the last state in my algorithm, though in a future set of articles I might revisit that. For example:

Messages for Batch Level Of Service

  • You might notice that in the message for when the target is exceeded, I include the target itself as part of the message (to tell you how far away from the target you are). The first parameter will be the target and the second will be the value returned from the product.
  • The next step is to define the Business Service that will return the batch identifiers of the execution I want to evaluate for the statistic. In this case I want to find the latest run number for a given batch code. Now, there are various ways of doing this, but I will build a business service to bring back the right value. In this case I will do the following:
    • I will build a query zone with the following configuration to return the batch run number and batch rerun number:
      Parameter / Setting:
        • Zone: CMBHZZ
        • Description: Return Batch Last Run Number and Rerun Number
        • Zone Type: F1-DE-SINGLE
        • Application Service: F1-DFLTS
        • Width: Full
        • Hidden Filter 1: label=BATCH_CD
        • Initial Display Columns: C1 C2 C3
        • SQL Statement:
            select b1.batch_cd, max(b1.batch_nbr), max(b2.batch_rerun_nbr)
            from ci_batch_inst b1, ci_batch_inst b2
            where b1.batch_cd = :H1
              and b1.batch_cd = b2.batch_cd
              and b1.batch_nbr = b2.batch_nbr
            group by b1.batch_cd
        • Column 1: source=SQLCOL sqlcol=1 label=BATCH_CD
        • Column 2: source=SQLCOL sqlcol=2 label=BATCH_NBR
        • Column 3: source=SQLCOL sqlcol=3 label=BATCH_RERUN_NBR
  • I will convert this to a Business Service using the FWLZDEXP with the following schema:

Business Service Schema

  • I need to create a Data Area to hold my input variables. I could do this inline but I might want to reuse the Data Area for other algorithms in the future. For example:

Data Area

  • I now have all the components to start building my algorithm via a Plug-In Script. I create a Batch Level Of Service script with the following settings:
Script Basics
  • I attach the following Data Areas. These are the data areas used by the various calls in the script:

Data Areas

  • The script code looks something like this:

Script

Note: The code shown above is for illustrative processes. It is not a supported part of the product, just an example.

  • I now create the Algorithm Type that will define the algorithm parameters and the interface for the Algorithm entries. Notice the only parameter is the Target Value:

Sample Algorithm Type

  • Now I create the Algorithm entries to set the target value. For example:

Example Algorithm

  • I can create many different algorithm entries to reuse across the batch controls. For example:

Example Algorithms

  • The final step is to add it to the Batch Controls ready to be used. As I wrote the script as a Plug-In Script there is no deployment needed as it auto deploys. For example, on the Batch Control, I can add the algorithm:

Example Algorithm configuration on batch control

  • Now the Batch Level Of Service will be invoked whenever I open the Batch Control. For example:

Example Normal outcome from algorithm

Example outcome of Error

This example is just one use case to illustrate the use of Batch Level Of Service. This article is the first in a new series of articles that will use this as a basis for a new set of custom portals to help plan and optimize your batch experience.

DB Performance issue on VMware infra, Guest OS is Linux

Tom Kyte - Thu, 2017-07-20 23:06
We are planning to migrate an Oracle 11g DB from an HP-UX environment to VMware infrastructure, but while doing performance testing we are not getting the expected performance when executing a batch job. The benchmark is 200k transactions per minute, but on Linux...
Categories: DBA Blogs

DBA_FEATURE_USAGE_STATISTICS result interpretation

Tom Kyte - Thu, 2017-07-20 23:06
I queried the DBA_FEATURE_USAGE_STATISTICS to check if advance compression is being used anywhere. One specific record caught my attention. Here is the row in a single record format: DBID: Suppressed NAME: Oracle Utility Datapump (Export) VERSIO...
Categories: DBA Blogs

Not able to recover Space even after Shrink space

Tom Kyte - Thu, 2017-07-20 23:06
Hi Tom, I have DB with one tablespace and many datafiles whereas tables are spread over many datafiles. I used below query to check fragmented tables list. select owner,table_name,blocks,num_rows,avg_row_len,round(((blocks*8/1024)),2) "TOTAL_SIZE...
Categories: DBA Blogs

PL/SQL running slow after database upgrade

Tom Kyte - Thu, 2017-07-20 23:06
We have a nightly batch process that pulls data into the database (runs in parallel) . It usually takes 8 Hours to run and after we upgraded our database to oracle 12c, This same process now takes 17 hours to run. Any Advice on what to do ?
Categories: DBA Blogs

Update column in one table with grouped data from another

Tom Kyte - Thu, 2017-07-20 23:06
Hello, I was wondering if it is possible to group related records from one table to use as the data to be updated in another table. It seems logical to me but it is not working. I added a trimmed down version of the two tables with only the required...
Categories: DBA Blogs

Drop partition from MVIEWs

Tom Kyte - Thu, 2017-07-20 23:06
Hello Tom ! A question about PMOs on MVIEWs. It seems that PMO (split, merge, add, drop, rename, ... partition) operations on materialized views are working without need to drop materialized view to its prebuilt table. Also for FAST refreshabl...
Categories: DBA Blogs

"Fast" MERGE PARTITION

Tom Kyte - Thu, 2017-07-20 23:06
Hi Tom ! As of Oracle 12c there exists FAST SPLIT PARTITION. Is there something similar for MERGE PMO? The exact problem is this. As currently there is no way to increase HIGH VALUE of RANGE PARTITION (at least I'm aware of), the only way t...
Categories: DBA Blogs
