Yann Neuhaus

dbi services technical blog

Recommended DB Parameters on the Oracle Database Appliance (ODA)

Tue, 2017-07-18 10:06

When creating a DB on the ODA using


# oakcli create database -db <db-name>

a template is used to set a number of DB parameters, e.g.:


_datafile_write_errors_crash_instance=FALSE
_db_writer_coalesce_area_size=16777216
_disable_interface_checking=TRUE
_ENABLE_NUMA_SUPPORT=FALSE
_FILE_SIZE_INCREASE_INCREMENT=2143289344
_gc_policy_time=0
_gc_undo_affinity=FALSE
db_block_checking='FULL'
db_block_checksum='FULL'
db_lost_write_protect='TYPICAL'
filesystemio_options='setall'
parallel_adaptive_multi_user=FALSE
parallel_execution_message_size=16384
parallel_threads_per_cpu=2
use_large_pages='ONLY'

In recent projects I saw several DBs running on ODA that did not have (all of) those parameters set, because the DBs had been migrated from a non-ODA platform and the customer had kept the previous settings.
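A quick way to see which parameters such a migrated DB has explicitly set is to look at the spfile contents. A minimal sketch (V$SPPARAMETER lists the spfile entries, including underscore parameters, when ISSPECIFIED is TRUE):

SQL> select name, value
       from v$spparameter
      where isspecified = 'TRUE'
      order by name;

Comparing that output with the list above quickly shows what is missing.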

The questions are: Are the above parameters mandatory on ODA, and where are they defined?

Actually Oracle writes in the documentation

http://docs.oracle.com/cd/E83239_01/doc.121/e83201/database-templates-oracle-database-appliance.htm#CMTAR269

“Oracle strongly recommends that you use the Oracle Database Appliance templates, because they implement best practices and are configured specifically for Oracle Database Appliance.”

So it’s not mandatory, but it’s “strongly recommended” by Oracle to set those parameters on ODA.

The parameters are actually defined in the XML files

/opt/oracle/oak/onecmd/templates/OAK_oltp.dbt
/opt/oracle/oak/onecmd/templates/OAK_dss.dbt

E.g. on a virtualized X5-2 HA with 12.1.2.8.0 installed:


# grep "initParam name" /opt/oracle/oak/onecmd/templates/OAK_oltp.dbt
<initParam name="AUDIT_SYS_OPERATIONS" value="TRUE"/>
<initParam name="AUDIT_TRAIL" value="DB"/>
<initParam name="GLOBAL_NAMES" value="TRUE"/>
<initParam name="OS_AUTHENT_PREFIX" value=""/>
<initParam name="SQL92_SECURITY" value="TRUE"/>
<initParam name="PARALLEL_ADAPTIVE_MULTI_USER" value="FALSE"/>
<initParam name="PARALLEL_EXECUTION_MESSAGE_SIZE" value="16384"/>
<initParam name="PARALLEL_THREADS_PER_CPU" value="2"/>
<initParam name="_disable_interface_checking" value="TRUE"/>
<initParam name="_gc_undo_affinity" value="FALSE"/>
<initParam name="_gc_policy_time" value="0"/>
<initParam name="SESSION_CACHED_CURSORS" value="100"/>
<initParam name="OPEN_CURSORS" value="1000"/>
<initParam name="CURSOR_SHARING" value="EXACT"/>
<initParam name="_ENABLE_NUMA_SUPPORT" value="FALSE"/>
<initParam name="DB_LOST_WRITE_PROTECT" value="TYPICAL"/>
<initParam name="DB_BLOCK_CHECKSUM" value="FULL"/>
<initParam name="DB_BLOCK_CHECKING" value="FULL"/>
<initParam name="FAST_START_MTTR_TARGET" value="300"/>
<initParam name="UNDO_RETENTION" value="900"/>
<initParam name="_FILE_SIZE_INCREASE_INCREMENT" value="2143289344"/>
<initParam name="FILESYSTEMIO_OPTIONS" value="setall"/>
<initParam name="use_large_pages" value="only"/>
<initParam name="DB_FILES" value="1024"/>
<initParam name="processes" value="4800"/>
<initParam name="pga_aggregate_target" value="49152" unit="MB"/>
<initParam name="sga_target" value="98304" unit="MB"/>
<initParam name="db_create_file_dest" value="+DATA"/>
<initParam name="log_buffer" value="64000000" />
<initParam name="cpu_count" value="48"/>
<initParam name="pga_aggregate_limit" value="49152" unit="MB"/>
<initParam name="_datafile_write_errors_crash_instance" value="false"/>
<initParam name="_fix_control" value="18960760:on"/>
<initParam name="db_block_size" value="8" unit="KB"/>
<initParam name="compatible" value="11.2.0.x.0"/>
<initParam name="undo_tablespace" value="UNDOTBS1"/>
<initParam name="control_files" value="("{ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/control01.ctl")"/>
<initParam name="audit_file_dest" value="{ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump"/>
<initParam name="audit_trail" value="db"/>
<initParam name="diagnostic_dest" value="{ORACLE_BASE}"/>
<initParam name="remote_login_passwordfile" value="EXCLUSIVE"/>
<initParam name="dispatchers" value="(PROTOCOL=TCP) (SERVICE={SID}XDB)"/>
<initParam name="db_recovery_file_dest" value="+RECO"/>
<initParam name="db_recovery_file_dest_size" value="1843200" unit="MB"/>
<initParam name="db_create_online_log_dest_1" value="+REDO" />
<initParam name="_db_writer_coalesce_area_size" value="16777216"/>

Oracle does not apply those parameters blindly when creating a DB with oakcli, but adjusts some of them, as described e.g. in this blog post:

https://blog.dbi-services.com/oda-32gb-template-but-got-a-database-with-16gb-sga/

I.e. the SGA_TARGET and PGA_AGGREGATE_TARGET parameters are adjusted based on the chosen database class, and the COMPATIBLE parameter is set to the current release (e.g. 12.1.0.2.0).

So if you’re not able to create the DB on ODA using

# oakcli create database -db <db-name>

then I recommend checking the XML file OAK_oltp.dbt (or, for a Decision Support System/Warehouse DB, the file OAK_dss.dbt) and setting the parameters in your database accordingly. Alternatively (and probably even better) you may create a dummy DB using oakcli, take over Oracle's settings into your migrated DB and drop the dummy DB afterwards.
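If you go down the manual route, here is a short sketch of what that could look like for a few of the parameters listed above (only a hedged example; adapt the list and values to your template and database class, and note that the settings are written to the spfile and need an instance restart to become active):

SQL> alter system set "_datafile_write_errors_crash_instance"=FALSE scope=spfile;
SQL> alter system set db_block_checking='FULL' scope=spfile;
SQL> alter system set db_block_checksum='FULL' scope=spfile;
SQL> alter system set db_lost_write_protect='TYPICAL' scope=spfile;
SQL> alter system set filesystemio_options='setall' scope=spfile;
SQL> alter system set use_large_pages='ONLY' scope=spfile;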

Here are the parameters of a 12c DB created on a virtualized ODA X6-2 HA 12.1.2.11.0 with oakcli create database, using the smallest setting with 1 core and the DSS template:


*._datafile_write_errors_crash_instance=false
*._db_writer_coalesce_area_size=16777216
*._disable_interface_checking=TRUE
*._ENABLE_NUMA_SUPPORT=FALSE
*._FILE_SIZE_INCREASE_INCREMENT=2143289344
*._fix_control='18960760:on'
*._gc_policy_time=0
*._gc_undo_affinity=FALSE
*.audit_file_dest='/u01/app/oracle/admin/C12TEST/adump'
*.audit_sys_operations=TRUE
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='/u01/app/oracle/oradata/datastore/C12TEST/C12TEST/controlfile/o1_mf_dpw4ljnv_.ctl'
*.cpu_count=2
*.cursor_sharing='EXACT'
*.db_block_checking='FULL'
*.db_block_checksum='FULL'
*.db_block_size=8192
*.db_create_file_dest='/u02/app/oracle/oradata/datastore/.ACFS/snaps/C12TEST'
*.db_create_online_log_dest_1='/u01/app/oracle/oradata/datastore/C12TEST'
*.db_domain=''
*.db_files=1024
*.db_lost_write_protect='TYPICAL'
*.db_name='C12TEST'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area/datastore/C12TEST'
*.db_recovery_file_dest_size=476160m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=C12TESTXDB)'
*.fast_start_mttr_target=300
*.filesystemio_options='setall'
*.global_names=TRUE
*.inmemory_size=0m
*.log_archive_format='%t_%s_%r.dbf'
*.log_buffer=16000000
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.os_authent_prefix=''
*.parallel_adaptive_multi_user=TRUE
*.parallel_degree_policy='MANUAL'
*.parallel_execution_message_size=16384
*.parallel_force_local=FALSE
*.parallel_max_servers=80
*.parallel_min_servers=8
*.parallel_threads_per_cpu=2
*.pga_aggregate_limit=2048m
*.pga_aggregate_target=2048m
*.processes=200
*.remote_login_passwordfile='exclusive'
*.session_cached_cursors=100
*.sga_target=1024m
*.sql92_security=TRUE
*.undo_retention=900
*.use_large_pages='ONLY'

 

The post Recommended DB Parameters on the Oracle Database Appliance (ODA) appeared first on Blog dbi services.

Documentum – Change password – 2 – CS – dm_bof_registry

Sat, 2017-07-15 03:30

When installing a Global Registry on a Content Server, you will be asked to set up the BOF username and password. The name of this user is by default "dm_bof_registry", so even though you can change it, I will use this value in this blog. This is one of the important accounts that are created inside the Global Registry. So, what are the steps needed to change the password of this account?

 

Let's start with the simple part: changing the password of the account in the Global Registry. For this, I will use iapi below, but you can do the same thing using Documentum Administrator, idql, dqMan or anything else that works. First, let's log in to the Content Server, switch to the Installation Owner's account and start by defining an environment variable that will contain the NEW password to be used:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the dm_bof_registry password: " bof_pwd; echo
Please enter the dm_bof_registry password:
[dmadmin@content_server_01 ~]$

 

Once that is done, we can execute the iapi commands below to update the password for the dm_bof_registry account. As there is a local trust on the Content Server for the Installation Owner, I don't need to enter the password, so I use "xxx" instead to log in to the Global Registry (GR_DOCBASE). Don't include the "> " characters: just paste the iapi commands and, after pasting the final EOF, an iapi session will be opened and all commands will be executed, like that:

[dmadmin@content_server_01 ~]$ iapi GR_DOCBASE -Udmadmin -Pxxx << EOF
> retrieve,c,dm_user where user_login_name='dm_bof_registry'
> set,c,l,user_password
> $bof_pwd
> save,c,l
> EOF


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000905 started for user dmadmin."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> ...
110f123456000144
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content_server_01 ~]$

 

Then to verify that the password has been set properly in the Global Registry, we can try to login with the dm_bof_registry account:

[dmadmin@content_server_01 ~]$ echo quit | iapi GR_DOCBASE -Udm_bof_registry -P$bof_pwd


    EMC Documentum iapi - Interactive API interface
    (c) Copyright EMC Corp., 1992 - 2015
    All rights reserved.
    Client Library Release 7.2.0000.0054


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f123456000906 started for user dm_bof_registry."


Connected to Documentum Server running Release 7.2.0000.0155  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@content_server_01 ~]$

 

If the password has been changed properly, the output will be similar to the one above: a session will be opened and the only command executed will be “quit” which will close the iapi session automatically. That was pretty easy, right? Well that’s clearly not all there is to do to change the BOF password, unfortunately…

 

The "problem" with the dm_bof_registry account is that it is used on all DFC clients to register them, to establish trust, and so on. Therefore, if you change the password of this account, you will need to reflect this change on all clients that connect to your Content Servers. In the steps below, I will provide some commands that can be used to do that on the typical DFC clients (JMS, xPlore, DA, D2, …). If one of your DFC clients isn't covered here, the steps are basically always the same, it's just the commands that differ:

  • Listing all dfc.keystore
  • Updating the dfc.properties
  • Removing/renaming the dfc.keystore files
  • Restarting the DFC clients
  • Checking that the dfc.keystore files have been recreated

 

Before going through the different DFC clients, you first need to encrypt the BOF user's password, because it is always used in its encrypted form, so let's encrypt it on a Content Server:

[dmadmin@content_server_01 ~]$ $JAVA_HOME/bin/java -cp $DOCUMENTUM_SHARED/dfc/dfc.jar com.documentum.fc.tools.RegistryPasswordUtils ${bof_pwd}
AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0
[dmadmin@content_server_01 ~]$

 

I generated a random string for this example ("AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0"), but this stands for the encrypted password of our user. I will use this value in the commands below, so whenever you see it, just replace it with what your "java -cp .." command returned.

 

I. Content Server

On the Content Server, the main DFC client is the JMS. You will have one dfc.properties for each JMS application, one global one for the CS, and so on. So, let's update all that with only a few commands. Normally you should only have the definition of dfc.globalregistry.password in the file $DOCUMENTUM_SHARED/config/dfc.properties. If you have this definition elsewhere, you should maybe consider using the "#include" statement to avoid duplicating the definitions…

[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' $DOCUMENTUM_SHARED/config/dfc.properties
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ $DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &
[dmadmin@content_server_01 ~]$
[dmadmin@content_server_01 ~]$ for i in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

If you do it properly, all the dfc.keystore files will be recreated with the restart and you can verify that by comparing the output of the first and last commands.

 

II. WebLogic Server

In this part, I will assume a WebLogic Server is used for the D2, D2-Config and DA applications. If you are using Tomcat instead, then just adapt the path. Below I will use:

  • $WLS_APPLICATIONS as the directory where all the Application WAR files are present. If you are using exploded applications (it’s just a folder, not a WAR file) OR if you are using an external dfc.properties file (it’s possible even with a WAR file to extract the dfc.properties for it), then the “jar -xvf” and “jar -uvf” commands aren’t needed.
  • $WLS_APPS_DATA as the directory where the Application Data are present (Application log files, dfc.keystore, cache, …)

 

These two folders might be the same depending on how you configured your Application Server. All I’m doing below is just updating the dfc.properties files for D2, D2-Config and DA in order to use the new encrypted password.

[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; done
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ cd $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 ~]$ jar -xvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf D2-Config.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ jar -xvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$ jar -uvf da.war WEB-INF/classes/dfc.properties
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ for i in `find $WLS_APPS_DATA -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done

 

Once done, the next steps depend, again, on how you configured your Application Server. If you are using WAR files, you will need to redeploy them. If not, you might have to restart your Application Server for the change to be taken into account and for the keystore file to be re-created.

 

III. Full Text Server

On the Full Text Server, it's again the same procedure, but this time for all Index Agents.

[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `ls $XPLORE_HOME/jboss7.1.1/server/DctmServer_*/deployments/IndexAgent.war/WEB-INF/classes/dfc.properties`; do sed -i 's,dfc.globalregistry.password=.*,dfc.globalregistry.password=AAAAEE0QvvSIFuiXKd4kNg2Ff1dLf0gacNpofNLtKxoGd2iDFQax0,' ${i}; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; mv "${i}" "${i}_bck_$(date "+%Y%m%d")"; done
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ service xplore stop
[xplore@xplore_server_01 ~]$ service xplore start
[xplore@xplore_server_01 ~]$
[xplore@xplore_server_01 ~]$ for i in `find $XPLORE_HOME -type f -name "dfc.keystore"`; do ls -l ${i}; done

 

Again if you do it properly, all the dfc.keystore files will be recreated with the restart.

 

When everything has been done, just let the environment run for some time and check the logs for authentication failures regarding the dm_bof_registry user. As you saw above, changing the dm_bof_registry password isn't really complicated, but it's quite repetitive and time consuming, so better script all this! :)
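As a starting point for such a script, here is a minimal sketch for the Content Server part only, built from the commands shown above (the encrypted password is passed as the first argument; paths and restart commands are the ones of my environment and have to be adapted to yours):

#!/bin/bash
# Minimal sketch - update the BOF registry password on a Content Server
# usage: ./update_bof_password.sh <encrypted_password>
enc_pwd=$1
# update the global registry password in every dfc.properties that defines it
for f in `find $DOCUMENTUM_SHARED -type f -name "dfc.properties"`; do
  grep -q "^dfc.globalregistry.password=" ${f} && \
    sed -i "s,dfc.globalregistry.password=.*,dfc.globalregistry.password=${enc_pwd}," ${f}
done
# rename the keystore files so they are recreated on the next start
for k in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do
  mv "${k}" "${k}_bck_$(date "+%Y%m%d")"
done
# restart the JMS so the keystores are recreated
$DOCUMENTUM_SHARED/jboss7.1.1/server/stopMethodServer.sh
nohup $DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh >> $DOCUMENTUM_SHARED/jboss7.1.1/server/nohup-JMS.out 2>&1 &
# list the recreated keystores for a quick check
for k in `find $DOCUMENTUM_SHARED -type f -name "dfc.keystore"`; do ls -l ${k}; done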

 

 

 

The post Documentum – Change password – 2 – CS – dm_bof_registry appeared first on Blog dbi services.

Documentum – Change password – 1 – CS – AEK and Lockbox

Sat, 2017-07-15 02:53

This blog is the first one of a series that I will publish in the next few days/weeks regarding how you can change some passwords in Documentum. In these blogs, I will talk about a lot of accounts like the Installation Owner of course, the Preset and Preferences accounts, the JKS passwords, the JBoss Admin passwords, the xPlore xDB passwords, and so on.

 

So, let's dig in with the first ones: the AEK and Lockbox passphrases. In this blog, I will only talk about the Content Server lockbox; it is not about the D2 Lockbox (which also sits under the JMS). I'm assuming here that the AEK key is stored in the Content Server lockbox, as is recommended starting with CS 7.2 for security reasons.

 

In this blog, I will use "dmadmin" as the Installation Owner. First, you need to connect to the Content Server using the Installation Owner account. In case you have a High Availability environment, you will obviously need to do this on all Content Servers.

 

Then, I'm defining some environment variables so I'm sure I'm using the right passphrases and there is no typo in the commands. The first two commands below will be used to store the CURRENT and NEW passphrases for the AEK, the last two are for the Lockbox. When you execute the "read" command, the prompt isn't returned. Just paste the passphrase (it's hidden) and press enter. Then the prompt is returned and the passphrase is stored in the environment variable. I'm describing this in this blog only; in the next blogs, I will just use the commands without explanation:

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT AEK passphrase: " c_aek_pp; echo
Please enter the CURRENT AEK passphrase:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW AEK passphrase: " n_aek_pp; echo
Please enter the NEW AEK passphrase:
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the CURRENT lockbox passphrase: " c_lb_pp; echo
Please enter the CURRENT lockbox passphrase:
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW lockbox passphrase: " n_lb_pp; echo
Please enter the NEW lockbox passphrase:
[dmadmin@content_server_01 ~]$

 

Maybe a small backup of the Lockbox, just in case…:

[dmadmin@content_server_01 ~]$ cp -R $DOCUMENTUM/dba/secure $DOCUMENTUM/dba/secure_bck_$(date "+%Y%m%d-%H%M")
[dmadmin@content_server_01 ~]$

 

Ok, then, to ensure that the commands will go smoothly, let's just verify that the environment variables are defined properly (I'm adding "__" at the end of the echo commands to be sure there is no space at the end of the passphrases). Obviously the "read -s" commands above have been used to hide the passphrases, so if you don't want the passphrases to end up in the history of the shell, don't execute the two commands below.

[dmadmin@content_server_01 ~]$ echo "CURRENT_AEK_PP=${c_aek_pp}__"; echo "NEW_AEK_PP=${n_aek_pp}__"
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ echo "CURRENT_LOCKBOX_PP=${c_lb_pp}__"; echo "NEW_LOCKBOX_PP=${n_lb_pp}__"
[dmadmin@content_server_01 ~]$

 

To verify that the CURRENT AEK and Lockbox passphrases are correct, you can execute the following commands. Just a note: when you first create the Lockbox, the Documentum Installer will ask you which algorithm you want to use… I always choose the strongest one for security reasons, so I'm using "AES_256_CBC" below. If you are using something else, just adapt it:

[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -passphrase ${c_aek_pp} -all

Please wait. This will take a few seconds ...

Please wait, this will take a few seconds..
Setting up the (single) passphrase for all keys in the shared memory region..
Operation succeeded
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -resetfingerprint
Lockbox lockbox.lb
Lockbox Path /app/dctm/server/dba/secure/lockbox.lb
Reset host done
[dmadmin@content_server_01 ~]$ 
[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -algorithm AES_256_CBC -check


Key - CSaek uses algorithm AES_256_CBC.

** An AEK store with the given passphrase exists in lockbox lockbox.lb and got status code returned as '0'.

 

For the three commands above, the result should always be "Operation succeeded", "Reset host done" and "got status code returned as '0'". If the second command fails, then obviously it's the Lockbox passphrase that isn't set properly; otherwise it's the AEK passphrase.
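If you want to run all three checks in one go, here is a minimal sketch that simply greps the output for the expected messages listed above (I'm deliberately not relying on the exit codes of the dm_crypto utilities, only on those messages):

dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -passphrase ${c_aek_pp} -all \
  | grep "Operation succeeded" || echo "ERROR: AEK or Lockbox passphrase is wrong"
dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -resetfingerprint \
  | grep "Reset host done" || echo "ERROR: Lockbox passphrase is wrong"
dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -algorithm AES_256_CBC -check \
  | grep "status code returned as '0'" || echo "ERROR: AEK passphrase is wrong"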

 

Ok, now that all the variables are set and the current passphrases are verified, we can start the update of the passphrases. Let's start with the AEK:

[dmadmin@content_server_01 ~]$ dm_crypto_change_passphrase -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -keyname CSaek -passphrase ${c_aek_pp} -newpassphrase ${n_aek_pp}
[dmadmin@content_server_01 ~]$

 

Then the Lockbox:

[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${c_lb_pp} -changepassphrase -newpassphrase ${n_lb_pp}
[dmadmin@content_server_01 ~]$

 

To verify that the NEW passphrases are now used, you can again run the three above commands. The only difference is that you need to use the environment variables for the NEW passphrases and not the CURRENT (old) ones:

[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -passphrase ${n_aek_pp} -all
[dmadmin@content_server_01 ~]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -resetfingerprint
[dmadmin@content_server_01 ~]$ dm_crypto_create -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -keyname CSaek -passphrase ${n_aek_pp} -algorithm AES_256_CBC -check

 

Now we are almost done. If the three previous commands gave the correct output, then it is pretty safe to say that everything is OK. Nevertheless, to be 100% sure that the Content Server Lockbox isn't corrupted in some way, it is always good to reboot the Linux host too. Once the Linux host is up & running again, you will have to execute the first command above (the dm_crypto_boot) to store the Lockbox information in the Shared Memory so that the docbase(s) can start. If you are able to start the docbase(s) using the NEW passphrases, then the AEK and Lockbox have been updated successfully!
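A minimal sketch of that post-reboot sequence (remember that the environment variables do not survive the reboot, so the NEW passphrases have to be entered again; GR_DOCBASE is just an example docbase name here, and I'm assuming the standard $DOCUMENTUM/dba/dm_start_<docbase> script is used to start the docbase):

[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW AEK passphrase: " n_aek_pp; echo
[dmadmin@content_server_01 ~]$ read -s -p "Please enter the NEW lockbox passphrase: " n_lb_pp; echo
[dmadmin@content_server_01 ~]$ dm_crypto_boot -lockbox lockbox.lb -lockboxpassphrase ${n_lb_pp} -passphrase ${n_aek_pp} -all
[dmadmin@content_server_01 ~]$ $DOCUMENTUM/dba/dm_start_GR_DOCBASE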

 

As a side note, if the Server Fingerprint has been updated (e.g. after some recent OS patching or similar), then you might need to execute the second command (dm_crypto_manage_lockbox) again, as well as regenerate the D2 Lockbox (which isn't described in this blog but will be in an upcoming one).

The post Documentum – Change password – 1 – CS – AEK and Lockbox appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 16 – DDL triggers

Fri, 2017-07-14 09:52

A question I received recently from a customer: Oracle gives you the possibility to create DDL triggers. DDL triggers fire (as the name implies) when DDL events occur. Can we do that in PostgreSQL? Yes, this feature is called event triggers. There is a nice list in the documentation which shows for which DDL statements an event trigger can fire: Event Trigger Firing Matrix. Let's go …

As usual, let's create a dummy table we can work with:

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

What we want to do is to prevent any modification of the table structure. How do we do this? Obviously we need a way to catch the ALTER statement against our table and then raise an exception displaying some text. What we need is a function which returns the pseudo type "event_trigger":

CREATE OR REPLACE FUNCTION no_ddl() RETURNS event_trigger AS $$
DECLARE r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        RAISE EXCEPTION 'You are not allowed to change %', r.object_identity;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

What this function does is iterate over the result set of pg_event_trigger_ddl_commands and then raise an exception. In addition to that we need the event trigger that calls the function:

CREATE EVENT TRIGGER no_ddl_allowed
  ON ddl_command_end WHEN TAG IN ('ALTER TABLE')
  EXECUTE PROCEDURE no_ddl();

Let's see if it works:

postgres=# alter table t1 add column g text;
ERROR:  You are not allowed to change public.t1
CONTEXT:  PL/pgSQL function no_ddl() line 5 at RAISE

Cool, but there is an issue with the current implementation:

postgres=# create table t2 ( a int );
CREATE TABLE
postgres=# alter table t2 add column b text;
ERROR:  You are not allowed to change public.t2
CONTEXT:  PL/pgSQL function no_ddl() line 5 at RAISE

What we effectively did is deny ALTER TABLE statements for all tables in that database. This is probably not what you want. A better approach is this:

CREATE OR REPLACE FUNCTION no_ddl() RETURNS event_trigger AS $$
DECLARE r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
      IF ( r.objid::regclass::text = 't1' )
      THEN
            RAISE EXCEPTION 'You are not allowed to change %', r.object_identity;
      END IF;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

This way we are only raising the exception when the table "t1" is involved and doing nothing for all other tables:

postgres=# alter table t2 add column b text;
ALTER TABLE
postgres=# alter table t1 add column b text;
ERROR:  You are not allowed to change public.t1
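
One more thing that can be handy: if you need to modify "t1" yourself, you do not have to drop the event trigger, you can temporarily disable and re-enable it. A quick sketch:

postgres=# alter event trigger no_ddl_allowed disable;
ALTER EVENT TRIGGER
postgres=# alter table t1 add column b text;
ALTER TABLE
postgres=# alter event trigger no_ddl_allowed enable;
ALTER EVENT TRIGGER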

Hope this helps …

 

The post Can I do it with PostgreSQL? – 16 – DDL triggers appeared first on Blog dbi services.

What are typed tables in PostgreSQL?

Thu, 2017-07-13 15:29

While reading the PostgreSQL documentation for “CREATE TABLE” I came across this:

“OF type_name: Creates a typed table, which takes its structure from the specified composite type (name optionally schema-qualified). A typed table is tied to its type; for example the table will be dropped if the type is dropped (with DROP TYPE … CASCADE).”

Sounds interesting, let's have a look.

Obviously we’ll need a composite type to make use of the feature described above:

postgres=# create type ctyp1 as ( a int, b varchar(10), c date );
CREATE TYPE
postgres=# \d ctyp1
       Composite type "public.ctyp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 

Using the “CREATE TABLE” statement we can now create a table which is based on that type:

postgres=# create table t1 of ctyp1;
CREATE TABLE
postgres=# \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
Typed table of type: ctyp1

But is that useful? First I thought no, but there is at least one use case for which this is a great help. Imagine you need another table with the same structure, maybe in another schema. All you need to do is:

postgres=# create table t2 of ctyp1;
CREATE TABLE
postgres=# \d t2
             Table "public.t2"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
Typed table of type: ctyp1

Not much of a help you might think as you can just create another table the usual way by specifying all the columns once more, but wait: How would you manage changes to the table structure then? Having both tables based on the same type gives you this:

postgres=# alter type ctyp1 add attribute d numeric cascade;
ALTER TYPE
postgres=# \d ctyp1
       Composite type "public.ctyp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 

The magic is in the keyword “cascade”. What happened is that both our tables now look like this:

postgres=# \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 
Typed table of type: ctyp1

postgres=# \d t2
             Table "public.t2"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 
Typed table of type: ctyp1

With one command we changed the structure of all the tables which are based on the composite type. Btw: When you skip the “cascade” keyword you’ll get an error:

postgres=# alter type ctyp1 add attribute e numeric;
ERROR:  cannot alter type "ctyp1" because it is the type of a typed table
HINT:  Use ALTER ... CASCADE to alter the typed tables too.

Can be quite useful…Good to know that this is possible.
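
And, as the documentation quote at the top already hints at, the dependency also works for drops: dropping the type with CASCADE removes all typed tables based on it. A quick sketch (careful, this really drops t1 and t2):

postgres=# drop type ctyp1 cascade;

PostgreSQL will report that the drop cascades to both tables, and afterwards t1 and t2 are gone.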

 

The post What are typed tables in PostgreSQL? appeared first on Blog dbi services.

Did you ever wonder what PostgreSQL is executing in the background when you use the psql shortcuts?

Thu, 2017-07-13 08:19

When you work with PostgreSQL you are probably using psql, and when you use psql you are probably using one or more of the shortcuts psql provides. These shortcuts are a quick and convenient way to get metadata out of the PostgreSQL catalog, which saves you from a lot of typing and typos. The only issue is that they hide the statements which are executed to get the metadata, so you don't know where the information is actually coming from. Of course you can check either the information_schema or the PostgreSQL system catalog and then write your own queries for what you are looking for. But, hey, there is a much easier way.

Let's start by creating a dummy table and an index:

postgres=# create table dummy ( a int primary key, b varchar(50), c timestamp with time zone );
CREATE TABLE
postgres=# create index i_dummy on dummy ( c );
CREATE INDEX
postgres=# 

The fastest way to get the definition of the table is:

postgres=# \d dummy
             Table "public.dummy"
 Column |           Type           | Modifiers 
--------+--------------------------+-----------
 a      | integer                  | not null
 b      | character varying(50)    | 
 c      | timestamp with time zone | 
Indexes:
    "dummy_pkey" PRIMARY KEY, btree (a)
    "i_dummy" btree (c)

As you can see, you not only get the definition of the table itself but also information about the primary key and the index. But where does this information come from? As information about the index and the primary key is displayed as well, the information must be coming from more than one catalog table, but which? Quite easy when you check the options of psql:

postgres@pgbox:/home/postgres/ [PG962] psql --help | grep "hidden"
  -E, --echo-hidden        display queries that internal commands generate

When you fire up psql with this option, all the internal statements will be displayed when you use a shortcut:

postgres@pgbox:/home/postgres/ [PG962] psql -E postgres
postgres=# \d dummy
********* QUERY **********
SELECT c.oid,
  n.nspname,
  c.relname
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(dummy)$'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2, 3;
**************************

********* QUERY **********
SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, c.relhasoids, '', c.reltablespace, CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, c.relpersistence, c.relreplident
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)
WHERE c.oid = '16679';
**************************

********* QUERY **********
SELECT a.attname,
  pg_catalog.format_type(a.atttypid, a.atttypmod),
  (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)
   FROM pg_catalog.pg_attrdef d
   WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),
  a.attnotnull, a.attnum,
  (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t
   WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND a.attcollation <> t.typcollation) AS attcollation,
  NULL AS indexdef,
  NULL AS attfdwoptions
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = '16679' AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
**************************

********* QUERY **********
SELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered, i.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),
  pg_catalog.pg_get_constraintdef(con.oid, true), contype, condeferrable, condeferred, i.indisreplident, c2.reltablespace
FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
  LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND conindid = i.indexrelid AND contype IN ('p','u','x'))
WHERE c.oid = '16679' AND c.oid = i.indrelid AND i.indexrelid = c2.oid
ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname;
**************************

********* QUERY **********
SELECT pol.polname,
CASE WHEN pol.polroles = '{0}' THEN NULL ELSE array_to_string(array(select rolname from pg_roles where oid = any (pol.polroles) order by 1),',') END,
pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),
pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),
CASE pol.polcmd 
WHEN 'r' THEN 'SELECT'
WHEN 'a' THEN 'INSERT'
WHEN 'w' THEN 'UPDATE'
WHEN 'd' THEN 'DELETE'
WHEN '*' THEN 'ALL'
END AS cmd
FROM pg_catalog.pg_policy pol
WHERE pol.polrelid = '16679' ORDER BY 1;
**************************

********* QUERY **********
SELECT c.oid::pg_catalog.regclass FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i WHERE c.oid=i.inhparent AND i.inhrelid = '16679' ORDER BY inhseqno;
**************************

********* QUERY **********
SELECT c.oid::pg_catalog.regclass FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i WHERE c.oid=i.inhrelid AND i.inhparent = '16679' ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text;
**************************

             Table "public.dummy"
 Column |           Type           | Modifiers 
--------+--------------------------+-----------
 a      | integer                  | not null
 b      | character varying(50)    | 
 c      | timestamp with time zone | 
Indexes:
    "dummy_pkey" PRIMARY KEY, btree (a)
    "i_dummy" btree (c)

Here you go. Quite a lot of stuff is happening in the background and you can see exactly what it is. This is a great way to get to know the catalog. When you are already inside psql and want to switch the display of the hidden stuff on, you can do that as well:

postgres=# \set ECHO_HIDDEN on
postgres=# \d dummy
********* QUERY **********
SELECT c.oid,
  n.nspname,
  c.relname
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(dummy)$'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2, 3;
**************************

...

When you want to make this permanent, add it to your .psqlrc (scroll down to the "Files" section). Have fun …
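A minimal sketch of what that could look like (assuming .psqlrc lives in the home directory of the operating system user, which is the default location):

postgres@pgbox:/home/postgres/ [PG962] cat ~/.psqlrc
\set ECHO_HIDDEN on

From then on, every new psql session will display the hidden queries behind the shortcuts.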

 

The post Did you ever wonder what PostgreSQL is executing in the background when you use the psql shortcuts? appeared first on Blog dbi services.

One-day workshop with Slava Oks for SQL Server on Linux

Thu, 2017-07-13 05:26

A few weeks ago, David Barbarin and I went to Tuga IT in Lisbon to attend Slava Oks' workshop: Bringing SQL Server to Linux: Experience

Linux03

It was a deep dive into the code of SQL Server, and Slava showed us how they achieved a universal platform regardless of the operating system.

To my mind, the challenge was how to deal with the Windows operating system dependencies (all the Win32/NT kernel calls) in order to make SQL Server OS agnostic.

Indeed, SQL Server makes more than 800 calls to Win32 and more than 400 calls to the NT kernel through the SQL OS. Slava explained to us how challenging it would be to rewrite all the SQL Server code to support the Linux operating system.

To achieve it, the development Team implemented a containerized approach called Drawbridge.

Drawbridge combines 2 technologies:

  • Picoprocess: Process-based isolation container with a minimal kernel API Surface
  • Library OS: Version of Windows enlightened to run efficiently

Here a link for more information about Drawbridge: https://www.microsoft.com/en-us/research/project/drawbridge/#

This technology reduces the calls to the host OS to 45, through a proper Platform Abstraction Layer (PAL): SQLPAL.

Linux01 (Source: Microsoft)

SQLPAL manages all resources in the process. The goal is to merge SOS and the Library OS as the core of SQLPAL.

Then we got deeper into the SQLPAL code and Slava started on the complex topic of debugging… I must admit it was a headache moment! :-?

Finally, I can conclude that it was a very interesting workshop with a lot of explanations about how the Microsoft SQL Server team addressed the challenge of making SQL Server compatible with Linux. It helped me to understand the architecture of the next version of SQL Server. 8-)

And it was a nice day in Lisbon in the Microsoft building!

IMG_0782(1)

 

The post One-day workshop with Slava Oks for SQL Server on Linux appeared first on Blog dbi services.

Azure Compare Total Cost of Ownership (TCO) with comparable Azure deployment

Thu, 2017-07-13 05:18

Microsoft Azure allows you to quickly deploy infrastructures and services to meet all of your business needs. You can run Windows and Linux based applications in 36 Azure datacenter regions, delivered with enterprise grade SLAs.

Microsoft Azure lets you quickly compare the total cost of ownership (TCO) of your on-premises infrastructure with a comparable Azure deployment using the TCO Calculator, and estimate the savings you can realize by moving to Azure.

In this article I will show you a small example of how to calculate the TCO for a small architecture.

To access the TCO calculator, click on this link: TCO Calculator, then click on the TCO calculator image in the red rectangle and follow the instructions.

TCO

My use case:

This example is based on a small architecture with two Virtual Machines, just to give you an insight into the use of this TCO calculator.

The first step is to enter input parameters

2017-07-13 11_56_10-Azure TCO Calculator _ Microsoft

After clicking on the Calculate button, the result appears:

tco2

tco3

tco4

Details are available by clicking on "Export to Word" in the top result frame.

Conclusion:

To be honest, this calculator may not accurately predict the TCO, but it offers a good estimate of the kind of savings one can expect. It is a very good tool for companies looking to move their IT infrastructure to the Azure cloud.


The post Azure Compare Total Cost of Ownership (TCO) with comparable Azure deployment appeared first on Blog dbi services.

Can I do it with PostgreSQL? – 15 – invisible indexes

Wed, 2017-07-12 11:26

It has been quite a while since the last post in this series. Today we'll look at what you know from Oracle as invisible indexes. In case you wonder what they might be useful for: Imagine you want to test if an index would benefit one or more queries without affecting the production workload. In other words: Wouldn't it be cool to create an index but somehow tell the optimizer not to use it for the ongoing queries? This is what invisible indexes are about: Create an index which you believe should improve performance for one or more queries, but at the same time make sure that it is not taken into account when the query plan is generated and executed. The bad news is: This is not possible in PostgreSQL core. The good news is: There is an extension which does exactly this.

The extension is called hypopg and is available via GitHub. The readme states that it works on all PostgreSQL versions starting with 9.2, so let's try it with PostgreSQL 10 Beta1.

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta1 dbi services build)
Type "help" for help.

postgres=# select version();
                                                            version                                                            
-------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 10beta1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

postgres=# 

Getting the extension downloaded, compiled and installed is straightforward:

postgres@pgbox:/home/postgres/ [PG10B] wget https://github.com/dalibo/hypopg/archive/master.zip
postgres@pgbox:/home/postgres/ [PG10B] unzip master.zip 
postgres@pgbox:/home/postgres/ [PG10B] cd hypopg-master/
postgres@pgbox:/home/postgres/hypopg-master/ [PG10B] make install
postgres@pgbox:/home/postgres/hypopg-master/ [PG10B] psql -X -c "create extension hypopg" postgres
CREATE EXTENSION
postgres@pgbox:/home/postgres/hypopg-master/ [PG10B] psql -X -c "\dx" postgres
                     List of installed extensions
  Name   | Version  |   Schema   |             Description             
---------+----------+------------+-------------------------------------
 hypopg  | 1.1.0dev | public     | Hypothetical indexes for PostgreSQL
 plpgsql | 1.0      | pg_catalog | PL/pgSQL procedural language
(2 rows)

Here we go, all fine until now and we should be ready to use it. Obviously we need a table and some data to test with:

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta1 dbi services build)
Type "help" for help.

postgres=# \! cat a.sql
drop table if exists t1;
create table t1 ( a int );
with generator as 
 ( select a.*
     from generate_series ( 1, 5000000 ) a
    order by random()
 )
insert into t1 ( a ) 
     select a
       from generator;
postgres=# \i a.sql
DROP TABLE
CREATE TABLE
INSERT 0 5000000
postgres=# analyze t1;
ANALYZE
postgres=# select * from pg_size_pretty ( pg_total_relation_size ('t1') );
 pg_size_pretty 
----------------
 173 MB
(1 row)

So now we have a table containing some data. The only choice PostgreSQL has to fetch one or more rows is to use a sequential scan (which is a full table scan in Oracle):

postgres=# explain select * from t1 where a = 5;
                             QUERY PLAN                              
---------------------------------------------------------------------
 Gather  (cost=1000.00..49165.77 rows=1 width=4)
   Workers Planned: 2
   ->  Parallel Seq Scan on t1  (cost=0.00..48165.67 rows=1 width=4)
         Filter: (a = 5)
(4 rows)

Although PostgreSQL already estimates that only one row needs to be returned (rows=1), it still has to read the whole table. Let's look at how that looks when we really execute the query by using "explain (analyze)":

postgres=# explain (analyze) select * from t1 where a = 5;
                                                    QUERY PLAN                                                     
-------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..49165.77 rows=1 width=4) (actual time=133.292..133.839 rows=1 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on t1  (cost=0.00..48165.67 rows=1 width=4) (actual time=110.446..124.888 rows=0 loops=3)
         Filter: (a = 5)
         Rows Removed by Filter: 1666666
 Planning time: 0.055 ms
 Execution time: 135.465 ms
(8 rows)

What kicked in here is parallel query which is available since PostgreSQL 9.6 but this is not really important for the scope of this post. Coming back to the invisible or hypothetical indexes: Having the extension installed we can now do something like this:

postgres=# SELECT * FROM hypopg_create_index('CREATE INDEX ON t1 (a)');
 indexrelid |     indexname     
------------+-------------------
      16399 | btree_t1_a
(1 row)

postgres=# select * from pg_size_pretty ( pg_total_relation_size ('t1') );
 pg_size_pretty 
----------------
 173 MB
(1 row)

What this did is create a hypothetical index without consuming any space (pg_total_relation_size counts the indexes as well), so it is pretty fast. What happens to our query now?

postgres=# explain select * from t1 where a = 5;
                                   QUERY PLAN                                    
---------------------------------------------------------------------------------
 Index Only Scan using btree_t1_a on t1  (cost=0.06..8.07 rows=1 width=4)
   Index Cond: (a = 5)
(2 rows)

Quite cool: the index is really getting used and we did not consume any resources for the index itself. Could be a good index to implement. What you need to know is that this does not work for "explain (analyze)", as that really executes the query (and we do not really have an index on disk):

postgres=# explain (analyze) select * from t1 where a = 5;
                                                    QUERY PLAN                                                     
-------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..49165.77 rows=1 width=4) (actual time=76.247..130.235 rows=1 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on t1  (cost=0.00..48165.67 rows=1 width=4) (actual time=106.861..124.252 rows=0 loops=3)
         Filter: (a = 5)
         Rows Removed by Filter: 1666666
 Planning time: 0.043 ms
 Execution time: 131.866 ms
(8 rows)

If you want to list all the hypothetical indexes you can do this as well:

postgres=# SELECT * FROM hypopg_list_indexes();
 indexrelid |     indexname     | nspname | relname | amname 
------------+-------------------+---------+---------+--------
      16399 | btree_t1_a | public  | t1      | btree
(1 row)

Of course you can drop them when they are no longer required:

postgres=# select * from  hypopg_drop_index(16399);
 hypopg_drop_index 
-------------------
 t
(1 row)

postgres=# SELECT * FROM hypopg_list_indexes();
 indexrelid | indexname | nspname | relname | amname 
------------+-----------+---------+---------+--------
(0 rows)
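
Once a hypothetical index has proven to be useful, the final step would be to create it for real, which of course then consumes space and needs to be maintained (the index name below is arbitrary):

postgres=# create index i_t1_a on t1 ( a );
CREATE INDEX

Running the explain from above once more should now show a scan on the real index instead of the sequential scan.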

Hope this helps …

 

The post Can I do it with PostgreSQL? – 15 – invisible indexes appeared first on Blog dbi services.

Migration to SQL Server 2016 with deprecated data types and Full-Text

Tue, 2017-07-11 10:06

A few weeks ago, I prepared a migration from SQL Server 2008 to SQL Server 2016 and came across a case with deprecated data types and Full-Text enabled on these columns.
To simulate this scenario, I downloaded and installed the Data Migration Assistant (DMA).

Make sure that the .Net Framework 4.5 is installed on your machine before starting the installation of the DMA.
After the installation, you start the application which is very simple to use.
I created a database named db_to_mig.

In this database, I created 2 tables:

  • Text_With_FullText: table with a TEXT column named Description and a Full-Text index on this column
  • Text_Without_FullText: the same table without the Full-Text index

DMA_02

In the Data Migration Assistant, I created a new project. Be careful to select "SQL Server" as the target and not "Azure SQL Database" (the default).
DMA_03

Then check that the target version is SQL Server 2016; I recommend ticking both checkboxes: "Compatibility issues" and "New features' recommendation".
The second step is to select the source, enter your instance and click connect.
Select your database and click add.
DMA_04

Then click Start Assessment.
DMA_05

Waiting for the result…
DMA_07

After the evaluation, in the Review results part, I see 2 issues:

    • Full-Text Search has changed since SQL Server 2008

DMA_08

    • Deprecated data types Text, Image or Ntext

DMA_09

Tip: You also have the possibility to save the report as a JSON file.
DMA_10

My first task was to convert all TEXT, NTEXT and IMAGE columns to VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX), as the Microsoft documentation linked here advises.
First, I took a backup of my database and restored it on another server, or on the same server with a new name. Never work directly on the production database! :evil:
DMA_11

In my case, I restored the database with a new name on the same server: db_to_mig_2

DMA_12

On this copy, I changed the data type from TEXT to NVARCHAR(MAX) for the first table, the one without Full-Text, with the following T-SQL commands:

USE [db_to_mig_2]
GO

SELECT id,Description into #before FROM dbo.Text_Without_FullText WHERE Description is NOT NULL
GO

ALTER TABLE dbo.Text_Without_FullText ALTER COLUMN Description NVARCHAR(MAX)
GO

SELECT id,Description into #after  FROM dbo.Text_Without_FullText WHERE Description is NOT NULL
GO

SELECT DIFFERENCE(a.Description,b.Description), DIFFERENCE(b.Description,a.Description)
FROM #after AS a INNER JOIN #before AS b ON b.id = a.id
GO

SELECT Description from #before
SELECT Description from #after

DROP TABLE #before
DROP TABLE #after

DMA_14

I used 2 temporary tables to compare the results after the column data type change.
As you can see, the result of the SELECT with DIFFERENCE is 4.
The value 4 indicates strong similarity or identical values.
To be sure, I advise you to do a better comparison between both temporary tables, for example with the small query sketched below.
This blog is, however, not about comparing 2 strings with different data types.
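A minimal sketch of such a stricter comparison, assuming the id column identifies the rows and run before the temporary tables are dropped (the TEXT column in #before has to be cast, as TEXT does not support direct comparison):

SELECT id, CAST(Description AS NVARCHAR(MAX)) AS Description FROM #before
EXCEPT
SELECT id, Description FROM #after;

An empty result means the conversion did not change any value.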
The conversion is fast and easy. Now I do the same for the table with Full-Text:

SELECT id,Description into #before FROM dbo.Text_With_FullText WHERE Description is NOT NULL
GO

ALTER TABLE dbo.Text_With_FullText ALTER COLUMN Description NVARCHAR(MAX)
GO

SELECT id,Description into #after  FROM dbo.Text_With_FullText WHERE Description is NOT NULL
GO

SELECT DIFFERENCE(a.Description,b.Description), DIFFERENCE(b.Description,a.Description)
FROM #after AS a INNER JOIN #before AS b ON b.id = a.id
GO

SELECT Description from #before
SELECT Description from #after

DROP TABLE #before
DROP TABLE #after

DMA_13

And the result as you can see is an error message:

Msg 7614, Level 16, State 1, Line 1
Cannot alter or drop column ‘Description’ because it is enabled for Full-Text Search.

The Full-Text index prevents me from changing the data type.
I need to drop the Full-Text index (or remove the column from it), change the data type and recreate the index afterwards.

SELECT id,Description into #before FROM dbo.Text_With_FullText WHERE Description is NOT NULL
GO

DROP FULLTEXT INDEX ON dbo.Text_With_FullText;
GO

ALTER TABLE dbo.Text_With_FullText ALTER COLUMN Description NVARCHAR(MAX)
GO

CREATE FULLTEXT INDEX ON  dbo.Text_With_FullText(Description)
KEY INDEX PK_Text_With_FullText ON FT_Catalog
WITH STOPLIST = SYSTEM,CHANGE_TRACKING OFF, NO POPULATION;
GO

SELECT id,Description into #after  FROM dbo.Text_With_FullText WHERE Description is NOT NULL
GO

SELECT DIFFERENCE(a.Description,b.Description), DIFFERENCE(b.Description,a.Description)
FROM #after AS a INNER JOIN #before AS b ON b.id = a.id
GO

SELECT Description from  #before
SELECT Description from #after

DROP TABLE #before
DROP TABLE #after

DMA_15

Et voilà! It is done. Be careful: if your Full-Text index contains more columns, you need to replace the CREATE FULLTEXT INDEX with an ALTER FULLTEXT INDEX … ADD (column), as sketched below.
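A hedged sketch of that variant, using the table and column names of the example above (this only applies when other columns remain in the Full-Text index, since you cannot drop its last column):

-- remove only the affected column from the existing Full-Text index
ALTER FULLTEXT INDEX ON dbo.Text_With_FullText DROP (Description)
GO
-- change the data type
ALTER TABLE dbo.Text_With_FullText ALTER COLUMN Description NVARCHAR(MAX)
GO
-- add the column back to the Full-Text index
ALTER FULLTEXT INDEX ON dbo.Text_With_FullText ADD (Description)
GO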
To finish, I reran the Data Migration Assistant and I saw that the only remaining issue was the Full-Text Search.

DMA_16

 

The post Migration to SQL Server 2016 with deprecated data types and Full-Text appeared first on Blog dbi services.

SQL Server 2016: patching CU with R Services

Tue, 2017-07-11 01:48

As a good DBA, I try to keep all of my customers' instances up to date with the Cumulative Updates (CU).
It was the first time that I ran an update for SQL Server 2016, with CU 3.
I downloaded the CU from the Microsoft website and began my patching campaign on all SQL Server 2016 instances.

The first one was quick and successful.
The second one, with R Services, was a little bit different.
SQL2016CU3_01

After the feature selection, you need to accept the "R services download".
SQL2016CU3_02

My servers are not able to reach the Internet to download the R Services packages.
SQL2016CU3_03

A new step appears, "Offline Installation of Microsoft R Open and Microsoft R Server", with 2 links referencing the 2 packages to download.
I advise you to download both: SRO_xxx is for Microsoft R Open and SRS_xxx is for Microsoft R Server. It is not strictly necessary, but I think it is always good to have all the packages for a build.
Then you just have to copy the two cab files to your server and select the folder in the update wizard.
SQL2016CU3_04

The installation of the Cumulative Update continues as usual.
This link here gives you the list of the Microsoft R Open and Microsoft R Server versions per build. It's a very useful link.

PS: I did not use the command-line installation, but the flags are explained on this website.

Nice patching to you!

To finish, a little tip: in SQL Server 2017, you will have to do the same with the Python language.
SQL2016CU3_05

 

The post SQL Server 2016: patching CU with R Services appeared first on Blog dbi services.

Grid Infrastructure 11.2.0.3 PSU: My Opatch version check is failing

Sat, 2017-07-08 06:30

Recently I was configuring Grid Infrastructure 11.2.0.3 for a standalone server.
After the installation, I decided to apply the Patch Set Update 11.2.0.3.7 (Includes Database PSU 11.2.0.3.7) – 16742216.
The documentation states: "You must use the OPatch utility version 11.2.0.3.4 or later to apply this patch."
Checking my OPatch version, I saw that I had to update it:

[root@ ]# ./OPatch/opatch version
Invoking OPatch 11.2.0.1.7
OPatch Version: 11.2.0.1.7

So I downloaded a newer version and installed it:

[grid psuJul2013]$ /u00/app/11.2.0/grid/OPatch/opatch version
OPatch Version: 11.2.0.3.15
OPatch succeeded.
[grid@dbserver psuJul2013]$

With my new OPatch version I was good to proceed. First I created my ocm.rsp file:

[oracle@dbserver ~]$ /u00/app/oracle/product/11.2.0/pcsdb/OPatch/ocm/bin/emocmrsp
OCM Installation Response Generator 10.3.7.0.0 - Production
Copyright (c) 2005, 2012, Oracle and/or its affiliates. All rights reserved.
..
Provide your email address to be informed of security issues, install and initiate Oracle Configuration Manager. Easier for you if you use your My Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
..
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: y
The OCM configuration response file (ocm.rsp) was successfully created.
[oracle@dbserver ~]$

Then I unzip the patch
[oracle@dbserver 11203]$ unzip -d psuJul2013/ p16742216_112030_Linux-x86-64.zip
And I run the command to apply the patch

[root@ 11203]# /u00/app/11.2.0/grid/OPatch/opatch auto /u00/software/11203/psuJul2013 -ocmrf /u00/software/11203/ocm.rsp
Executing /u00/app/11.2.0/grid/perl/bin/perl /u00/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir /u00/software/11203 -patchn psuJul2013 -ocmrf /u00/software/11203/ocm.rsp -paramfile /u00/app/11.2.0/grid/crs/install/crsconfig_params
..
This is the main log file: /u00/app/11.2.0/grid/cfgtoollogs/opatchauto2017-07-06_11-39-25.log
..
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u00/app/11.2.0/grid/cfgtoollogs/opatchauto2017-07-06_11-39-25.report.log
..
2017-07-06 11:39:25: Starting Oracle Restart Patch Setup
Using configuration parameter file: /u00/app/11.2.0/grid/crs/install/crsconfig_params
The opatch minimum version check for patch /u00/software/11203/psuJul2013/16619898 failed for /u00/app/11.2.0/grid
The opatch minimum version check for patch /u00/software/11203/psuJul2013/16619892 failed for /u00/app/11.2.0/grid
Opatch version check failed for oracle home /u00/app/11.2.0/grid
Opatch version check failed
ERROR: update the opatch version for the failed homes and retry

..
opatch auto failed.

Strange: although my OPatch version seems to be fine, I get errors about the OPatch version check.
Looking into the log file /u00/app/11.2.0/grid/cfgtoollogs/opatchauto2017-07-06_11-39-25.log, I find the following lines. It seems that there is a write permission issue.
[oracle@ pcsdb]$ less /u00/app/11.2.0/grid/cfgtoollogs/opatchauto2017-07-06_11-39-25.log
...
...
OPatch failed to lock and get an Inventory Session for the given Oracle Home /u00/app/11.2.0/grid
Possible causes are:
No read or write permission to $ORACLE_HOME, cannot create $ORACLE_HOME/.patch_storage
No read or write permission to $ORACLE_HOME/.patch_storage
OPatch cannot proceed further because system will not be able to backup files, or read the backup area for rollback/restore.
OPatchSession cannot load inventory for the given Oracle Home /u00/app/11.2.0/grid. Possible causes are:
No read or write permission to ORACLE_HOME/.patch_storage
Central Inventory is locked by another OUI instance
No read permission to Central Inventory
The lock file exists in ORACLE_HOME/.patch_storage
The Oracle Home does not exist in Central Inventory
..
UtilSession failed: IPMRWServices::verifyPatchStorageDirectory() cannot read or write to /u00/app/11.2.0/grid/.patch_storage
..
OPatch failed with error code 73
..
2017-07-06 11:39:30: Status of opatch version check for /u00/app/11.2.0/grid is 0
2017-07-06 11:39:30: Opatch version check failed for oracle home /u00/app/11.2.0/grid
2017-07-06 11:39:30: Opatch version check failed
2017-07-06 11:39:30: ERROR: update the opatch version for the failed homes and retry

After some research it seems that this is Bug 24584367. A workaround is to create the directory $GRID_HOME/.patch_storage and to grant the appropriate privileges to the grid user on it.

[root@dbserver 11203]# cd /u00/app/11.2.0/grid/
[root@dbserver grid]# mkdir .patch_storage
[root@dbserver grid]# chown grid:oinstall .patch_storage/

Then I rerun the opatch process, which finally runs fine.

[root@dbserver psuJul2013]# /u00/app/11.2.0/grid/OPatch/opatch auto /u00/software/11203/psuJul2013 -ocmrf /u00/software/11203/ocm.rsp
Executing /u00/app/11.2.0/grid/perl/bin/perl /u00/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir /u00/software/11203 -patchn psuJul2013 -ocmrf /u00/software/11203/ocm.rsp -paramfile /u00/app/11.2.0/grid/crs/install/crsconfig_params
..
This is the main log file: /u00/app/11.2.0/grid/cfgtoollogs/opatchauto2017-07-06_11-44-05.log
..
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u00/app/11.2.0/grid/cfgtoollogs/opatchauto2017-07-06_11-44-05.report.log
..
2017-07-06 11:44:05: Starting Oracle Restart Patch Setup
Using configuration parameter file: /u00/app/11.2.0/grid/crs/install/crsconfig_params
..
Stopping CRS...
Stopped CRS successfully
..
patch /u00/software/11203/psuJul2013/16619898 apply successful for home /u00/app/11.2.0/grid
patch /u00/software/11203/psuJul2013/16619892 apply successful for home /u00/app/11.2.0/grid
..
Starting CRS...
CRS-4123: Oracle High Availability Services has been started.
..
opatch auto succeeded.

I hope that this article will help you save some time.

 

Cet article Grid Infrastructure 11.2.0.3 PSU: My Opatch version check is failing est apparu en premier sur Blog dbi services.

Grid Infrastructure 12.2: Are all Redhat 7 kernels compatible?

Sat, 2017-07-08 06:28

In this blog I am going to share an issue I had when installing Grid Infrastructure 12.2 on Red Hat 7.3. I was configuring Grid Infrastructure 12.2 for a standalone server. Below are the release and the version of my operating system:

root@dbserver /etc $ cat redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)


root@dbserver ~ $ uname -a
Linux dbserver 3.10.0-514.21.2.el7.x86_64 #1 SMP Sun May 28 17:08:21 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
root@dbserver ~ $

In the Oracle certification matrix, we can find that Clusterware 12.2 is certified on RHEL 7 with the minimum kernel 3.10.0-54.0.1.el7.x86_64. So we can proceed with the installation, as our kernel is higher. After installing all prerequisites and configuring the ASM disks, we download the binaries. We then unzip the binaries into the GRID_HOME and launch gridSetup.sh:
[oracle@dbserver 12.2]$ ./gridSetup.sh
Screenshots are not shown.
Later in the installation, Oracle asks you to run the famous scripts:
orainstRoot.sh
root.sh
The orainstRoot.sh script went fine.
While executing root.sh, we get the following errors:

root@dbserver /dev/oracleasm $ /u00/oracle/grid/12.2/root.sh
Performing root user operation.
..
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u00/oracle/grid/12.2
..
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
..
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/oracle/grid/12.2/crs/install/crsconfig_params
The log of current session can be found at:
/u00/oracle/crsdata/dbserver/crsconfig/roothas_2017-07-04_06-40-49PM.log
2017/07/04 18:40:53 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node dbserver successfully pinned.
2017/07/04 18:41:01 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/07/04 18:41:06 CLSRSC-214: Failed to start the resource 'ohasd'
Failed to start the Clusterware. Last 20 lines of the alert log follow:
..
2017/07/04 18:41:06 CLSRSC-318: Failed to start Oracle OHASD service
Died at /u00/oracle/grid/12.2/crs/install/crsinstall.pm line 2775.
The command '/u00/oracle/grid/12.2/perl/bin/perl -I/u00/oracle/grid/12.2/perl/lib -I/u00/oracle/grid/12.2/crs/install /u00/oracle/grid/12.2/crs/install/roothas.pl ' execution failed

root@dbserver /dev/oracleasm $

Trying to start the HAS stack manually, we get the following errors:

root@dbserver ~ $ /u00/oracle/grid/12.2/bin/crsctl start has
CLSU-00100: operating system function: waitpid failed with error data: 0
CLSU-00101: operating system error message: Error 0
CLSU-00103: error location: usrgetgrp12
CLSU-00104: additional error information: child returned 232

Having a look at Oracle Support, I find these documents:
OHASD fails to start with kernel version 3.10.0-514.21.2.el7.x86_64 (Doc ID 2281492.1)
ALERT: Grid Infrastructure Fails to Start OHASD With Linux Kernel Version 3.10.0-514.21.2.EL7.X86_64 or Higher (Doc ID 2282371.1)
Then we change the kernel to the following

root@dbserver /etc/systemd/system $ uname -a
Linux dbserver 3.10.0-514.2.2.el7.x86_64 #1 SMP Wed Nov 16 13:15:13 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
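For reference, booting an older installed kernel on Red Hat 7 can be done with grub2. A sketch, assuming the 3.10.0-514.2.2 kernel is still installed and GRUB_DEFAULT=saved is set in /etc/default/grub:

# List the installed kernels with their menu index (0 is the most recent)
awk -F\' '/^menuentry/ {print i++ " : " $2}' /boot/grub2/grub.cfg
# Make the 3.10.0-514.2.2.el7 entry the default (index 1 here is an assumption) and reboot
grub2-set-default 1
grub2-editenv list
reboot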

We then restart the installation after cleaning up the first attempt.
When asked to execute root.sh, we do so, and this time the script runs fine.

So if you are planning to use Red Hat / OEL 7 to install Grid Infrastructure, please have a look at these two Oracle documents to make sure that your kernel is OK.

 

Cet article Grid Infrastructure 12.2: Are all Redhat 7 kernels compatible? est apparu en premier sur Blog dbi services.

Oracle Database to SQL Server 2016 TCO Analyzer

Sat, 2017-07-08 03:41

image 1

Today, many enterprises are striving to transform their IT organization from a cost center into a strategic asset and are looking for ways to get more value from their application and data platform investments.
To deliver on this vision, Microsoft SQL Server 2016 – via its on-premises, cloud, or hybrid deployment options – provides customers with a significantly lower total cost of ownership (TCO) compared to an existing Oracle Database platform.

How can you know the real cost difference between your current Oracle solution and the benefits you anticipate with SQL Server 2016?

You are in luck: Microsoft has built an exclusive, in-depth Total Cost of Ownership (TCO) analysis tool to help you compare the hard facts.

Learn more: sqltcocalculator.azurewebsites.net

 

 

Cet article Oracle Database to SQL Server 2016 TCO Analyzer est apparu en premier sur Blog dbi services.

DA 7.2 UCF Transfer failing with SSL

Wed, 2017-07-05 08:14

This is a blog following the one I already wrote earlier: https://blog.dbi-services.com/documentum-administrator-ucf-troubleshooting/
The first one was about how to find the error, not how to resolve it. In this blog I will talk about an error I encountered at a customer with UCF.

I got an error message using DA 7.2 where I couldn't download documents; in fact, every transfer was failing due to UCF. By following my previous blog I found the specific error in the logs, saying:
SSL Handshake failed.

You probably had this issue as well if you use SSL with DA. By default, when you configure SSL with DA, it tries to validate the certificate against the Java CA certificates (cacerts). You can add your certificate to this keystore to prevent the issue.
But in my case I had a keystore generated from certificates signed by the customer's own authority, so I had to find another way.
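For completeness, the cacerts approach mentioned above is usually a simple keytool import. A sketch, assuming the customer CA certificate is available as customer_ca.cer and the default cacerts password is still changeit:

keytool -importcert -alias customer-ca -file customer_ca.cer \
        -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit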

I found the solution in the documentation, https://support.emc.com/docu56531_Documentum-Web-Development-Kit-6.8-Development-Guide.pdf?language=en_US, on page 58.

You can deactivate the Java validation as follows:
cd $CATALINA_HOME_DA
vi ./webapps/da/wdk/contentXfer/ucf.installer.config.xml

Add the following option:

<option name="https.host.validation" persistent="false">
<value>false</value>
</option>

Now restart Tomcat (or your application server) and you will be able to transfer content.

 

Cet article DA 7.2 UCF Transfer failing with SSL est apparu en premier sur Blog dbi services.

PostgreSQL 10 Beta 1 : pg_hba_file_rules view

Fri, 2017-06-30 10:12

This small blog is to share a helpful feature related to the pg_hba.conf file:

The pg_hba.conf file is the central configuration file to control client authentication.
It is located in the database cluster's data directory:
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] ll pg_hba.conf
lrwxrwxrwx. 1 postgres postgres 59 Jun 30 10:19 pg_hba.conf
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]

When you add or modify an entry in this file, you have to reload the cluster for the changes to take effect:

postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] echo "host all efm 192.168.22.38/35 md5" >> /u02/pgdata/PG10BETA1/pg_hba.conf
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] pg_ctl -D "/u02/pgdata/PG10BETA1/" reload
server signaled
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]

Oops… Have you seen my mistake? Probably not. The system didn't see it either.
That was the situation in Postgres 9.6 and older versions.

In Postgres 10 there is now a new view called "pg_hba_file_rules" which returns the current content of the pg_hba.conf entries AND reports the errors:
postgres=# select version();
version
-------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 10beta1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

postgres=# select line_number, error, type, database, user_name, address, netmask from pg_hba_file_rules;
line_number | error | type | database | user_name | address | netmask
-------------+-------------------------------------------------+-------+---------------+-----------+-----------+-----------------------------------------
80 | | local | {all} | {all} | |
82 | | host | {all} | {all} | 127.0.0.1 | 255.255.255.255
84 | | host | {all} | {all} | ::1 | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
87 | | local | {replication} | {all} | |
88 | | host | {replication} | {all} | 127.0.0.1 | 255.255.255.255
89 | | host | {replication} | {all} | ::1 | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
90 | invalid CIDR mask in address "192.168.22.38/35" | | | | |
(7 rows)

This allows you to get a quick look at what is wrong in pg_hba.conf and to fix it as necessary.
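To fix the faulty entry, a sketch (the path and the corrected mask are just this example's values); pg_reload_conf() is an alternative to pg_ctl reload from within psql:

postgres=# \! sed -i --follow-symlinks 's|192.168.22.38/35|192.168.22.38/32|' /u02/pgdata/PG10BETA1/pg_hba.conf
postgres=# select pg_reload_conf();
postgres=# select line_number, error from pg_hba_file_rules where error is not null;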

 

Cet article PostgreSQL 10 Beta 1 : pg_hba_file_rules view est apparu en premier sur Blog dbi services.

ODA X6 installation: re-image

Wed, 2017-06-28 15:44

The Oracle Database Appliance is shipped with a bare-metal installation which may not be the latest version. You may want to have it virtualized, or get the latest version to avoid a further upgrade, or install an earlier version to be in the same configuration as another ODA already in production. The easiest approach in all cases is to start with a re-image as soon as the ODA is plugged in. This post is not documentation, just a quick cheat sheet.

I don't want to spend hours in the data center, so the first step, once the ODA is racked, cabled and plugged in, is to make it accessible from the management network. Then all tasks can be done from a laptop, accessing the ILOM interface through a browser (Java required, preferably 32-bit), before the public network is set up.

NET MGMT

Here is the back of the server where you find the management network port.
ODA-reimage-000
This photo was taken on an X5-2, but it is very similar to the X6-2. You can look at the 3D view of the X6-2 to get a better picture.
There are also VGA and USB ports to plug in a monitor and keyboard, just for the time needed to set up the management network.

You can also use the serial port which is just next to it but I usually don’t have a serial adapter for my laptop.

First checks

You can plug in a keyboard and monitor and log on to the server: the root password is welcome1 (no need to change it for the moment as we will re-image the machine).

In an ODA X6-2 HA you have two nodes, numbered 0 and 1 and called oak1 and oak2… be careful. You may wonder which server is node 0 and which one is node 1, because the servers are identical. Yes, that's true. The nodes are identified by the storage ports they are connected to: node 0 is the one connected with the blue mini-SAS cables, and the red ones are for node 1.

  • Blue cable – Node 0 – oak1
  • Red cable – Node 1 – oak2

Here I'm on node 0 and check the hardware version:

[root@oak1 ~]# /opt/oracle/oak/bin/oakcli show env_hw
BM ODA X6-2

This means Bare Metal X6-2 HA; a virtualized one would show VM-ODA_BASE ODA X6-2, and an ODA X6-2S would show BM ODA_Lite X6-2 Small.

From storage topology, we confirm that this node has been recognized as node 0:


[root@oak1 ~]# /opt/oracle/oak/bin/oakcli validate -c StorageTopology
It may take a while. Please wait...
INFO : ODA Topology Verification
INFO : Running on Node0
INFO : Check hardware type
SUCCESS : Type of hardware found : X5-2
INFO : Check for Environment (Bare Metal or Virtual Machine)

And finally check the version, because you may be lucky enough to already have the version you want and then not need to re-image (I switched to node 1 here):

ODA-reimage-010

Of course, you don't need to re-image when you want a higher version: you can upgrade it. But a re-image is simpler.

BMC Network

The management network interface can get an address from DHCP. But who runs DHCP on the management network? There are two ways to assign a static IP to the management network.

You may use the IPMI Tool commands:
ipmitool -I open sunoem cli
cd SP
cd Network
ls
set pendingipaddress=192.168.15.101
set pendingipnetmask=255.255.255.0
set pendingipgateway=192.168.15.1
set commitpending=true

Here is an example:
oda-ipmitool

Or you can go to BIOS. It goes fast but filming in slow motion shows that the key is F2:
ODA-reimage-001
Once you are in the BIOS interface, go to Advanced, then choose 'BMC Network configuration' to configure IPMI and enter the following BMC LAN information:
 
IPv4 IP Assignment       [Static]
IPv4 address             192.168.15.102
IPv4 Subnet Mask         255.255.255.0
IPv4 Default Gateway     192.168.15.1

While you are in the BIOS, you can also choose to boot from the CD-ROM first, because I sometimes have problems setting that from the ILOM Web interface.

Once you have validated that the ILOM IP address can be reached from your office, you can remove the keyboard and monitor and leave the data center. The hardware part is done. Now the software can be deployed from the ILOM Web interface. The password to connect to ILOM from the Web browser is 'changeme' and you should change it.
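Changing it can be done directly from the ILOM command line, for example (a sketch; the command prompts twice for the new password):

-> set /SP/users/root password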

Software

The latest ISO image for a Bare Metal or Virtualized ODA can be found in MOS Note 888888.1.
You can also find older versions there.

From the ILOM you get to the console of node 0:

ODA-reimage-002

From there, you can attach the ISO image: ILOM Remote Control / Devices / CD-ROM Images

and then reboot on the CD-ROM:

Reboot (Host Control / Next Boot Device / CD-ROM )

ODA-reimage-003

Do that for both nodes (you can run them at the same time), prepare the information for the deployment, and download the 'End-User RDBMS Clone files' for the database version you want.

You can download the Oracle Appliance Manager Configurator and take your time to set up and verify the configuration.

configure firstnet

Here is part of the information to prepare. First, you will configure the network to be able to scp the software (Grid Infrastructure and Database):

[root@oak1 ~]# /opt/oracle/oak/bin/oakcli configure firstnet
Configure the network for node(s) (local,global) [global]:
The network configuration for both nodes:
hostname: Hostname lookup failure
Domain Name: pachot.net
DNS Servers: Primary DNS Server: 8.8.8.8
Node Name Host Name
0 myserver1
1 myserver2
Choose the network interface to configure (net1,net2) [net1]:
Enter the IP address for net1 on Node 0: 192.168.16.101
Enter the IP address for net1 on Node 1: 192.168.16.102
Netmask for net1: 255.255.255.0
Gateway address for net1 [192.168.16.1]:

Note that this is a funny example. I hope you don't use 192.168.16.0/24 as your public network, because this subnet is used for the private interconnect, where the IP addresses 192.168.16.24 and 192.168.16.25 are hardcoded. But thanks to that, the firstnet configuration can be run from one node only.

Deploy

Now that you have access through the public network, you can copy (scp) the Oracle Home clone files and the configuration file to /tmp, unpack the .zip files (for i in *.zip ; do /opt/oracle/oak/bin/oakcli unpack -package $i ; done) and run the deployment (ssh -X /opt/oracle/oak/bin/oakcli deploy), loading the configuration from your file or entering all the information there. Fingers crossed, this should run to the end without any problem. If it does not, My Oracle Support notes may help. The nice thing with ODA is that most configurations are similar, so there are good chances that a problem has already been encountered and documented.
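Laid out as separate commands (file names and paths are examples, not taken from a real deployment), the copy, unpack and deploy steps look like this:

# From your workstation: copy the RDBMS clone files and the saved configuration to node 0
scp p*.zip onecommand.params root@myserver1:/tmp
# On node 0: unpack every bundle (oakcli wants absolute paths)
cd /tmp ; for i in *.zip ; do /opt/oracle/oak/bin/oakcli unpack -package /tmp/$i ; done
# From your workstation again: start the graphical deployment with X11 forwarding and load the configuration file from the GUI
ssh -X root@myserver1 /opt/oracle/oak/bin/oakcli deploy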

 

Cet article ODA X6 installation: re-image est apparu en premier sur Blog dbi services.

Summer, autumn and winter: a lot of conferences ahead

Tue, 2017-06-27 14:50

It is hot in Europe, it is summer, enjoy. But technology moves fast, so you already have the chance to prepare for the next conferences. The IT Tage 2017 will take place from the 11th to the 14th of December this year, and we are happy to be there again.
Selection_001
This event covers a wide range of topics and we will be there again covering and talking about:

In addition you will have the chance to join Jan from EnterpriseDB speaking about the differences between Oracle and PostgreSQL. If you want to hear more about that, you might want to join the PostgreSQL Conference Europe this year.

Selection_002

Maybe Jan and I will get the chance to talk there as well; we submitted an interesting session, stay tuned.

Chances are high that you may find us at the UKOUG, too:

Selection_004

… and we’ll be at the #DOAG2017 for sure this year again.
Selection_003

Hope to see you soon… Cheers, discussions ahead …

 

Cet article Summer, autumn and winter: a lot of conferences ahead est apparu en premier sur Blog dbi services.

12c Multitenant Internals: VPD for V$ views

Tue, 2017-06-27 13:49

I described in an earlier post on AWR views how the dictionary views were using metadata and object links to show information from other containers. But this mechanism cannot work for fixed views (aka V$) because they don’t have their definition in the dictionary.

The big difference is that most of the V$ views are available long before the dictionary is opened or even created. Just start an instance in NOMOUNT and you can query the V$ views. Even in multitenant, you can switch to different containers in MOUNT and query V$ views when no dictionary is open.

SQL> alter database mount;
Database altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED MOUNTED
3 PDB1 MOUNTED
 
SQL> alter session set container=pdb1;
Session altered.
 
SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 PDB1 MOUNTED

V$ views query information from the instance, and this information pertains to one container:

  • CON_ID=0 for the CDB itself
  • CON_ID=1 for CDB$ROOT
  • CON_ID=2 for PDB$SEED
  • CON_ID=3 for the first PDB you have created

When you are in root, the V$ views are queried as normal and show all information – from all containers – with their related CON_ID
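A quick way to see that from the root container (an illustrative query, not taken from the original post):

SQL> alter session set container=CDB$ROOT;
SQL> select con_id, count(*) from v$datafile group by con_id order by con_id;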

When you are in a PDB, you must see the objects that belong to your PDB, but not those that belong to other PDBs. But this is not sufficient. For example, you may query the version, and the version is related to the CDB itself, with CON_ID=0:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select * from v$version;
 
BANNER CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production 0
PL/SQL Release 12.2.0.1.0 - Production 0
CORE 12.2.0.1.0 Production 0
TNS for Linux: Version 12.2.0.1.0 - Production 0
NLSRTL Version 12.2.0.1.0 - Production 0

Then, in a PDB you should see your PDB objects and the CON_ID=0 ones. Oracle needs a new mechanism for that. One way would be to switch to root, query the V$ view and filter on CON_ID, but that is not needed. A context switch is there to access data from a different container's tablespaces, because tablespaces are not shared. V$ views, however, expose data from the instance, and the instance is shared. Any container can see all rows; we just want to filter out some of them.

Here is the execution plan when querying V$VERSION from a PDB:


SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> explain plan for select * from v$version;
Explained.
 
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1078166315
 
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 68 | 0 (0)| 00:00:01 |
|* 1 | FIXED TABLE FULL| X$VERSION | 1 | 68 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("CON_ID"=0 OR "CON_ID"=3) AND
"INST_ID"=USERENV('INSTANCE'))

An additional predicate (“CON_ID”=0 OR “CON_ID”=3) is added to the view. How is it done? Oracle has a security feature for that: Virtual Private Database – aka Row Level Security – which adds a where clause dynamically.

One way to get more information about Virtual Private Database here is to make it raise an error, and I know that a user with only the SELECT privilege cannot run EXPLAIN PLAN (see MOS Note 1029064.6).

I connect to a PDB with a low privileged user:
SQL> connect scott/tiger@//localhost/PDB1
Connected.

I explain plan the V$VERSION fixed view.
SQL> explain plan for select * from v$version;
 
Error starting at line : 10 File @ /media/sf_share/122/blogs/multitenant-vpd.sql
In command -
explain plan for select * from v$version
Error report -
ORA-28113: policy predicate has error
28113. 00000 - "policy predicate has error"
*Cause: Policy function generates invalid predicate.
*Action: Review the trace file for detailed error information.

Interesting error which confirms the guess: this is a VPD error and it generates a trace:
*** 2017-06-26T22:45:17.838507+02:00 (PDB1(3))
*** SESSION ID:(141.17865) 2017-06-26T22:45:17.838537+02:00
*** CLIENT ID:() 2017-06-26T22:45:17.838541+02:00
*** SERVICE NAME:(pdb1) 2017-06-26T22:45:17.838545+02:00
*** MODULE NAME:(java@VM104 (TNS V1-V3)) 2017-06-26T22:45:17.838548+02:00
*** ACTION NAME:() 2017-06-26T22:45:17.838552+02:00
*** CLIENT DRIVER:(jdbcoci : 12.2.0.1.0) 2017-06-26T22:45:17.838555+02:00
*** CONTAINER ID:(3) 2017-06-26T22:45:17.838558+02:00
 
-------------------------------------------------------------
Error information for ORA-28113:
Logon user : SCOTT
Table/View : SYS.V_$VERSION
VPD Policy name : CON_ID
Policy function: SYS.CON_ID
RLS view :
SELECT "BANNER","CON_ID" FROM "SYS"."V_$VERSION" "V_$VERSION" WHERE (con_id IN (0, 3) )
ORA-01039: insufficient privileges on underlying objects of the view
-------------------------------------------------------------

There's no container switch here: everything runs in PDB1 with CON_ID=3, and the internal VPD policy has added a where clause to keep only the rows with CON_ID=0 or CON_ID=3.

Do not search for the VPD policy named 'CON_ID' and function 'CON_ID' in the dictionary views, because this happens even when the dictionary is not accessible. This is an internal policy used when querying fixed views in multitenant, and it probably reuses only part of the VPD code.

 

Cet article 12c Multitenant Internals: VPD for V$ views est apparu en premier sur Blog dbi services.

OUD 11.1.2.3 – ODSM Session Timeout

Tue, 2017-06-27 12:52

The ODSM is quite a powerful tool for managing Oracle Unified Directory, and most of its default settings are very reasonable. But there is one that disturbed me from the beginning: the ODSM session timeout. You might see a message like the following.

1

Or maybe this one.

2

It just says that the page will expire unless a response is received within 2 minutes. Doing complex searches or similar tasks often takes more than 2 minutes.

So, how can we increase the limit to, let's say, 2 hours? That is a more reasonable value from my point of view.

In the early days, you had to edit a file called web.xml in a very intuitive directory. ;-) And change the session-timeout to whatever value you like. Note that in web.xml this value is expressed in minutes, so 2 hours means 120.

$ vi /u01/app/oracle/product/Middleware/11.1.2.3/user_projects/domains/base_domain/servers/AdminServer/tmp/_WL_user/odsm_11.1.1.5.0/d89dm9/war/WEB-INF/web.xml

...
...

  <session-config>
    <session-timeout>120</session-timeout>
  </session-config>

However, changing the value here has the disadvantage that it might not be permanent. This directory is a temporary cache directory for the WebLogic Server, so if you apply a patch that overwrites the …/odsm/odsm.ear file, the changes you made to web.xml in the temporary cache directory are overwritten as well. It is therefore not a good long-term solution.

It is much better to do it via the WebLogic Console.

Log in to your WebLogic Console with the WebLogic user and navigate to "Deployments". From there, select the "/odsm" module.

3

Now move to the "Configuration" tab and change the Session Timeout (in seconds) to 7200 if you want a 2-hour session timeout.

4

5

Now save the changes and click OK once more to save the deployment plan.

6

7

That’s it.

Conclusion

The default session timeout is way too short from my point of view. But no worries. Changing it via the WebLogic Console is quite easy and it might save you a lot of headaches. :-)

 

Cet article OUD 11.1.2.3 – ODSM Session Timeout est apparu en premier sur Blog dbi services.
