Feed aggregator

Set the minimum password length on your default authenticator in Weblogic

Darwin IT - Thu, 2018-03-08 05:55
At the end of last year I wrote about how to create a demo community of users in your Weblogic using WLST.
Using those scripts I wanted to do the same at my current customer: create test users in the DefaultAuthenticator. However, I found that the minimum password length was 8, and one of the users failed creation because its password was the same as the username and thus only 5 characters long.

So I needed to change the password validator, preferably using WLST (of course). Now, the password validator of the authenticator can also be found through the console. However, the Weblogic realm also has a system password validator. Both have a default minimum length of 8.

Let me show you some snippets (that you can add to the create-users script, or use for your own purposes) on how to change the minimum password length.

First a method to get the default realm:
def getRealm(name=None):
    cd("/")
    if name == None:
        realm = cmo.getSecurityConfiguration().getDefaultRealm()
    else:
        realm = cmo.getSecurityConfiguration().lookupRealm(name)
    return realm

With that you can get the authenticator:
def getAuthenticator(realm, name=None):
    if name == None:
        authenticator = realm.lookupAuthenticationProvider("DefaultAuthenticator")
    else:
        authenticator = realm.lookupAuthenticationProvider(name)
    return authenticator

With a realm and an authenticator, we can change the password length:
def setMinPasswordLengthOnDftAuth(minPasswordLength):
    try:
        edit()
        startEdit()
        # Get Realm and Authenticator
        realm = getRealm()
        authenticator = getAuthenticator(realm)
        authenticator.setMinimumPasswordLength(int(minPasswordLength))
        passwordValidator = realm.lookupPasswordValidator('SystemPasswordValidator')
        passwordValidator.setMinPasswordLength(int(minPasswordLength))
        save()
        activate(block='true')
        print('Successfully set minimum password length to '+minPasswordLength+' on '+authenticator.getRealm().getName()+'.')
        print('For '+authenticator.getName()+': '+str(authenticator.getMinimumPasswordLength()))
        print('For SystemPasswordValidator of '+getRealm().getName()+': '+str(passwordValidator.getMinPasswordLength()))
    except WLSTException:
        stopEdit('y')
        message = 'Failed to update minimum password length!'
        print(message)
        raise Exception(message)

The minimum password length of the authenticator can be set directly. From the realm, this function looks up the SystemPasswordValidator and sets the minimum password length on that as well.

This function goes into edit mode, then saves and activates the changes. But if you want to add users afterwards, you need to get WLST into domainConfig() mode.
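For example, a minimal usage sketch (note that the function expects the length as a string, matching the string concatenations in the print statements above):

setMinPasswordLengthOnDftAuth('5')
# back to the domain configuration tree before creating the users
domainConfig()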

Other password validator property setters are:
  • setMinPasswordLength()
  • setMaxPasswordLength()
  • setMaxConsecutiveCharacters()
  • setMaxInstancesOfAnyCharacter()
  • setMinAlphabeticCharacters()
  • setMinNumericCharacters()
  • setMinLowercaseCharacters()
  • setMinUppercaseCharacters()
  • setMinNonAlphanumericCharacters()
  • setMinNumericOrSpecialCharacters()
  • setRejectEqualOrContainUsername(true)
  • setRejectEqualOrContainReverseUsername(true) 
See the docs for more.

MD5 Function with Large string having characters more than 4000

Tom Kyte - Thu, 2018-03-08 04:06
I am looking for a function to MD5 values for concatenated columns in the table. I plan to use this function in a before insert/update trigger to store the value as one of the columns (md5_hash) in the same table. The intention is to use this column f...
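For what it's worth, a minimal sketch of one possible shape of such a function, assuming EXECUTE on DBMS_CRYPTO has been granted (DBMS_CRYPTO.HASH accepts a CLOB, so it is not limited to 4000 characters):

create or replace function md5_hash (p_input in clob) return varchar2
is
begin
        -- hash the CLOB with MD5 and render the resulting RAW as hex
        return rawtohex(dbms_crypto.hash(p_input, dbms_crypto.hash_md5));
end;
/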
Categories: DBA Blogs

srvctl add service on standalone

Tom Kyte - Thu, 2018-03-08 04:06
Hi, I am working on a 12cR1 version installed on my Windows 8.1 laptop. Trying to learn EBR and so want to create additional services so that I can connect to two different editions if required. Using the below command, but getting an error. >srvctl a...
Categories: DBA Blogs

Oracle procedure execution is giving an error

Tom Kyte - Thu, 2018-03-08 04:06
I have a stored procedure in Oracle. Whenever I try to execute it, it throws the error mentioned below: Error starting at line 5 in command: exec usp_eventmgmt_get_events(TO_DATE('2018-07-18', 'YYYY-MM-DD'),'District',1) *Error report: ORA-06550:...
Categories: DBA Blogs

Mimicking Sql Server OUTPUT Clause

Tom Kyte - Thu, 2018-03-08 04:06
I am trying to convert SQL Server T-SQL queries to Oracle based queries. In SQL Server there is the ability to use the OUTPUT clause within a query. See the example below. BEGIN TRANSACTION DELETE TableA OUTPUT "DELETED".* INTO "TESTARCHIVE"....
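A sketch of the closest Oracle equivalent, the RETURNING ... BULK COLLECT INTO clause (table and column names are illustrative, and only a single column is archived here, not the whole row):

declare
        type t_ids is table of tablea.id%type;
        l_ids t_ids;
begin
        -- capture the deleted keys, then archive them
        delete from tablea
        returning id bulk collect into l_ids;
        forall i in 1 .. l_ids.count
                insert into testarchive (id) values (l_ids(i));
end;
/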
Categories: DBA Blogs

Line numbers in error messages do not match source lines

Tom Kyte - Thu, 2018-03-08 04:06
When running a PL/SQL script through SQL*Plus, error messages sometimes contain a line number that does not match the source lines. For example: ERROR at line 1: ORA-06533: Subscript beyond count ORA-06512: at line 144 ORA-06512: at li...
Categories: DBA Blogs

Rman backup

Tom Kyte - Thu, 2018-03-08 04:06
I have a database - about 6.2 terabytes in size - with about 525G free space. There are basically three tablespaces (other than the normal ones). FO_tables has about 50 tables. One of the tables there has two blob columns - each in its own tablespace. One...
Categories: DBA Blogs

view compiled with errors

Tom Kyte - Thu, 2018-03-08 04:06
I am trying to create a view and I am getting the message "Error report: SQL Command: force view VIEW_REPORTEVENTS Failed: Warning: execution completed with warning". CREATE OR REPLACE FORCE VIEW VIEW_REPORTEVENTS AS SELEC...
Categories: DBA Blogs

Training on Demand: Oracle WebLogic Server 12c Implementation Specialists Boot Camp

Oracle WebLogic Server 12c Implementation Specialist Boot Camp is a technical enablement program aimed at partner technical consultants who will be implementing Oracle WebLogic Server 12c solutions....

Categories: DBA Blogs

Oracle Chatbots MOOC runs again!

Oracle Chatbots MOOC is back due to popular demand. This MOOC has already run previously, but if you didn't have time to finish or didn't get a chance to participate,...

Categories: DBA Blogs

Column Groups

Jonathan Lewis - Thu, 2018-03-08 00:54

There’s a question on the ODC database forum about column groups that throws up an interesting side point. The OP is looking at a query like the following and asking about which column groups might help the optimizer get the best plan:

select
        a.*, b.*, c.*
from
        a, b, c
where
        a.id   = b.id
and     a.id1  = b.id1
and     a.id   = c.id
and     b.id2  = c.id2
and     a.id4  = 66
and     b.id7  = 44
and     c.id88 = 88
;

I’m going to start by being a bit boring about presentation and meaning (although this query is fairly obviously intended to be devoid of meaningful content) and rearrange the query because if I had a from clause reading “a, b, c” it would be because I hoped the optimizer would find that to be the best join order – and if that were the case I would have written the predicate section to display the order and timing of the work needed:

select
        a.*, b.*, c.*
from
        a, b, c
where
        a.id4  = 66
--
and     b.id   = a.id
and     b.id1  = a.id1
and     b.id7  = 44
--
and     c.id   = a.id
and     c.id2  = b.id2
and     c.id88 = 88
;

Having cosmetically enhanced (to my mind) the query, I’ll now ask the question: “Would it make sense to create column groups on a(id, id1), b(id, id1) and c(id, id2) ?”

I’ve written various articles on cases where column groups have effects (or not): “out of range” predicates, “is null” predicates, “histogram issues”, “statistics at both ends of the join”, and “multi-column indexes vs. column groups” are just some of the key areas. Assuming there are no reasons to stop a particular column group from working, we can look at the join from table A to table B: it’s a two-column join, so if there’s some strong correlation between the id and id1 columns of these two tables then creating the two column groups (one at each end of the join) can make a difference to the optimizer’s calculations, with the most likely effect that the cardinality estimate on the join will go up and, as a side effect, the join order and join method may change.

If we then consider the join to table C – we note that it involves two columns from table C being joined to one column from table A and one from table B – so while we could create a column group on those two columns at the table C end of the join, a column group is simply not possible at the “opposite end” of the join. This means that one end of the join may have a selectivity that is hugely reduced because the column group has quantified the correlation, but the selectivity at the other end is simply based on the two separate selectivities from a.id and b.id2 – and that’s likely to be larger than the selectivity of (c.id, c.id2), and the optimizer will choose the larger join selectivity, hence producing a lower cardinality estimate.

This is where a collateral point appears – there is an opportunity for transitive closure that the human eye can see but the optimizer is not coded to manipulate. We have two predicates: “b.id = a.id” and “c.id = a.id”, but they can only both be true when “c.id = b.id”, so let’s replace “c.id = a.id” with “c.id = b.id” and the join predicate to table C becomes:

and     c.id   = b.id
and     c.id2  = b.id2

Both left hand sides reference table C, both right hand sides reference table B – so if we now create a column group on c(id, id2) and an additional column group on b(id, id2) then we may give Oracle some better information about this join as well. In fact, even if we create NO column groups at all this specific change may be enough to result in a change in the selectivity calculations with a subsequent change in cardinality estimate and execution plan.
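For reference, a minimal sketch of creating the column groups just described, assuming the tables exist as in the example (the extensions get values only at the next stats gathering, hence the regather):

select dbms_stats.create_extended_stats(user, 'B', '(ID, ID2)') from dual;
select dbms_stats.create_extended_stats(user, 'C', '(ID, ID2)') from dual;

begin
        dbms_stats.gather_table_stats(user, 'B');
        dbms_stats.gather_table_stats(user, 'C');
end;
/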

Trivadis Performance Days 2018

Richard Foote - Wed, 2018-03-07 19:06
I’m very excited and privileged to be asked to present the opening and closing sessions at this year’s Trivadis Performance Training Days in Zurich, Switzerland on 19-20 September 2018. It’s one of the premier Oracle training events in Europe, one which I’ve always wanted to attend. This year has a fantastic lineup, so I’m really […]
Categories: DBA Blogs

Unit Testing for PL/SQL

Gerger Consulting - Wed, 2018-03-07 14:48
Hi PL/SQL Developers! 
We all could use more unit tests for our PL/SQL code. :-)
Attend the free webinar by utPLSQL lead architect Jacek Gebal and learn how to implement robust unit tests for PL/SQL using the free and open source utPLSQL unit testing framework. 

Register at: 
http://www.prohuddle.com/webinars/jacek/unit-testing-plsql.php

Categories: Development

Fewer Platform Flags on Indexes from PeopleTools 8.55

David Kurtz - Wed, 2018-03-07 12:28
It has always been possible in Application Designer to specify upon which database platforms each index should be built.  This is really a feature that is used by PeopleSoft development, rather than customers, to deliver indexes that are more appropriate for a particular platform due to differences in the optimizer.
Over the years, the number of supported PeopleSoft platforms has declined.  In PeopleTools 8.45 it went down from 9 to 6, and in PeopleTools 8.55 it has gone down to just 4, but there are still 9 columns on PSINDEXDEFN that correspond to the original 9 supported platforms.
I explained in a previous blog that you can have all of the supported platform flags set to the same value, yet the platform radio button on the index properties dialogue box is still set to 'Some' because one or more of the platform flag columns for the unsupported platforms is set differently.  Of course, this is a purely cosmetic problem, but one that can cause confusion in Application Designer.
[Screenshots: index properties dialogue in PeopleTools 8.45 and PeopleTools 8.55]
I fix this by updating PeopleTools tables as follows. The first query reports on the indexes where the supported platform flags all have the same value and one of the unsupported platform flags is different.
column RECNAME format a15
column INDEXID format a3 heading 'Idx|ID'
column DDLCOUNT format 999 heading 'DDL|Cnt'
column CUSTKEYORDER format 9999 heading 'Cust|Key|Order'
column KEYCOUNT format 999 heading 'Key|Cnt'
column PLATFORM_SBS format 999 heading 'SBS'
column PLATFORM_DB2 format 999 heading 'DB2'
column PLATFORM_ORA format 999 heading 'ORA'
column PLATFORM_INF format 999 heading 'INF'
column PLATFORM_DBX format 999 heading 'DBx'
column PLATFORM_ALB format 999 heading 'ALB'
column PLATFORM_SYB format 999 heading 'SYB'
column PLATFORM_MSS format 999 heading 'MSS'
column PLATFORM_DB4 format 999 heading 'DB4'
column ACTIVEFLAG Format 999 heading 'Active'
column CLUSTERFLAG format 999 heading 'Clst|Flg'
column UNIQUEFLAG format 999 heading 'Uniq|Flg'
column INDEXTYPE format 999 heading 'Idx|Type'
column IDXCOMMENTS format a60
spool platformfix855
SELECT *
FROM PSINDEXDEFN
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_MSS
AND (PLATFORM_ORA!=PLATFORM_SBS
  OR PLATFORM_ORA!=PLATFORM_ALB
  OR PLATFORM_ORA!=PLATFORM_SYB
  OR PLATFORM_ORA!=PLATFORM_INF
  OR PLATFORM_ORA!=PLATFORM_DB4)
;
These are the indexes that have inconsistent platform flags.  In this case PS_PSPMTRANSHIST is to be disabled on DB2/AS400.  You can't update the flag via Application Designer, but you could set the radio button to ALL.
                                           Cust
                Idx  Idx Uniq Clst          Key  Key  DDL
RECNAME         ID  Type  Flg  Flg Active Order  Cnt  Cnt  SBS  DB2  ORA  INF  DBx  ALB  SYB  MSS  DB4
--------------- --- ---- ---- ---- ------ ----- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
IDXCOMMENTS
------------------------------------------------------------
PSPMTRANSHIST   _      1    1    1      1     0    4    1    1    1    1    1    1    1    1    1    0
It could be a tedious process to do this for a lot of indexes, so the following SQL commands correct all indexes.  They set the platform flags for the unsupported platforms to the value of the supported platforms if those are all the same. The version number on the record definition is updated so that Application Designer refreshes the object.
UPDATE PSVERSION
SET VERSION = VERSION + 1
WHERE OBJECTTYPENAME IN('SYS','RDM');

UPDATE PSLOCK
SET VERSION = VERSION + 1
WHERE OBJECTTYPENAME IN('SYS','RDM');

UPDATE PSRECDEFN
SET VERSION = (
SELECT VERSION
FROM PSVERSION
WHERE OBJECTTYPENAME = 'RDM')
WHERE RECNAME IN (
SELECT RECNAME
FROM PSINDEXDEFN
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND ( PLATFORM_ORA!=PLATFORM_SBS
OR PLATFORM_ORA!=PLATFORM_ALB
OR PLATFORM_ORA!=PLATFORM_DB4)
);

UPDATE psindexdefn
SET PLATFORM_DB4=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_DB4;

UPDATE psindexdefn
SET PLATFORM_ALB=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_ALB;

UPDATE psindexdefn
SET PLATFORM_SBS=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_SBS;
The platform flags now say 'ALL'. Not a tremendous change, but at least I can immediately see that these indexes do build on all platforms without having to open each one.

The new platformfix855.sql script is available in my collection of miscellaneous scripts on GitHub.

Oracle Database Physical I/O IOPS And Throughput Benchmark

Randolf Geist - Wed, 2018-03-07 11:38
General Information

I've used a similar set of scripts quite a few times now to provide feedback to customers who wanted to get a more realistic picture of the I/O capabilities from a database perspective, rather than relying on artificial benchmarks performed outside - or even inside, think of I/O calibration - the database. Although there are already several Oracle benchmark toolkits available that run inside the database, like Swingbench and in particular SLOB, I was looking for a very simplistic and straightforward I/O benchmark that comes with minimum overhead, in principle allows maximizing I/O from the database perspective - using regular Oracle database code paths based on SQL execution plans, the I/O calls performed as part of that, and involving the database buffer cache where applicable - and in particular supports focusing on specific I/O patterns (which might be unique to this benchmark toolkit). Therefore I've developed a set of simple scripts that support in total four different I/O tests:

- Single and multi block physical reads

- Single and multi block physical writes

In case of the physical read tests support for synchronous and asynchronous reads is provided, which can be controlled via a corresponding input / parameter to the script.

In terms of instrumentation / internal implementation this corresponds to:

- synchronous single block reads: "db file sequential read" / "cell single block physical read" on Exadata

- asynchronous single block reads: "db file parallel read" / "cell list of blocks physical read" on Exadata

- synchronous multi block reads: "db file scattered read" / "cell multiblock physical read" on Exadata

- asynchronous multi block reads: "direct path read" / "cell smart table/index scan" on Exadata

The physical write tests should mainly trigger "db file parallel write" in case of the single block writes and "direct path write" in case of the multi block writes. Of course when it comes to DML activity things get more complicated in terms of the actual database activity triggered, in particular the additional undo / redo / archiving / potential standby activity. Note that if you're using an auto-extensible Undo tablespace configuration, running the single block physical write tests might increase the size of your Undo tablespace - you have been warned.

So each I/O test generates a specific I/O pattern (except for the single block write test, which can also trigger synchronous single block reads, see the script for more details). The basic idea is to run just one of the tests to maximize the specific I/O pattern, but of course nothing stops you from running several of the tests at the same time (this would require a different schema for each test, because otherwise the object names dropped / created will collide / overlap), which will result in a mixture of I/O patterns. There is no synchronisation though in terms of starting / stopping / generating performance report snapshots etc. when running multiple of these tests at the same time, so you would probably have to take care of that yourself. So far I've not tested this, so it might not work out as just described.

In case of the physical read tests (except for the asynchronous "direct path read" that bypasses the buffer cache by definition) using too small objects / a too large buffer cache can turn this into a CPU / logical I/O test instead, so in principle you could use those tests for generating mainly CPU load instead of physical I/O (and provided the physical I/O is quick enough the CPU usage will be significant anyway), but that is not the intended usage of the scripts.

The scripts allow control over the size of the objects created and also support placing in specific buffer caches via the STORAGE clause (like RECYCLE or KEEP cache), so it is up to you to create objects of a suitable size depending on your configuration and intended usage.
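For illustration, placing an existing benchmark object in the KEEP pool could look like this (the table name is hypothetical):

-- assign the object to the KEEP buffer pool
alter table io_bench_t storage (buffer_pool keep);
-- or move it back to the default cache
alter table io_bench_t storage (buffer_pool default);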

Usage

Please note - the scripts are freely available and come with no warranty at all - so please use at your own risk.

In principle the scripts can be downloaded from my github repository - ideally pick the IO_BENCHMARK.ZIP, which contains all required scripts and should simply be extracted into some directory. Since the four different I/O tests are so similar, there is a subdirectory "common" under "io_benchmark" that holds all the common script parts, and the main scripts then just call these common scripts where applicable.

The benchmark scripts consist of four similar sets:

max_read_iops_benchmark*: Single block reads synchronous / asynchronous

max_read_throughput_benchmark*: Multi block reads synchronous / asynchronous

max_write_iops_benchmark*: Single block writes - optionally mixed with synchronous single block reads (depends on object vs. buffer cache size)

max_write_throughput_benchmark*: Multi block direct writes

Each set consists of three scripts - an interactive guided script prompting for inputs used as parameters for the actual benchmark harness that in turn will launch another "slave" script as many times as desired to run the concurrent benchmark code.

There are in principle two different ways how the scripts can be used:

1. For each set there is a script that is supposed to be used from a SYSDBA account and guides through the different options available (*interactive). It will drop and re-create a schema to be used for the benchmark and grant the minimum privileges required to create the objects and run the benchmark. At the end of this interactive script it will connect as the user just created and run the benchmark. You can also use this script to clean-up afterwards, which is dropping the user created and stopping the script at that point.

2. The interactive script just calls the main benchmark harness with the parameters specified, so if you already have everything in place (check the "interactive_create_user.sql" in the "common" script subdirectory for details of what privileges are required) to run the benchmark, you can simply connect as the intended user, call the actual benchmark script and specify the parameters as desired - it will use defaults for any parameter not explicitly specified - check the script header for more details. Please note that I haven't tested running the actual benchmark as SYS or SYSDBA and I wouldn't recommend doing so. Instead use a dedicated user / schema as created by the interactive script part.

Each set of scripts also consists of a third script - the "slave" script - which is called as many times concurrently as specified to perform the actual benchmark activity.

The scripts will generate objects, typically as part of the preparation steps before the actual concurrent benchmark activity starts; in case of the multi block write test, the object creation is the actual benchmark activity.

After the benchmark has run for the specified amount of time (600 seconds / 10 minutes by default) the "slaves" will be shut down (if they haven't done so automatically) and the corresponding information about the IOPS / throughput rate achieved will be shown, based on (G)V$SYSMETRIC_HISTORY. At least 120 seconds of runtime are therefore required for this final query to show something meaningful (to ensure that at least one 60 second interval is fully covered).
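A rough sketch - not the toolkit's own query - of the kind of information that can be pulled from V$SYSMETRIC_HISTORY for the read tests:

select begin_time, end_time, metric_name, round(value) as value
from   v$sysmetric_history
where  metric_name in ('Physical Read Total IO Requests Per Sec'
                     , 'Physical Read Total Bytes Per Sec')
and    group_id = 2  -- 60 second interval metrics
order by metric_name, begin_time;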

In addition the script by default will generate performance report snapshots (either AWR or STATSPACK) and display the corresponding report at the end. The file name generated describes the test performed along with the most important parameters (parallel degree, I/O mode (sync / async), object size, duration, timestamp etc.). Note that the script on Unix/Linux makes use of the "xdg-open" utility to open the generated report, so the "xdg-utils" package would be required to have this working as intended.

Note that in 12.2.0.1 the PDB level reports and metrics seem to miss "physical single block writes" performed by the DB Writer, so effectively evaluating / running this benchmark in 12.2.0.1 on PDB level won't report anything meaningful - you would have to resort to reports on CDB level instead, which I haven't implemented (actually I had to put in some effort to use the PDB level AWR reports and metrics in 12.2, so hopefully Oracle will fix this in future versions).

Finally the benchmark script will clean up and drop the objects created for the benchmark.

In principle the benchmark scripts should cope with all kinds of configurations: Windows / Unix / Linux, Single Instance / RAC, Standard / Enterprise Edition, PDB / Non-PDB, Non-Exadata / Exadata, and they support versions from 11.2.0.4 on. They might run on lower versions, too, but that hasn't been tested, and of course 18c (12.2.0.2) is not available on premises yet at the time of writing this, so it isn't tested either.

But since this is a 1.0 version it obviously wasn't tested in all possible combinations / configurations / parameter settings, so expect some glitches. Feedback and ideas on how to improve it are welcome.

Where applicable the benchmark harness script also generates two tables, EVENT_HISTOGRAM_MICRO1 and EVENT_HISTOGRAM_MICRO2, which are snapshots of GV$EVENT_HISTOGRAM_MICRO (available from 12.1 on) for synchronous single / multi block reads. The "harness" scripts provide a suitable query in the script header to display the latency histogram information nicely.
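As an illustration, a simple delta query over the two snapshots could look like this (assuming the tables are plain copies of GV$EVENT_HISTOGRAM_MICRO; the query provided in the script headers may differ):

select h2.event, h2.wait_time_micro
     , h2.wait_count - nvl(h1.wait_count, 0) as delta_wait_count
from   event_histogram_micro2 h2
left outer join event_histogram_micro1 h1
on     h1.inst_id = h2.inst_id
and    h1.event = h2.event
and    h1.wait_time_micro = h2.wait_time_micro
order by h2.event, h2.wait_time_micro;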

Happy I/O benchmarking!

EDB Failover Manager 3.0 and PostgreSQL 10.1

Yann Neuhaus - Wed, 2018-03-07 10:31

Edb Failover Manager edb-efm30 is now released and supports PostgreSQL 10.1.
Efm is an EDB tool which can be used to perform a switchover and a failover in a PostgreSQL standby environment.
In this blog we are going to see how we can install and configure EFM in a standby environment. We suppose that the standby is already configured and running. We have not configured any VIP; I just use 3 virtual machines to test. Below is the configuration: 3 servers with Oracle Linux 7.2.

pgservertools: 192.168.56.30 which is the witness
pgserver1: 192.168.56.36 which is the primary server
pgserver2: 192.168.56.37 which is the standby
EFM must be installed on all 3 nodes. To install EDB EFM, I used the rpm provided by EnterpriseDB. Note that you will need an EDB account. We show the installation on only one node, but it is the same on all nodes.

[root@host tmp]# wget http://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
[root@host tmp]# yum localinstall -y edb-repo-latest.noarch.rpm

Afterwards we have to enable the corresponding EDB repository (you will need a login and password)

[root@pgservertools yum.repos.d]# cat /etc/yum.repos.d/edb.repo
[edbas10]
name=EnterpriseDB Advanced Server 10 $releasever - $basearch
baseurl=http://login:password@yum.enterprisedb.com/10/redhat/rhel-$releasever-$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/ENTERPRISEDB-GPG-KEY

And then we can search for the package

[root@pgservertools ~]# yum search efm
Loaded plugins: langpacks, ulninfo
=============================== N/S matched: efm ===============================
edb-efm30.x86_64 : EnterpriseDB Failover Manager
efm20.x86_64 : EnterpriseDB Failover Manager
efm21.x86_64 : EnterpriseDB Failover Manager

And then install it (outputs are truncated)

[root@pgservertools ~]# yum install edb-efm30.x86_64
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package edb-efm30.x86_64 0:3.0.0-1.rhel7 will be installed
--> Finished Dependency Resolution
…
…
Installed:
  edb-efm30.x86_64 0:3.0.0-1.rhel7
Complete!

EFM also requires OpenJDK, so we installed OpenJDK 1.8 on all nodes:

[root@host tmp]# yum install java-1.8.0-openjdk.x86_64

To manage efm we create a dedicated user in the primary cluster

postgres=# create user efm with login password 'root' superuser;
CREATE ROLE
postgres=#

We have to add the following entries to pg_hba.conf on all database clusters to allow connections for the user efm.

##for efm
host  postgres    efm  192.168.56.36/32     md5
host  postgres    efm  192.168.56.37/32     md5
host  postgres    efm  192.168.56.30/32     md5

The configuration of EFM consists of editing 2 main configuration files: efm.nodes and efm.properties. In my case these files are located in /etc/edb/efm-3.0. There are already two sample files that we can copy and then edit.
First we need to encrypt the password of the user efm, and then we configure efm.nodes and efm.properties on all nodes.

[root@pgserver1 efm-3.0]# /usr/edb/efm-3.0/bin/efm encrypt efm
This utility will generate an encrypted password for you to place in your
EFM cluster property file.
Please enter the password and hit enter:
Please enter the password again to confirm:
The encrypted password is: ff7f041651e5e864013c1102d26a5e08
Please paste this into your cluster properties file.
        db.password.encrypted= ff7f041651e5e864013c1102d26a5e08

Below we show the contents of the two configuration files.
On pgserver1, which is the primary:
efm.nodes

[root@pgserver1 101]# cat /etc/edb/efm-3.0/efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.56.30:9998 192.168.56.37:9998
[root@pgserver1 101]#

efm.properties

[root@pgserver1 101]# cat /etc/edb/efm-3.0/efm.properties | grep -v ^#
db.user=efm
db.password.encrypted=ff7f041651e5e864013c1102d26a5e08
db.port=5436
db.database=postgres
db.service.owner=postgres
db.service.name=
db.bin=/u01/app/postgres/product/10/db_1/bin
db.recovery.conf.dir=/u90/mydata/101
jdbc.sslmode=disable
user.email=xxx@xxx.fr
script.notification=
bind.address=192.168.56.36:9998
admin.port=9999
is.witness=false
local.period=10
local.timeout=60
local.timeout.final=10
remote.timeout=10
node.timeout=50
stop.isolated.master=false
pingServerIp=192.168.56.30
pingServerCommand=/bin/ping -q -c3 -w5
auto.allow.hosts=false
db.reuse.connection.count=0
auto.failover=true
auto.reconfigure=true
promotable=true
minimum.standbys=0
recovery.check.period=2
auto.resume.period=0
script.fence=
script.post.promotion=
script.resumed=
script.db.failure=
script.master.isolated=
script.remote.pre.promotion=
script.remote.post.promotion=
script.custom.monitor=
custom.monitor.interval=
custom.monitor.timeout=
custom.monitor.safe.mode=
sudo.command=sudo
sudo.user.command=sudo -u %u
log.dir=/var/log/efm-3.0
jgroups.loglevel=INFO
efm.loglevel=INFO
jvm.options=-Xmx32m
[root@pgserver1 101]#

On pgserver2, which is the standby:
efm.nodes

[root@pgserver2 tmp]# cat /etc/edb/efm-3.0/efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.56.30:9998 192.168.56.36:9998
[root@pgserver2 tmp]#

efm.properties

[root@pgserver2 tmp]# cat /etc/edb/efm-3.0/efm.properties | grep -v ^#
db.user=efm
db.password.encrypted=ff7f041651e5e864013c1102d26a5e08
db.port=5436
db.database=postgres
db.service.owner=postgres
db.service.name=
db.bin=/u01/app/postgres/product/10/db_1/bin
db.recovery.conf.dir=/u90/mydata/101
jdbc.sslmode=disable
user.email=xxx@xxx.fr
script.notification=
bind.address=192.168.56.37:9998
admin.port=9999
is.witness=false
local.period=10
local.timeout=60
local.timeout.final=10
remote.timeout=10
node.timeout=50
stop.isolated.master=false
pingServerIp=192.168.56.30
pingServerCommand=/bin/ping -q -c3 -w5
auto.allow.hosts=true
db.reuse.connection.count=0
auto.failover=true
auto.reconfigure=true
promotable=true
minimum.standbys=0
recovery.check.period=2
auto.resume.period=0
script.fence=
script.post.promotion=
script.resumed=
script.db.failure=
script.master.isolated=
script.remote.pre.promotion=
script.remote.post.promotion=
script.custom.monitor=
custom.monitor.interval=
custom.monitor.timeout=
custom.monitor.safe.mode=
sudo.command=sudo
sudo.user.command=sudo -u %u
log.dir=/var/log/efm-3.0
jgroups.loglevel=INFO
efm.loglevel=INFO
jvm.options=-Xmx32m
[root@pgserver2 tmp]#

On pgservertools, which is the witness:
efm.nodes

[root@pgservertools efm-3.0]# cat /etc/edb/efm-3.0/efm.nodes
# List of node address:port combinations separated by whitespace.
# The list should include at least the membership coordinator's address.
192.168.56.36:9998 192.168.56.37:9998
[root@pgservertools efm-3.0]#

efm.properties

[root@pgservertools efm-3.0]# cat /etc/edb/efm-3.0/efm.properties | grep -v ^#
db.user=efm
db.password.encrypted=ff7f041651e5e864013c1102d26a5e08
db.port=5436
db.database=postgres
db.service.owner=postgres
db.service.name=
db.bin=/u01/app/postgres/product/10/db_1/bin
db.recovery.conf.dir=/u90/mydata/101
jdbc.sslmode=disable
user.email=xxx@xxx.fr
script.notification=
bind.address=192.168.56.30:9998
admin.port=9999
is.witness=true
local.period=10
local.timeout=60
local.timeout.final=10
remote.timeout=10
node.timeout=50
stop.isolated.master=false
pingServerIp=192.168.56.30
pingServerCommand=/bin/ping -q -c3 -w5
auto.allow.hosts=false
db.reuse.connection.count=0
auto.failover=true
auto.reconfigure=true
promotable=true
minimum.standbys=0
recovery.check.period=2
auto.resume.period=0
script.fence=
script.post.promotion=
script.resumed=
script.db.failure=
script.master.isolated=
script.remote.pre.promotion=
script.remote.post.promotion=
script.custom.monitor=
custom.monitor.interval=
custom.monitor.timeout=
custom.monitor.safe.mode=
sudo.command=sudo
sudo.user.command=sudo -u %u
log.dir=/var/log/efm-3.0
jgroups.loglevel=INFO
efm.loglevel=INFO
jvm.options=-Xmx32m
[root@pgservertools efm-3.0]#

Now let’s start EFM on all nodes. If there is any error, check the logs in /var/log/efm-3.0/.
I started in the following order: pgserver1, pgserver2 and pgservertools. The services can also be configured to start automatically when the server boots, as shown after the status output below.
Below is an example on pgservertools

[root@pgservertools efm-3.0]# service efm-3.0 start
.
[root@pgservertools efm-3.0]# service efm-3.0 status
Redirecting to /bin/systemctl status  efm-3.0.service
● efm-3.0.service - EnterpriseDB Failover Manager 3.0
   Loaded: loaded (/usr/lib/systemd/system/efm-3.0.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-03-06 15:58:51 CET; 1h 46min ago
  Process: 22260 ExecStart=/bin/bash -c /usr/edb/efm-3.0/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS)
 Main PID: 22321 (java)
   CGroup: /system.slice/efm-3.0.service
           └─22321 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java -cp /usr/edb/efm-3.0/lib/EFM-3.0.0.jar -Xmx32m com.enterprisedb.efm.main.ServiceCommand __int_start /e...
Mar 06 15:58:45 pgservertools.localdomain systemd[1]: Starting EnterpriseDB Failover Manager 3.0...
Mar 06 15:58:51 pgservertools.localdomain systemd[1]: Started EnterpriseDB Failover Manager 3.0.
[root@pgservertools efm-3.0]#
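To have the agent start automatically at boot, the systemd unit (named as in the status output above) can be enabled:

[root@pgservertools efm-3.0]# systemctl enable efm-3.0.service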

Once started, we can verify the status of our cluster from any node

[root@pgservertools efm-3.0]# /usr/edb/efm-3.0/bin/efm cluster-status efm
Cluster Status: efm
VIP:
        Agent Type  Address              Agent  DB       Info
        --------------------------------------------------------------
        Witness     192.168.56.30        UP     N/A
        Master      192.168.56.36        UP     UP
        Standby     192.168.56.37        UP     UP
Allowed node host list:
        192.168.56.36 192.168.56.37 192.168.56.30
Membership coordinator: 192.168.56.30
Standby priority host list:
        192.168.56.37
Promote Status:
        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Master      192.168.56.36        0/430001A8
        Standby     192.168.56.37        0/430001A8
        Standby database(s) in sync with master. It is safe to promote.
[root@pgservertools efm-3.0]#

Now that everything is OK, let’s do a switchover

[root@pgservertools efm-3.0]# /usr/edb/efm-3.0/bin/efm promote efm -switchover
Promote/switchover command accepted by local agent. Proceeding with promotion and will reconfigure original master. Run the 'cluster-status' command for information about the new cluster state.

And if we run the cluster-status command during the switchover

[root@pgservertools efm-3.0]# /usr/edb/efm-3.0/bin/efm cluster-status efm
Cluster Status: efm
VIP:
        Agent Type  Address              Agent  DB       Info
        --------------------------------------------------------------
        Witness     192.168.56.30        UP     N/A
        Idle        192.168.56.36        UP     UNKNOWN
        Standby     192.168.56.37        UP     UP
Allowed node host list:
        192.168.56.36 192.168.56.37 192.168.56.30
Membership coordinator: 192.168.56.30
Standby priority host list:
        192.168.56.37
Promote Status:
        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Standby     192.168.56.37        0/44000098
        No master database was found.
Idle Node Status (idle nodes ignored in XLog location comparisons):
        Address              XLog Loc         Info
        --------------------------------------------------------------
        192.168.56.36        UNKNOWN          Connection to 192.168.56.36:5436 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.

And we can see the promotion phase

[root@pgservertools efm-3.0]# /usr/edb/efm-3.0/bin/efm cluster-status efm
Cluster Status: efm
VIP:
        Agent Type  Address              Agent  DB       Info
        --------------------------------------------------------------
        Witness     192.168.56.30        UP     N/A
        Idle        192.168.56.36        UP     UNKNOWN
        Promoting   192.168.56.37        UP     UP
Allowed node host list:
        192.168.56.36 192.168.56.37 192.168.56.30
Membership coordinator: 192.168.56.30
Standby priority host list:
        (List is empty.)
Promote Status:
        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Master      192.168.56.37        0/44000170
        No standby databases were found.
Idle Node Status (idle nodes ignored in XLog location comparisons):
        Address              XLog Loc         Info
        --------------------------------------------------------------
        192.168.56.36        UNKNOWN          Connection to 192.168.56.36:5436 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.

And after a short while we can see that the new master is on pgserver2

[root@pgservertools efm-3.0]# /usr/edb/efm-3.0/bin/efm cluster-status efm
Cluster Status: efm
VIP:
        Agent Type  Address              Agent  DB       Info
        --------------------------------------------------------------
        Witness     192.168.56.30        UP     N/A
        Standby     192.168.56.36        UP     UP
        Master      192.168.56.37        UP     UP
Allowed node host list:
        192.168.56.36 192.168.56.37 192.168.56.30
Membership coordinator: 192.168.56.30
Standby priority host list:
        192.168.56.36
Promote Status:
        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Master      192.168.56.37        0/440001A8
        Standby     192.168.56.36        0/440001A8
        Standby database(s) in sync with master. It is safe to promote.
[root@pgservertools efm-3.0]#

The purpose of the witness is to allow an automatic failover when the primary goes down. Let’s simulate a crash of our primary database by killing the corresponding process.

[root@pgserver2 tmp]# ps -ef | grep postgres
...
...
root     17529 14103  0 16:45 pts/1    00:00:00 tail -f /u90/mydata/101/log/postgresql-2018-03-06.log
postgres 20612     1  0 17:56 ?        00:00:00 /u01/app/postgres/product/10/db_1/bin/postgres -D /u90/mydata/101
postgres 20613 20612  0 17:56 ?        00:00:00 postgres: logger process
postgres 20615 20612  0 17:56 ?        00:00:00 postgres: checkpointer process
postgres 20616 20612  0 17:56 ?        00:00:00 postgres: writer process
postgres 20617 20612  0 17:56 ?        00:00:00 postgres: stats collector process
postgres 20819 20612  0 18:00 ?        00:00:00 postgres: wal writer process
postgres 20820 20612  0 18:00 ?        00:00:00 postgres: autovacuum launcher process
postgres 20821 20612  0 18:00 ?        00:00:00 postgres: archiver process   last was 00000008.history
postgres 20822 20612  0 18:00 ?        00:00:00 postgres: bgworker: logical replication launcher
postgres 20832 20612  0 18:00 ?        00:00:00 postgres: wal sender process repliuser 192.168.56.36(45827) streaming 0/440001A8
root     21481 16868  0 18:16 pts/0    00:00:00 grep --color=auto postgres
[root@pgserver2 tmp]#

And let’s execute the kill command

[root@pgserver2 tmp]# kill -9 20612

If we check the cluster status from the witness server, we can see that the master is in an UNKNOWN status

[root@pgservertools efm-3.0]# /usr/edb/efm-3.0/bin/efm cluster-status efm
Cluster Status: efm
VIP:
        Agent Type  Address              Agent  DB       Info
        --------------------------------------------------------------
        Witness     192.168.56.30        UP     N/A
        Standby     192.168.56.36        UP     UP
        Idle        192.168.56.37        UP     UNKNOWN
Allowed node host list:
        192.168.56.36 192.168.56.37 192.168.56.30
Membership coordinator: 192.168.56.30
Standby priority host list:
        192.168.56.36
Promote Status:
        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Standby     192.168.56.36        0/440001A8
        No master database was found.
Idle Node Status (idle nodes ignored in XLog location comparisons):
        Address              XLog Loc         Info
        --------------------------------------------------------------
        192.168.56.37        UNKNOWN          Connection to 192.168.56.37:5436 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
[root@pgservertools efm-3.0]#

In the server log on our standby pgserver1, we can see that the database is promoted to primary a short time later.

2018-03-06 18:17:49.381 CET [18384] FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.
2018-03-06 18:17:49.382 CET [18380] LOG:  invalid record length at 0/440001A8: wanted 24, got 0
2018-03-06 18:17:49.387 CET [19049] FATAL:  could not connect to the primary server: could not connect to server: Connection refused
                Is the server running on host "192.168.56.37" and accepting
                TCP/IP connections on port 5436?
2018-03-06 18:17:54.404 CET [19055] FATAL:  could not connect to the primary server: could not connect to server: Connection refused
                Is the server running on host "192.168.56.37" and accepting
                TCP/IP connections on port 5436?
2018-03-06 18:17:59.406 CET [19107] FATAL:  could not connect to the primary server: could not connect to server: Connection refused
                Is the server running on host "192.168.56.37" and accepting
…
….
….
                TCP/IP connections on port 5436?
                TCP/IP connections on port 5436?
2018-03-06 18:18:34.450 CET [19128] FATAL:  could not connect to the primary server: could not connect to server: Connection refused
                Is the server running on host "192.168.56.37" and accepting
                TCP/IP connections on port 5436?
2018-03-06 18:18:39.451 CET [19134] FATAL:  could not connect to the primary server: could not connect to server: Connection refused
                Is the server running on host "192.168.56.37" and accepting
                TCP/IP connections on port 5436?
2018-03-06 18:18:44.462 CET [19135] FATAL:  could not connect to the primary server: could not connect to server: Connection refused
                Is the server running on host "192.168.56.37" and accepting
                TCP/IP connections on port 5436?
2018-03-06 18:18:49.456 CET [18380] LOG:  trigger file found: /tmp/postgresql.trigger
2018-03-06 18:18:49.456 CET [18380] LOG:  redo done at 0/44000170
2018-03-06 18:18:49.479 CET [18380] LOG:  selected new timeline ID: 9
2018-03-06 18:18:50.128 CET [18380] LOG:  archive recovery complete
2018-03-06 18:18:50.229 CET [18378] LOG:  database system is ready to accept connections

We can confirm this by querying the cluster status

[root@pgservertools efm-3.0]# /usr/edb/efm-3.0/bin/efm cluster-status efm
Cluster Status: efm
VIP:
        Agent Type  Address              Agent  DB       Info
        --------------------------------------------------------------
        Witness     192.168.56.30        UP     N/A
        Master      192.168.56.36        UP     UP
        Idle        192.168.56.37        UP     UNKNOWN
Allowed node host list:
        192.168.56.36 192.168.56.37 192.168.56.30
Membership coordinator: 192.168.56.30
Standby priority host list:
        (List is empty.)
Promote Status:
        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Master      192.168.56.36        0/440002B8
        No standby databases were found.
Idle Node Status (idle nodes ignored in XLog location comparisons):
        Address              XLog Loc         Info
        --------------------------------------------------------------
        192.168.56.37        UNKNOWN          Connection to 192.168.56.37:5436 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
[root@pgservertools efm-3.0]#

and

[root@pgservertools efm-3.0]# /usr/edb/efm-3.0/bin/efm cluster-status efm
Cluster Status: efm
VIP:
        Agent Type  Address              Agent  DB       Info
        --------------------------------------------------------------
        Witness     192.168.56.30        UP     N/A
        Master      192.168.56.36        UP     UP
Allowed node host list:
        192.168.56.36 192.168.56.37 192.168.56.30
Membership coordinator: 192.168.56.30
Standby priority host list:
        (List is empty.)
Promote Status:
        DB Type     Address              XLog Loc         Info
        --------------------------------------------------------------
        Master      192.168.56.36        0/440002B8
        No standby databases were found.
[root@pgservertools efm-3.0]#

On the old primary pgserver2 we can see the contents of the recovery.conf file automatically created by EDB Failover Manager

[postgres@pgserver2 101]$ cat recovery.conf
# EDB Failover Manager
# This generated recovery.conf file prevents the db server from accidentally
# being restarted as a master since a failover or promotion has occurred
standby_mode = on
restore_command = 'echo 2>"recovery suspended on failed server node"; exit 1'
[postgres@pgserver2 101]$

To rebuild our standby database we have to edit the recovery.conf file.
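A sketch of what the edited recovery.conf could look like to stream from the new master (the connection settings are illustrative; the user repliuser and port 5436 are taken from the outputs above):

standby_mode = 'on'
primary_conninfo = 'host=192.168.56.37 port=5436 user=repliuser'
recovery_target_timeline = 'latest'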
Conclusion
We have seen in this blog how to install and configure edb-efm30. Note that a virtual IP can also be configured; the official documentation can help.

 

This article EDB Failover Manager 3.0 and PostgreSQL 10.1 first appeared on Blog dbi services.

Find the last time for consecutive rows by status

Tom Kyte - Wed, 2018-03-07 09:46
Hello, I need some help with an Oracle SQL query. For each day I must calculate the time difference between each cycle IN & OUT and then sum the differences as total worked minutes. The main problem that I have is that the input data is variabl...
Categories: DBA Blogs

How to exclude a list of tables in DBMS_STATS.GATHER_SCHEMA_STATS ?

Tom Kyte - Wed, 2018-03-07 09:46
Hello, Is there any way to exclude a list of tables while doing DBMS_STATS.GATHER_SCHEMA_STATS? DBMS_STATS.LOCK_TABLE_STATS doesn't answer my need exactly. Best regards, Amine
Categories: DBA Blogs

adding space between a number and a letter in a record

Tom Kyte - Wed, 2018-03-07 09:46
Hi Tom, I want to add a space between a number and a letter in a column. For eg: somestreet 22a --> Somestreet 22 A oldstreet 5d --> Oldstreet 5 D othernewtreet 134B --> Othernewstreet 134 B Similarly, for all suc...
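For what it's worth, a sketch of one way to do this, assuming the letter always directly follows the number (INITCAP then capitalizes the now-separated letter):

select initcap(regexp_replace('somestreet 22a', '(\d)([[:alpha:]])', '\1 \2')) as street
from   dual;

-- STREET
-- ---------------
-- Somestreet 22 A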
Categories: DBA Blogs

regarding table partition

Tom Kyte - Wed, 2018-03-07 09:46
Hi Team, We have set the INITRANS value for a partitioned table to 255 in our Exadata box; this table has very high concurrency. After this we noticed huge archive generation. Can you help us understand how this INITRANS value is related to archive genera...
Categories: DBA Blogs
