Yann Neuhaus

dbi services technical blog

An SQLite extension for gawk (part I)

Fri, 2018-09-28 17:20

Quick: what is the most used database management system on our planet? Oracle? Wrong. SQL Server? Wrong again! MySQL? You're almost there. It's SQLite. Surprised? I must confess that I was too. Actually, SQLite is special in that it is not the traditional two-tier client/server system but a one-tier, embedded one, which means that it works as a library linked into an application. As such, it is used to fulfill the database needs of browsers and of portable devices such as iPods, iPhones, Android, etc. (see a short list of famous users here). Look also here for a succinct intro and here for a list of features. Here you'll find distinctive features of SQLite and here common uses for it.
Let’s be clear from the start: although light, this is no toy software but a solid, time-proven, globally used, rigorously tested open-source product.
So, what is the relation with gawk? Well, none. Until now. As you may know, gawk has had for some time now an easy way to be extended with useful libraries. I already talked about this in my previous blogs, e.g. here. In particular, it also has a binding for PostgreSQL (see here). So, I told myself, wouldn't it be nice to make one for Oracle too? Or for some key-value xDBMS such as Berkeley DB? But fate decided otherwise: I had already used SQLite in the past as an almost no-installation SQL database and during the present research I landed fortuitously on SQLite's web pages. I was immediately hooked. SQLite is petite (500 KiB in one unique source file for the whole RDBMS library and about 1 MB for the shared library object file, unbelievable!), easy to learn and use, and open-source. Actually, the SQLite creators propose such an unorthodox license agreement that I feel compelled to quote it here:

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.

So inspirational and quite a departure from the traditional, indigestible EULAs! There is enough here to push the religion business into bankruptcy. And the lawyers. And make our sorry planet a paradise again.
In effect, the only data structure known to gawk is the associative array, or hash in perl parlance, or dictionary for pythonists. And they are entirely held in memory. I thought it would be a nice addition to gawk to be able to work with on-disk tables for those cases where huge amounts of textual data have to be processed in random order.
As this article is rather large, I’ve split it into 3 parts. Part I, the one you’re reading now, presents the requirements and proposes an API. Part II lists the extension code and shows how to compile and use it in gawk. Part III comments the code and describes a stress test for the interface and SQLite from within a gawk script. So, let’s see how I hammered that screw !

The Objectives

At the very minimum, the gawk interface to SQLite (sqlite_gawk for short henceforth) shall be able to open a database file (an SQLite database is entirely self-contained in a single file, I told you it's light), send DML/DDL statements to it and run SELECT queries against it. The SELECT statement shall be able to present the result in a nice and flexible way to avoid messing up the screen with wrapped-around, illegible lines, i.e. the columns' widths shall be individually configurable and some truncation, with or without an ellipsis, or wrap-around shall be possible within those limits.
Also, as SQLite supports blobs, sqlite_gawk should be able to deal with them, i.e. insert, retrieve and display them if they are textual. SQLite comes with an interactive shell named sqlite3, functionally akin to Oracle's sqlplus, which extends the collection of SQL functions (yes, SQLite allows that too). The SQL functions the shell adds are readfile() and writefile(), to read the content of a file into a blob, respectively to dump a blob into a file. We definitely want that functionality in the interface too.
Another requirement is that, along with displaying the retrieved rows on the screen, sqlite_select shall be able to store them into an integer-indexed gawk array of associative arrays. The goal is to permit further in-memory processing from within gawk.

The API

All the above requirements converge into the following API:

int sqlite_open(db_filename)
-- opens the database file db_filename;
-- in case of success, returns a non-negative integer which is a handle to the db;
-- returns -1 and sends an error message from SQLite to stderr if an error occurred;

int sqlite_close(db_handle)
-- closes the database with handle db_handle;
-- returns 0 on success, and -1 plus an error message from SQLite to stderr in case of error;

int sqlite_exec(db_handle, sql_stmt)
-- sends the DML/DDL SQL statement sql_stmt to the database with db_handle handle;
-- returns -1 plus an error message from SQLite to stderr in case of error, or a non-negative integer from SQLite on success;

int sqlite_select(db_handle, select_stmt [, "" | , "col_separator_string" | , "list-of-columns-formats" | , dummy, gawk_array])
-- sends the SELECT select_stmt statement with or without formatting options;
-- the output is either displayed or sent into a gawk array of associative arrays;

sqlite_exec takes INSERT, DELETE, UPDATE, i.e. all SQL statements other than SELECT. In addition, as said above, INSERT, UPDATE and SELECT also accept the SQL extensions readfile() and writefile() for blob I/O from/to files. Here are a few examples of usage from gawk:

rc = sqlite_exec(my_db, "INSERT INTO test_with_blob(n1, my_blob) VALUES(1000, readfile('/home/dmadmin/setup_files/instantclient-basic-linux.x64-12.2.0.1.0.zip'))"

rc = sqlite_select(my_db, "SELECT n1, writefile('blob_2000.dmp', my_blob) FROM test_with_blob where n1 = 2000 limit 1")

rc = sqlite_exec(my_db, "UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c') where n1 = 1000")
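To fix ideas before diving into sqlite_select, here is how a complete round trip could look when launched from the shell. This is a minimal sketch only: it assumes the extension compiles into a loadable library named sqlite_gawk, reachable through AWKLIBPATH, and that the functions are registered under the names listed above.

gawk -l sqlite_gawk -f - << 'EoS'
BEGIN {
   # open (or create) the database file; a negative value signals an error
   db = sqlite_open("test.db")
   if (db < 0)
      exit 1

   # DDL and DML statements go through sqlite_exec ...
   sqlite_exec(db, "CREATE TABLE IF NOT EXISTS test1(n1 INT, s1 TEXT, s2 TEXT)")
   sqlite_exec(db, "INSERT INTO test1(n1, s1, s2) VALUES(100, 'hello1', 'hello0101')")

   # ... while queries go through sqlite_select, here in its simplest variant
   sqlite_select(db, "SELECT * FROM test1")

   sqlite_close(db)
}
EoS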
sqlite_select()

As expected of a function that must produce human-readable output, this is the most feature-rich, and complex, function of the interface:

int sqlite_select(db_handle, select_stmt [, "" | , "col_separator_string" | , "list-of-columns-formats" | , dummy, gawk_array])

This compact syntax can be split into the following five variants:

int sqlite_select(db_handle, select_stmt)
int sqlite_select(db_handle, select_stmt, "")
int sqlite_select(db_handle, select_stmt, "col_separator_string")
int sqlite_select(db_handle, select_stmt, "list-of-columns-formats")
int sqlite_select(db_handle, select_stmt, dummy, gawk_array)

The function sqlite_select is overloaded and takes from 2 to 4 parameters; when the arities are identical (variants 2 to 4), the format of the 3rd parameter makes the difference. Let’s see what service they provide.

int sqlite_select(db_handle, select_stmt)

The first variant outputs its result as a table with fixed column widths; those widths range from 8 to 15 characters. Its purpose is to give a quick overview of a query's result without messing up the screen with long, wrapped-around lines. Here is an example with fake data:

sqlite_select(my_db, 'SELECT * FROM test1')
n1       s1       s2
-------- -------- --------
100      hello1   hello...
200      hello2   hello...
300      hello3   hello...
400      hello4   hello...
400      hello... hello...
400      hello... hello...
6 rows selected
3 columns displayed

We can see that too-wide values are truncated and an ellipsis string (…) is appended to them to show this fact.
This quick overview terminates with a table of optimum column widths so that, if used later, screen width permitting, the columns can be displayed entirely without truncation.

Optimum column widths
=====================
for query: SELECT * FROM test1
n1 3
s1 18
s2 21

This variant is simple to use and handy for having a quick peek at the data and their display needs.
The 4th sqlite_select() variant lets us provide column formats. Here is an example of how to do that with the previous optimum column widths:

sqlite_select(my_db, 'SELECT * FROM test1', "3 18 21")
n1  s1                 s2
--- ------------------ ---------------------
100 hello1             hello0101
200 hello2             hello0102
300 hello3             hello0103
400 hello4             hello0104
400 hello5 with spaces hello0105 with spaces
400 hello6 with spaces hello0106
6 rows selected

The columns are now displayed without truncation.

int sqlite_select(db_handle, select_stmt, "")

The second variant takes an empty string as the 3rd parameter, which means “use | as the column separator”. Here is an example of output:

sqlite_select(my_db, 'SELECT * FROM test1', "")
n1|s1|s2
100|hello1|hello0101
200|hello2|hello0102
300|hello3|hello0103
400|hello4|hello0104
400|hello5 with spaces |hello0105 with spaces
400|hello6 with spaces |hello0106
6 rows selected

If such a default character is not appropriate, a more suitable string can be provided, which is the purpose of the 3rd variant of sqlite_select(), e.g. '||' as shown below:

sqlite_select(my_db, 'SELECT * FROM test1', "||")
n1      ||s1      ||s2
--------||--------||--------
100     ||hello1  ||hello...
200     ||hello2  ||hello...
300     ||hello3  ||hello...
400     ||hello4  ||hello...
400     ||hello...||hello...
400     ||hello...||hello...
6 rows selected
3 columns displayed

Since this format is easy to parse, it is handy for exporting the data into a file and subsequently importing them into a spreadsheet program.
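For instance, such an export could be scripted as below. This is a sketch only, reusing the hypothetical test.db from the lifecycle example above; grep keeps only the separator-bearing lines (thus dropping the trailing “N rows selected” summary) and tr turns the output into a semicolon-separated file:

gawk -l sqlite_gawk -f - << 'EoS' | grep "|" | tr "|" ";" > test1.csv
BEGIN {
   db = sqlite_open("test.db")
   # empty 3rd parameter: pipe-separated output, easy to post-process
   sqlite_select(db, "SELECT * FROM test1", "")
   sqlite_close(db)
}
EoS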

int sqlite_select(db_handle, select_stmt, "list-of-columns-formats")

We’ve already seen the function’s 4th variant but there is more to the column formats.
The parameter “list-of-columns-formats” is a comma- or space-separated ordered list of numeric values, the column widths, one number for each column in the SELECT clause of that statement. If there are too many values, the superfluous ones are ignored. If there are fewer, the last one is extended to cover the missing columns.
Each width can also end with one of the characters t, e or w, where t stands for t(runcation), e for an e(llipsis) suffix upon truncation and w for w(rap-around).
The minimum width is 3 characters if an ellipsis is requested, to accommodate the suffix itself.
Here is an example of invocation and output:

sqlite_select(my_db, 'SELECT * FROM test1', "2e 15e 10w")
n1  s1              s2
--- --------------- ----------
100 hello1          hello0101
200 hello2          hello0102
300 hello3          hello0103
400 hello4          hello0104
400 hello5 with ... hello0105
                    with space
                    s
400 hello6 with ... hello0106
400 hello6-with-... hello0106-
                    12345
7 rows selected

Here, we asked for the first column to be displayed in a 2-character wide field with truncation allowed and an ellipsis suffix; since the minimum width with an ellipsis is 3 characters, a 3-character wide field was used. No truncation occurred in this column.
The second column should be displayed in a 15-character wide column also with ellipsis as suffix if truncation, which is visible here.
The third column is displayed in a 10-character wide column with wrapping-around.
This variant attempts to emulate some of the flexibility of Oracle sqlplus “col XX format YY” command.

int sqlite_select(db_handle, select_stmt, dummy, gawk_array)

Finally, the last variant does not output anything on the screen but fills a gawk array with the query's result.
A dummy parameter has been introduced to allow resolving the overloaded function and invoking the expected code. It can be set to any value since this formal parameter is ignored or, as they say, reserved for future use.
Here is an example of invocation:

sqlite_select(my_db, 'SELECT * FROM test1', 0, a_test)
6 rows selected

If we iterate and print the gawk array:

   for (row in a_test) {
      printf("row %d: ", row)
      for (col in a_test[row])
         printf("  %s = %s", col, a_test[row][col])
      printf "\n" 
   }
   printf "\n"

, we can guess its structure:

row 0: n1 = 100 s1 = hello1 s2 = hello0101
row 1: n1 = 200 s1 = hello2 s2 = hello0102
row 2: n1 = 300 s1 = hello3 s2 = hello0103
row 3: n1 = 400 s1 = hello4 s2 = hello0104
row 4: n1 = 400 s1 = hello5 with spaces s2 = hello0105 with spaces
row 5: n1 = 400 s1 = hello6 with spaces s2 = hello0106

As we can see, the array's structure is the following (say the query returns a table of count rows and n columns):

array[0] = sub-array_0
array[1] = sub-array_1
...
array[count-1] = sub-array_count-1

where array is indexed by an integer and the sub-arrays are associative arrays with the following composition:

sub-array_0[col_0] = value_0,0
sub-array_0[col_1] = value_0,1
...
sub-array_0[col_n-1] = value_0,n-1
sub-array_1[col_0] = value_1,0
...
sub-array_1[col_n-1] = value_1,n-1
...
sub-array_count-1[col_0] = value_count-1,0
...
sub-array_count-1[col_n-1] = value_count-1,n-1

Said otherwise, the returned array is an integer-indexed array of associative arrays; its first dimension contains the rows and its second dimension the columns, i.e. it's a table of database rows and column values.
This feature is very interesting but comes with a limitation since all the data are held in memory. Maybe a future release of gawk will be able to transparently paginate gawk arrays to/from disk, providing a kind of virtual memory for them. Its implementation could even use SQLite. The problem boils down to mapping multi-dimensional associative arrays onto relational tables. Some performance degradation is to be expected, unless an efficient pagination mechanism is introduced. Documentum implemented such a mapping partially for repeating attributes, i.e. flat arrays of values, with joins between _s and _r tables, but there is much more to it in gawk's arrays. As one can imagine, this would be a fascinating subject for another blog on its own.
An implicit and free-of-charge benefit of this implementation would be persistence. Another would be serialization.
For the time being, if a query returns too many rows to be held in memory at once, it may be better to first print them into a file as delimited values (use variants 2 or 3 of sqlite_select for that, as described above) and later work on them sequentially from there, with as many passes as needed. Or use the SQLite shell and work the data on the fly, e.g.:

cat - << EoQ | sqlite3 | gawk -v FS="|" '{
# do something, e.g.:
print "row", NR, $0
}'
.open my_db
.separator |
select * from test1;
.exit
EoQ
row 1 100|hello1|hello01001
row 2 200|hello2|hello01002
row 3 300|hello3|hello01003

How could Documentum benefit from SQLite?

Whereas it would not be realistic to use SQLite to store documents' metadata as a replacement for a full two-tier RDBMS, it could still find a purpose within Documentum.
Documentum, like lots of other software, stores its configuration, such as the server.ini and the old dmcl.ini, in ini files, e.g.:

[SERVER_STARTUP]
docbase_id = 1000000
docbase_name = mydocbase
database_name = mydb
database_conn = dbconn
database_owner = dmadmin
database_password_file = /home/dmadmin/dba/mydocbase/dbpasswd.txt

[DOCBROKER_PROJECTION_TARGET]
host = myhost

[DOCBROKER_PROJECTION_TARGET_n]
#n can be 0-49
key=value

[FUNCTION_SPECIFIC_STORAGE]
#Oracle & DB2 only
key=value

[TYPE_SPECIFIC_STORAGE]
key=value
#Oracle & DB2 only

[FUNCTION_EXTENT_SIZE]
key=value
#Oracle only

[TYPE_EXTENT_SIZE]
key=value

There are also key-value files such as the dfc.properties.
By storing these data inside a SQLite table, in its own database for example (databases are so cheap in SQLite, just one file), common typos could be avoided. A classic error with ini files in particular consists in misplacing a setting in the wrong section; such errors are not easy to spot because Documentum silently ignores them.
Consider this snippet from dfcfull.properties for instance:

# ACS configuration
# =================
# Preferences prefixed dfc.acs are used by dfc for distributed content services 
# ACS.                                                                          


# Defines how often dfc verifies acs projection, specified in seconds.          
# min value:  0, max value: 10000000
# 
dfc.acs.avail.refresh_interval = 360


# Indicates whether to verify if ACS server projects.                           
# 
dfc.acs.check_availability = true


# Defines how often dfc verifies that docbase related config objects are 
# modified, specified in seconds.                                               
# min value:  0, max value: 10000000
# 
dfc.acs.config.refresh_interval = 120


# Defines how often dfc verifies that global registry related config objects are 
# modified, specified in seconds.                                               
# min value:  0, max value: 10000000
# 
dfc.acs.gr.refresh_interval = 120
...

Wouldn't it be nice to move these settings into a SQLite table with the structure dfc_properties(key PRIMARY KEY, value, comment, is_enabled)? Both could still coexist and be merged at server startup time, with priority given to the file (i.e. if a given setting is present in both the table and the file, the latter would prevail). We could have a classic dfc.properties file along with the database file dfc.properties.db.
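As an illustration of that merge rule, the following sketch pushes the file's settings over the table's. The dfc_properties table is the hypothetical one described above and the parsing is deliberately naive:

# generate one UPDATE per "key = value" line of the classic file and feed
# them to the database file; embedded quotes are SQL-escaped on the way
gawk -F ' *= *' -f - dfc.properties << 'EoS' | sqlite3 dfc.properties.db
/^[^#[:space:]]/ {
   key = $1; value = $2
   gsub(/'/, "''", value)
   printf("UPDATE dfc_properties SET value = '%s' WHERE key = '%s';\n", value, key)
}
EoS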
Common operations on the parameters would be done through SQL statements, e.g.:

SELECT key, value, comment FROM dfc_properties WHERE is_enabled = TRUE;
SELECT 'other_values', key, value FROM dfc_properties WHERE is_enabled = FALSE AND key = ... ORDER BY key, value;
SELECT 'all_values', is_enabled, key, value FROM dfc_properties ORDER BY is_enabled DESC, key, value;
UPDATE server_ini SET value = 'mydb.world' WHERE key = 'database_conn';

All the supported parameters would already be present in the table with correct names and default values. Alternative or deactivated values would have their is_enabled value set to FALSE.
But the main interest of using SQLite's tables here would be the CHECK constraints (or FOREIGN keys) to prevent typos in the parameter names. And for a server.ini.db with structure server_ini(section, key, value, comment, is_enabled), section + key would be a composite foreign key with a domain CHECK constraint in the parent table, to prevent misplaced parameters, as sketched below. This would require a real database schema with some complexity, and would be delivered by Documentum. The point here is that SQLite would be an excellent tool for this kind of data.
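To make this concrete, here is a minimal schema sketch fed to the sqlite3 shell; the table names, the sample (section, key) pairs and the 0/1 encoding of the boolean are all illustrative choices, not Documentum's:

sqlite3 server.ini.db << 'EoS'
PRAGMA foreign_keys = ON;
-- the parent table enumerates every legal (section, key) pair and would be
-- delivered pre-filled by Documentum
CREATE TABLE valid_params(
   section TEXT NOT NULL,
   key     TEXT NOT NULL,
   PRIMARY KEY(section, key)
);
INSERT INTO valid_params VALUES
   ('SERVER_STARTUP', 'docbase_id'),
   ('SERVER_STARTUP', 'docbase_name'),
   ('DOCBROKER_PROJECTION_TARGET', 'host');
CREATE TABLE server_ini(
   section    TEXT NOT NULL,
   key        TEXT NOT NULL,
   value      TEXT,
   comment    TEXT,
   is_enabled INTEGER NOT NULL DEFAULT 1 CHECK(is_enabled IN (0, 1)),
   FOREIGN KEY(section, key) REFERENCES valid_params(section, key)
);
-- fine:
INSERT INTO server_ini(section, key, value) VALUES('SERVER_STARTUP', 'docbase_id', '1000000');
-- rejected with "FOREIGN KEY constraint failed": host is not a SERVER_STARTUP parameter
INSERT INTO server_ini(section, key, value) VALUES('SERVER_STARTUP', 'host', 'myhost');
EoS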
But let's leave it at that. Such a design and implementation could also deserve its own blog.
I hope this interface caught your interest. In the next part, I'll present its gory details, comment on the implementation and its limits and show how to compile and use it within gawk. See you there!

 


Adding a timeout in monitoring probes

Fri, 2018-09-28 15:40

A few months ago, as I was writing the documentation for a monitoring probe, I suddenly realized that that probe, along with others I wrote during that time to monitor Documentum installations, all had a big, unexpected flaw. Indeed, it struck me that if it hung for some reason while running, it could stay there well after the next monitoring cycle had begun, which could be affected by the same problem too, and so on, until lots of such processes could be hanging and possibly hogging valuable repository sessions, causing their complete exhaustion, with catastrophic consequences for the client applications. How ironic would it be if health checks actually endangered an application?

A true story

I realized all this because it had already happened once, years ago, at a different client's. It was just after we migrated from Documentum content server v5.3 to v6.x. A big shift was introduced in that new version: the command-line tools iapi, idql, dmbasic and dmawk went java. More precisely, they switched from the native libdmcl40.so C library to the library libdmcl.so which calls the DFCs behind the scenes, this sorcery made possible thanks to JNI. The front is still native code but all the Documentum stuff is henceforth delegated to the java DFCs.
What was the impact on those tools? It was huge: all those tools that used to start in less than a second now took around 10 seconds or more to start because of all the bloatware initialization. We vaguely noticed it during the tests and supposed it was caused by a big load in that less powerful environment, so we went confidently to production one weekend.
The next Monday morning, panicked calls flooded the Help Desk; users were complaining that parts of their applications did not work any more. A closer look in the application's log showed that it had become impossible to open new sessions to some repositories. The process list on the server machine showed tens of documentum and idql processes running at once. Those idql processes were stuck instances of a monitoring probe that ran once per minute. Its job was just to connect to the target docbase, run a quick query and exit with a status. For some reason, it was probably waiting for a session, or idql was taking a lot more than the expected few seconds to do its job; therefore, the next monitoring cycle started before the previous one had completed and it too hung there, and so on until affected users became vocal. The real root cause was programmatic, since one developer had thought it a good idea to periodically and too frequently connect to docbases from within Ajax code in the clients' home page, without informing the docbases' administrators of this new resource-hungry feature. This resulted in a saturation of the allowed sessions, stuck idql processes, weblogic threads waiting for a connection and, ultimately, application downtime.
Needless to say, the flashy Ajax feature was quickly removed, the number of allowed concurrent sessions was boosted up and we decided to keep around a copy of those fast, full binary v5.3 tools for low-level tasks such as our monitoring needs.
So let’s see how to protect the probes from themselves and from changing environments or well-meaning but ingenuous developers.

The requirements

1. If the monitoring cycles are tight, the probes shall obviously do very simple things; complex things can take time, and be fragile and buggy. Simple things complete quickly and are less subject to hazards.
2. As seen, unless the probe is started only once and runs constantly in the background, the probe's interpreter shall start very quickly, which excludes java code and its JVM; this also avoids issues such as the random number generator entropy shortage that has plagued java programs for some time and, I'm sarcastic but confident, the next ones still lurking around the corner. The interpreter that executes the probe shall be that of some well known scripting language such as the bash or ksh shells, python or perl, with the needed binding to access the resource to be monitored, e.g. a Documentum repository, or some native binary tool that is part of the product to monitor, such as idql or sqlplus, launched by the shell, or even a custom compiled program.
3. While a probe is running, no other instance of it shall be allowed to start; i.e. the next instance shall not start until after the current one completes.
4. A probe shall only be allowed to execute during an allotted time; once this delay has elapsed, the probe shall be terminated manu militari, with a distinct return status.
5. The probe’s cycles too shall be monitored, e.g. missing cycles should be reported.

Point 1 is easy to implement; e.g. to check the availability of a repository or a docbase, just try a connection to it and exit. If a more exhaustive test is required, a quick and simple query could be sent to the server. It all depends on how exhaustive we want to be. A SELECT query won't be able to detect, say, unusable database indexes or indexes being rebuilt off-line, if it still completes within the allowed delay. Some neutral UPDATE could be attempted to detect those kinds of issues or, more straightforwardly yet, just query the state of the indexes, as sketched below. But whatever is monitored, let's keep it quick and direct. The 3rd and 4th requirements can help detect anomalies such as the preceding index problem (in an Oracle database, unusable indexes cause UPDATEs to hang, so timeout detection and forced termination are mandatory in such cases).
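For the Oracle case, such a direct check could look like the sketch below; dmadmin and the password variable are placeholders, and user_indexes is queried instead of running a neutral UPDATE:

# count the indexes that are not usable; a non-zero count (or a timeout)
# is worth an alert
timeout 15s sqlplus -s dmadmin/"${DB_PASSWD}" << 'EoQ'
set heading off feedback off
SELECT count(*) FROM user_indexes WHERE status = 'UNUSABLE';
EoQ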

Point 2 is quite obvious: if the monitoring is so aggressive that it runs in one-minute cycles, the script that it executes shall complete in less than one minute; i.e. start time + execution time shall be less than the monitoring period, let's say less than half that time to be safe. If the monitoring tools and script cannot keep up with the stringent timing, a different, more efficient approach shall be considered, unless the timing requirement is relaxed somewhat. For example, a reverse approach could be considered where, instead of pulling the status from a target, it is the target that publishes its status, like a heartbeat; that would permit very tight monitoring cycles.

Point 3 requires a barrier to prevent the next cycle from starting. This does not need to be a fancy test-and-set semaphore because concurrency is practically nonexistent. A simple test of existence of a conventional file is enough: if the file exists, it means a cycle is in progress and the next cycle is not allowed in; if the file does not exist, create it and continue. There may be a race condition but it is unlikely to occur given that the monitoring cycles are quite widely spread apart, one minute at the minimum if defined in the crontab; see the flock-based alternative sketched below.
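For the cautious, here is a hedged sketch of an atomic alternative based on flock(1) from util-linux; the lock file name is arbitrary:

# the lock is tied to file descriptor 9 and released automatically when the
# script exits, even after a crash, so no explicit cleanup is needed
exec 9> /tmp/monitor_doctest.lock
if ! flock --nonblock 9; then
   echo "WARNING: previous monitoring cycle still running"
   exit 100
fi
# ... body of the probe goes here ...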

Point 4 means that a timer shall be set up upon starting the probe. This is easy to do from a shell, e.g. thanks to the “timeout” command demonstrated below. Some tools may have their own command-line option to run in batch mode within a timeout duration, which is even better. Nonetheless, an external timer offers a double protection and is still desirable.
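Its contract is easy to verify from any shell:

# timeout passes through the command's own status when it finishes in time
timeout 2s sleep 1;  echo "rc=$?"    # prints rc=0
# and returns the conventional 124 when it had to kill the command
timeout 2s sleep 10; echo "rc=$?"    # prints rc=124, after 2 seconds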

Point 5: obviously, this part is only possible from outside the probe. On some systems (e.g. nagios), the probe's log file itself is monitored and, if not updated within some time interval, an alert is raised. This kind of passive or indirect heartbeat permits detecting disabled or stuck probes, but doesn't remove them. Resilience shall be auto-applied whenever possible in order to minimize human intervention. This check is useful to detect cases where the probe or the scheduler itself has been suspended abruptly or is no longer available on the file system (it can even happen that the file system itself has been unmounted by mistake or due to some technical problem or unscheduled intervention).

An example

Let's say that we want to monitor the availability of a docbase “doctest”. We propose to attempt a connection with idql as “dmadmin” from the server machine, so trusted-mode authentication is used and no password is needed. A response from the docbase shall arrive within 15s. The probe shall run with a periodicity of 5 minutes, i.e. 12 times per hour. Here is a no-frills attempt:

#!/bin/bash

BARRIER=/tmp/sentinel_file
DOCBASE=doctest
LOG_FILE=/var/dctm/monitor_docbase_${DOCBASE}
TIMEOUT=15s
export DOCUMENTUM=/u01/app/documentum53/product/5.3

if [ -f $BARRIER ]; then
   echo "WARNING: previous $DOCBASE monitoring cycle still running" > $LOG_FILE
   exit 100
fi
touch $BARRIER
if [ $? -ne 0 ]; then
   echo "FATAL: monitoring of $DOCBASE failed while touch-ing barrier $BARRIER" > $LOG_FILE
   exit 101
fi

timeout $TIMEOUT $DOCUMENTUM/bin/idql $DOCBASE -Udmadmin -Pxx > /dev/null 2>&1 <<EoQ
   select * from dm_server_config;
   go
   quit
EoQ
rc=$?
if [ $rc -eq 124 ]; then
   echo "FATAL: monitoring of $DOCBASE failed in timeout of $TIMEOUT" > $LOG_FILE
elif [ $rc -eq 1 ]; then
   echo "FATAL: connection to $DOCBASE was unsuccessful"              > $LOG_FILE
else
   echo "OK: connection to $DOCBASE was successful"                   > $LOG_FILE
fi

rm $BARRIER
exit $rc

Line 3: the barrier is an empty file whose existence or inexistence simulates the state of the barrier; if the file exists, then the barrier is down and the access is forbidden; if the file does not exist, then the barrier is up and the access is allowed;
Line 7: we use the full native, DMCL-based idql utility for a quick start up;
Line 9: the barrier is tested by checking the file’s existence as written above; if the file already exists, it means that an older monitoring cycle is still running, so the new cycle aborts and returns an error message and an exit code;
Line 13: the barrier has been lowered to prevent the next cycle from executing the probe;
Line 19: the idql command is launched and monitored by the command timeout with a duration of $TIMEOUT;
Line 24: the timeout command’s return status is tested; if it is 124 (line 25), it means a timeout has occurred; the probe aborts with an appropriate error message; otherwise, it’s the command’s error code: if it is 1, idql could not connect; if it is 0, the connection was OK;
Lines 27 and 29: the connection attempt returned within the $TIMEOUT time interval, meaning that idql has a connection status;
Line 33: the barrier is removed so the next monitoring cycle has the green light;
Line 34: the exit code is returned; it should be 124 for timeout, 1 for no connection to the docbase, 0 if connection OK;

The timeout command belongs to the coreutils package so install that package through your linux distribution’s package manager if the command is missing.

If cron is used as a scheduler, the crontab entry could look like below (assuming the probe’s name is test-connection.sh):

0,5,10,15,20,25,30,35,40,45,50,55 * * * * /u01/app/documentum/monitoring/test-connection.sh > /dev/null 2>&1

cron is sufficient most of the time, even though its time granularity is 1 minute.
The probe could be enhanced very easily in such a way that, once deployed, it optionally installs itself in dmadmin’s crontab, e.g.:

/u01/app/documentum/monitoring/test-connection.sh --install "0,5,10,15,20,25,30,35,40,45,50,55 * * * *"

for the maximum simplicity. But this is a different topic.
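Although that is indeed a different topic, a hypothetical --install handler placed at the top of the probe could be as small as this (the grep -v avoids duplicate entries on re-installation):

if [ "$1" = "--install" ]; then
   # default to one run every 5 minutes if no schedule is given
   schedule="${2:-0,5,10,15,20,25,30,35,40,45,50,55 * * * *}"
   ( crontab -l 2>/dev/null | grep -v -F "$0"; \
     echo "$schedule $0 > /dev/null 2>&1" ) | crontab -
   exit $?
fi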

Some comments

On some large infrastructures, centralized scheduler and orchestration software may be in use (CTRL-M, Activeeon, Rundeck, Dkron, etc.; just check the web, there are plenty to shop for) which have their own management of rogue jobs and commands. Still, dedicated probes such as the preceding one have to be called, but the timeout logic could be removed and externalized into the launcher. Better yet, only externalize the hard-coded timeout parameter so it is passed to the probe as a command-line parameter and the probe can still work independently of the launcher in use.
Other systems use a centralized monitoring system (e.g. nagios, Icinga, BMC TrueSight, Cacti, etc.; again, search the web) but whatever the software, be prepared to write probes manually because, unless the system to be monitored is as ubiquitous as apache, tomcat or mysql, it is unlikely to be supported out of the box, particularly specialized products such as Documentum.
Some of the above software needs an agent process deployed and permanently running on each monitored or enrolled machine. There are pros and cons to this architecture but we won't go there as the subject is out of scope.

Conclusion

Monitoring a target is interacting with it. Physics tells us that it is impossible to be totally invisible while observing, but at least we can minimize the probes' footprint. While it is impossible to anticipate every abnormality on a system, these few very simple guidelines can go a long way in making monitoring probes more robust, resilient and as unobtrusive as possible.

 


Documentum – Checking warnings & errors from an xPlore full re-index

Fri, 2018-09-28 00:00

When working with xPlore as a Full Text Server (indexing), there are a few ways to perform a full re-index. You can potentially do it from the IndexAgent UI, from the Dsearch UI, from the file system (with an ids.txt file for example; that is usually for a “small” number of r_object_id, so it's probably not an ideal way) or from the docbase (mass-queue; not really a good way to do it either). Performing a full re-index from the xPlore Server directly is faster because you remove a few layers where the Content Server asks for an index (the index queues) and expects an answer/result. That's why in this blog I will only talk about full re-indexes performed from the xPlore Server directly, and below I will use a full re-index launched from the IndexAgent UI. In each of these cases, there might be a few warnings or errors along the re-index, some of which might be normal (password-protected file), some others might not (timeout because xPlore is heavily loaded).

The whole purpose of this blog is to show you how you can check these warnings/errors because there is no information about them directly displayed on the UI, you need to go find that information manually. These warnings/errors aren’t shown in the index queues since they weren’t triggered from the docbase but from the xPlore Server directly.

So first of all, you need to trigger a re-index using the IndexAgent:

  • Open the IndexAgent UI (https://<hostname>:<ia_port>/IndexAgent)
  • Login with the installation owner’s account
  • Stop the IndexAgent if it is currently running in Normal mode and then launch a re-index operation

It should look like that (for xPlore 1.6):
[Screenshot IA1: IndexAgent UI showing the progress of the re-index]

On the above screenshot, the green represents the success count and the blue the filtered count. Once the re-index is completed and as shown above, you might have a few warnings/errors but you don't have any information about them, as I mentioned previously. To narrow down and facilitate the check of the warnings/errors, you need to know (approximately) the start and end times of the re-index operation: 2018-06-12 11:55 UTC to 2018-06-12 12:05 UTC for the above example. From that point, the analysis of the warnings/errors can be done in two main ways:

 

1. Using the Dsearch Admin

I will start with the way that most of you probably already know: using the Dsearch reports to see the errors/warnings. That's not the fastest way, clearly not the funniest way either, but it is an easy way for sure…

Accessing the reports from the Dsearch Admin:

  • Open the Dsearch Admin UI (https://<hostname>:<ds_port>/dsearchadmin)
  • Login with the admin account (or any other valid account with xPlore 1.6+)
  • Navigate to: Home > Diagnostic and Utilities > Reports
  • Select the “Document Processing Error Summary” report and set the following:
    • Start from: 2018-06-12 11:55
    • To: 2018-06-12 12:05
    • Domain name (optional): leave empty if you only have one IndexAgent, otherwise you can specify the domain name (usually the same name as the docbase)
  • Click on Run to get the report

At this point, you will have a report with the number of warnings/errors per type, meaning that you do not have any information about the documents yet; you only know the number of errors for each of the pre-defined error types (=error codes). For the above example, I had 8 warnings once the re-index was completed and I could see them all (seven warnings for ‘777’ and one warning for ‘770’):
[Screenshot IA2: Document Processing Error Summary report]

Based on the information from this “Document Processing Error Summary” report, you can go deeper and find the details about the documents, but you can only do it for one type, one Error Code, at a time. Therefore, you will have to loop over all the Error Codes returned:

  • For each Error Code:
    • Select the “Document Processing Error Detail” report and set the following:
      • Start from: 2018-06-12 11:55
      • To: 2018-06-12 12:05
      • Domain name (optional): leave empty if you only have 1 IndexAgent, otherwise you can specify the domain name (usually the same name as the docbase)
      • Processing Error Code: Select the Error Code you want to see (either 777 or 770 in my case)
      • Number of Results to Display: Set here the number of items you want to display, 10, 20, …
    • Click on Run to get the report

And there you finally have the details about the documents that weren't indexed properly because of the Error Code you chose. In my case, I selected 770 so I have only 1 document:
[Screenshot IA3: Document Processing Error Detail report]

You can export this list to Excel if you want, to do some processing on these items for example, but you will need to do it for all Error Codes and then merge the results.

 

2. Using the logs

In the above example, I used the IndexAgent to perform the re-index so I will use the IndexAgent logs to find what happened exactly. This section is really the main purpose of this blog because I assume that most people are using the Dsearch Admin reports already but probably not the logs! If you want to script the check of warnings/errors after a re-index, or just if you want to play and have fun while doing your job, then this is what you need ;).

So let’s start simple: listing all errors and warnings and keeping only the lines that contain an r_object_id.

[xplore@full_text_server_01 ~]$ cd $JBOSS_HOME/server/DctmServer_Indexagent_DocBase1/logs/
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | egrep --color "[ (<][0-9a-z]{16}[>) ]"

Indexagent_DocBase1.log:2018-06-12 11:55:26,456 WARN PrepWorkItem [full_text_server_01_9200_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:27,518 INFO ReindexBatch [Worker:Finalization Action:#6][DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

As you can see above, there is also one queue item (1b0f1234501327f0) listed because I kept everything that is 16 characters long with 0-9 or a-z. If you want, you can instead select only the r_object_id starting with 09 to have all dm_documents (using this: “[ (<]09[0-9a-z]{14}[>) ]”) or you can just remove the r_object_id starting with 1b, which are the queue items.

In the above example, all the results are in the timeframe I expected them to be, but it is possible that there are older or newer warnings/errors, so you might want to apply another filter on the date. Since I want everything from 11:55 to 12:05 on 12-Jun-2018, this is how I can do it (removing the log file name too) using a time regex:

[xplore@full_text_server_01 logs]$ time_regex="2018-06-12 11:5[5-9]|2018-06-12 12:0[0-5]"
[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep --color "[ (<][0-9a-z]{16}[>) ]"

2018-06-12 11:55:26,456 WARN PrepWorkItem [full_text_server_01_9200_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGNT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:27,518 INFO ReindexBatch [Worker:Finalization Action:#6][DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

Listing only the messages for each of these warnings/errors:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,^[^]]*],,' \
                                   | sort -u

[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

Listing only the r_object_id (to resubmit them via the ids.txt for example):

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
                                   | sort -u \
                                   | grep -v "^1b"

090f12345007f40e
090f1234500aa9f6
090f1234500aaa97
090f1234500aaa98
090f1234500aaa99
090f1234500aaa9a
090f1234500aaa9b
090f1234500aaa9d
[xplore@full_text_server_01 logs]$

 

If you want to generate the iapi commands to resubmit them all:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
                                   | sort -u \
                                   | grep -v "^1b"
                                   | sed 's/.*/queue,c,&,dm_fulltext_index_user/'

queue,c,090f12345007f40e,dm_fulltext_index_user
queue,c,090f1234500aa9f6,dm_fulltext_index_user
queue,c,090f1234500aaa97,dm_fulltext_index_user
queue,c,090f1234500aaa98,dm_fulltext_index_user
queue,c,090f1234500aaa99,dm_fulltext_index_user
queue,c,090f1234500aaa9a,dm_fulltext_index_user
queue,c,090f1234500aaa9b,dm_fulltext_index_user
queue,c,090f1234500aaa9d,dm_fulltext_index_user
[xplore@full_text_server_01 logs]$
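To actually replay them, the pipeline above can be redirected into a script file and handed over to iapi. A minimal sketch, assuming the same trusted dmadmin login used throughout, a docbase named DocBase1 (both placeholders) and that iapi reads its commands from a file via its -R option:

# append "> /tmp/resubmit.api" to the pipeline above, then:
echo "exit" >> /tmp/resubmit.api
iapi DocBase1 -Udmadmin -Pxx -R/tmp/resubmit.api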

 

Finally, to group the warnings/errors per type:

[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `egrep -i "err|warn" Indexagent_*.log* \
                                     | sed 's,^[^:]*:,,' \
                                     | egrep "${time_regex}" \
                                     | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                     | sed 's,^[^]]*],,' \
                                     | sort -u \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Listing warnings/errors with the following messages: ${type}";
                                     egrep -i "err|warn" Indexagent_*.log* \
                                       | sed 's,^[^:]*:,,' \
                                       | egrep "${time_regex}" \
                                       | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                       | sed 's,^[^]]*],,' \
                                       | sort -u \
                                       | grep -F "${type}";
                                     echo;
                                   done

  --  Listing warnings/errors with the following messages: [Corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].

  --  Listing warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.

  --  Listing warnings/errors with the following messages: [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].

[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ # Or to shorten a little bit the loop command:
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ command='egrep -i "err|warn" Indexagent_*.log* | sed 's,^[^:]*:,,'
                                   | egrep "${time_regex}"
                                   | egrep "[ (<][0-9a-z]{16}[>) ]"
                                   | sed 's,^[^]]*],,'
                                   | sort -u'
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `eval ${command} \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Listing warnings/errors with the following messages: ${type}";
                                     eval ${command} \
                                       | grep -F "${type}";
                                     echo;
                                   done

  --  Listing warnings/errors with the following messages: [Corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].

  --  Listing warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.

  --  Listing warnings/errors with the following messages: [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].

[xplore@full_text_server_01 logs]$

 

So the above was a very simple example where a full re-index took only a few minutes because it is a very small repository. But what about a full re-index that takes days because there are several millions of documents? Well, the truth is that checking the logs might actually surprise you because it is usually more accurate than checking the Dsearch Admin. Yes, I said more accurate!

 

3. Accuracy of the Dsearch Admin vs the Logs

Let’s take another example with a repository containing a few TB of documents. A full re-index took 2.5 days to complete and in the commands below, I will check the status of the indexing for the 1st day: from 2018-09-19 07:00:00 UTC to 2018-09-20 06:59:59 UTC. Here is what the Dsearch Admin is giving you:

[Screenshot: Dsearch Admin indexing status for the selected period]

So based on this, you would expect 1 230 + 63 + 51 = 1 344 warnings/errors. So what about the logs then? I included below the DM_INDEX_AGENT_REINDEX_BATCH entries, which carry the “1b” object_ids (item_id) I was talking about earlier, but these aren’t document indexing errors, they are just batches:

[xplore@full_text_server_01 logs]$ time_regex="2018-09-19 0[7-9]|2018-09-19 [1-2][0-9]|2018-09-20 0[0-6]"
[xplore@full_text_server_01 logs]$ command='egrep -i "err|warn" Indexagent_*.log* | sed 's,^[^:]*:,,'
                                   | egrep "${time_regex}"
                                   | egrep "[ (<][0-9a-z]{16}[>) ]"
                                   | sed 's,^[^]]*],,'
                                   | sort -u'
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `eval ${command} \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Number of warnings/errors with the following messages: ${type}";
                                     eval ${command} \
                                       | grep -F "${type}" \
                                       | wc -l;
                                     echo;
                                   done

  --  Number of warnings/errors with the following messages: [Corrupt file]
51

  --  Number of warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
293

  --  Number of warnings/errors with the following messages: [DM_STORAGE_E_BAD_TICKET]
7

  --  Number of warnings/errors with the following messages: [Password-protected or encrypted file]
63

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction]
5

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 18, native msg: unknown error)]
1

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 257, native msg: handle is invalid)]
1053

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 30, native msg: out of memory)]
14

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 65534, native msg: unknown error)]
157

[xplore@full_text_server_01 logs]$

 

As you can see above, there is more granularity regarding the types of errors from the logs. Here are some key points in the comparison between the logs and the Dsearch Admin:

  1. In the Dsearch Admin, all messages that start with “Unknown error during text extraction” are considered as a single error type (N° 1023). Therefore, from the logs, you can add them all up: 5 + 1 + 1 053 + 14 + 157 = 1 230 to find the same number that was mentioned in the Dsearch Admin. You cannot separate them in the Dsearch Admin’s Error Summary report; only in the Error Details report will you see the full message and be able to separate them, kind of…
  2. You find exactly the same counts of “Password-protected or encrypted file” (63) and “Corrupt file” (51) in the logs and in the Dsearch Admin, so no differences here
  3. You can see 7 “DM_STORAGE_E_BAD_TICKET” warnings/errors in the logs but none in the Dsearch Admin… Why is that? That’s because the Dsearch Admin does not have any Error Code for them, so these errors aren’t shown!

So like I was saying at the beginning of this blog, using the Dsearch Admin is very easy but that’s not much fun and you might actually miss some information, while checking the logs is more fun and you are sure that you won’t miss anything (these 7 DM_STORAGE_E_BAD_TICKET errors, for example)!

 

You could just as easily do the same thing in perl or awk; that’s just a question of preference. Anyway, you got the idea: working with the logs allows you to do pretty much whatever you want, but it obviously requires some linux/scripting knowledge, while working with the Dsearch Admin is simple and easy, but you will have to work with what OTX gives you and with the restrictions that it has.
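As an illustration, here is roughly what the counting loop could look like as a single awk pass. This is a sketch only: it reuses the ${command} variable defined earlier and mimics the greedy bracket extraction of the sed one-liner above.

[xplore@full_text_server_01 logs]$ eval ${command} \
                                   | awk '{
                                       line = $0; type = ""
                                       # keep the last [...] group of the line, like the sed above
                                       while (match(line, /\[[^]]*\]/)) {
                                         type = substr(line, RSTART, RLENGTH)
                                         line = substr(line, RSTART + RLENGTH)
                                       }
                                       if (type != "") count[type]++
                                     }
                                     END { for (t in count) print count[t], t }' \
                                   | sort -k1,1nr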

 

 

The post Documentum – Checking warnings&errors from an xPlore full re-index appeared first on Blog dbi services.

Oracle 18c : Active Data Guard and AWR Reports

Tue, 2018-09-25 09:43

Since Oracle Database 12c Release 2 (12.2), Automatic Workload Repository (AWR) data can be captured for Active Data Guard (ADG) standby databases. This feature enables analyzing any performance-related issues for ADG standby databases.
AWR snapshots for ADG standby databases are called remote snapshots. A database node, called the destination, is responsible for storing snapshots that are collected from remote ADG standby database nodes, called sources.
Capturing AWR data for Active Data Guard requires certain steps that I describe here. I am using Oracle 18c with the following configuration:

DGMGRL> show configuration

Configuration - NONC18C_DR

  Protection Mode: MaxPerformance
  Members:
  NONC18C_SITE1 - Primary database
    NONC18C_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 30 seconds ago)

DGMGRL>

The primary is opened in Read Write mode and the standby in Read Only Mode With Apply

SQL> select db_unique_name,open_mode, database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
NONC18C_SITE2                  READ ONLY WITH APPLY PHYSICAL STANDBY

SQL> select db_unique_name,open_mode, database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
NONC18C_SITE1                  READ WRITE           PRIMARY

SQL>

The feature uses the Remote Management Framework (RMF), which comes with a new Oracle built-in user called SYS$UMF. This user is locked by default and should be unlocked before configuring the RMF.

SQL> select username,common,account_status from dba_users where username like 'SYS%';

USERNAME                       COM ACCOUNT_STATUS
------------------------------ --- --------------------------------
SYS                            YES OPEN
SYSTEM                         YES OPEN
SYSBACKUP                      YES EXPIRED & LOCKED
SYSRAC                         YES EXPIRED & LOCKED
SYSKM                          YES EXPIRED & LOCKED
SYS$UMF                        YES EXPIRED & LOCKED
SYSDG                          YES EXPIRED & LOCKED

7 rows selected.

SQL> alter user sys$umf identified by root account unlock;

User altered.

For the configuration, we need in our case 2 database links. Indeed, each source must have two database links: a destination-to-source database link and a source-to-destination database link. So, connected to the primary, let’s create the 2 database links:
=> prima_to_stand: from the primary to the standby
=> stand_to_prima: from the standby to the primary

SQL> create database link prima_to_stand CONNECT TO sys$umf IDENTIFIED BY root using 'STBY_NONC';

Database link created.

SQL> create database link stand_to_prima CONNECT TO sys$umf IDENTIFIED BY root using 'PRIMA_NONC';

Database link created.

SQL>
SQL> select * from dual@prima_to_stand;

D
-
X

SQL> select * from dual@stand_to_prima;

D
-
X

SQL>

The RMF topology is a centralized architecture that consists of all the participating database nodes along with their metadata and connection information. So let’s configure the nodes. We will call them “site_prim” for the primary and “site_stby” for the standby.
Connected to the primary, we execute:

SQL> exec dbms_umf.configure_node ('site_prim');

PL/SQL procedure successfully completed.

SQL>

On the standby side we do the same, but here we also give the database link. Make sure that the database links were created in the right direction, otherwise you will get errors later.

SQL> exec dbms_umf.configure_node('site_stby','stand_to_prima');

PL/SQL procedure successfully completed.

SQL>

And now from the primary, we can then create the RMF topology.

SQL> exec DBMS_UMF.create_topology ('Topology_1');

PL/SQL procedure successfully completed.

SQL>

To verify the status of the configuration, we can use the following UMF views on the primary:

SQL> select * from dba_umf_topology;

TOPOLOGY_NAME    TARGET_ID TOPOLOGY_VERSION TOPOLOGY
--------------- ---------- ---------------- --------
Topology_1      1530523744                1 ACTIVE

SQL> select * from dba_umf_registration;

TOPOLOGY_NAME   NODE_NAME          NODE_ID  NODE_TYPE AS_SO AS_CA STATE
--------------- --------------- ---------- ---------- ----- ----- --------------------
Topology_1      site_prim       1530523744          0 FALSE FALSE OK

SQL>

Everything seems fine, so we can register the standby in the topology. On the primary, let’s execute the register_node procedure.

SQL> exec DBMS_UMF.register_node ('Topology_1', 'site_stby', 'prima_to_stand', 'stand_to_prima', 'FALSE', 'FALSE');

PL/SQL procedure successfully completed.

SQL>

If there are no errors, we can then enable the AWR service.

SQL> exec DBMS_WORKLOAD_REPOSITORY.register_remote_database(node_name=>'site_stby');

PL/SQL procedure successfully completed.

SQL>

Using UMF views, we can again verify our configuration.

SQL> select * from dba_umf_topology;

TOPOLOGY_NAME    TARGET_ID TOPOLOGY_VERSION TOPOLOGY
--------------- ---------- ---------------- --------
Topology_1      1530523744                4 ACTIVE

SQL> select * from dba_umf_registration;

TOPOLOGY_NAME   NODE_NAME          NODE_ID  NODE_TYPE AS_SO AS_CA STATE
--------------- --------------- ---------- ---------- ----- ----- --------------------
Topology_1      site_prim       1530523744          0 FALSE FALSE OK
Topology_1      site_stby       3265600723          0 FALSE FALSE OK

SQL> select * from dba_umf_service;

TOPOLOGY_NAME      NODE_ID SERVICE
--------------- ---------- -------
Topology_1      3265600723 AWR

SQL>

SQL> select * from dba_umf_link;

TOPOLOGY_NAME   FROM_NODE_ID TO_NODE_ID LINK_NAME
--------------- ------------ ---------- --------------------
Topology_1        1530523744 3265600723 PRIMA_TO_STAND
Topology_1        3265600723 1530523744 STAND_TO_PRIMA

SQL>

It’s now time to generate remote snapshots for the standby, while connected to the primary. At least two snapshots are required to be able to generate an AWR report.

SQL> set time on
16:01:22 SQL> exec dbms_workload_repository.create_remote_snapshot('site_stby');

PL/SQL procedure successfully completed.

16:21:41 SQL> exec dbms_workload_repository.create_remote_snapshot('site_stby');

PL/SQL procedure successfully completed.

16:21:50 SQL>
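Optionally, before running the report, we can check that the remote snapshots really landed in the primary’s workload repository. A quick query against DBA_HIST_SNAPSHOT, filtering on the remote node’s DBID (3265600723 in our case, as the awrrpti.sql listing below confirms), would be along these lines:

SQL> select snap_id, instance_number, begin_interval_time
     from dba_hist_snapshot
     where dbid = 3265600723
     order by snap_id;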

And we can generate the report as we usually do

SQL> @?/rdbms/admin/awrrpti.sql

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
AWR reports can be generated in the following formats.  Please enter the
name of the format at the prompt. Default value is 'html'.

   'html'          HTML format (default)
   'text'          Text format
   'active-html'   Includes Performance Hub active report

Enter value for report_type: text



Type Specified: text


Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  DB Id      Inst Num   DB Name      Instance     Host
------------ ---------- ---------    ----------   ------
  2315634502     1      NONC18C      NONC18C      standserver1
  3265600723     1      NONC18C      NONC18C      standserver1
* 2315634502     1      NONC18C      NONC18C      primaserver.

Enter value for dbid: 3265600723
Using 3265600723 for database Id
Enter value for inst_num: 1
Using 1 for instance number


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.


Enter value for num_days:

Listing all Completed Snapshots
Instance     DB Name      Snap Id       Snap Started    Snap Level
------------ ------------ ---------- ------------------ ----------

NONC18C      NONC18C              1  24 Sep 2018 16:01    1
                                  2  24 Sep 2018 16:21    1


Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1

Enter value for end_snap: 2
End   Snapshot Id specified: 2



Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_1_2.txt.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name:

And viewing the generated report, we can see that the database role is PHYSICAL STANDBY

WORKLOAD REPOSITORY report for

DB Name         DB Id    Unique Name DB Role          Edition Release    RAC CDB
------------ ----------- ----------- ---------------- ------- ---------- --- ---
NONC18C       3265600723 NONC18C_SIT PHYSICAL STANDBY   EE      18.0.0.0.0 NO  NO

Instance     Inst Num Startup Time
------------ -------- ---------------
NONC18C             1 24-Sep-18 15:49

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
standserver1.loc Linux x86 64-bit                    1                     2.96

Conclusion
In this blog we have shown how to use the RMF to generate AWR reports for an Active Data Guard instance. The framework relies on the DBMS_UMF package, which has many subprograms. The My Oracle Support note How to Generate AWRs in Active Data Guard Standby Databases (Doc ID 2409808.1) and the Oracle documentation will help.
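As a side note, should you ever need to dismantle this setup, both packages provide matching teardown subprograms. A sketch, to be run on the destination (check the package documentation for the exact signatures):

SQL> exec DBMS_WORKLOAD_REPOSITORY.unregister_remote_database(node_name => 'site_stby');
SQL> exec DBMS_UMF.unregister_node('Topology_1', 'site_stby');
SQL> exec DBMS_UMF.drop_topology('Topology_1');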

 

The post Oracle 18c : Active Data Guard and AWR Reports appeared first on Blog dbi services.

SQL Plan stability in 11G using stored outlines

Mon, 2018-09-24 05:01

A stored outline is a collection of hints associated with a specific SQL statement that allows a standard execution plan to be maintained, regardless of changes in the system environment or associated statistics. Plan stability is based on the preservation of execution plans at a point in time where the performance of a statement is considered acceptable. The outlines are stored in the OL$, OL$HINTS, and OL$NODES tables, but the [USER|ALL|DBA]_OUTLINES and [USER|ALL|DBA]_OUTLINE_HINTS views should be used to display information about existing outlines.

All of the caveats associated with optimizer hints apply equally to stored outlines. Under normal running, the optimizer chooses the most suitable execution plan for the current circumstances. By using a stored outline you may be forcing the optimizer to choose a substandard execution plan, so you should monitor the effects of your stored outlines over time to make sure this isn’t happening. Remember, what works well today may not tomorrow.

Many times we face the situation where the performance of a query regresses, or the optimizer is not able to choose the best execution plan.

In the next lines I will try to describe a scenario that needs the usage of a stored outline:

– we will identify the different plans that exist for our sql_id

SQL> select hash_value,child_number,sql_id,executions from v$sql where sql_id='574gkc8gn7u0h';

HASH_VALUE CHILD_NUMBER SQL_ID        EXECUTIONS 
---------- ------------ ------------- ---------- 
 524544016            0 574gkc8gn7u0h          4 
 576321033            1 574gkc8gn7u0h          5

 

Between the two different plans, we know that the best one is the one with cost 15 and plan hash value 4013416232, but it is not always chosen by the optimizer, which causes performance peaks:

SQL> select * from table(dbms_xplan.display_cursor('574gkc8gn7u0h',0));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkc8gn7u0h, child number 0
-------------------------------------
Select   m.msg_message_id,   m.VersionId,   m.Knoten_id,
m.Poly_User_Id,   m.State,   'U' as MutationsCode from
........................................................

Plan hash value: 4013416232

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          | XAK2_MSG_MESSAGE_ENTRY     |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY          |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          | XAK3_MSG_MESSAGE_ENTRY_DEL |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY_DEL      |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:LASTTREATEDVERSIONID<=:MAXVERSIONID)
   7 - filter("SERIAL#"=1999999999)
  10 - filter("SERIAL#"=1999999998)
----------------------------------------------

 

In order to fix this, we will create and enable an outline that should help the optimizer to always choose the best plan:

BEGIN
  DBMS_OUTLN.create_outline(hash_value => 524544016, child_number => 0);
END;
/

PL/SQL procedure successfully completed.

SQL>
SQL> alter system set use_stored_outlines=TRUE;

System altered.

SQL> create or replace trigger trig_start_out after startup on database
  2  begin
  3  execute immediate 'alter system set use_stored_outlines=TRUE';
  4  end;
  5  /

Trigger created.

As the parameter “use_stored_outlines” is a ‘pseudo’ parameter, it is not persistent across a reboot of the system; for that reason we had to create this trigger on database startup.

Now we can check if the outline is used:
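(The query itself isn’t shown in the original post; a statement of this shape against the DBA_OUTLINES view produces the output below.)

SQL> select name, owner, category, used from dba_outlines;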

NAME                           OWNER                          CATEGORY                       USED
------------------------------ ------------------------------ ------------------------------ ------
SYS_OUTLINE_18092409295665701  TEST                         DEFAULT                        USED

And also check that the execution plan takes it into account:

SQL> select * from table(dbms_xplan.display_cursor('574gkc8gn7u0h',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkc8gn7u0h, child number 0
-------------------------------------
Select   m.msg_message_id,   m.VersionId,   m.Knoten_id,
m.Poly_User_Id,   m.State,   'U' as MutationsCode from
msg_message_entry m where   m.VersionId between :LastTreatedVersionId
...................

Plan hash value: 4013416232

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          | XAK2_MSG_MESSAGE_ENTRY     |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY          |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          | XAK3_MSG_MESSAGE_ENTRY_DEL |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY_DEL      |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:LASTTREATEDVERSIONID<=:MAXVERSIONID)
   7 - filter("SERIAL#"=1999999999)
  10 - filter("SERIAL#"=1999999998)
  
Note
-----
   - outline "SYS_OUTLINE_18092409295665701" used for this statement

To make Oracle use stored outlines when it compiles a SQL statement, we need to enable them by setting the system parameter USE_STORED_OUTLINES to TRUE or to a category name. This parameter can also be set at the session level.
By setting this parameter to TRUE, the default category under which the outlines are created and matched is DEFAULT.
If you prefer to give a category at outline creation, Oracle will use this outline category until you provide another category value or you disable the usage of outlines by setting the parameter USE_STORED_OUTLINES to FALSE.
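For example, to work with a dedicated category instead of DEFAULT, the creation and activation would look like this (a sketch; the category name MYCATEGORY is made up):

BEGIN
  DBMS_OUTLN.create_outline(
    hash_value   => 524544016,
    child_number => 0,
    category     => 'MYCATEGORY');
END;
/

SQL> alter system set use_stored_outlines = MYCATEGORY;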

 

The post SQL Plan stability in 11G using stored outlines appeared first on Blog dbi services.

Upgrade Oracle Internet Directory from 11G (11.1.1.9) to 12C (12.2.1.3)

Fri, 2018-09-21 00:53

There is no in-place upgrade from OID 11.1.1.9 to OID 12c (12.2.1.3). The steps to follow are the following:

  1. Install the required JDK version
  2. Install the Fusion Middleware Infrastructure 12c (12.2.1.3)
  3. Install the OID 12C (12.2.1.3) in the Fusion Middleware Infrastructure Home
  4. Upgrade the existing OID database schemas
  5. Reconfigure the OID WebLogic Domain
  6. Upgrade the OID WebLogic Domain

1. Install JDK 1.8.131+

I have used the JDK 1.8.0_161

cd /u00/app/oracle/product/Java
tar xvf ~/software/jdk1.8.0_161

Set JAVA_HOME and add $JAVA_HOME/bin to the PATH.
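A minimal sketch, assuming the extraction path used above:

export JAVA_HOME=/u00/app/oracle/product/Java/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH
java -version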

2. Install Fusion Middleware Infrastructure 12.2.1.3  software

I will not go into the details as this is a simple Fusion Middleware Infrastructure 12.2.1.3 software installation.
This software contains WebLogic 12.2.1.3. There is no need to install a separate WebLogic software.

I used MW_HOME set to /u00/app/oracle/product/oid12c

java -jar ~/software/fmw_12.2.1.3_infrastructure.jar

3. Install OID 12C software

This part is just a software installation, you just need to follow the steps in the installation wizard

cd ~/software/
./fmw_12.2.1.3.0_oid_linux64.bin

4. Check the existing schemas:

In SQLPLUS connected as SYS run the following query

SET LINE 120
COLUMN MRC_NAME FORMAT A14
COLUMN COMP_ID FORMAT A20
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A9
COLUMN UPGRADED FORMAT A8
SELECT MRC_NAME, COMP_ID, OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY ORDER BY MRC_NAME, COMP_ID ;

The results:

MRC_NAME       COMP_ID OWNER      VERSION      STATUS    UPGRADED
-------------- ------- ---------- ------------ --------- --------
DEFAULT_PREFIX OID     ODS        11.1.1.9.0   VALID     N
IAM            IAU     IAM_IAU    11.1.1.9.0   VALID     N
IAM            MDS     IAM_MDS    11.1.1.9.0   VALID     N
IAM            OAM     IAM_OAM    11.1.2.3.0   VALID     N
IAM            OMSM    IAM_OMSM   11.1.2.3.0   VALID     N
IAM            OPSS    IAM_OPSS   11.1.1.9.0   VALID     N
OUD            IAU     OUD_IAU    11.1.1.9.0   VALID     N
OUD            MDS     OUD_MDS    11.1.1.9.0   VALID     N
OUD            OPSS    OUD_OPSS   11.1.1.9.0   VALID     N

9 rows selected.

I have an OID 11.1.1.9 and an IAM 11.1.2.3 using the same database as repository.

5. ODS Schema upgrade:

Take care to only upgrade the ODS schema and not the IAM schemas, or the Internet Access Manager will not work any more.
With OID 11.1.1.9, only the ODS schema was installed; the ODS upgrade requires creating new schemas.

cd /u00/app/oracle/product/oid12c/oracle_common/upgrade/bin/
./ua

Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-11-13-37AM.log
Reading installer inventory, this will take a few moments...
...completed reading installer inventory.

In the following, I provide the most important screenshots of the “ODS schema upgrade”:

[Screenshots: ODS schema upgrade wizard, steps 1 to 8, including the schema validity check]

In SQLPLUS connected as SYS run the following query

SET LINE 120
COLUMN MRC_NAME FORMAT A14
COLUMN COMP_ID FORMAT A20
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A9
COLUMN UPGRADED FORMAT A8
SELECT MRC_NAME, COMP_ID, OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY ORDER BY MRC_NAME, COMP_ID;

MRC_NAME       COMP_ID    OWNER             VERSION      STATUS    UPGRADED
-------------- ---------- ----------------- ------------ --------- --------
DEFAULT_PREFIX OID        ODS               12.2.1.3.0   VALID     Y
IAM            IAU        IAM_IAU           11.1.1.9.0   VALID     N
IAM            MDS        IAM_MDS           11.1.1.9.0   VALID     N
IAM            OAM        IAM_OAM           11.1.2.3.0   VALID     N
IAM            OMSM       IAM_OMSM          11.1.2.3.0   VALID     N
IAM            OPSS       IAM_OPSS          11.1.1.9.0   VALID     N
OID12C         IAU        OID12C_IAU        12.2.1.2.0   VALID     N
OID12C         IAU_APPEND OID12C_IAU_APPEND 12.2.1.2.0   VALID     N
OID12C         IAU_VIEWER OID12C_IAU_VIEWER 12.2.1.2.0   VALID     N
OID12C         OPSS       OID12C_OPSS       12.2.1.0.0   VALID     N
OID12C         STB        OID12C_STB        12.2.1.3.0   VALID     N
OID12C         WLS        OID12C_WLS        12.2.1.0.0   VALID     N
OUD            IAU        OUD_IAU           11.1.1.9.0   VALID     N
OUD            MDS        OUD_MDS           11.1.1.9.0   VALID     N
OUD            OPSS       OUD_OPSS          11.1.1.9.0   VALID     N

15 rows selected.

I named the new OID repository schemas OID12C during the ODS upgrade.

6. Reconfigure the domain

cd /u00/app/oracle/product/oid12c/oracle_common/common/bin/
./reconfig.sh -log=/tmp/reconfig.log -log_priority=ALL

See the screenshots “Reconfigure Domain”:
[Screenshots: Reconfigure Domain wizard, steps 1 to 25]

7. Upgrading Domain Component Configurations

cd ../../upgrade/bin/
./ua

Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-12-18-12PM.log
Reading installer inventory, this will take a few moments…

The following are the screen shots of the upgrade of the WebLogic Domain configuration

[Screenshots: upgrade domain component configuration wizard, steps 1 to 7]

8. Start the domain

For this first start, I will use the normal start scripts installed when upgrading the domain, each in a separate PuTTY session so that I can see the traces.

Putty Session 1:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# Start the Admin Server in the first putty
./startWebLogic.sh

Putty Session 2:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# In an other shell session start the node Manager:
./startNodeManager.sh

Putty Session 3:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
./startComponent.sh oid1

Starting system Component oid1 ...

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Reading domain from /u01/app/OID/user_projects/domains/IDMDomain

Please enter Node Manager password:
Connecting to Node Manager ...
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090905> <Disabling the CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to HMACDRBG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090909> <Using the configured custom SSL Hostname Verifier implementation: weblogic.security.utils.SSLWLSHostnameVerifier$NullHostnameVerifier.>
Successfully Connected to Node Manager.
Starting server oid1 ...
Successfully started server oid1 ...
Successfully disconnected from Node Manager.

Exiting WebLogic Scripting Tool.

Done

The ODSM application is now deployed in the WebLogic Administration Server, and the WLS_ODS1 WebLogic Server from the previous OID 11g administration domain is not used any more.

http://host01.example.com:7002/odsm

7002 is the Administration Server port for this domain.

 

The post Upgrade Oracle Internet Directory from 11G (11.1.1.9) to 12C (12.2.1.3) appeared first on Blog dbi services.

Oracle 12.2 : Windows Virtual Account

Thu, 2018-09-20 09:51

With Oracle 12.2 we can use a Virtual Account during the Oracle installation on Windows. Virtual Accounts allow you to install an Oracle Database, and to create and manage database services, without passwords. A Virtual Account can be used as the Oracle Home User for Oracle Database single-instance installations and does not require a user name or password during installation and administration.
In this blog I want to share an experience I had with Windows Virtual Accounts when installing Oracle.
I was setting up an Oracle environment on Windows Server 2016 for a client. During the installation I decided to use the Virtual Account option.
[Screenshot: Oracle Home User selection with the Virtual Account option]
After the installation of Oracle, I created a database called PROD, and everything was fine:

SQL*Plus: Release 12.2.0.1.0 Production on Wed Sep 19 05:43:05 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
PROD      READ WRITE

SQL>

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      C:\APP\ORACLE\PRODUCT\12.2.0\D
                                                 BHOME_1\DATABASE\SPFILEPROD.ORA
                                                
SQL>

Looking into the properties of my spfile, I can see that there is a Windows group named ORA_OraDB12Home1_SVCACCTS which has full control of the spfile. Indeed, as we used the Virtual Account to install the Oracle software, Oracle automatically creates this group and uses it for some tasks.
[Screenshots: spfile security properties showing the ORA_OraDB12Home1_SVCACCTS group]
After the first database, the client asked for a second database. Using DBCA, I created a second one, let’s say ORCL.
After the creation of ORCL, I changed some configuration parameters of the first database PROD and decided to restart it. I was then surprised by the following error:

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'C:\APP\ORACLE\PRODUCT\12.2.0\DBHOME_1\DATABASE\INITPROD.ORA'
SQL>

Wow!! What happened is that when using DBCA to create the second database ORCL, Oracle changed the properties of the spfile of the first database PROD (spfilePROD.ora). Yes, it’s strange, but this is exactly what happened: the Virtual Group was replaced by OracleServiceORCL.
[Screenshot: spfilePROD.ora properties showing OracleServiceORCL instead of the Virtual Group]

On the other side, the ORCL spfile was fine.
[Screenshot: spfileORCL.ora properties]

So I decided to remove OracleServiceORCL from the properties of the PROD spfile and to add back the Virtual Group.
[Screenshot: spfilePROD.ora properties with the Virtual Group restored]
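As a side note, instead of going through the file properties dialog, the same ACL fix could be scripted with icacls. A sketch only; the group name comes from the screenshot above and the path from the spfile location shown earlier:

C:\> icacls C:\app\oracle\product\12.2.0\dbhome_1\database\SPFILEPROD.ORA /remove "NT SERVICE\OracleServiceORCL"
C:\> icacls C:\app\oracle\product\12.2.0\dbhome_1\database\SPFILEPROD.ORA /grant "ORA_OraDB12Home1_SVCACCTS:(F)"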

And then I was able to start the PROD database:

SQL> startup
ORACLE instance started.

Total System Global Area  524288000 bytes
Fixed Size                  8748760 bytes
Variable Size             293601576 bytes
Database Buffers          213909504 bytes
Redo Buffers                8028160 bytes
Database mounted.
Database opened.
SQL>

But this issue means that every time I create a new database with DBCA, the spfile properties of the other databases may be changed, and this is not normal.
When checking this strange issue, I found this Oracle Support note:
DBCA Using Virtual Account Incorrectly Sets The SPFILE Owner (Doc ID 2410452.1)

So I decided to apply the patches recommended by Oracle:
Oracle Database 12.2.0.1.180116BP
26615680

C:\Users\Administrator>c:\app\oracle\product\12.2.0\dbhome_1\OPatch\opatch lspatches
26615680;26615680:SI DB CREATION BY DBCA IN VIRTUAL ACCOUNT INCORRECTLY SETS THE ACL FOR FIRST DB
27162931;WINDOWS DB BUNDLE PATCH 12.2.0.1.180116(64bit):27162931

And then I created a new database, TEST, to see if the patches corrected the issue.
Well, I was able to restart all databases without any errors. But looking into the spfile properties of the 3 databases, we can see that the patch added back the Virtual Group, yet the service of the last created database is still present for the previous databases. I don’t really understand why OracleServiceTest should be present in spfilePROD.ora and spfileORCL.ora.

[Screenshots: spfile properties of the PROD, ORCL and TEST databases after patching]

Conclusion: In this blog I shared an issue I experienced with Windows Virtual Accounts. I hope this will help.

 

The post Oracle 12.2 : Windows Virtual Account appeared first on Blog dbi services.

MongoDB OPS Manager

Wed, 2018-09-19 08:03

MongoDB OPS Manager (MMS) is a tool for administering and managing MongoDB deployments, particularly large clusters. MongoDB Inc. qualified it as “the best way to manage your MongoDB data center“. OPS Manager also allows you to deploy a complete MongoDB cluster in multiple nodes and several topologies.  As you know, at dbi services, the MongoDB installation is based on our best practices, especially the MFA (MongoDB Flexible Architecture), more information here.

Is OPS Manager compatible with our installation best practices and our MongoDB DMK? For this reason, I would like to post a guide for the installation and configuration of OPS Manager (MMS) based on the dbi services best practices.

In this installation guide, we’ll use the latest version of OPS Manager, release 4.0.2. We’ll install OPS Manager as a single instance, which is recommended for tests and proofs of concept.

Testing Environment

We’ll use a Docker container provisioned in the Swiss public cloud Hidora. Below is the information about the container:

  • CentOS 7
  • Add a Public IP
  • Endpoints configuration for: MongoDB port 27017, FTP port 21, SSH port 22, OPS Manager interface port 8080

[Screenshot: Hidora endpoints configuration]

MongoDB Installation

Once your container has been provisioned, you can start the installation of MongoDB. It’s important to know that OPS Manager needs a MongoDB database in order to store the application information. That’s why we need to install and start a mongo database first.

For more details about the MongoDB Installation, you can refer to a previous blog.

[root@node32605-env-4486959]# mkdir -p /u00/app/mongodb/{local,admin,product}

[root@node32605-env-4486959]# mkdir -p /u01/mongodbdata/
[root@node32605-env-4486959]# mkdir -p /u01/mongodbdata/{appdb,bckpdb}
[root@node32605-env-4486959]# mkdir -p /u02/mongodblog/
[root@node32605-env-4486959]# mkdir -p /u02/mongodblog/{applog,bckplog}
[root@node32605-env-4486959]# mkdir -p /u99/mongodbbackup/ 

[root@node32605-env-4486959]# chown -R mongodb:mongodb /u00/app/mongodb/ /u01/mongodbdata/ /u99/mongodbbackup/

Let’s now download the latest MongoDB and OPS Manager releases from the MongoDB Download Center.

[root@node32605-env-4486959 opt]# wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.0.2.tgz
[root@node32605-env-4486959 opt]# wget https://downloads.mongodb.com/on-prem-mms/tar/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz

Based on the MFA, move the software inside the /product folder.

[root@node32605-env-4486959 opt] mv mongodb-linux-x86_64-rhel70-4.0.2.tgz mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz /u00/app/mongodb/product/

Permissions and Extraction:

[root@node32605-env-4486959 product]# chown -R mongodb:mongodb /u00/app/mongodb/product/*
[root@node32605-env-4486959 product]# su - mongodb
[mongodb@node32605-env-4486959 product]$ tar -xzf mongodb-linux-x86_64-rhel70-4.0.2.tgz
[mongodb@node32605-env-4486959 product]$ tar -xzf mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz

Run mongo databases for OPS Manager and Backup:

[mongodb@node32605-env-4486959 bin]$ ./mongod --port 27017 --dbpath /u01/mongodbdata/appdb/ --logpath /u02/mongodblog/applog/mongodb.log --wiredTigerCacheSizeGB 1 --fork
[mongodb@node32605-env-4486959 bin]$ ./mongod --port 27018 --dbpath /u01/mongodbdata/bckpdb/ --logpath /u02/mongodblog/bckplog/mongodb.log --wiredTigerCacheSizeGB 1 --fork
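A quick sanity check that both instances answer before going any further (a simple ping through the mongo shell); each command should print { "ok" : 1 }:

[mongodb@node32605-env-4486959 bin]$ ./mongo --port 27017 --eval "db.runCommand({ping: 1})"
[mongodb@node32605-env-4486959 bin]$ ./mongo --port 27018 --eval "db.runCommand({ping: 1})"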

Once the 2 databases have been successfully started, we can configure and start the OPS Manager application.

First, we need to configure the URL used to access OPS Manager.

[mongodb@node32605-env-4486959 ~]$ cd /u00/app/mongodb/product/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64/conf

Edit the conf-mms.properties file and add the following lines:

mongo.mongoUri=mongodb://127.0.0.1:27017/?maxPoolSize=150
mongo.ssl=false
mms.centralUrl=http://xxx.xxx.xx.xx:8080

Replace xxx.xxx.xx.xx with your public IP or DNS name.

[mongodb@node32605-env-4486959 ~]$ cd /u00/app/mongodb/product/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64/bin
[mongodb@node32605-env-4486959 bin]$ ./mongodb-mms start

OPS Manager configuration

Access the OPS Manager application through the following URL:

http://public_ip:8080

[Screenshot: OPS Manager web interface]

 

You need to register for the first time.

[Screenshot: OPS Manager registration page]

 

Once your account has been created, configure the OPS Manager access URL.

[Screenshot: OPS Manager access URL configuration]

Then configure your email settings.

[Screenshot: email settings configuration]

Click on Continue and configure the User Authentication, Backup Snapshots, Proxy.

Finish with the OPS Manager versions configuration.

[Screenshot: OPS Manager version management configuration]

 

Congratulations, you have finished the installation. You can now start using OPS Manager and deploy a MongoDB cluster.

[Screenshot: OPS Manager deployment page]

 

 

 

The post MongoDB OPS Manager appeared first on Blog dbi services.

Java9 new features

Tue, 2018-09-18 01:52


Java 9 is on its way now. In this blog I’ll talk about the new features I found interesting, performance, and so on.

Configure Eclipse for Java9

Prior to Eclipse Oxygen 4.7.1a, you’ll have to configure eclipse a little bit to make it run your Java9 projects.

Add the following in eclipse.ini, after --launcher.appendVmargs:

-vm
C:\Program Files\Java\jdk-9.0.4\bin\javaw.exe

 

Still in eclipse.ini add:

--add-modules=ALL-SYSTEM

 

You should have something like this:

--launcher.appendVmargs
-vm
C:\Program Files\Java\jdk-9.0.4\bin\javaw.exe
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms40m
-Xmx512m
--add-modules=ALL-SYSTEM
New Features

Modules

Like a lot of other languages, and in order to encapsulate the code a little more, Java is going to use modules. It simply means that you’ll be able to declare that your code requires a specific library. This is quite helpful for small-memory devices that do not need the whole JVM to be loaded. You can find a list of available modules here.

When creating a module, you’ll generate a file called module-info.java, which will look like this:

module test.java9 {
	requires com.dbiservices.example.engines;
	exports com.dbiservices.example.car;
}

Here my module requires the “engines” module and exports the “car” package. This allows loading only the classes related to our business and not some side libraries; it helps manage memory more efficiently, but it also requires some understanding of the module system. In addition, it creates a real dependency system between jars and prevents using public classes that were not supposed to be exposed through the API. It prevents some strange behavior when you have duplicate entries, like several jar versions in the classpath. All non-exported packages are encapsulated by default.
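For completeness, the “engines” module required above must itself export the package we use; its own module-info.java would look something like this (the exported package name is an assumption based on the example):

module com.dbiservices.example.engines {
	exports com.dbiservices.example.engines;
}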

 JShell

Java 9 now provides JShell: like in other languages, you can now execute Java code through a shell command prompt. Simply start jshell from the bin folder of the JDK:

[Screenshot: a jshell session]
This kind of tool can greatly improve productivity for small tests: you don’t have to create small testing classes anymore. Very useful for regular expression testing, for example.
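A tiny session of the kind meant here, testing a regular expression (the $1 ==> notation is how jshell prints results):

jshell> "090f1234500aa9f6".matches("[0-9a-f]{16}")
$1 ==> true

jshell> /exit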

New HTTP API

The old HTTP API is finally being upgraded. It now supports WebSockets and the HTTP/2 protocol out of the box. For the moment the API is placed in an incubator module, which means it can still change a little, but you can start playing with it as follows:

// imports added so that the snippet compiles as-is
import java.io.IOException;
import java.net.URI;
import jdk.incubator.http.*;

public class Run {

  public static void main(String[] args) throws IOException, InterruptedException {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder(URI.create("http://www.google.com"))
                                 .header("User-Agent", "Java")
                                 .GET()
                                 .build();
    HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandler.asString());
    System.out.println(resp.statusCode());
  }
}

You’ll have to set up module-info.java accordingly:

module test.java9 {
	requires jdk.incubator.httpclient;
}
 Private interface methods

Since Java 8, an interface can contain behavior instead of only method signatures. But if you have several default methods doing almost the same thing, you would usually refactor them into a private method; default methods in Java 8, however, can’t be private. In Java 9 you can add private helper methods to interfaces, which solves this issue:

public interface CarContract {

	void normalMethod();
	default void defaultMethod() {doSomething();}
	default void secondDefaultMethod() {doSomething();}
	
	private void doSomething(){System.out.println("Something");}
}

The private method “doSomething()” is hidden from the interface’s public API.

 Unified JVM Logging

Java 9 adds a handy logging feature to help debug the JVM. You can now enable logging for different tags like gc, compiler, threads and so on, via the command-line parameter -Xlog. Here’s an example of configuration for the gc tag, using the debug level without decoration:

-Xlog:gc=debug:file=log/gc.log:none

And the result:

ConcGCThreads: 2
ParallelGCThreads: 8
Initialize mark stack with 4096 chunks, maximum 16384
Using G1
GC(0) Pause Young (G1 Evacuation Pause) 24M->4M(254M) 5.969ms
GC(1) Pause Young (G1 Evacuation Pause) 59M->20M(254M) 21.708ms
GC(2) Pause Young (G1 Evacuation Pause) 50M->31M(254M) 20.461ms
GC(3) Pause Young (G1 Evacuation Pause) 84M->48M(254M) 30.398ms
GC(4) Pause Young (G1 Evacuation Pause) 111M->70M(321M) 31.902ms

We can even combine tags:

-Xlog:gc+heap=debug:file=log/heap.log:none

Which results in this:

Heap region size: 1M
Minimum heap 8388608  Initial heap 266338304  Maximum heap 4248829952
GC(0) Heap before GC invocations=0 (full 0):
GC(0)  garbage-first heap   total 260096K, used 24576K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(0)   region size 1024K, 24 young (24576K), 0 survivors (0K)
GC(0)  Metaspace       used 6007K, capacity 6128K, committed 6272K, reserved 1056768K
GC(0)   class space    used 547K, capacity 589K, committed 640K, reserved 1048576K
GC(0) Eden regions: 24->0(151)
GC(0) Survivor regions: 0->1(3)
GC(0) Old regions: 0->0
GC(0) Humongous regions: 0->0
GC(0) Heap after GC invocations=1 (full 0):
GC(0)  garbage-first heap   total 260096K, used 985K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(0)   region size 1024K, 1 young (1024K), 1 survivors (1024K)
GC(0)  Metaspace       used 6007K, capacity 6128K, committed 6272K, reserved 1056768K
GC(0)   class space    used 547K, capacity 589K, committed 640K, reserved 1048576K
GC(1) Heap before GC invocations=1 (full 0):
GC(1)  garbage-first heap   total 260096K, used 155609K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(1)   region size 1024K, 152 young (155648K), 1 survivors (1024K)
GC(1)  Metaspace       used 6066K, capacity 6196K, committed 6272K, reserved 1056768K
GC(1)   class space    used 548K, capacity 589K, committed 640K, reserved 1048576K
GC(1) Eden regions: 151->0(149)
GC(1) Survivor regions: 1->3(19)
...
...

There are other new features not detailed here, but you can find a list here.

 

The post Java9 new features appeared first on Blog dbi services.

Configure AFD with Grid Infrastructure software (SIHA & CRS) from very beginning.

Tue, 2018-09-18 01:04

Introduction :

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

In this blog I will explain how to set up the Grid Infrastructure software with AFD on a SIHA or CRS architecture.

Case 1. You want to configure AFD from the very beginning (no UDEV, no ASMLib) with SIHA, a Single Instance High Availability installation (the former Oracle Restart)

Issue :

If we want to use the AFD driver from the very beginning, we should use Oracle AFD to prepare some disks for the ASM instance.
The issue comes from the fact that AFD only becomes available after the installation (it cannot be configured before the installation)!

Solution :

Step1. Install the GI stack in software-only mode

[Screenshot: installer in "Set Up Software Only" mode]

Step2. Run root.sh when prompted, without any other action (do not execute the generated script rootUpgrade.sh)

Step3. Run roothas.pl to set up your HAS stack

[root] /u01/app/grid/product/12.2.0/grid/perl/bin/perl -I /u01/app/grid/product/12.2.0/grid/perl/lib -I /u01/app/grid/product/12.2.0/grid/crs/install /u01/app/grid/product/12.2.0/grid/crs/install/roothas.pl

Step4. As root user proceed to configure AFD

 /u01/app/grid/product/12.2.0/grid/bin/crsctl stop has -f
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_configure
/u01/app/grid/product/12.2.0/grid/bin/crsctl start has

Step5. Set the AFD discovery string to discover the new devices, as the grid user

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_dsset '/dev/sd*'

Step6. Label the new disk as root

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK1 /dev/sdb1

Step7. As the grid user, launch ASMCA to create your ASM instance, based on the diskgroup created on the newly labeled disk, DISK1

[Screenshots: ASMCA diskgroup creation using the AFD-labeled disk DISK1]

Step8. Display the AFD driver within the HAS stack.

[Screenshot: the AFD driver resource listed in the HAS stack]
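The same information can also be obtained from the command line: asmcmd provides the afd_state command, which reports whether the driver is loaded and whether filtering is enabled.

/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_state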

 

Case 2. You want to configure AFD from the very beginning (no UDEV, no ASMLib) with CRS: Cluster Ready Services

Issue :

By installing in software-only mode, you just copy and relink the binaries.
No wrapper scripts (such as crsctl or clsecho) are created.
The issue is that AFD needs the wrapper scripts, not the binaries (crsctl.bin).

Solution :

Step1. Do it on all nodes.

Install Grid Infrastructure on all the nodes of the future cluster in the “Software-only Installation” mode.

[Screenshot: installer in "Set Up Software Only" mode]

Step2. Do it on all nodes.

After the installation, the wrapper scripts are not present. You can copy them from any other installation (a SIHA one too) or use a cloned home.

Once you have the two scripts, modify the variables inside them so that they match the system used for the current installation:

ORA_CRS_HOME=/u01/app/grid/product/12.2.0/grid  -- should be changed
MY_HOST=dbi1                                    -- should be changed
ORACLE_USER=grid
ORACLE_HOME=$ORA_CRS_HOME
ORACLE_BASE=/u01/app/oracle
CRF_HOME=/u01/app/grid/product/12.2.0/grid      -- should be changed

Step3. Do it on all nodes

Configure AFD:

[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

Step4. Do it only on the first node.

Scan & label the new disks using AFD.

/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK1 /dev/sdb1
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK2 /dev/sdc1
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK3 /dev/sdd1
[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_scan
[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

DISK1                       ENABLED   /dev/sdb1

DISK2                       ENABLED   /dev/sdc1

DISK3                       ENABLED   /dev/sdd1

Step5. Do it on the other nodes.

Scan and display the disks on the other nodes of the future cluster. No need to label them again.

[root@dbi2 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_scan
[root@dbi2 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK1                       ENABLED   /dev/sdb1
DISK2                       ENABLED   /dev/sdc1
DISK3                       ENABLED   /dev/sdd1

Step6. Do it on 1st node

Run the script config.sh as the oracle/grid user

/u01/app/grid/product/12.2.0/grid/crs/config/config.sh

[Screenshot: cluster configuration wizard]

Step7. Do it on 1st node

Setup the connectivity between all the future nodes of the cluster and follow the wizard.

[Screenshot: node connectivity setup]

Step8. Do it on 1st node

You will be asked to create an ASM diskgroup.

Normally, without the previous steps, this would not be possible: with no udev, no ASMLib and no AFD configured, there are no labeled disks for that step.

[Screenshot: ASM diskgroup creation step]

But…….

Step9. Do it on 1st node

Change the discovery path to ‘AFD:*’ and it should retrieve the disks labeled in the previous step.

[Screenshot: discovery path changed to AFD:*]

Step10. Do it on 1st node

Provide the AFD-labeled disks to create the ASM disk group for the OCR files. Uncheck “Configure Oracle ASM Filter Driver”.

[Screenshot: ASM diskgroup creation with the AFD-labeled disks]

Step11. Do it on 1st node

Finalize the configuration as per the documentation.

 

Additionally, an easier way to install/configure the ASM Filter Driver can be found here:
https://blog.dbi-services.com/oracle-18c-cluster-with-oracle-asm-filter-driver/

Summary: Using the scenarios described above, we can configure the Grid Infrastructure stack with AFD on a SIHA or CRS architecture.

 

The post Configure AFD with Grid Infrastructure software (SIHA & CRS) from very beginning appeared first on Blog dbi services.

User Session lost using ADF Application

Mon, 2018-09-17 11:45

In one of my missions, I was involved in a new Fusion Middleware 12c (12.2.1.2) installation, with an ADF application and an Oracle Reports server instance deployed.
This infrastructure is protected using an Access Manager Single Sign on Server.
In production, the complete environment is fronted by a WAF server terminating HTTPS.
In the TEST environment, the complete environment is fronted by an SSL reverse proxy terminating HTTPS.

In the chosen architecture, all Single Sign-On requests go directly through the reverse proxy to the OAM servers.
The application requests and the reports requests are routed through an HTTP server with the WebGate installed.

Below is an extract of the SSL part of the reverse Proxy configuration:
# SSL Virtual Host
<VirtualHost 10.0.1.51:443>
ServerName https://mySite.com
ErrorLog logs/ssl_errors.log
TransferLog logs/ssl_access.log
HostNameLookups off
ProxyPreserveHost On
ProxyPassReverse /oam http://appserver.example.com:14100/oam
ProxyPass /oam http://appserver.example.com:14100/oam
ProxyPassReverse /myCustom-sso-web http://appserver.example.com:14100/myCustom-sso-web
ProxyPass /myCustom-sso-web http://appserver.example.com:14100/myCustom-sso-web
ProxyPass /reports http://appserver.example.com:7778/reports
ProxyPassReverse /reports http://appserver.example.com:7778/reports
ProxyPass /myApplication http://appserver.example.com:7778/myApplication
ProxyPassReverse /myApplication http://appserver.example.com:7778/myApplication
# SSL configuration
SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl/mySite_com.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl/mySite_com.key
</VirtualHost>

HTTP Server Virtual hosts:
# Local requests
Listen 7778
<VirtualHost *:7778>
ServerName http://appserver.example.com:7778
# Rewrite included for OAM logout redirection
RewriteRule ^/oam/(.*)$ http://appserver.example.com:14100/oam/$1
RewriteRule ^/myCustom-sso-web/(.*)$ http://appserver.example.com:14100/myCustom-sso-web/$1
</VirtualHost>

<VirtualHost *:7778>
ServerName https://mySite.com:443
</VirtualHost>

The ADF application and the Reports server mappings are done using custom configuration files included in the httpd.conf file:
#adf.conf
#----------
<Location /myApplication>
SetHandler weblogic-handler
WebLogicCluster appserver.example.com:9001,appserver1.example.com:9003
WLProxySSLPassThrough ON
</Location>

# Force caching for image files
<FilesMatch "\.(jpg|jpeg|png|gif|swf)$">
Header unset Surrogate-Control
Header unset Pragma
Header unset Cache-Control
Header unset Last-Modified
Header unset Expires
Header set Cache-Control "max-age=86400, public"
Header set Surrogate-Control "max-age=86400"
</FilesMatch>

#reports.conf
#-------------
<Location /reports>
SetHandler weblogic-handler
WebLogicCluster appserver.example.com:9004,appserver1.example.com:9004
DynamicServerList OFF
WLProxySSLPassThrough ON
</Location>

After configuring the ADF application and the Reports server to be protected through the WebGate, the users could connect and work without any issue during the first 30 minutes.
Then they lost their sessions. We first thought it was related to the session or inactivity timeouts.
We increased the values of those timeouts without success.
We checked the logs and found out that the issue was related to the OAM and WebGate cookies.

The OAM Server gets and sets a cookie named OAM_ID.
Each WebGate gets and sets a cookie named OAMAuthnCookie_ + the host name and port.

The contents of the cookies are:

Authenticated User Identity (User DN)
Authentication Level
IP Address
SessionID (Reference to Server side session – OAM11g Only)
Session Validity (Start Time, Refresh Time)
Session InActivity Timeouts (Global Inactivity, Max Inactivity)
Validation Hash

The validity of a WebGate-handled user session is 30 minutes by default; after that, the WebGate checks the OAM cookies.
Those cookies are flagged secure and were lost because they were not forwarded by the WAF or the reverse proxy, since HTTPS terminates there.

We needed to change the SSL reverse proxy configuration to inform the WebLogic Server and the HTTP server that SSL terminates at the reverse proxy level.
This was done by adding two HTTP headers to the requests before sending them to the Oracle Access Manager or Fusion Middleware HTTP server.

# For the WebLogic Server to be informed about SSL ending at reverse proxy level
RequestHeader set WL-Proxy-SSL true
# For the Oracle HTTP Server to take the secure cookies in account
RequestHeader set X-Forwarded-Proto "https"
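
For completeness, here is how these directives fit into the reverse proxy SSL virtual host shown earlier (an abbreviated sketch; mod_headers must be loaded):

<VirtualHost 10.0.1.51:443>
ServerName https://mySite.com
# ... ProxyPass/ProxyPassReverse and SSL directives as above ...
# For the WebLogic Server to be informed about SSL ending at reverse proxy level
RequestHeader set WL-Proxy-SSL true
# For the Oracle HTTP Server to take the secure cookies in account
RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>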

The WAF needed to be configured to add the same HTTP headers in the production environment.

After those changes, the issue was solved.

 

Cet article User Session lost using ADF Application est apparu en premier sur Blog dbi services.

EDB containers for OpenShift 2.3 – PEM integration

Mon, 2018-09-17 11:19

A few days ago EnterpriseDB announced the availability of version 2.3 of the EDB containers for OpenShift. The main new feature in this release is the integration of PEM (Postgres Enterprise Manager), so in this post we’ll look at how we can bring up a PEM server in OpenShift. If you did not follow the last posts about EDB containers in OpenShift, here is the summary:

The first step you need to do is to download the updated container images. You’ll notice that there are two new containers which have not been available before the 2.3 release:

  • edb-pemserver: Obviously this is the PEM server
  • admintool: a utility container for supporting database upgrades and launching PEM agents on the database containers

For downloading the latest release of the EDB container images for OpenShift, the procedure is the following:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker login containers.enterprisedb.com

docker pull containers.enterprisedb.com/edb/edb-as:v10
docker tag containers.enterprisedb.com/edb/edb-as:v10 localhost:5000/edb/edb-as:v10
docker push localhost:5000/edb/edb-as:v10

docker pull containers.enterprisedb.com/edb/edb-pgpool:v3.6
docker tag containers.enterprisedb.com/edb/edb-pgpool:v3.6 localhost:5000/edb/edb-pgpool:v3.6
docker push localhost:5000/edb/edb-pgpool:v3.6

docker pull containers.enterprisedb.com/edb/edb-pemserver:v7.3
docker tag containers.enterprisedb.com/edb/edb-pemserver:v7.3 localhost:5000/edb/edb-pemserver:v7.3
docker push localhost:5000/edb/edb-pemserver:v7.3

docker pull containers.enterprisedb.com/edb/edb-admintool
docker tag containers.enterprisedb.com/edb/edb-admintool localhost:5000/edb/edb-admintool
docker push localhost:5000/edb/edb-admintool

docker pull containers.enterprisedb.com/edb/edb-bart:v2.1
docker tag containers.enterprisedb.com/edb/edb-bart:v2.1 localhost:5000/edb/edb-bart:v2.1
docker push localhost:5000/edb/edb-bart:v2.1

In my case I have quite a few EDB containers available now (…and I could go ahead and delete the old ones, of course):

docker@minishift:~$ docker images | grep edb
containers.enterprisedb.com/edb/edb-as          v10                 1d118c96529b        45 hours ago        1.804 GB
localhost:5000/edb/edb-as                       v10                 1d118c96529b        45 hours ago        1.804 GB
containers.enterprisedb.com/edb/edb-admintool   latest              07fda249cf5c        10 days ago         531.6 MB
localhost:5000/edb/edb-admintool                latest              07fda249cf5c        10 days ago         531.6 MB
containers.enterprisedb.com/edb/edb-pemserver   v7.3                78954c316ca9        10 days ago         1.592 GB
localhost:5000/edb/edb-pemserver                v7.3                78954c316ca9        10 days ago         1.592 GB
containers.enterprisedb.com/edb/edb-bart        v2.1                e2410ed4cf9b        10 days ago         571 MB
localhost:5000/edb/edb-bart                     v2.1                e2410ed4cf9b        10 days ago         571 MB
containers.enterprisedb.com/edb/edb-pgpool      v3.6                e8c600ab993a        10 days ago         561.1 MB
localhost:5000/edb/edb-pgpool                   v3.6                e8c600ab993a        10 days ago         561.1 MB
containers.enterprisedb.com/edb/edb-as                              00adaa0d4063        3 months ago        979.3 MB
localhost:5000/edb/edb-as                                           00adaa0d4063        3 months ago        979.3 MB
localhost:5000/edb/edb-pgpool                   v3.5                e7efdb0ae1be        4 months ago        564.1 MB
containers.enterprisedb.com/edb/edb-pgpool      v3.5                e7efdb0ae1be        4 months ago        564.1 MB
localhost:5000/edb/edb-as                       v10.3               90b79757b2f7        4 months ago        842.7 MB
containers.enterprisedb.com/edb/edb-bart        v2.0                48ee2c01db92        4 months ago        590.6 MB
localhost:5000/edb/edb-bart                     2.0                 48ee2c01db92        4 months ago        590.6 MB
localhost:5000/edb/edb-bart                     v2.0                48ee2c01db92        4 months ago        590.6 MB

The only bits I changed in the yaml file that describes my EDB AS deployment, compared to the previous posts, are these (check the highlighted lines, there are only two):

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-pgpool:v3.6
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres'
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: ${BACKUP_PERSISTENT_VOLUME}
            mountPath: /edbbackup
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: ${BACKUP_PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${BACKUP_PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: BACKUP_PERSISTENT_VOLUME
  displayName: Backup Persistent Volume
  description: Backup Persistent volume name
  value: ''
  required: false
- name: BACKUP_PERSISTENT_VOLUME_CLAIM
  displayName: Backup Persistent Volume Claim
  description: Backup Persistent volume claim name
  value: ''
  required: false
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

As the template starts with one replica, I scaled that to three, so the setup we start with for PEM is one master and two replicas (the minimum you need for automated failover anyway).
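
Scaling the database DeploymentConfig can be done from the command line, for instance (a sketch, assuming the default name edb-as10-0 was kept):

dwe@dwe:~$ oc scale dc edb-as10-0 --replicas=3

This gives us: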

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE        ROLE
edb-as10-0-1-4ptdr   1/1       Running   0          7m        172.17.0.5   localhost   standbydb
edb-as10-0-1-8mw7m   1/1       Running   0          5m        172.17.0.6   localhost   standbydb
edb-as10-0-1-krzpp   1/1       Running   0          8m        172.17.0.9   localhost   masterdb
edb-pgpool-1-665mp   1/1       Running   0          8m        172.17.0.8   localhost   queryrouter
edb-pgpool-1-mhgnq   1/1       Running   0          8m        172.17.0.7   localhost   queryrouter

Nothing special happened so far, except that we downloaded the new container images, pushed them to the local registry and adjusted the deployment yaml to reference the latest version of the containers. What we want to do now is to create the PEM repository container so that we can add the database to PEM, which will give us monitoring and alerting. As PEM requires persistent storage as well, we need a new storage definition:

Selection_016
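
If you prefer to create the claim declaratively instead of through the console, an equivalent definition could look like this (a sketch; access modes and storage class depend on your cluster setup):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edb-pem-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi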

You can of course also get the storage definition using the “oc” command:

dwe@dwe:~$ oc get pvc
NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
edb-bart-claim      Bound     pv0091    100Gi      RWO,ROX,RWX                   16h
edb-pem-claim       Bound     pv0056    100Gi      RWO,ROX,RWX                   50s
edb-storage-claim   Bound     pv0037    100Gi      RWO,ROX,RWX                   16h

The yaml file for the PEM server is this one (notice that the container image referenced is coming from the local registry):

apiVersion: v1
kind: Template
metadata:
   name: edb-pemserver
   annotations:
    description: "Standard EDB Postgres Enterprise Manager Server 7.3 Deployment Config"
    tags: "pemserver"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-webservice 
    labels:
      name: ${DATABASE_NAME}-webservice
  spec:
    selector:
      role: pemserver 
    ports:
    - name: https
      port: 30443
      nodePort: 30443
      protocol: TCP
      targetPort: 8443
    - name: http
      port: 30080
      nodePort: 30080
      protocol: TCP
      targetPort: 8080
    type: NodePort
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: edb-pemserver
  spec:
    replicas: 1
    selector:
      app: pemserver 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: pemserver 
          cluster: ${DATABASE_NAME} 
      spec:
        containers:
        - name: pem-db
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER
            value: ${DATABASE_USER}
          - name: ENTERPRISEDB_PASSWORD
            value: "postgres"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: PGPORT
            value: ${PGPORT}
          - name: RESTORE_FILE
            value: ${RESTORE_FILE}
          - name: ENABLE_HA_MODE
            value: "No"
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10
          imagePullPolicy: Always 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
        - name: pem-webclient 
          image: localhost:5000/edb/edb-pemserver:v7.3
          imagePullPolicy: Always 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: "postgres"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: PGPORT
            value: ${PGPORT}
          - name: CIDR_ADDR
            value: ${CIDR_ADDR}
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          - name: DEBUG_MODE
            value: ${DEBUG_MODE}
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: httpd-shm
            mountPath: /run/httpd
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: httpd-shm 
          emptyDir:
            medium: Memory 
        dnsPolicy: ClusterFirst
        restartPolicy: Always
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'pem'
  required: true
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: '5444'
  required: true
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: 'edb-data-pv'
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: 'edb-data-pvc'
  required: true
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: CIDR_ADDR 
  displayName: CIDR address block for PEM 
  description: CIDR address block for PEM (leave '0.0.0.0/0' for default) 
  value: '0.0.0.0/0' 
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

Again, don’t process the template right now, just save it as a template:
Selection_001

Once we have that available we can start to deploy the PEM server from the catalog:
Selection_002

Selection_003

Of course we need to reference the storage definition we created above:
Selection_004

Leave everything else at its defaults and create the deployment:
Selection_005

A few minutes later you should have PEM ready:
Selection_011

For connecting to PEM with your browser have a look at the service definition to get the port:
Selection_012

Once you have that you can connect to PEM:
Selection_013
Selection_014

In the next post we’ll look at how we can add our existing database deployment to our just created PEM server so we can monitor the instances and configure alerting.

 

Cet article EDB containers for OpenShift 2.3 – PEM integration est apparu en premier sur Blog dbi services.

We are proud to announce:

Fri, 2018-09-14 09:42

Selection_015

(no words required for this post, the image says it all)

 

Cet article We are proud to announce: est apparu en premier sur Blog dbi services.

Masking Data With PostgreSQL

Thu, 2018-09-13 10:01

I was searching for a tool to anonymize data in a PostgreSQL database and I tested the extension postgresql_anonymizer.
postgresql_anonymizer is a set of SQL functions that remove personally identifiable values from a PostgreSQL table and replace them with random-but-plausible values. The goal is to avoid any identification from the data records while remaining suitable for testing, data analysis and data processing.
In this blog I am showing how this extension can be used. I am using a PostgreSQL 10 database.
The first step is to install the extension postgresql_anonymizer. In my case I did it with the pgxn client:

[postgres@pgserver2 ~]$ pgxn install postgresql_anonymizer --pg_config /u01/app/postgres/product/10/db_1/bin/pg_config
INFO: best version: postgresql_anonymizer 0.0.3
INFO: saving /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: unpacking: /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: building extension
gmake: Nothing to be done for `all'.
INFO: installing extension
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension/anon'
/usr/bin/install -c -m 644 .//anon.control '/u01/app/postgres/product/10/db_1/share/extension/'
/usr/bin/install -c -m 644 .//anon/anon--0.0.3.sql  '/u01/app/postgres/product/10/db_1/share/extension/anon/'
[postgres@pgserver2 ~]$

We can then verify that under /u01/app/postgres/product/10/db_1/share/extension we have a file anon.control and a directory named anon

[postgres@pgserver2 extension]$ ls -ltra anon*
-rw-r--r--. 1 postgres postgres 167 Sep 13 10:54 anon.control

anon:
total 18552
drwxrwxr-x. 3 postgres postgres    12288 Sep 13 10:54 ..
drwxrwxr-x. 2 postgres postgres       28 Sep 13 10:54 .
-rw-r--r--. 1 postgres postgres 18980156 Sep 13 10:54 anon--0.0.3.sql

Let’s create a database named prod and create the required extensions. tsm_system_rows should be delivered by the contrib modules.

prod=# \c prod
You are now connected to database "prod" as user "postgres".
prod=#
prod=# CREATE EXTENSION tsm_system_rows;;
CREATE EXTENSION
prod=#

prod=# CREATE EXTENSION anon;
CREATE EXTENSION
prod=#


prod=# \dx
                                 List of installed extensions
      Name       | Version |   Schema   |                        Description
-----------------+---------+------------+------------------------------------------------------------
 anon            | 0.0.3   | anon       | Data anonymization tools
 plpgsql         | 1.0     | pg_catalog | PL/pgSQL procedural language
 tsm_system_rows | 1.0     | public     | TABLESAMPLE method which accepts number of rows as a limit
(3 rows)

prod=#

The extension creates the following functions in the schema anon. These functions can be used to mask data.

prod=# set search_path=anon;
SET
prod=# \df
                                                 List of functions
 Schema |           Name           |     Result data type     |              Argument data types               |  Type
--------+--------------------------+--------------------------+------------------------------------------------+--------
 anon   | random_city              | text                     |                                                | normal
 anon   | random_city_in_country   | text                     | country_name text                              | normal
 anon   | random_company           | text                     |                                                | normal
 anon   | random_country           | text                     |                                                | normal
 anon   | random_date              | timestamp with time zone |                                                | normal
 anon   | random_date_between      | timestamp with time zone | date_start timestamp with time zone, date_end timestamp with time zone | normal
 anon   | random_email             | text                     |                                                | normal
 anon   | random_first_name        | text                     |                                                | normal
 anon   | random_iban              | text                     |                                                | normal
 anon   | random_int_between       | integer                  | int_start integer, int_stop integer            | normal
 anon   | random_last_name         | text                     |                                                | normal
 anon   | random_phone             | text                     | phone_prefix text DEFAULT '0'::text            | normal
 anon   | random_region            | text                     |                                                | normal
 anon   | random_region_in_country | text                     | country_name text                              | normal
 anon   | random_siren             | text                     |                                                | normal
 anon   | random_siret             | text                     |                                                | normal
 anon   | random_string            | text                     | l integer                                      | normal
 anon   | random_zip               | text                     |                                                | normal
(18 rows)

prod=#
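
A quick smoke test of a few of these functions (the values are random, so your output will differ):

select anon.random_email(), anon.random_country(), anon.random_int_between(1, 100);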

Now in the database prod let’s create a table with some data.

prod=# \d customers
                      Table "public.customers"
   Column   |         Type          | Collation | Nullable | Default
------------+-----------------------+-----------+----------+---------
 first_name | character varying(30) |           |          |
 last_name  | character varying(30) |           |          |
 email_add  | character varying(30) |           |          |
 country    | character varying(60) |           |          |
 iban       | character varying(60) |           |          |
 amount     | integer               |           |          |

prod=#

prod=# table customers;
 first_name | last_name |        email_add        |   country    |            iban            |   amount
------------+-----------+-------------------------+--------------+----------------------------+------------
 Michel     | Delco     | michel.delco@yaaa.fr    | FRANCE       | FR763000600001123456890189 |    5000000
 Denise     | Blanchot  | denise.blanchot@yaaa.de | GERMANY      | DE91100000000123456789     | 1000000000
 Farid      | Dim       | farid.dim@yaaa.sa       | Saudi Arabia | SA4420000001234567891234   |    2500000
(3 rows)

prod=#

Let’s say that I want some people to be able to access all the data in this table, but I don’t want them to see the real email, country and IBAN of the customers.
One solution is to create a view returning anonymous data for these columns, replacing them with random-but-plausible values:

prod=# create view Customers_anon as select first_name as Firstname ,last_name  as Lastnmame,anon.random_email() as Email ,anon.random_country() as Country, anon.random_iban() as Iban ,amount as Amount from customers;
CREATE VIEW

And then grant the access privilege to the concerned people, for example:
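
A minimal sketch (the role name is illustrative):

create role report_users login;
grant select on customers_anon to report_users;
-- and make sure the role cannot read the raw table
revoke all on customers from report_users;

Querying the view then returns anonymized data: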

prod=# select * from customers_anon ;
 firstname | lastnmame |             email             | country |            iban            |   amount
-----------+-----------+-------------------------------+---------+----------------------------+------------
 Michel    | Delco     | wlothean0@springer.com        | Spain   |  AD1111112222C3C3C3C3C3C3  |    5000000
 Denise    | Blanchot  | emoraloy@dropbox.com          | Myanmar |  AD1111112222C3C3C3C3C3C3  | 1000000000
 Farid     | Dim       | vbritlandkt@deliciousdays.com | India   |  AD1111112222C3C3C3C3C3C3  |    2500000
(3 rows)

prod=#
 

Cet article Masking Data With PostgreSQL est apparu en premier sur Blog dbi services.

Redhat Forum 2018 – everything is OpenShift

Wed, 2018-09-12 11:31

I had an exciting and informative day at the Redhat Forum Zurich.

After a short welcome in the packed movie theater in Zurich Sihlcity, we had the great pleasure of listening to Jim Whitehurst. With humor, he talked about the success of Redhat during the last 25 years.

IMG_20180911_094607

The partner and success stories of Vorwerk / Microsoft / Accenture / Avaloq / Swisscom and SAP impressively showed the potential and the usage of OpenShift.
firefox_2018-09-12_18-11-44

After the lunch break, which was great for networking and talking to some of the partners, the breakout sessions started.
The range of sessions showed the importance of OpenShift for agile businesses.

Here is a short summary of three sessions:
Philippe Bürgisser (acceleris) showed, using a practical example, the sticking points of bringing OpenShift into production.
PuzzleIT, adcubum and Helsana gave amazing insights into their journey of moving adcubum syrius to APPUiO.
RedHat and acceleris explained how Cloud/OpenShift simplifies and improves development cycles.
firefox_2018-09-12_17-50-40

During the closing note, Redhat took up the cudgels for women in IT and their importance, a surprising and appreciated aspect: (Red)Hats off!
Thank you for that great event! Almost 900 participants this year can’t be wrong.
firefox_2018-09-12_18-05-56

 

Cet article Redhat Forum 2018 – everything is OpenShift est apparu en premier sur Blog dbi services.

[Oracle 18c new feature] Quick win to improve queries with Memoptimized Rowstore

Tue, 2018-09-11 09:18

With its 18th release, Oracle comes with many improvements. Some of them are obvious and some of them more subtle.
This is the case of the new buffer area called the Memoptimize pool. This new area, part of the SGA, is used to store the data and metadata of standard Oracle tables (heap-organized tables) to significantly improve the performance of queries filtering on primary keys.

This new MEMOPTIMIZE POOL memory area is split into 2 parts:

  • Memoptimize buffer area: 75% of the space, reserved to store table buffers the same way as they are stored in the so-called buffer cache
  • Hash index: 25% of the space, reserved to store the hash index of the primary keys of the tables in the Memoptimize buffer area

To manage this space, a new parameter MEMOPTIMIZE_POOL_SIZE is available, unfortunately not dynamic. This parameter is fixed at instance startup and is not managed by the database automatic memory management. It takes its space from SGA_TARGET, so be careful when sizing it.

Before this new memory structure, clients querying a standard table with a filter on its PK (e.g. WHERE COL_PK = X) had to wait on I/Os from disk to memory until the X value was reached in the index, then on I/Os again from disk to memory to fetch the table block containing the row where COL_PK = X. This mechanism consumes I/Os of course, but also CPU cycles, because it involves other processes of the instance that need to perform some tasks.

Now, thanks to this new memory space, when a client runs the exact same query WHERE COL_PK = X, it can directly hash the value and walk through the hash index to find the row location in the Memoptimize buffer area. The result is then directly picked up by the client process. This results in less CPU consumption and fewer disk I/Os in most cases, at the cost of memory space.

When to use it?

It is only useful for queries on a table with an equality filter on the PK. You can balance the need against the size of the table concerned and the frequency of such queries.

Activation in 4 steps

  1. Check that the COMPATIBLE parameter is set to 18.0.0 or higher
  2. Set the parameter MEMOPTIMIZE_POOL_SIZE to the desired value (restart required)
  3. Alter (or create) the target table with the “MEMOPTIMIZE FOR READ” clause
  4. Execute the procedure “DBMS_MEMOPTIMIZE.POPULATE( )” to populate the MEMOPTIMIZE POOL with the target table, as sketched below
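
A minimal sketch putting these steps together (pool size, schema and table names are illustrative, and the POPULATE argument list is assumed from the package documentation):

SQL> alter system set memoptimize_pool_size=512M scope=spfile;  -- restart the instance afterwards
SQL> alter table app.customers memoptimize for read;            -- the table needs an enabled primary key
SQL> exec dbms_memoptimize.populate('APP', 'CUSTOMERS');        -- load the table into the pool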

How to remove a table from the MEMOPTIMIZE POOL ?

With the procedure DROP_OBJECT() from the DBMS_MEMOPTIMIZE package.

You can disable access to this new MEMOPTIMIZE POOL by using the clause “NO MEMOPTIMIZE FOR READ”.

 

I hope this helps, and please do not hesitate to contact us should you want more details.

Nicolas

 

Cet article [Oracle 18c new feature] Quick win to improve queries with Memoptimized Rowstore est apparu en premier sur Blog dbi services.

PII search using HCI

Tue, 2018-09-11 05:04

In a previous blog, we described how to install Hitachi Content Intelligence, the Hitachi Vantara solution for data indexing and search. In this blog post, we will see how to use Hitachi Content Intelligence to perform a basic search for personally identifiable information (PII).

Data Connections

HCI allows you to connect to multiple data sources using the default data connectors. The first step is to create a data connection. By default, multiple data connectors are available:

HCI_data_connectors

For our example, we will simply use the Local File System as the data repository. Note that the directory must be within the HCI installation directory.

Below is the data connection configuration for our PII demo.

HCI_Data_Connection

Click on Test after adding the information and click on Create.

A new data connection will appear in your dashboard.

Processing Pipelines
After creating the data connection, we will build a processing pipeline for our PII example.
Click on Processing Pipelines > Create a Pipeline. Enter a name for your pipeline (optionally a description) and click on Create.
Click on Add Stages, and create your desired pipeline. For PII search we will use the following pipeline.
HCI_PII_Pipeline
After building your pipeline, you can test it by clicking on the Test Pipeline button at the top right of your page.
Index Collections
We should now create an index collection to specify how we want to index the data set.
First, click on Create Index inside the Index Collections button. Create an HCI index and use the schemaless option.
HCI_Index
Content Classes
Then you should create content classes to extract the desired information from your data set. For our PII example, we will create 3 content classes: one for American Express credit card numbers, one for Visa credit card numbers, and one for Social Security numbers.
HCI_content_classes
For American Express credit cards, you should add the following pattern.
HCI_AMEX
Pattern for Visa credit card.
HCI_VISA
Pattern for Social Security Number.
HCI_SSN
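For reference, typical regular expressions for these three content classes look like the following (illustrative patterns; the exact expressions in the screenshots above may differ):

# American Express: 15 digits starting with 34 or 37
3[47][0-9]{13}
# Visa: 16 digits starting with 4
4[0-9]{15}
# US Social Security Number: 3-2-4 digit groups
[0-9]{3}-[0-9]{2}-[0-9]{4}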
Start your workflow
When all the steps are completed, you can start your workflow and wait until it finishes.
HCI_Workflow_Start
HCI Search
Use the HCI Search application to visualize the results.
https://<hci_instance_ip>:8888
Select your index name in the search field and navigate through the results.
You can also display the results as charts and graphs.
HCI_Search_Graph
This demo is also available on the Hitachi Vantara Community website: https://community.hitachivantara.com/thread/13804-pii-workflows
 

Cet article PII search using HCI est apparu en premier sur Blog dbi services.

PDB lockdown with Oracle 18.3.0.0

Mon, 2018-09-10 11:01

The PDB lockdown feature offers you the possibility to restrict operations and functionality available from within a PDB, and might be very useful from a security perspective.

Some new features have been added to the 18.3.0.0 Oracle version:

  • You have the possibility to create PDB lockdown profiles in the application root, as in the CDB root. This facilitates more precise control of access to the applications associated with the application container.
  • You can create a PDB lockdown profile from another PDB lockdown profile.
  • Three default PDB lockdown profiles have been added : PRIVATE_DBAAS, SAAS and PUBLIC_DBAAS
  • v$lockdown_rules is a new view allowing you to display the contents of a PDB lockdown profile (see the example query below).
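
As a quick illustration, once a profile is active you can inspect its rules like this (a minimal sketch; the exact column list is assumed from the documentation):

SQL> select rule_type, rule, clause, status from v$lockdown_rules;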

Let’s make some tests:

At first we create a lockdown profile from the CDB (as we did with Oracle 12.2)

SQL> create lockdown profile psi;

Lockdown Profile created.

We alter the lockdown profile to disable every ALTER SYSTEM SET clause on the PDB side except for open_cursors:

SQL> alter lockdown profile PSI disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

Then we enable the lockdown profile:

SQL> alter system set PDB_LOCKDOWN=PSI;

System altered.

We can check the pdb_lockdown parameter value from the CDB side:

SQL> show parameter pdb_lockdown

NAME				     TYPE	 VALUE
------------------------------------ ----------- -------
pdb_lockdown			     string	 PSI

What happens on the PDB side?

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set optimizer_mode='FIRST_ROWS_10';
alter system set optimizer_mode='FIRST_ROWS_10'
*
ERROR at line 1:
ORA-01031: insufficient privileges

This is a good feature, allowing a greater degree of separation between the different PDBs of the same instance.

We can create a lockdown profile disabling partitioned table creation:

SQL> connect / as sysdba
Connected.
SQL> create lockdown profile psi;

Lockdown Profile created.

SQL> alter lockdown profile psi disable option=('Partitioning');

Lockdown Profile altered.

SQL> alter system set pdb_lockdown ='PSI';

System altered.

On the CDB side, we can create partitioned tables:

SQL> create table emp (name varchar2(10)) partition by hash(name);

Table created.

On the PDB side we cannot create partitioned tables:

SQL> alter session set container = pdb;

Session altered.

SQL> show parameter pdb_lockdown

NAME				     TYPE
------------------------------------ --------------------------------
VALUE
------------------------------
pdb_lockdown			     string
APP
SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

We now have the possibility to create a lockdown profile from another one:

Remember that we have the PDB lockdown profile APP disabling partitioned table creation; we can create a new APP_HR lockdown profile from the APP lockdown profile and add new restrictions to the APP_HR one:

SQL> create lockdown profile app_hr from app;

Lockdown Profile created.

The app_hr lockdown profile will not have the possibility to run alter system flush shared_pool:

SQL> alter lockdown profile app_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

We can query the dba_lockdown_profiles view:

SQL> SELECT profile_name, rule_type, rule, status 
     FROM   dba_lockdown_profiles order by 1;

PROFILE_NAME    RULE_TYPE    RULE            STATUS
--------------- ------------ --------------- --------
APP             OPTION       PARTITIONING    DISABLE
APP_HR          STATEMENT    ALTER SYSTEM    DISABLE
APP_HR          OPTION       PARTITIONING    DISABLE
SQL> alter system set pdb_lockdown=app_hr;

System altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system flush shared_pool ;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

If we reset the pdb_lockdown to app, we now can flush the shared pool:

SQL> alter system set pdb_lockdown=app;

System altered.

SQL> alter system flush shared_pool ;

System altered.

We now can create lockdown profiles in the application root, so let’s create an application PDB:

SQL> CREATE PLUGGABLE DATABASE apppsi 
AS APPLICATION CONTAINER ADMIN USER app_admin IDENTIFIED BY manager
file_name_convert=('/home/oracle/oradata/DB18', 
'/home/oracle/oradata/DB18/apppsi');  

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB				  READ WRITE NO
	 4 APPPSI			  MOUNTED

We open the application PDB:

SQL> alter pluggable database apppsi open;

Pluggable database altered.

We connect to the application container :

SQL> alter session set container=apppsi;

Session altered.

We have the possibility to create a lockdown profile:

SQL> create lockdown profile apppsi;

Lockdown Profile created.

And to disable some features:

SQL> alter lockdown profile apppsi disable option=('Partitioning');

Lockdown Profile altered.

But there is a problem if we try to enable the profile:

SQL> alter system set pdb_lockdown=apppsi;
alter system set pdb_lockdown=apppsi
*
ERROR at line 1:
ORA-65208: Lockdown profile APPPSI does not exist.

And, surprise, we cannot create a partitioned table:

SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

Let’s do some more tests: we alter the lockdown profile like this:

SQL> alter lockdown profile apppsi disable statement=('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

In fact, we cannot use SYS to test lockdown profiles in the APP root; we have to use an application user with privileges such as CREATE or ALTER LOCKDOWN PROFILE in the application container. So, after creating an appuser in the application root:

SQL> connect appuser/appuser@apppsi

SQL> create lockdown profile appuser_hr;

Lockdown Profile created.

SQL> alter lockdown profile appuser_hr disable option=('Partitioning');

Lockdown Profile altered.

And now it works fine:

SQL> alter system set pdb_lockdown=appuser_hr;

System altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

And now, can we re-enable the partitioning option for the appuser_hr profile in the APP root?

SQL> alter lockdown profile appuser_hr enable option = ('Partitioning');

Lockdown Profile altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

It does not work as expected: the lockdown profile has been updated, but as before we still cannot create a partitioned table.

Let’s do another test with the statement option: we alter the lockdown profile in order to disable all ALTER SYSTEM SET statements except for open_cursors:

SQL> alter lockdown profile appuser_hr disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter system set open_cursors=500;

System altered.

This is a normal behavior.

Now we alter the lockdown profile in order to disable alter system flush shared_pool:

SQL> alter lockdown profile appuser_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

That’s fine :=)

Now we enable the statement:

SQL> alter lockdown profile appuser_hr enable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

And again this is not possible …

Let’s try in the CDB root:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app disable statement =('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

The behavior is correct; let’s try to enable it:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app enable statement=('ALTER SYSTEM') 
clause ALL;

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';

System altered.

This is correct again, it seems it does not work correctly in the APP root …

In conclusion, the new lockdown profile features are powerful and very useful from a security perspective. They allow DBAs to restrict, with fine granularity, users’ rights to only what they need to access. But we have to be careful: with PDB lockdown profiles we can easily end up with a very complicated database administration.

 

Cet article PDB lockdown with Oracle 18.3.0.0 est apparu en premier sur Blog dbi services.

Connecting to Azure SQL Managed Instance from on-premise network

Mon, 2018-09-10 06:26

A couple of weeks ago, I wrote about my first immersion into SQL Server Managed Instances (SQL MIs), a new deployment model of Azure SQL Database which provides near 100% compatibility with the latest SQL Server on-premises database engine. In the previous blog post, to test a connection to this new service, I installed an Azure virtual machine on the same VNET (172.16.0.0/16), including SQL Server Management Studio. For testing purposes we don’t need more, but in a real production scenario chances are that your Azure SQL MI will be part of your on-premise network, with a more complex Azure network topology including VNETs, ExpressRoute or a VPN S2S as well. Implementing such an infrastructure will likely not be your concern if you are a database administrator, but you need to be aware of the underlying connectivity components, how to diagnose possible issues, and how to interact with your network team, in order to avoid being under pressure and feeling the wrath of your application users too quickly :)

So, I decided to implement this kind of infrastructure in my lab environment, but if, like me, you’re not a network guru, you will likely face some difficulties configuring some components, especially when it comes to the VPN S2S. In addition, you have to understand several new notions about Azure networking before hoping to see your infrastructure work correctly. As an old sysadmin, I admit it was a great opportunity to turn my network learning into a concrete use case. Let’s first set the initial context. Here is my lab environment, which I’ve been using for a while for different purposes such as internal testing and event presentations. It addresses a lot of testing scenarios, including multi-subnet architectures with SQL Server FCIs and SQL Server availability groups.

blog 142 - 1 - lab environment

Obviously, some static routes were already set up to allow network traffic between my on-premise subnets. As you guessed, the game consisted in extending this on-premise network to my SQL MI network on Azure. As a reminder, SQL MI is not reachable from a public endpoint; you may connect only from an internal network (either directly from Azure or from your on-premise network). As said previously, one of my biggest challenges was to configure my remote access server as a VPN server to communicate with my SQL MI Azure network. Fortunately, there are plenty of pointers on the internet that may help you achieve this task. This blog post is a good walk-through, by the way. In my context, you will note I also had to apply special settings to my home router in order to allow IPsec passthrough, as well as to add my RRAS server internal IP (192.168.0.101) to the DMZ. I also used the IKEv2 VPN protocol and a pre-shared key for authentication between my on-premise and Azure gateways. The VPN S2S configuration is environment specific, and this is probably why doing a presentation at a customer site or at events is so difficult if you’re outside of your own network.

Anyway, let’s talk about the Azure side configuration. My Azure network topology is composed of two distinct VNETs as follows:

blog 142 - 2 - VNET configuration

The connection between my on-premise and my Azure networks are defined as shown below:

$vpnconnection = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg 
$vpnconnection.Name
$vpnconnection.VirtualNetworkGateway1.Id
$vpnconnection.LocalNetworkGateway2.Id 

dbi-vpn-connection
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The first VNET (172.17.x.x) is used as the hub virtual network and owns my gateway. The second one (172.16.x.x) is the SQL MI VNET:

Get-AzureRmVirtualNetwork | Where-Object { $_.Name -like '*-vnet' } | % {

    Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $_ | Select Name, AddressPrefix
} 

Name          AddressPrefix  
----          -------------  
default       172.17.0.0/24  
GatewaySubnet 172.17.1.0/24  
vm-net        172.16.128.0/17
sql-mi-subnet 172.16.0.0/24

 

My Azure gateway subnet (GatewaySubnet) is part of the VPN connectivity, with the related gateway connections:

$gatewaycfg = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg -Name dbi-vpn-connection 
$gatewaycfg.VirtualNetworkGateway1.Id
$gatewaycfg.LocalNetworkGateway2.Id 

/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The dbi-local-network-gw local gateway includes the following address prefixes, which correspond to my local lab environment network:

$gatewaylocal = Get-AzureRMLocalNetworkGateway -ResourceGroupName dbi-onpremises-rg -Name dbi-local-network-gw 
$gatewaylocal.LocalNetworkAddressSpace.AddressPrefixes 

192.168.0.0/16
192.168.5.0/24
192.168.40.0/24

 

Note that I’ve chosen a static configuration, but my guess is that I could turn to the BGP protocol instead to make things more dynamic. I will talk quickly about BGP and routing issues at the end of the write-up. But at this stage, some configuration steps are still missing before I can hope to reach my SQL MI instance from my lab environment network. Indeed, although my VPN connection status was ok, I was only able to reach my dbi-onpremises-vnet VNET, and I needed a way to connect to the sql-mi-vnet VNET. So, I had to turn on both virtual network peering and the gateway transit mechanism. When 2 VNETs are peered, Azure automatically routes traffic between them, by the way.

blog 142 - 3 - VNET configuration peering

Here is the peering configuration I applied to my dbi-onpremises-vnet VNET (first VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet 
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : dbi-onpremises-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : True
UseRemoteGateways         : False

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/sql-mi-rg/providers/Microsoft.Network/virtualNetworks/sql-mi-vnet

 

And here is the peering configuration of my sql-mi-vnet VNET (2nd VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : sql-mi-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : False
UseRemoteGateways         : True

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworks/dbi-onpremises-vnet

 

Note that to allow traffic coming from my on-premise network to go through my first VNET (dbi-onpremises-vnet) to the destination, the second one (sql-mi-vnet), I needed to enable some configuration settings such as Allow Gateway Transit, Allow Forwarded Traffic and Use Remote Gateways on the concerned networks.
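
For reference, the equivalent PowerShell commands look roughly like this (a sketch using the AzureRM module; the $dbiOnPremisesVnet and $sqlMiVnet variables are assumed to hold the two virtual network objects):

# Hub VNET side: allow its gateway to be used by the peered VNET
Add-AzureRmVirtualNetworkPeering -Name "dbi-to-sqlmi" `
  -VirtualNetwork $dbiOnPremisesVnet -RemoteVirtualNetworkId $sqlMiVnet.Id `
  -AllowGatewayTransit -AllowForwardedTraffic

# SQL MI VNET side: consume the remote gateway of the hub VNET
Add-AzureRmVirtualNetworkPeering -Name "sqlmi-to-dbi" `
  -VirtualNetwork $sqlMiVnet -RemoteVirtualNetworkId $dbiOnPremisesVnet.Id `
  -UseRemoteGateways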

At this stage, I still faced a weird issue: I was able to connect to a virtual machine installed on the same VNET as my SQL MI, but had no luck with the SQL instance. In addition, the psping command output confirmed my connectivity issue, letting me suspect a routing issue.

blog 142 - 5 - psping command output
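
For reference, the connectivity test was of the following form (the instance FQDN is a placeholder; port 1433 is assumed here, depending on your connection policy):

psping.exe sqlmi-dbi.xxxxxxxx.database.windows.net:1433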

Routes from my on-premise network seemed to be well configured, as shown below. The VPN is a dial-up internet connection in my case.

blog 142 - 6 - local route

I also got confirmation from the Microsoft support team (a particular thanks to Filipe Bárrios, support engineer Azure Networking) that my on-premises network packets were correctly sent through my Azure VPN gateway. In fact, I got stuck for a couple of days without figuring out exactly what was happening. Furthermore, checking effective routes did not seem to be a viable option in my case, because there is no explicit network interface with SQL MI. Please feel free to comment if I am wrong on this point. Fortunately, I found a PowerShell script provided by Jovan Popovic (MSFT) which put me on the right track:

blog 142 - 4 - VNET PGP configuration

Referring to the Microsoft documentation, it seems BGP route propagation could be very helpful in my case:

Support transit routing between your on-premises networks and multiple Azure VNets

BGP enables multiple gateways to learn and propagate prefixes from different networks, whether they are directly or indirectly connected. This can enable transit routing with Azure VPN gateways between your on-premises sites or across multiple Azure Virtual Networks.

After enabling the corresponding option in the SQL MI route table and opening the SQL MI ports in my firewall, the connection was finally successful.
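
In AzureRM terms, enabling gateway route propagation on the route table could look like the following sketch (the route table name is hypothetical, and the DisableBgpRoutePropagation property requires a reasonably recent AzureRM.Network module):

# propagation is active when the Disable* flag is $false
$rt = Get-AzureRmRouteTable -ResourceGroupName sql-mi-rg -Name sql-mi-subnet-rt
$rt.DisableBgpRoutePropagation = $false
Set-AzureRmRouteTable -RouteTable $rt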

blog 142 - 8 - sql mi route bgp

blog 142 - 7 - psping output 2

Hope it helps!

See you

 

 

The article Connecting to Azure SQL Managed Instance from on-premise network appeared first on Blog dbi services.

Oracle 18c: Cluster With Oracle ASM Filter Driver

Sat, 2018-09-08 17:32

During the installation of Oracle Grid Infrastructure, you can optionally enable automated installation and configuration of Oracle ASM Filter Driver for your system with the Configure ASM Filter Driver check box on the Create ASM Disk Group wizard page. When you enable the Configure ASM Filter Driver box, an automated process for Oracle ASMFD is launched during Oracle Grid Infrastructure installation.

If Oracle ASMLIB exists on your Linux system, then deinstall Oracle ASMLIB before installing Oracle Grid Infrastructure, so that you can choose to install and configure Oracle ASMFD during an Oracle Grid Infrastructure installation.
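
A quick way to check for ASMLIB, as a sketch (the package names below are the usual ones and may differ on your distribution):

[root@rac18ca ~]# rpm -qa | grep -i oracleasm
# remove them, if any, before installing Grid Infrastructure:
[root@rac18ca ~]# yum remove -y oracleasm-support oracleasmlib
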
In this blog I install a two-node Oracle 18c cluster using Oracle ASMFD. Below are the disks we will use.

[root@rac18ca ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:09 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:09 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:09 /dev/sdf
[root@rac18ca ~]#

[root@rac18cb ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:46 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:46 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:46 /dev/sdf
[root@rac18cb ~]#

We suppose that all prerequisites are met (public IPs, private IPs, SCAN, shared disks, …). Also, we will not show all the screenshots.
The first step is to unzip the Oracle software into the ORACLE_HOME for the Grid Infrastructure.

unzip -d /u01/app/grid/18.0.0.0 LINUX.X64_180000_grid_home.zip

Then we use the ASMCMD afd_label command to provision the disk devices for use with Oracle ASM Filter Driver, as follows.

[root@rac18ca ~]# export ORACLE_HOME=/u01/app/oracle/18.0.0.0/grid
[root@rac18ca ~]# export ORACLE_BASE=/tmp                                       
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label VOTOCR /dev/sde --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DATA /dev/sdd --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DIVERS /dev/sdf --init
[root@rac18ca ~]#

We can then use the ASMCMD afd_lslbl command to verify that the devices have been marked for use with Oracle ASMFD.

[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sde
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
VOTOCR                                /dev/sde
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdd
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA                                  /dev/sdd
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdf
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DIVERS                                /dev/sdf
[root@rac18ca network-scripts]#

Now that disks are initialized for ASMFD, we can start the installation.

[oracle@rac18ca grid]$ ./gridSetup.sh

We will not show all the pictures.

imag1

imag2

imag3

imag4

imag5

imag6

imag7

And in the next window, we can choose the disks for the OCR and voting files. We also check the Configure Oracle ASM Filter Driver box.

imag8

And then we continue the installation. When prompted, we have to run the orainstRoot.sh and root.sh scripts as root on both nodes, along the lines sketched below (the remaining installer screens are not shown here).
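
A sketch of the root script execution (the oraInventory location is the usual default and an assumption here; the same scripts are run on rac18cb):

[root@rac18ca ~]# /u01/app/oraInventory/orainstRoot.sh
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/root.sh
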
At the end of the installation, we can verify the status of the cluster:

[oracle@rac18cb ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[oracle@rac18ca ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_VOTOCR.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.net1.network
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.ons
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac18ca                  Started,STABLE
      2        ONLINE  ONLINE       rac18cb                  Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18ca.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18cb.vip
      1        ONLINE  ONLINE       rac18cb                  STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
--------------------------------------------------------------------------------
[oracle@rac18ca ~]$

We can also check that ASMFD is enabled:

[oracle@rac18ca ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DATA                        ENABLED   /dev/sdd
DIVERS                      ENABLED   /dev/sdf
VOTOCR                      ENABLED   /dev/sde
[oracle@rac18ca ~]$
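
The state of the driver itself can also be queried with the afd_state command, which reports whether AFD is loaded and whether filtering is enabled on the host:

[oracle@rac18ca ~]$ asmcmd afd_state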


[oracle@rac18ca ~]$ asmcmd dsget
parameter:/dev/sd*, AFD:*
profile:/dev/sd*,AFD:*
[oracle@rac18ca ~]$

[oracle@rac18ca ~]$ asmcmd lsdsk
Path
AFD:DATA
AFD:DIVERS
AFD:VOTOCR
[oracle@rac18ca ~]$

Conclusion
In this blog we have seen how to install a two-node Oracle 18c cluster using ASMFD.

 

The article Oracle 18c: Cluster With Oracle ASM Filter Driver appeared first on Blog dbi services.
