Yann Neuhaus

Doc.SERIES – The « User Friendly » ECM solution

Fri, 2023-11-17 04:15

There are several Enterprise Content Management (ECM) tools on the market that are very powerful in their functionalities but are not always easy to configure or maintain, particularly if the size of your company does not allow for a large IT department.

However, good document management is a necessity for any company in order to:

– Avoid administrative pitfalls

– Optimize the sales process, personnel management, purchasing management, etc.

Doc.SERIES combines simplicity of use with powerful configuration options. Without being as powerful and expensive as a solution like Documentum or Alfresco, this ECM solution perfectly meets the needs of SME/SMI type companies for this kind of application.

This is the first post in a series that will describe the different functionalities and artefacts used in a Doc.SERIES application deployment.

The Doc.SERIES solution uses a modular architecture. It is structured around a main module, Doc.ECM, onto which various specialized modules are added. In other words, users only use and pay for the modules they need.

Let’s do a quick presentation of the solution and its modules.

Doc.ECM

Doc.ECM is the heart of the system and the only mandatory module required to use the tool. It manages all your document management processes and can work very well independently, without any other module being installed.

I will come back to this module in more detail.

Doc.Capture

The Doc.Capture module is used to automate the digitization of your documents via an OCR process (Optical Character Recognition).

Doc.Convert

The Doc.Convert module allows you to convert large quantities of documents into PDF format, including image-type documents (JPEG, TIFF, etc.).

Doc.Printer

The Doc.Printer module allows you to send all types of documents from your business applications to Doc.ECM.

Doc.Loader

The Doc.Loader module is a tool that allows you to quickly transfer, scan, and hierarchize documents saved in directories into Doc.ECM.

Doc.Mobility

Doc.Mobility is the module of the Doc.SERIES solution that allows you to scan, validate or search documents from a tablet, smartphone or any other mobile device.

Doc.Import

Doc.Import is the module that will allow you to automatically import documents saved in a specific folder into Doc.ECM.

Doc.Office

This module will allow you to send your documents directly to Doc.ECM via your Office tools.

Doc.Desktop

This last module allows you to directly export documents from Windows Explorer or your Windows Desktop to Doc.SERIES or other applications such as CRM, Accounting Solution, etc…

As described above, the heart of the Doc.SERIES solution is the Doc.ECM module. Ergonomic, intuitive and complete, it can operate independently, without the support of other modules.

The User Interface

The main interface of the application consists of three main parts:

Applications Menus

  • Each user may or may not have access to the different menus of the application.
  • The rights on the menus are completely and very easily configurable according to the different user profiles that you define.
  • Menu management for application administrators is done via a dialog box. No command line is needed.

The Search Interface

  • Doc.ECM uses a very powerful, easy-to-use, multi-criteria search engine that can scan thousands of documents and their content very quickly.
  • The search can be global or more targeted.

The Document Management Interface

  • This is the main document management dashboard.
  • It mainly consists of contextual business-oriented “Document baskets”.
  • Document management is done through flows moving the document from one status to another.
  • It is intuitive and easily customizable by each user.
  • It gives the possibility of displaying information on document distribution flows.
  • It can display statistics or reports regarding document activity.

Adding a new document

  • Add a new document using a drag / drop action.
  • Complete the metadata manually or using an automatic process based on a RAD/LAD technology.

The Doc.SERIES ECM solution is a good mix of user-friendly handling and advanced document management.

One of its strongest advantages is that the tool is really easy to get to grips with, and it can be used in all company departments thanks to the Mobility functionalities.

In conclusion, using this solution gives your company a major asset for all the document management tasks that cost energy, money and workload.

A “must-have”…

Publisher Info @Doc.SERIES : https://www.doc-series.ch

SQL Server Integrated acceleration & offloading

Thu, 2023-11-16 01:30
Introduction

I was asked by one of my customers to have a look at the SQL Server 2022 Integrated acceleration & offloading feature. Its goal is to provide a framework for offloading specific SQL Server compute workloads to hardware devices.

At first glance it seems very interesting to speed up, for instance, backup tasks, especially for big databases.

But reading the documentation, specifically the Intel driver information, I was disappointed, because it specified that the CPU must support the QuickAssist Technology (Intel QAT) and that, for virtualized Windows Server, only Hyper-V was supported.

My customer's environment was not in this situation at all, running SQL Server on VMware with host CPUs that do not provide the QAT feature.

I therefore thought that it was not possible to use this new feature, but my customer found a blog post explaining that the CPU does not necessarily need the QAT feature: the acceleration can also be done at the software level only, without the hardware.

Well, I was then curious to test this solution and see if it can really save time on backup tasks.

Let's first have a look at what we need to install, how to install the needed drivers, and how to configure SQL Server to be able to use this Integrated acceleration & offloading feature.

Installation of the latest Intel QAT driver

You will be able to find and download the latest Intel QAT driver from the following link:

Intel® QuickAssist Technology Driver for Windows* – HW Version 2.0

Save the Intel QAT driver in c:\temp and unzip the QAT2.0.W.2.0.5-0014.zip file into the C:\temp\IntelQAT\QAT2.0.W.2.0.5-0014 folder.

To install the Intel QAT driver, start QatSetup.exe located in C:\temp\IntelQAT\QAT2.0.W.2.0.5-0014\QuickAssist\Setup.

Click [Yes] on the warning to use the software compression option. In this version it is only a warning asking whether you want to use software compression, because the QuickAssist Technology accelerator is not available on the processor hardware of the system. In older driver installation wizards it was an error message, which was a bit scary, but the warning message reassured me that the driver can be used even if the host CPU has no QAT capability.

For the driver installation, that is all that needs to be done. Very easy, in fact.

Configuration of the SQL Server instance

In order to configure the SQL Server instance I did the following steps:

  • Enable hardware offloading, setting the server configuration option
execute sp_configure 'show advanced options', 1
RECONFIGURE
execute sp_configure 'hardware offload enabled', 1
RECONFIGURE
GO
  • Restart the SQL Server service (in case of an AlwaysOn setup, if your server is the primary, fail over the availability group to the other replica first)
  • Enable accelerator hardware offloading
ALTER SERVER CONFIGURATION  
SET HARDWARE_OFFLOAD = ON (ACCELERATOR = QAT);
  • Restart the SQL Server service
  • Check if your configuration is set correctly

First, run the following query:

 

select *
from sys.configurations c
where c.name like 'backup compression%'
or c.name like '%offload%'
order by c.name

Check that the value of the following SQL Server instance configuration parameter is set:
hardware offload enabled = 1

Check if the IntelQAT driver is detected

select * from sys.dm_server_accelerator_status

Check that you have the QAT accelerator: the [mode_reason_desc] column should have the value ‘SOFTWARE_MODE_ACCELERATOR_HARDWARE_NOT_FOUND’ when running in software mode.
Check also the [accelerator_library_version] column to see the version of the detected driver.

Test the backup

With the default compression algorithm (MS_XPRESS):

BACKUP DATABASE [TestDB] TO DISK = N'\\BackupShareDrive\TestDB_FULLbackup_CopyOnly_20231102_MS_XPRESS.bak' 
WITH CHECKSUM, COMPRESSION, COPY_ONLY

Processed 15956856 pages for database 'TestDB', file 'TestDB' on file 1.
Processed 5 pages for database 'TestDB', file 'TestDB_log' on file 1.
BACKUP DATABASE successfully processed 15956861 pages in 499.640 seconds (249.505 MB/sec).

Completion time: 2023-11-02T15:33:54.1892323+01:00


With the Intel QAT compression algorithm (QAT_DEFLATE):

BACKUP DATABASE [TestDB] TO DISK = N'\\BackupShareDrive\TestDB_FULLbackup_CopyOnly_20231102_QAT_DEFLATE.bak' 
WITH CHECKSUM, COMPRESSION (ALGORITHM = QAT_DEFLATE), COPY_ONLY,  STATS = 10 


Processed 15956704 pages for database 'TestDB', file 'TestDB' on file 1.
Processed 1 pages for database 'TestDB', file 'TestDB_log' on file 1.
BACKUP DATABASE successfully processed 15956705 pages in 238.594 seconds (522.484 MB/sec).

Completion time: 2023-11-02T15:40:47.8642423+01:00

We can notice that the backup time was reduced by a factor of 2 (from roughly 500 seconds down to 239 seconds).

Check the backup information

select
bs.database_name,
bs.backup_start_date,
bs.backup_finish_date,
1.0 * datediff(second,bs.backup_start_date,bs.backup_finish_date)/60 as BackupDuration_Min,
bs.backup_finish_date,
bs.backup_size,
bs.compressed_backup_size,
bs.compression_algorithm

from msdb.dbo.backupset bs
where bs.database_name = 'TestDB'


Configure backups to use the Intel QAT software compression algorithm (QAT_DEFLATE) by default:

execute sp_configure 'backup compression algorithm', 2
RECONFIGURE

If you now start a database backup, the QAT_DEFLATE compression algorithm will be applied by default. You do not have to specify it explicitly in your backup command.

Conclusion

The feature is quite easy to configure. It can speed up your big database backups with very little configuration effort.

It is a means to save time, and disk space too, although the compression ratio is not really higher than with the standard compression mode.

It is just annoying that the Intel QAT driver documentation specifies that only processors with the QAT feature and Hyper-V virtualization are supported by this driver... but it seems to work well using the software-only option.

But before deciding to go for it, of course, I do not have to tell you: please test the restore process...

Cloudbees CD/RO, send and receive parameters with bash / Ansible scripts

Mon, 2023-11-13 09:00

Sending a pipeline parameter to a bash or Ansible script is easy, but the reverse is less obvious.

In this article, we’ll look at how to return parameters from an Ansible script to a Cloudbees CD/RO job.

The easy way

You can easily transfer a parameter from the Cloudbees CD/RO procedure to an Ansible script. Simply pass the parameter or variable to the command to execute:

ansible-playbook playbooks/pipeline_action.yml -l $[ansibleHost] -e myAnsibleParam="${myCDParams}"
Types of arguments

Many kinds of data can be used to transfer values to an Ansible script:

  • Input Parameter, from a pipeline or procedure form. Example: $[myParameter]
  • Properties, from any object in Cloudbees CD/RO. Example: $[/projects/my_project_name/my_property]
  • Variable, defined at procedure runtime. Example: ${my_variables}
Getting data, the hard way

Sending data is easy, but receiving data from a script to use it in a pipeline is more complicated.

To get data out of a script, the script needs to create a property on the job. After that, the property can be used in the rest of the procedure run.

Running a script to get data

Simply run a script, such as:

ansible-playbook playbooks/get_values.yml 

Parameter transfer takes place inside the script using the Cloudbees CD/RO command ectool:

- name: Define environment
  command: "ectool setProperty /myJob/environmentCreated {{ success_state }}"

This task defines an “environmentCreated” property in the job scope, with the value of the Ansible variable success_state.

You can create as many properties as you need.
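
The same mechanism works from a plain bash step; here is a minimal sketch, assuming ectool is available on the agent (property names and values are purely illustrative):

#!/bin/bash
# Store results of a bash step as properties in the job scope (names and values are examples)
ectool setProperty /myJob/environmentCreated "success"
ectool setProperty /myJob/deployedVersion "1.4.2"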

Using the property

Like all properties, it can be accessed using $[…]
Example:

if [[ "$[/myJob/environmentCreated]" == "success" ]]; then
   ...
fi
Conclusion

Retrieving values from a script in Cloudbees pipelines is perfectly possible using ectool; however, scripts need to be adapted to define properties that can then be used in procedures.

To learn more about properties, see the Cloudbees CD/RO documentation.

Monitor Microsoft SQL Server with Zabbix

Mon, 2023-11-13 07:12

In this blog post, we will quickly see what is required to monitor a Microsoft SQL Server with Zabbix.

Agent Setup

Agent setup is not required by the MSSQL by ODBC template; thus, this step can be skipped.

Setup Monitoring Account

The best practice is to create a dedicated monitoring account in the database with limited permissions (view, select). The setup chapter of the MS SQL integration on the Zabbix web site explains it well.

In our example, this user will be named zbx_monitor.

ODBC Driver Linux

Zabbix server will trigger SQL queries directly against MS SQL Server. To achieve this, it requires ODBC to be set up locally.

You can refer to the Microsoft documentation, which provides all the steps to install the ODBC driver depending on your Linux flavor. Roughly, the installation is composed of the following steps (a sketch for RHEL-based systems follows this list):

  1. Add the Microsoft repository
  2. Uninstall any previously installed ODBC packages
  3. Install the Microsoft driver
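
As a minimal sketch for a RHEL 8 based Zabbix server (adapt the repository URL and package versions to your distribution; the commands below follow the Microsoft documentation):

sudo curl -o /etc/yum.repos.d/mssql-release.repo https://packages.microsoft.com/config/rhel/8/prod.repo
sudo yum remove -y unixODBC-utf16 unixODBC-utf16-devel   # remove conflicting ODBC packages first
sudo ACCEPT_EULA=Y yum install -y msodbcsql17
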
Setup Data Source Name (DSN)

Before being able to use the driver, we must declare it in /etc/odbcinst.ini:

[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.5.1
UsageCount=1

Next, we can declare DSN in /etc/odbc.ini:

[mssql-mfiles]
Description = MS SQL M-Files Database
Driver = ODBC Driver 17 for SQL Server
Server = <IP>
Port = 1433
User = zbx_monitor
Password = <password>
Database = Sandbox

The name between square brackets is the DSN that we will use later in the Zabbix configuration. Note that the ODBC driver does not use the password from that file.

Finally, we can test our ODBC setup with the command:

isql -v mssql-mfiles zbx_monitor <password>

If all worked as expected, you should see this output and get a SQL prompt:

+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| echo [string]                         |
| quit                                  |
|                                       |
+---------------------------------------+
SQL>

To activate the ODBC pollers on Zabbix, we must uncomment this line in /etc/zabbix/zabbix_server.conf:

StartODBCPollers=5

This must be followed by a restart of the zabbix_server service.
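
Assuming the server runs as a systemd service with the default unit name, that would be:

systemctl restart zabbix-server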

Link MSSQL by ODBC Template

The next step is in the Zabbix UI. Link the MSSQL by ODBC template to the host and set the following macros:

  • {$MSSQL.USER}
  • {$MSSQL.PASSWORD}
  • {$MSSQL.DSN} as defined in /etc/odbc.ini (ie. mssql-mfiles)
  • {$MSSQL.INSTANCE} if different from SQLServer

After a few seconds, you should see the discovered databases and new items, as well as pre-built dashboards.

Manage SSH public keys with Oracle Key Vault (2/2)

Mon, 2023-11-13 06:53

Manage SSH public keys with Oracle Key Vault (1/2)

In the first post, these steps were performed:

  • Configure the REST API.
  • Install the REST API on the server.
  • Configure the REST API on the server.
  • Declare and enroll the server endpoint using the REST API.

Let's now upload the keys, configure the SSH daemon and make a connection using an SSH key from OKV.

Upload public key to OKV

First, generate some SSH keys:

[opc@tstokvcli ~]$ ssh-keygen -t rsa -b 2048

If the public key is not in PKCS8 format, which is almost certainly the case, we need to convert it first.

[opc@tstokvcli ~]$ ssh-keygen -e -m PKCS8 -f $HOME/.ssh/id_rsa.pub > $HOME/.ssh/id_rsa-pkcs8.pub

[opc@tstokvcli ~]$ cat $HOME/.ssh/id_rsa-pkcs8.pub
-----BEGIN PUBLIC KEY-----
MIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAuLrgarG7OgvUkixlu25d
........
KhrUWpazjmhm9fR+RIec6fjgVbh/7Q6dKPzl0kY00yUawvejsLFChWi+V4Mk3PUk
2mbHWkxIFvMab4HXWfAwXyMTUXNv/Bs/jTaK4Z3eb87HAgMBAAE=
-----END PUBLIC KEY-----

Upload the key

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv managed-object public-key register --generate-json-input > /tmp/t.json

[opc@tstokvcli log]$ cat /tmp/t.json
{
  "service" : {
    "category" : "managed-object",
    "resource" : "public-key",
    "action" : "register",
    "options" : {
      "object" : "/home/opc/.ssh/id_rsa-pkcs8.pub",
      "algorithm" : "RSA",
      "length" : "2048",
      "mask" : [ "ENCRYPT" ],
      "sshUser" : "oracle",
      "attributes" : {
        "name" : {
          "value" : "ORACLE-PUBKEY",
          "type" : "text"
        },
        "activationDate" : "NOW",
        "deactivationDate" : "2099-12-01 00:00:00"
      }
    }
  }
}

[opc@tstokvcli log]$ $OKV_HOME/bin/okv managed-object public-key register --from-json /tmp/t.json
{
  "result" : "Success",
  "value" : {
    "uuid" : "2AC6239C-C673-4FBC-BFF1-6D4A17990634"
  }
}

Keep the UUID, as this is the identifier of this key in OKV.

At this moment the key is loaded into OKV but is not part of any wallet. In the previous post we created an SSH wallet, oracle_ssh_wallet, and granted the endpoint the rights to manage this wallet.

Now we are going to add this key to the wallet oracle_ssh_wallet:

[opc@tstokvcli log]$ $OKV_HOME/bin/okv managed-object wallet add-member --generate-json-input > /tmp/t.json

[opc@tstokvcli log]$ cat /tmp/t.json
{
  "service" : {
    "category" : "managed-object",
    "resource" : "wallet",
    "action" : "add-member",
    "options" : {
      "uuid" : "2AC6239C-C673-4FBC-BFF1-6D4A17990634",
      "wallet" : "oracle_ssh_wallet"
    }
  }
}

[opc@tstokvcli log]$ $OKV_HOME/bin/okv managed-object wallet add-member --from-json /tmp/t.json
{
  "result" : "Success"
}

As a verification, we can list the wallet content using the REST API commands:

[opc@tstokvcli log]$ $OKV_HOME/bin/okv manage-access wallet list-objects --generate-json-input > /tmp/t.json

[opc@tstokvcli log]$ cat /tmp/t.json
{
  "service" : {
    "category" : "manage-access",
    "resource" : "wallet",
    "action" : "list-objects",
    "options" : {
      "wallet" : "oracle_ssh_wallet"
    }
  }
}

[opc@tstokvcli log]$  $OKV_HOME/bin/okv manage-access wallet list-objects --from-json /tmp/t.json
{
  "result" : "Success",
  "value" : {
    "fetchedObjectCount" : "1",
    "managedObjects" : [ {
      "creatingEndpoint" : "TSTOKVCLI",
      "creationDate" : "2023-11-07 15:34:27",
      "deactivationDate" : "2099-12-01 00:00:00",
      "displayName" : "SSH Key for user: oracle, Fingerprint: SHA256:CalWw0m5Z3KnlXhCbFz3OmiYT2/ubfrk60DO/0oCTTY",
      "name" : "ORACLE-PUBKEY",
      "protectStopDate" : "",
      "state" : "Active",
      "type" : "Public Key",
      "uuid" : "2AC6239C-C673-4FBC-BFF1-6D4A17990634",
      "walletMembership" : [ "oracle_ssh_wallet" ]
    } ]
  }
}

or using okvutil from the endpoint, which should work too, as this is the tool used by the SSH daemon to retrieve the key from OKV:

[opc@tstokvcli ~]$ /opt/okvutil/bin/okvutil list
Unique ID                               Type            Identifier
2AC6239C-C673-4FBC-BFF1-6D4A17990634	Public Key	SSH Key for user: oracle, Fingerprint: SHA256:CalWw0m5Z3KnlXhCbFz3OmiYT2/ubfrk60DO/0oCTTY

Before configuring the SSH daemon, let's try to retrieve the key from OKV and convert it back to an RSA key, as it would be used in the authorized_keys file:

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv managed-object public-key get --output_format TEXT --uuid 2AC6239C-C673-4FBC-BFF1-6D4A17990634 > /tmp/ssh_pkcs8.pub 

[opc@tstokvcli ~]$ cat /tmp/ssh_pkcs8.pub 
-----BEGIN PUBLIC KEY-----
MIIBojANBgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAuLrgarG7OgvUkixlu25d
MU9182jk+UEpCy2vGQdUvp65yp/NylkiBSeIzI35SA79vrJcy/1rXeovosxmryoQ
uDlGFUwCewXqogXSDuwXUlQGsNM2RcsfivTkZYtQ+8B7NGlzyXfDA5i/eGN8yv6U
DTFcsr6EEYgR42CtsbMxQDDeCNRjhKN0DP/41QMaWNFy5qB2zI+jeBgu4yvSmlKA
MH1OCXnRMmFFfkC8wLFRb6GFo1dqBWXAGY4VycGUqeYNfMX/fozb5ArsRCkh//ir
faH5R6gaga9W6UpOlsNWwQ69kaByeQq6Xsnd3s1o6BcVL6YgEH94AkENeUbS3Cs6
11FfgzLQBectgISitpDKtFSepENpcfsSoAEkj24mtPCZKkG1zxYIQyWfXR7y6GbF
KhrUWpazjmhm9fR+RIec6fjgVbh/7Q6dKPzl0kY00yUawvejsLFChWi+V4Mk3PUk
2mbHWkxIFvMab4HXWfAwXyMTUXNv/Bs/jTaK4Z3eb87HAgMBAAE=
-----END PUBLIC KEY-----

[opc@tstokvcli .ssh]$ ssh-keygen -i -f /tmp/ssh_pkcs8.pub  -m PKCS8 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRmSoCFY7XflTUNoPBLLaLYfloGCPhbHewlo7niJ0xUnGHLzaHJ7DHbG9nmxRSFh+eZWzqDt2ZzKVWKLuAG53+3nJZ+/ksZZ80tjC6WtRA37nY3D+RBZ3wA4sxI5jCVqHxbpKQARlDlLMlLLxHEegJ1yXULHmIcJMf9p3HiBJwxSZKWBZyJAWMh6v671EK0RBez+bKDud7/VDhYeVcLOxNceC25NnfjJyftigN95Bk/miIJ49BGThdmcRY9txR2d9RrE7ZK1YEv+bAAKHRb9w8d/FWvo1XMGA+yNWa/5MV/V/EDA1OMcimyWT6YLM4zkmt9GUSOHgmmu/g8niE4HNl

The generated ssh key was:

[opc@tstokvcli .ssh]$ cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRmSoCFY7XflTUNoPBLLaLYfloGCPhbHewlo7niJ0xUnGHLzaHJ7DHbG9nmxRSFh+eZWzqDt2ZzKVWKLuAG53+3nJZ+/ksZZ80tjC6WtRA37nY3D+RBZ3wA4sxI5jCVqHxbpKQARlDlLMlLLxHEegJ1yXULHmIcJMf9p3HiBJwxSZKWBZyJAWMh6v671EK0RBez+bKDud7/VDhYeVcLOxNceC25NnfjJyftigN95Bk/miIJ49BGThdmcRY9txR2d9RrE7ZK1YEv+bAAKHRb9w8d/FWvo1XMGA+yNWa/5MV/V/EDA1OMcimyWT6YLM4zkmt9GUSOHgmmu/g8niE4HNl opc@tstokvcli

which is the same, apart from the comment.

Now we are able to automatically populate the authorized_keys file, with a cron job for instance, or by a script like the one sketched below.
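
As a minimal sketch of such a script (reusing the UUID and paths from this demo; a real job would also handle multiple keys and de-duplicate entries):

#!/bin/bash
# Fetch the public key from OKV, convert it back to the OpenSSH format and append it to authorized_keys
uuid="2AC6239C-C673-4FBC-BFF1-6D4A17990634"
$OKV_HOME/bin/okv managed-object public-key get --output_format TEXT --uuid ${uuid} > /tmp/ssh_pkcs8.pub
ssh-keygen -i -m PKCS8 -f /tmp/ssh_pkcs8.pub >> /home/oracle/.ssh/authorized_keys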

Configure the SSH daemon to get keys directly from OKV

Now we are going to configure SSH to automatically get the public key when an SSH client tries to connect.
For this we use the AuthorizedKeysCommand and AuthorizedKeysCommandUser parameters of the sshd daemon's sshd_config file.

AuthorizedKeysCommand specifies a program to be used to look up the user’s public keys.
The program must be owned by root, not writable by group or others and specified by an absolute path.

Otherwise, you will see traces like this in /var/log/secure:
error: Unsafe AuthorizedKeysCommand "... okv_ssh_ep_lookup_authorized_keys": bad ownership or modes for directory ... bin

For demo purposes, I will create an oracle user on the same server and make a connection from the opc account to the oracle account using SSH.

[root@tstokvcli ~]# cat /etc/ssh/sshd_config
....
AuthorizedKeysCommand /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys get_authorized_keys_for_user %u %f %k get_authorized_keys_for_user %u %f %k
AuthorizedKeysCommandUser root
....

[root@tstokvcli ~]# systemctl restart sshd

For debugging purposes, I started the sshd daemon in debug mode. The traces show the call to the /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys program when connections as the oracle user (ssh oracle@tstokvcli) are made.

[root@tstokvcli ~]# tail -f /var/log/secure
.....
Nov  8 14:59:56 tstokvcli sshd[21835]: AuthorizedKeysCommand /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys get_authorized_keys_for_user oracle SHA256:o6r9tlURDNyA104Ei7RWGUBmXYDdy8jJFSfbWGFKtd0 AAAAB3NzaC1yc2EAAAADAQABAAABAQDRmSoCFY7XflTUNoPBLLaLYfloGCPhbHewlo7niJ0xUnGHLzaHJ7DHbG9nmxRSFh+eZWzqDt2ZzKVWKLuAG53+3nJZ+/ksZZ80tjC6WtRA37nY3D+RBZ3wA4sxI5jCVqHxbpKQARlDlLMlLLxHEegJ1yXULHmIcJMf9p3HiBJwxSZKWBZyJAWMh6v671EK0RBez+bKDud7/VDhYeVcLOxNceC25NnfjJyftigN95Bk/miIJ49BGThdmcRY9txR2d9RrE7ZK1YEv+bAAKHRb9w8d/FWvo1XMGA+yNWa/5MV/V/EDA1OMcim

and from the debug traces:

debug1: restore_uid: 0/0
debug1: temporarily_use_uid: 0/0 (e=0/0)
debug1: /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys get_authorized_keys_for_user %u %f %k:14: matching key found: RSA SHA256:o6r9tlURDNyA104Ei7RWGUBmXYDdy8jJFSfbWGFKtd0
debug1: /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys get_authorized_keys_for_user %u %f %k:14: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
Accepted key RSA SHA256:o6r9tlURDNyA104Ei7RWGUBmXYDdy8jJFSfbWGFKtd0 found at /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys get_authorized_keys_for_user %u %f %k:14
debug1: restore_uid: 0/0
Postponed publickey for oracle from 172.168.1.181 port 51386 ssh2 [preauth]
.....

debug1: /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys get_authorized_keys_for_user %u %f %k:14: matching key found: RSA SHA256:o6r9tlURDNyA104Ei7RWGUBmXYDdy8jJFSfbWGFKtd0
debug1: /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys get_authorized_keys_for_user %u %f %k:14: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
Accepted key RSA SHA256:o6r9tlURDNyA104Ei7RWGUBmXYDdy8jJFSfbWGFKtd0 found at /opt/okvutil/bin/okv_ssh_ep_lookup_authorized_keys get_authorized_keys_for_user %u %f %k:14
debug1: restore_uid: 0/0
.....
Accepted publickey for oracle from 172.168.1.181 port 51386 ssh2: RSA SHA256:o6r9tlURDNyA104Ei7RWGUBmXYDdy8jJFSfbWGFKtd0

In a highly secure environment, sshd can be configured to disable the .ssh/authorized_keys file entirely, so the only keys that can be used are those retrieved from OKV (see the sketch below).
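
A minimal sketch of that hardening, assuming your OpenSSH version supports the none keyword for AuthorizedKeysFile: set the following directive in /etc/ssh/sshd_config and restart sshd.

AuthorizedKeysFile none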

IMPORTANT.

As I spent some time figuring out what was happening...

The okvutil tooling uses Java, so I had to set SELinux to PERMISSIVE. Otherwise, Java cannot be executed from systemd. The error is:

Nov 09 13:38:28 tstokvcli sshd[22452]: + CURRENT_JAVA_VERSION='Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f89584bc000, 2555904, 1) failed; error='\''Permission denied'\'' (errno=13)

To get this error into /var/log/messages, I added the bash debug option set -x to the /opt/okvutil/bin/okvcli shell script.

In permissive mode, SELinux will log to /var/log/messages all the information and the commands to execute in order to keep enforcing mode, but on this test platform I did not implement all of that.
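
For reference, switching SELinux to permissive mode on the test box (not something to keep in production) looks like this:

setenforce 0                                                            # immediate, until next reboot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # persistent across reboots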

Conclusion

This post only covers SSH public key management. OKV can also manage private keys and even generate them. That is for another post.

Is OKV the simplest method to centralize all public keys? The answer is maybe no.

But as OKV is the best solution to manage Oracle TDE keys, it is nice to have only one tool to manage all keys, SSH included.

Manage SSH public keys with Oracle Key Vault (1/2)

Mon, 2023-11-13 06:53

Manage SSH public Keys with Oracle Key Vault (2/2)

Introduction

Oracle Key Vault 21.7 introduces SSH key support, a new type of endpoint (SSH server) and a new wallet type (SSH wallet).

As prerequisites, OKV 21.7 needs to be installed and a test VM (I named it tstokvcli) must exist.

The post will follow these steps:

  • Configure and install the REST API on test server
  • Declare and enroll the server endpoint using the REST API.
  • Create wallets for the test server.
  • Configure the server to use OKV to query the public key.
  • Make a connection from one account to another on the same server using the public key from OKV.

OKV provides two types of binaries: one for the REST API, installed in /home/opc/okv, and one for endpoint management, installed in /opt/okvutil. None of the specified paths are mandatory.

I preferred to use the REST API for the whole configuration instead of the HTTP console, in order to also provide a solution for scripting. The full configuration can be done from the web page as well.

About using the REST API

All REST API commands have two working modes.

One mode where the options are given on the command line, for instance:

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv admin endpoint get --endpoint my_endpoint

and another mode using an intermediate JSON file. For instance:

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv admin endpoint get --generate-json-input > /tmp/my_endpoint.json
[opc@tstokvcli ~]$ cat /tmp/my_endpoint.json
{
  "service" : {
    "category" : "admin",
    "resource" : "endpoint",
    "action" : "get",
    "options" : {
      "endpoint" : "#VALUE"
    }
  }
}

Now replace all #VALUE fields with correct values and execute the command again:

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv admin endpoint get --from-json /tmp/my_endpoint.json

I preferred to use the second mode, creating the JSON files: first, because I want to keep these files; second, because they can be manipulated with the jq tool; and last but not least, because some commands have a lot of parameters.
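
For example, a generated template can be filled in without editing it by hand, using jq to set the values (the endpoint name below is the one used later in this post):

jq '.service.options.endpoint = "tstokvcli"' /tmp/my_endpoint.json > /tmp/my_endpoint_filled.json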

Configure the REST API

First, download the binaries to the test server tstokvcli.

Alternatively, download them directly onto the destination server tstokvcli from the OKV server:

[opc@tstokvcli ]$  curl -O -k https://172.168.0.42:5695/okvrestclipackage.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 3865k  100 3865k    0     0  89.8M      0 --:--:-- --:--:-- --:--:-- 89.8M

Install the REST API on the destination server. I put all the variables in an env file, set_okv_rest_env.sh, in order to set the environment easily.

[opc@tstokvcli ~]$  pwd
/home/opc

[opc@tstokvcli ~]$  mkdir okv
[opc@tstokvcli ~]$  unzip okvrestclipackage.zip -d okv

   creating: okv/bin/
  inflating: okv/bin/okv
  inflating: okv/bin/okv.bat
   creating: okv/conf/
  inflating: okv/conf/okvrestcli.ini
  inflating: okv/conf/okvrestcli_logging.properties
  inflating: okv/lib/okvrestcli.jar
[opc@tstokvcli ~]$  echo "export OKV_RESTCLI_CONFIG=/home/opc/okv/conf" >> set_okv_rest_env.sh
[opc@tstokvcli ~]$  echo "export JAVA_HOME=/usr/java/jdk-11.0.10" >> set_okv_rest_env.sh
[opc@tstokvcli ~]$  echo "export OKV_HOME=/home/opc/okv" >> set_okv_rest_env.sh
[opc@tstokvcli ~]$  source set_okv_rest_env.sh

Uncomment the line export OKV_RESTCLI_CONFIG=$OKV_RESTCLI_DIR/conf/okvrestcli.ini in the /home/opc/okv/bin/okv shell script to avoid errors like:

{
  "result" : "Failure",
  "message" : "Invalid file: file extension .ini is required"
}

Edit the okvrestcli.ini and okvrestcli_logging.properties files to finish the configuration. I used the admin OKV user, but the best practice is to create a dedicated OKV user for all REST API usage.

[opc@tstokvcli ~]$ cat $OKV_HOME/conf/okvrestcli.ini
#Provide absolute path for log_property, okv_client_config properties
[Default]
log_property=/home/opc/okv/conf/okvrestcli_logging.properties
server=172.168.0.41
#okv_client_config=/home/opc/okv/conf/okvclient.ora
user=admin
client_wallet=/home/opc/okv/wallet

[opc@tstokvcli okv]$ $OKV_HOME/bin/okv
{
  "restCLIVersion" : "21.7.0.0.0"
}

Uncomment the line java.util.logging.FileHandler.pattern = /home/opc/okv/log/okv%u.log in $OKV_HOME/conf/okvrestcli_logging.properties and create the log directory:

[opc@tstokvcli okv]$ mkdir $OKV_HOME/log

Finally, to avoid typing the OKV admin user password for each command, it is better to create a local wallet to store this password:

[opc@tstokvcli okv] mkdir $OKV_HOME/wallet

[opc@tstokvcli okv]$ $OKV_HOME/bin/okv admin client-wallet add --client-wallet $OKV_HOME/wallet --wallet-user admin
Password: ******
{
  "result" : "Success"
}

Configure the endpoint

The next step is to add the test server tstokvcli as an endpoint to OKV, in order to be able to make requests to the OKV server:

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv admin endpoint create --generate-json-input > /tmp/t.json

[opc@tstokvcli ~]$ cat /tmp/t.json
{
  "service" : {
    "category" : "admin",
    "resource" : "endpoint",
    "action" : "create",
    "options" : {
      "endpoint" : "tstokvcli",
      "description" : "Endpoint linus hostname tstokvcli",
      "platform" : "LINUX64",
      "type" : "SSH_SERVER",
      "strictIpCheck" : "FALSE",
      "sshServerHostName" : "tstokvcli"
    }
  }
}

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv admin endpoint create --from-json /tmp/t.json
./log/okv0.log.0.lck
{
  "result" : "Success"
}

At this moment, the endpoint must appear as registered in the OKV console.

When we enroll the endpoint, we need to provide a path where the okvutil tool will be installed (/opt/okvutil for my test server).

If the endpoint is a database, we additionally need $ORACLE_HOME and $ORACLE_BASE to be defined, as well as the WALLET_ROOT database parameter. In that case okvutil will be installed in the WALLET_ROOT/okv path.

As this node will be an SSH server, there is no need for these variables.

[opc@tstokvcli ~]$  $OKV_HOME/bin/okv admin endpoint provision --generate-json-input  > /tmp/t.json

[opc@tstokvcli ~]$ cat /tmp/t.json
{
  "service" : {
    "category" : "admin",
    "resource" : "endpoint",
    "action" : "provision",
    "options" : {
      "endpoint" : "tstokvcli",
      "location" : "/opt/okvutil",
      "autoLogin" : "TRUE"
    }
  }
}

[opc@tstokvcli ~]$  $OKV_HOME/bin/okv admin endpoint provision --from-json /tmp/t.json
./log/okv0.log.0.lck
{
  "result" : "Success"
}

In the end, the /opt/okvutil directory contains all the files needed by okvutil. okvutil will work in auto-login mode ("autoLogin" : "TRUE"), so a wallet is created in the ssl directory:

[opc@tstokvcli ~]$ tree /opt/okvutil
/opt/okvutil
├── bin
│   ├── ep_healthcheck.sh
│   ├── okveps.x64
│   ├── okv_ssh_ep_lookup_authorized_keys
│   ├── okvutil
│   └── root.sh
├── conf
│   ├── logging.properties
│   ├── okvclient.lck
│   ├── okvclient.ora
│   └── okvsshendpoint.conf
├── csdk
│   └── lib
│       └── liborasdk.so
├── jlib
│   └── okvutil.jar
├── lib
│   └── liborapkcs.so
├── log
│   └── okvutil.deploy.log
└── ssl
    ├── cwallet.sso
    └── ewallet.p12

8 directories, 15 files

And execute the root.sh:

[opc@tstokvcli ~]$ sudo su -

[root@tstokvcli ~]# /opt/okvutil/bin/root.sh

Now we need to configure the OKV REST API to use the okvclient.ora file from the okvutil installation:

[opc@tstokvcli ~]$ cat $OKV_HOME/conf/okvrestcli.ini
#Provide absolute path for log_property, okv_client_config properties
[Default]
log_property=/home/opc/okv/conf/okvrestcli_logging.properties
server=172.168.0.41
okv_client_config=/opt/okvutil/conf/okvclient.ora
user=admin
client_wallet=/home/opc/okv/wallet

In the end, the endpoint is enrolled.

The next step is to create a dedicated wallet (of SSH wallet type), named oracle_ssh_wallet in this example, for this endpoint and to give the endpoint the rights on this wallet.

We can use an SSH server wallet to manage the authorization for an SSH server host user on multiple SSH servers, if the same set of SSH client users needs to log in as that host user on all of these SSH servers. Such shared use of SSH server wallets may further simplify SSH access management at scale.

Create the ssh wallet
[opc@tstokvcli ~]$ $OKV_HOME/bin/okv manage-access wallet create --generate-json-input > /tmp/t.json

[opc@tstokvcli ~]$ cat /tmp/t.json
{
  "service" : {
    "category" : "manage-access",
    "resource" : "wallet",
    "action" : "create",
    "options" : {
      "wallet" : "oracle_ssh_wallet",
      "type" : "SSH_SERVER",
      "description" : "ssh wallet for user oracle on servers tstokvcli and ...",
      "sshServerHostUser" : "oracle"
    }
  }
}

[opc@tstokvcli ~]$  $OKV_HOME/bin/okv manage-access wallet create --from-json /tmp/t.json
./log/okv0.log.0.lck
{
  "result" : "Success"
}

Note that the parameter sshServerHostUser was set to oracle: "sshServerHostUser" : "oracle". The SSH connection will be made as the oracle user.

The wallet must now also be visible in the OKV console.

The next step is to grant the endpoint access to the wallet:

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv manage-access wallet add-access --generate-json-input > /tmp/t.json

[opc@tstokvcli ~]$ cat /tmp/t.json
{
  "service" : {
    "category" : "manage-access",
    "resource" : "wallet",
    "action" : "add-access",
    "options" : {
      "wallet" : "oracle_ssh_wallet",
      "endpoint" : "tstokvcli",
      "access" : "RM_MW"
    }
  }
}

[opc@tstokvcli ~]$ $OKV_HOME/bin/okv manage-access wallet add-access --from-json /tmp/t.json
./log/okv0.log.0.lck
{
  "result" : "Success"
}

When the endpoint was enrolled, the okvutil command was installed in the /opt/okvutil path.
We need to configure okvutil to use oracle_ssh_wallet for the oracle user.

[opc@tstokvcli ~]$ cat /opt/okvutil/conf/okvsshendpoint.conf
# Configuration file for Oracle Key Vault SSH server endpoints
[ oracle ]
ssh_server_wallet=oracle_ssh_wallet

At this moment we have an enrolled endpoint, tstokvcli, which is an SSH server, a wallet oracle_ssh_wallet for the oracle user, and the endpoint has access to manage the wallet.

In the next post we are going to load the SSH keys into OKV and make a connection using these keys.

PostgreSQL 17: Reset shared statistics at once with pg_stat_reset_shared()

Sun, 2023-11-12 03:33

PostgreSQL comes with several catalog views which expose cluster-wide statistics. One of them is the new pg_stat_checkpointer view we've talked about previously. To reset some of those statistics there is pg_stat_reset_shared(). Besides the pg_stat_checkpointer view, the statistics in pg_stat_bgwriter, pg_stat_io, pg_stat_wal, pg_stat_recovery_prefetch, and pg_stat_archiver can be reset.

Before PostgreSQL 17 you had to call this function individually for all kinds of statistics if you wanted to reset the statistics in all of the views, e.g.:

postgres=# select * from pg_stat_reset_shared('bgwriter');
 pg_stat_reset_shared 
----------------------
 
(1 row)

postgres=# select pg_stat_reset_shared('bgwriter');
 pg_stat_reset_shared 
----------------------
 
(1 row)

postgres=# select pg_stat_reset_shared('archiver');
 pg_stat_reset_shared 
----------------------
 
(1 row)

postgres=# select pg_stat_reset_shared('wal');
 pg_stat_reset_shared 
----------------------
 
(1 row)

postgres=# select pg_stat_reset_shared('io');
 pg_stat_reset_shared 
----------------------
 
(1 row)

postgres=# select pg_stat_reset_shared('recovery_prefetch');
 pg_stat_reset_shared 
----------------------
 
(1 row)

Calling the function without an argument resulted in an error:

postgres=# select version();
                                                   version                                                    
--------------------------------------------------------------------------------------------------------------
 PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
(1 row)

postgres=# select pg_stat_reset_shared();
ERROR:  function pg_stat_reset_shared() does not exist
LINE 1: select pg_stat_reset_shared();
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
postgres=# 

Starting with PostgreSQL 17 this will change, and calling the function without an argument will reset all those statistics at once:

postgres=# select version();
                              version                               
--------------------------------------------------------------------
 PostgreSQL 17devel on x86_64-linux, compiled by gcc-12.2.0, 64-bit
(1 row)

postgres=# select pg_stat_reset_shared();
 pg_stat_reset_shared 
----------------------
 
(1 row)
postgres=# select stats_reset, now() from pg_stat_archiver ;
          stats_reset          |              now              
-------------------------------+-------------------------------
 2023-11-12 10:29:19.341854+01 | 2023-11-12 10:30:42.304791+01
(1 row)

Nice, thanks to all involved.

Cloudbees CD/RO, Pipelines advanced parameters

Fri, 2023-11-10 08:03

Cloudbees CD/RO is a highly flexible tool. You can define your own customized parameter forms for pipelines or procedures.
For example, the same parameter form can be adapted according to the users: user-friendly for common users or more permissive for admin users.

Example of a dynamic parameter form

I will take a basic example to explain how to configure Cloudbees CD/RO to render a different parameter form depending on the user.

I want to run a pipeline on a git repository. The user can choose the branch on which to run the pipeline. Common users can select the branch to use from a pre-defined drop-down list, while advanced users have the possibility to switch to an “Advanced Mode”. In this mode, the drop-down list is replaced by a free text field.

Constraints

Although the desired behavior is simple (replacing the drop-down list with a free text field for debugging), there are a few constraints to bear in mind:

  • Field type cannot be changed dynamically
  • Field names are unique for each form
  • A Render Condition only affects the display; the field exists in all cases
Detect user type

In this case, I use a property “isAdvanced” set on the user to authorize access to the advanced mode.

I use a hidden intermediate field to store my user’s status. As a result, the Render Condition of the fields linked to this value is simplified.

This field contains a boolean value calculated from the user’s property “isAdvanced”. To do this, the default value uses a javascript expression to fill the value:

$[/javascript (myUser.isAdvanced == "true")?"true":"false"]

To hide this field in all cases, simply set “false” as its Render Condition.

Advanced option

To activate the advanced mode, add a checkbox to the form.

Only advanced users can access this checkbox. To do this, add a Render Condition that checks the value of the previous advancedUser field:

advancedUser == true
The main drop down list

Add the main field to select the branch. This field is used by all users in standard mode.

This field is only displayed if the “Advanced Mode” checkbox is unchecked. To do this:

  • Add the “advancedMode” field to the dependencies
  • Define a Render Condition to hide the field in advanced mode:
${advancedMode} != true

Note : It’s important to add the “advancedMode” field to the dependencies, as it enables dynamic re-rendering of the form when the checkbox is checked or unchecked.

Overriding the main field

We don't really replace the main drop-down list: it is hidden and another text field is displayed instead. To do that, add a new text field:

Unlike the drop down list, this field is visible when the “Advanced Mode” checkbox is checked. To do that, add this Render Condition:

${advancedMode} == true

As with the drop down list, don’t forget to add the “advancedMode” field to the dependencies.

The default value of the text field retrieves the value from the drop-down list.

The drop down list and the text field are chained so that only the text field value can be used in pipeline tasks:

This configuration works as follows:

  • In standard mode, the drop down list value is transferred to the text field, then the text field value is used in the pipeline.
  • In advanced mode, the drop down list is not used, the user modifies the text field, and the text field value is passed to the pipeline.
Conclusion

In the end, the form contains a few intermediate fields but displays only one or two fields to the user.

In conclusion, Cloudbees CD/RO is a truly powerful and flexible tool. So, by using a few intermediate fields and linking them smartly, we can create simple, intuitive forms for our users. As a result, a non-technical user can run a pipeline while advanced users can do what they want – and all without duplicating pipelines!

Alfresco – Use Alfresco for a Trivia Game – Play the game

Tue, 2023-11-07 02:30

In previous blogs (here & here), I talked about the necessary Repository skeleton and then about how to use the REST-API to add questions and answers into Alfresco for the Trivia Game. In this third part, I will go through what is needed to really play the game, since that was the initial goal.

This time again, the first thing to do is to define the target endpoint and set up my credentials so that I can communicate with Alfresco. No changes to the authentication method: I will continue to use Basic authentication, and I will of course use the same parent folder, which represents the quiz I prepared earlier (reminder: multiple parent folders = multiple quizzes):

$ base_url="https://alf-trivia.dbi-services.com"
$ endpoint="${base_url}/alfresco"
$ folder_id="e6395354-d38e-489b-b112-3549b521b04c"
$ username="admin"
$ read -s -p "Please enter the password of the '${username}' user for '${endpoint}': " password
Please enter the password of the 'admin' user for 'https://alf-trivia.dbi-services.com/alfresco':
$
$ auth=$(echo -n ${username}:${password} | base64)
$

To be able to play the game, I will need to retrieve all the questions with all their associated answers, and to do that, I will need to loop over all the children of the parent folder. Therefore, the next step is to retrieve the list of all Child Node IDs (“uuid“) present in the folder, i.e., the unique identifier of each question. For that purpose, I'm using the GET node's children REST-API (listNodeChildren) that I used in the previous blog to retrieve the number of existing questions, but this time I will retrieve the list of IDs from it:

$ response=$(curl -k -s -X GET "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${folder_id}/children" \
>   -H "Authorization: Basic ${auth}" \
>   -H "Accept: application/json")
$
$ echo ${response} | jq
{
  "list": {
    "pagination": {
      "count": 8,
      "hasMoreItems": false,
      "totalItems": 8,
      "skipCount": 0,
      "maxItems": 100
    },
    "entries": [
      {
        "entry": {
          "createdAt": "2023-01-18T10:42:14.957+0000",
          "isFolder": true,
          ...
          "name": "Q1",
          "id": "d2afbdb6-2afb-4acc-85e7-61a800e96db3",
          "nodeType": "cm:folder",
          "parentId": "e6395354-d38e-489b-b112-3549b521b04c"
        }
      },
      ...
      {
        "entry": {
          "createdAt": "2023-11-03T09:38:45.786+0000",
          "isFolder": true,
          ...
          "name": "Q8",
          "id": "0b147c71-70fd-498e-9bf6-d40a738699fa",
          "nodeType": "cm:folder",
          "parentId": "e6395354-d38e-489b-b112-3549b521b04c"
        }
      }
    ]
  }
}
$
$ echo ${response} | jq -r ".list.entries[].entry.id"
d2afbdb6-2afb-4acc-85e7-61a800e96db3
c4655fb0-5eef-494c-8b1e-5f160ca53558
d53bb919-ce60-42a7-a4db-5c8e3c5bfdac
c4e9826a-a1f4-4bf7-9768-137205c01045
a45e1b98-5780-448e-b26d-a603f9b03a85
4218c4b7-0be1-4948-9ae6-50e1575d1185
46d78e31-c796-4839-bac8-e6d5a7ff5973
0b147c71-70fd-498e-9bf6-d40a738699fa
$

If I wanted to, I could just use the above single REST-API call to get the properties of all nodes (without retrieving their IDs first), by using “/nodes/${folder_id}/children?include=properties” instead of “/nodes/${folder_id}/children“. This would automatically include all the properties of the nodes and therefore I would see the aspect’s properties in this single command, for all the questions:

$ response=$(curl -k -s -X GET "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${folder_id}/children?include=properties" \
>   -H "Authorization: Basic ${auth}" \
>   -H "Accept: application/json")
$
$ echo ${response} | jq -r ".list.entries[].entry.properties"
{
  "cm:title": "Question 1",
  ...
}
...
{
  "cm:title": "Question 7",
  "dbi:dbi_question": "Is it possible to use Alfresco to play a Trivia Game?",
  "dbi:dbi_correct_answer": "Of course, Alfresco can do everything",
  "dbi:dbi_answer4": "I don't know",
  "dbi:dbi_answer3": "Of course, Alfresco can do everything",
  "dbi:dbi_answer2": "Only if using the Enterprise version...",
  "dbi:dbi_answer1": "Obviously not, it's just an ECM!",
  "cm:description": "Question #7 of the TriviaGame",
  "cm:taggable": [
    "6017bd2f-05d2-4828-9a1d-a418cf43a84e"
  ]
}
{
  "cm:title": "Question 8",
  "dbi:dbi_question": "Is this working as it should?",
  "dbi:dbi_correct_answer": "Of course...",
  "dbi:dbi_answer4": "Why do you ask me???",
  "dbi:dbi_answer3": "Definitively not",
  "dbi:dbi_answer2": "Maybe?",
  "dbi:dbi_answer1": "Of course...",
  "cm:description": "Question #8 of the TriviaGame",
  "cm:taggable": [
    "6017bd2f-05d2-4828-9a1d-a418cf43a84e"
  ]
}
$

The above query including the properties is probably the optimized way to retrieve the information, since it’s a single REST-API call and you have all you need there… But I couldn’t just complete the blog by using a single REST-API call, it would be too fast ;). Therefore, I will use the list of question IDs, going through them all, retrieving the question and all the possible answers for each one separately, so it can be displayed to the “player” to test his knowledge. For that purpose, I’m using the GET node REST-API (getNode):

$ node="0b147c71-70fd-498e-9bf6-d40a738699fa"
$ response=$(curl -k -s -X GET "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${node}" \
>   -H "Authorization: Basic ${auth}" \
>   -H "Accept: application/json")
$
$ echo ${response} | jq
{
  "entry": {
    "aspectNames": [
      "cm:titled",
      "cm:auditable",
      "dbi:dbi_trivia",
      "cm:taggable"
    ],
    "createdAt": "2023-11-03T09:38:45.786+0000",
    "isFolder": true,
    ...
    "name": "Q8",
    "id": "0b147c71-70fd-498e-9bf6-d40a738699fa",
    "nodeType": "cm:folder",
    "properties": {
      "cm:title": "Question 8",
      "dbi:dbi_question": "Is this working as it should?",
      "dbi:dbi_correct_answer": "Of course...",
      "dbi:dbi_answer4": "Why do you ask me???",
      "dbi:dbi_answer3": "Definitively not",
      "dbi:dbi_answer2": "Maybe?",
      "dbi:dbi_answer1": "Of course...",
      "cm:description": "Question #8 of the TriviaGame",
      "cm:taggable": [
        "6017bd2f-05d2-4828-9a1d-a418cf43a84e"
      ]
    },
    "parentId": "e6395354-d38e-489b-b112-3549b521b04c"
  }
}
$

From that point, it's only a matter of display and verification, which is simple scripting not related to Alfresco, so I won't go through it in detail. Here is the final content of the small bash script and its execution to play the game:

$ cat triviaPlay.sh
#!/bin/bash

# Define endpoint, credentials, and folder ID
base_url="https://alf-trivia.dbi-services.com"
endpoint="${base_url}/alfresco"
folder_id="e6395354-d38e-489b-b112-3549b521b04c"
username="admin"
read -s -p "Please enter the password of the '${username}' user for '${endpoint}': " password
auth=$(echo -n ${username}:${password} | base64)

# Get all nodes in the TriviaGame folder
echo
echo
echo "Fetching all existing questions from Alfresco..."
response=$(curl -k -s -X GET "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${folder_id}/children" \
  -H "Authorization: Basic ${auth}" \
  -H "Accept: application/json")
nodes=$(echo ${response} | jq -r ".list.entries[].entry.id")

# Iterate through all nodes
nb_correct=0
nb_incorrect=0
for node in ${nodes}; do
  # Get question, answers, and correct answer from node's custom aspect
  response=$(curl -k -s -X GET "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${node}" \
    -H "Authorization: Basic ${auth}" \
    -H "Accept: application/json")
  question=$(echo ${response} | jq -r '.entry.properties."dbi:dbi_question"')
  answer1=$(echo ${response} | jq -r '.entry.properties."dbi:dbi_answer1"')
  answer2=$(echo ${response} | jq -r '.entry.properties."dbi:dbi_answer2"')
  answer3=$(echo ${response} | jq -r '.entry.properties."dbi:dbi_answer3"')
  answer4=$(echo ${response} | jq -r '.entry.properties."dbi:dbi_answer4"')
  correct_answer=$(echo ${response} | jq -r '.entry.properties."dbi:dbi_correct_answer"')

  # Ask question and get user input
  echo
  echo -e "\033[4mQuestion #$((nb_correct+nb_incorrect+1)):\033[0m"
  echo "${question}"
  echo "  1) ${answer1}"
  echo "  2) ${answer2}"
  echo "  3) ${answer3}"
  echo "  4) ${answer4}"
  read -p "Please enter your answer (1, 2, 3 or 4): " user_answer
  answer=$(eval "echo \${answer$user_answer}")

  # Check if answer is correct
  if [[ "${answer}" == "${correct_answer}" ]]; then
    echo -e "\033[32;1m  --> Correct!\033[0m"
    nb_correct=$((nb_correct+1))
  else
    echo -e "\033[31m  --> Incorrect... The correct answer is: \033[31;1m${correct_answer}\033[31m.\033[0m"
    nb_incorrect=$((nb_incorrect+1))
  fi
done

# Print final score
echo
if [[ "${nb_incorrect}" == "0" ]]; then
  echo -e "\033[32;1m==> Congratulations, your final score is a perfect ${nb_correct}/$((nb_correct+nb_incorrect))!\033[0m"
else
  if [[ "${nb_correct}" -gt "${nb_incorrect}" ]]; then
    echo -e "\033[32;1m==> Your final score is an acceptable ${nb_correct}/$((nb_correct+nb_incorrect)). You can still do better!\033[0m"
  else
    echo -e "\033[31;1m==> Oops, your final score is ${nb_correct}/$((nb_correct+nb_incorrect))... You will do better next time!\033[0m"
  fi
fi
echo
$
$
$ # Execute the script to play the game
$ ./triviaPlay.sh
Please enter the password of the 'admin' user for 'https://alf-trivia.dbi-services.com/alfresco':

Fetching all existing questions from Alfresco...

Question #1:
What is the best ECM?
  1) Documentum
  2) Alfresco
  3) Nuxeo
  4) SharePoint (lol)
Please enter your answer (1, 2, 3 or 4): 2
  --> Correct!

Question #2:
Why?
  1) Because
  2) Because
  3) Because it is the best
  4) Because
Please enter your answer (1, 2, 3 or 4): 3
  --> Correct!

Question #3:
How can you interact with Alfresco REST-API?
  1) Using a browser
  2) Using a script (bash/python/java/etc)
  3) Using Postman
  4) All of the above
Please enter your answer (1, 2, 3 or 4): 4
  --> Correct!

Question #4:
What is the correct HTTP Verb to use to perform a search?
  1) POST
  2) GET
  3) PUT
  4) DELETE
Please enter your answer (1, 2, 3 or 4): 1
  --> Correct!

Question #5:
What is the correct URI to use to perform a search?
  1) /alfresco/api/search
  2) /alfresco/api/-default-/versions/1/search
  3) /alfresco/api/-default-/public/search
  4) /alfresco/api/-default-/public/search/versions/1/search
Please enter your answer (1, 2, 3 or 4): 4
  --> Correct!

Question #6:
How can you create a Node with content in a single API call?
  1) Using the content API and a multipart body
  2) Using the content API and a json body
  3) Using the children API and a multipart body
  4) Using the children API and a json body
Please enter your answer (1, 2, 3 or 4): 3
  --> Correct!

Question #7:
Is it possible to use Alfresco to play a Trivia Game?
  1) Obviously not, it's just an ECM!
  2) Only if using the Enterprise version...
  3) Of course, Alfresco can do everything
  4) I don't know
Please enter your answer (1, 2, 3 or 4): 3
  --> Correct!

Question #8:
Is this working as it should?
  1) Of course...
  2) Maybe?
  3) Definitively not
  4) Why do you ask me???
Please enter your answer (1, 2, 3 or 4): 1
  --> Correct!

==> Congratulations, your final score is a perfect 8/8!

$

I can also force some wrong answers, just to make sure they are detected properly:

$ ./triviaPlay.sh
...
Question #2:
Why?
  1) Because
  2) Because
  3) Because it is the best
  4) Because
Please enter your answer (1, 2, 3 or 4): 1
  --> Incorrect... The correct answer is: Because it is the best.
...

==> Your final score is an acceptable 7/8. You can still do better!

$
$
$ ./triviaPlay.sh
...
Question #8:
Is this working as it should?
  1) Of course...
  2) Maybe?
  3) Definitively not
  4) Why do you ask me???
Please enter your answer (1, 2, 3 or 4): 4
  --> Incorrect... The correct answer is: Of course....

==> Oops, your final score is 0/8... You will do better next time!

$

As mentioned earlier, you could simply use the first REST-API command to get all the details and then create arrays using “jq“, containing all the metadata needed for the game (a minimal sketch follows below). Both approaches are very easy to implement/script and give a fun ending to a presentation about the Alfresco REST-API, so it was good enough for me! In case you missed them, the previous parts of this blog can be found here and here.
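
For completeness, here is a minimal sketch of that jq-array variant, reusing the ?include=properties response fetched earlier:

# Build bash arrays holding all questions and correct answers from the single response
mapfile -t questions < <(echo "${response}" | jq -r '.list.entries[].entry.properties."dbi:dbi_question"')
mapfile -t answers   < <(echo "${response}" | jq -r '.list.entries[].entry.properties."dbi:dbi_correct_answer"')
echo "Question 1: ${questions[0]} (expected answer: ${answers[0]})"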

PostgreSQL 17: New catalog view pg_stat_checkpointer

Mon, 2023-11-06 07:52

Up to PostgreSQL 16 checkpointer related statistics are available in pg_stat_bgwriter. As the work of bgwriter and checkpointer has been split into two separate processes back in 2011 for PostgreSQL 9.2 (see 806a2aee3791), it makes sense to split the statistics related to those background processes as well. This has now happened for PostgreSQL 17.

Up to PostgreSQL 16 the pg_stat_bgwriter catalog view contains these fields:

postgres=# select version();
                                                   version                                                    
--------------------------------------------------------------------------------------------------------------
 PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
(1 row)

postgres=# \d pg_stat_bgwriter 
                        View "pg_catalog.pg_stat_bgwriter"
        Column         |           Type           | Collation | Nullable | Default 
-----------------------+--------------------------+-----------+----------+---------
 checkpoints_timed     | bigint                   |           |          | 
 checkpoints_req       | bigint                   |           |          | 
 checkpoint_write_time | double precision         |           |          | 
 checkpoint_sync_time  | double precision         |           |          | 
 buffers_checkpoint    | bigint                   |           |          | 
 buffers_clean         | bigint                   |           |          | 
 maxwritten_clean      | bigint                   |           |          | 
 buffers_backend       | bigint                   |           |          | 
 buffers_backend_fsync | bigint                   |           |          | 
 buffers_alloc         | bigint                   |           |          | 
 stats_reset           | timestamp with time zone |           |          | 

postgres=# 

Starting with PostgreSQL 17 this view looks much simpler, as all checkpointer-related columns have been removed:

postgres=# select version();
                              version                               
--------------------------------------------------------------------
 PostgreSQL 17devel on x86_64-linux, compiled by gcc-12.2.0, 64-bit
(1 row)

postgres=# \d pg_stat_bgwriter 
                      View "pg_catalog.pg_stat_bgwriter"
      Column      |           Type           | Collation | Nullable | Default 
------------------+--------------------------+-----------+----------+---------
 buffers_clean    | bigint                   |           |          | 
 maxwritten_clean | bigint                   |           |          | 
 buffers_alloc    | bigint                   |           |          | 
 stats_reset      | timestamp with time zone |           |          | 

All those columns can now be found in pg_stat_checkpointer:

postgres=# select version();
                              version                               
--------------------------------------------------------------------
 PostgreSQL 17devel on x86_64-linux, compiled by gcc-12.2.0, 64-bit
(1 row)
postgres=# \d pg_stat_checkpointer 
                   View "pg_catalog.pg_stat_checkpointer"
     Column      |           Type           | Collation | Nullable | Default 
-----------------+--------------------------+-----------+----------+---------
 num_timed       | bigint                   |           |          | 
 num_requested   | bigint                   |           |          | 
 write_time      | double precision         |           |          | 
 sync_time       | double precision         |           |          | 
 buffers_written | bigint                   |           |          | 
 stats_reset     | timestamp with time zone |           |          | 

This looks much cleaner: one statistics view per process.
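
If you want a quick look at the new view on a running instance, a simple query like this one (just a sketch using the column names shown above) gives the share of requested checkpoints since the last statistics reset:

psql -c "select num_timed, num_requested,
                round(100.0 * num_requested / nullif(num_timed + num_requested, 0), 2) as pct_requested,
                stats_reset
         from pg_stat_checkpointer;"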

L’article PostgreSQL 17: New catalog view pg_stat_checkpointer est apparu en premier sur dbi Blog.

Alfresco – Use Alfresco for a Trivia Game – Add questions

Mon, 2023-11-06 07:30

In a previous blog, I talked about using Alfresco to setup a simple Trivia Game and more specifically the Repository part of it, i.e., the meta-model and the structure I will use. In this second part, I will go through the REST-API commands that can be used to create new questions with their associated good/bad answers.

First of all, I would need to define the target endpoint and set up my credentials so that I can exchange with Alfresco. To keep things simple, I will use Basic authentication with username/password. You could of course use a ticket or any other supported authentication mechanism as you see fit (a small ticket sketch follows the code below). The “folder_id” is the “uuid” of the folder in which I prepared a few questions at the beginning of the year for my REST-API presentation and in which I will now add a new one for this blog:

$ base_url="https://alf-trivia.dbi-services.com"
$ endpoint="${base_url}/alfresco"
$ folder_id="e6395354-d38e-489b-b112-3549b521b04c"
$ username="admin"
$ read -s -p "Please enter the password of the '${username}' user for '${endpoint}': " password
Please enter the password of the 'admin' user for 'https://alf-trivia.dbi-services.com/alfresco':
$
$ auth=$(echo -n ${username}:${password} | base64)
$
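
As a small sketch of the ticket-based alternative mentioned above (reusing the same variables): the ticket can be requested from the authentication API and then base64-encoded in place of the username:password pair:

ticket=$(curl -k -s -X POST "${endpoint}/api/-default-/public/authentication/versions/1/tickets" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d "{\"userId\": \"${username}\", \"password\": \"${password}\"}" | jq -r ".entry.id")
# For the v1 REST-API, the Basic header then simply contains the base64-encoded ticket
auth=$(echo -n ${ticket} | base64)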

I decided that the name of the folders inside Alfresco would be “QX“, where X is the number of the question, from 1 upwards. Therefore, to be able to create a new question, I would first need to find out how many are currently present in my quiz. For that purpose, I'm using the GET node's children REST-API (listNodeChildren) that will list all the children of a specific node, here “${folder_id}“, with some of their details such as the name, creation/modification date, creator/modifier, type, etc. In the result, all I would care about is the current total count of children, which I can get easily using “JQ“ (a JSON Processor command line utility):

$ response=$(curl -k -s -X GET "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${folder_id}/children" \
>   -H "Authorization: Basic ${auth}" \
>   -H "Accept: application/json")
$
$ echo ${response} | jq
{
  "list": {
    "pagination": {
      "count": 6,
      "hasMoreItems": false,
      "totalItems": 6,
      "skipCount": 0,
      "maxItems": 100
    },
    "entries": [
      {
        "entry": {
          "createdAt": "2023-01-18T10:42:14.957+0000",
          "isFolder": true,
          ...
          "name": "Q1",
          "id": "d2afbdb6-2afb-4acc-85e7-61a800e96db3",
          "nodeType": "cm:folder",
          "parentId": "e6395354-d38e-489b-b112-3549b521b04c"
        }
      },
      ...
    ]
  }
}
$
$ echo ${response} | jq -r ".list.pagination.totalItems"
6
$

Here, there are currently 6 children and therefore the next question to create would be “Q7“. For that purpose, I'm using the POST node's children REST-API (createNode). The HTTP Method changes compared to the previous request: with a POST, it is possible to ask Alfresco to create a child instead of listing them. Here, I can directly assign the “dbi:dbi_trivia” aspect to it, so that it enables the specific metadata for this node:

$ response=$(curl -k -s -X POST "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${folder_id}/children" \
>   -H "Authorization: Basic ${auth}" \
>   -H "Content-Type: application/json" \
>   -H "Accept: application/json" \
>   -d "{
>     \"name\": \"Q${folder_nb}\",
>     \"nodeType\": \"cm:folder\",
>     \"aspectNames\": [
>       \"dbi:dbi_trivia\"
>     ],
>     \"properties\": {
>       \"cm:title\":\"Question ${folder_nb}\",
>       \"cm:description\":\"Question #${folder_nb} of the TriviaGame\"
>     }
>   }")
$
$ echo ${response} | jq -r ".entry.id"
46d78e31-c796-4839-bac8-e6d5a7ff5973
$

With that, the new folder exists, and it has the associated aspect. The next step is purely optional, but I wanted to put a specific tag on all my questions, so that I can find them quicker in Share. For that purpose, I'm using the POST node's tags REST-API (createTagForNode) on the newly created folder:

$ tag_name="triviaquestion"
$ response=$(curl -k -s -X POST "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${new_folder_id}/tags" \
>   -H "Authorization: Basic ${auth}" \
>   -H "Content-Type: application/json" \
>   -H "Accept: application/json" \
>   -d "{
>     \"tag\": \"${tag_name}\"
>   }")
$

The last step for this first script would then be to set the different aspect metadata on the newly created folder. For that purpose, I'm using the PUT node REST-API (updateNode), after prompting for the exact values to add:

$ question="Is it possible to use Alfresco to play a Trivia Game?"
$ answer1="Obviously not, it's just an ECM!"
$ answer2="Only if using the Enterprise version..."
$ answer3="Of course, Alfresco can do everything"
$ answer4="I don't know"
$ correct_answer="Of course, Alfresco can do everything"
$
$ response=$(curl -k -s -X PUT "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${new_folder_id}" \
>   -H "Authorization: Basic ${auth}" \
>   -H "Content-Type: application/json" \
>   -H "Accept: application/json" \
>   -d "{
>     \"properties\": {
>       \"dbi:dbi_question\":\"${question}\",
>       \"dbi:dbi_answer1\":\"${answer1}\",
>       \"dbi:dbi_answer2\":\"${answer2}\",
>       \"dbi:dbi_answer3\":\"${answer3}\",
>       \"dbi:dbi_answer4\":\"${answer4}\",
>       \"dbi:dbi_correct_answer\":\"${correct_answer}\"
>     }
>   }")
$
$ echo ${response} | jq
{
  "entry": {
    "aspectNames": [
      "cm:titled",
      "cm:auditable",
      "dbi:dbi_trivia",
      "cm:taggable"
    ],
    "createdAt": "2023-11-03T10:12:49.804+0000",
    "isFolder": true,
    ...
    "name": "Q7",
    "id": "46d78e31-c796-4839-bac8-e6d5a7ff5973",
    "nodeType": "cm:folder",
    "properties": {
      "cm:title": "Question 7",
      "dbi:dbi_question": "Is it possible to use Alfresco to play a Trivia Game?",
      "dbi:dbi_correct_answer": "Of course, Alfresco can do everything",
      "dbi:dbi_answer4": "I don't know",
      "dbi:dbi_answer3": "Of course, Alfresco can do everything",
      "dbi:dbi_answer2": "Only if using the Enterprise version...",
      "dbi:dbi_answer1": "Obviously not, it's just an ECM!",
      "cm:description": "Question #7 of the TriviaGame",
      "cm:taggable": [
        "6017bd2f-05d2-4828-9a1d-a418cf43a84e"
      ]
    },
    "parentId": "e6395354-d38e-489b-b112-3549b521b04c"
  }
}
$

Putting everything together into a small bash script and executing it to make sure everything works, this time creating the 8th question:

$ cat triviaAdd.sh
#!/bin/bash

# Define endpoint, credentials, and folder ID
base_url="https://alf-trivia.dbi-services.com"
endpoint="${base_url}/alfresco"
share="${base_url}/share/page"
folder_id="e6395354-d38e-489b-b112-3549b521b04c"
username="admin"
read -s -p "Please enter the password of the '${username}' user for '${endpoint}': " password
auth=$(echo -n ${username}:${password} | base64)
tag_name="triviaquestion"

# Get number of children
echo
echo
echo "Fetching the current number of questions from Alfresco..."
echo
response=$(curl -k -s -X GET "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${folder_id}/children" \
  -H "Authorization: Basic ${auth}" \
  -H "Accept: application/json")
num_children=$(echo ${response} | jq -r ".list.pagination.totalItems")
folder_nb="$((num_children+1))"

# Create new folder
response=$(curl -k -s -X POST "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${folder_id}/children" \
  -H "Authorization: Basic ${auth}" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d "{
    \"name\": \"Q${folder_nb}\",
    \"nodeType\": \"cm:folder\",
    \"aspectNames\": [
      \"dbi:dbi_trivia\"
    ],
    \"properties\": {
      \"cm:title\":\"Question ${folder_nb}\",
      \"cm:description\":\"Question #${folder_nb} of the TriviaGame\"
    }
  }")
new_folder_id=$(echo ${response} | jq -r ".entry.id")
echo -e "\033[32;1m  --> A new folder 'Q${folder_nb}' has been created: ${share}/folder-details?nodeRef=workspace://SpacesStore/${new_folder_id}\033[0m"
echo

# Add the tag
response=$(curl -k -s -X POST "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${new_folder_id}/tags" \
  -H "Authorization: Basic ${auth}" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d "{
    \"tag\": \"${tag_name}\"
  }")
echo -e "\033[32;1m  --> The tag '${tag_name}' has been added to the folder\033[0m"
echo

# Add question, answers and correct answer
read -p "Enter the question: " question
read -p "Enter the answer #1: " answer1
read -p "Enter the answer #2: " answer2
read -p "Enter the answer #3: " answer3
read -p "Enter the answer #4: " answer4
read -p "Enter the correct answer: " correct_answer
response=$(curl -k -s -X PUT "${endpoint}/api/-default-/public/alfresco/versions/1/nodes/${new_folder_id}" \
  -H "Authorization: Basic ${auth}" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d "{
    \"properties\": {
      \"dbi:dbi_question\":\"${question}\",
      \"dbi:dbi_answer1\":\"${answer1}\",
      \"dbi:dbi_answer2\":\"${answer2}\",
      \"dbi:dbi_answer3\":\"${answer3}\",
      \"dbi:dbi_answer4\":\"${answer4}\",
      \"dbi:dbi_correct_answer\":\"${correct_answer}\"
    }
  }")
echo -e "\033[32;1m  --> The question and answers have been added to the folder\033[0m"
echo
$
$
$ # Execute the script to add a question
$ ./triviaAdd.sh
Please enter the password of the 'admin' user for 'https://alf-trivia.dbi-services.com/alfresco':

Fetching the current number of questions from Alfresco...

  --> A new folder 'Q8' has been created: https://alf-trivia.dbi-services.com/share/page/folder-details?nodeRef=workspace://SpacesStore/0b147c71-70fd-498e-9bf6-d40a738699fa

  --> The tag 'triviaquestion' has been added to the folder

Enter the question: Is this working as it should?
Enter the answer #1: Of course...
Enter the answer #2: Maybe?
Enter the answer #3: Definitively not
Enter the answer #4: Why do you ask me???
Enter the correct answer: Of course...
  --> The question and answers have been added to the folder

$

Unfortunately, I cannot display the colors in our blog platform for the code sections, but if you look at the source code, you will see that the script uses ANSI color codes so that the output is clearer. Looking into Alfresco Share shows the newly created folder with the proper metadata:

That concludes the second part of this blog. If you are familiar with REST-API, writing this kind of small script shouldn't take you more than a few minutes; let's say 30 minutes if you want to add colors as I did and some wrapping/information around it, to have something that can be presented. The first part of this blog is available here and the third part of this blog here.

L’article Alfresco – Use Alfresco for a Trivia Game – Add questions est apparu en premier sur dbi Blog.

Alfresco – Use Alfresco for a Trivia Game – Repository skeleton

Mon, 2023-11-06 02:30

Have you ever wondered how you could use Alfresco to run a Trivia Game? No? Strange, I believe that should be a very important question to ask yourself when you are looking for an ECM for your company! If you have been working or following Alfresco in the past 10 years, you might have heard about the ContentCraft project, which is an integration between Alfresco and Minecraft, using CMIS. A few months ago, I was preparing a presentation about REST-API in Alfresco and to conclude the talk on a funnier note (a 45min talk about REST-API IS fun!), I thought about which game I could play using Alfresco REST-API.

With half a day to set up my Alfresco environment and implement the game, I obviously didn't want to do something too complex and therefore I thought about a Trivia Game. It's essentially a question-and-answer game, so knowing that Alfresco stores metadata and documents and that the REST-API can be used to fetch them, it appeared to be easily feasible. In addition to that, it would help me and my presentation by running a small quiz to “test” the attendees and make sure they understood (/followed :D) the talk.

In this first blog, I will talk about the Alfresco preparation needed, which mainly consists of the meta-model. In a second blog, I will go through the REST-API calls needed to add questions into Alfresco, with their associated good/bad answers of course. And the final blog will be about really “playing” the game. I will be using the REST-API because it's the simplest/fastest way to interact with Alfresco. A more advanced version of the game would most probably use Web Scripts/Services, so that it's not up to the client to know how many answers are needed or to check whether the answer is good or bad, since on the client I could obviously just disregard the real answer and display that I selected the correct one (but I don't cheat! ;)).

First thing first, to have something done quickly for my purpose, I designed a small meta-model using the Share Model Manager. I started with the creation of the model “dbi”:

Create Model

In terms of XML, it should be something like:

<model xmlns="http://www.alfresco.org/model/dictionary/1.0" name="dbi:dbi">
  <description>Model for demo</description>
  <author>Morgan Patou</author>
  ...
  <namespaces>
    <namespace uri="http://www.dbi-services.com/model/content/1.0" prefix="dbi"/>
  </namespaces>
  ...
</model>

Then I added an aspect “dbi_trivia”, so it can be added into existing nodes:

Create Aspect

In terms of XML, it should add something like:

  <aspect name="dbi:dbi_trivia">
    <title>dbi trivia</title>
    <properties>
      ...
    </properties>
    ...
  </aspect>

This game could be done using documents: for example, a document being a question, with the different answers and the correct one as content. Or it could be a type of document with the details as metadata. In this blog, I will use an aspect that will be applied to folders. Basically, a question is a folder, and the aspect is assigned to the folder so that it has access to the different metadata for questions and answers. I thought that would be one of the fastest/simplest approaches, so I went with that. Therefore, the next step is to create the different properties for the aspect: 1 for the question, then 4 for the different answers (it could be a multi-valued one as well) and finally a last one to specify which is the correct answer:

Aspect Properties

In terms of XML, it should add something like:

    <properties>
      <property name="dbi:dbi_question">
        <title>Question</title>
        <type>d:text</type>
        <mandatory>true</mandatory>
        ...
      </property>
      <property name="dbi:dbi_answer1">
        <title>Answer#1</title>
        <type>d:text</type>
        <mandatory>false</mandatory>
        ...
      </property>
      ...
      <property name="dbi:dbi_correct_answer">
        <title>Correct Answer</title>
        <type>d:text</type>
        <mandatory>false</mandatory>
        ...
      </property>
    </properties>

The “dbi:dbi_correct_answer” here is again a “d:text” metadata, in the sense that it will contain the textual answer (the same value as either answer #1, 2, 3 OR 4). It would of course be possible to have this parameter as an integer instead, to link to the answer. I selected a text so that it is slightly easier to show/see which one is the correct one, for example if you want to randomise the display of the possible answers.
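
Just to illustrate that last point, shuffling and checking becomes trivial in bash since only the text matters. A small sketch, assuming the answer/correct_answer variables are already loaded and “choice“ is the user's input (1 to 4) against the shuffled list:

answers=("${answer1}" "${answer2}" "${answer3}" "${answer4}")
# Shuffle the four answers for display, then compare the picked text with the correct one
mapfile -t shuffled < <(printf '%s\n' "${answers[@]}" | shuf)
[[ "${shuffled[$((choice-1))]}" == "${correct_answer}" ]] && echo "Correct!" || echo "Incorrect..."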

The next step is to create the layout to display these properties, which is optional but needed if you want them to be visible through Share. Again, something very simple:

Aspect Layout

As mentioned previously, the questions of a quiz would be different folders. To be able to handle/support multiple quizzes, it would be possible to either separate the quizzes based on a parent folder (each parent folder containing the different questions) or through another metadata field that indicates a unique ID for a specific quiz, in which case you could find the different questions through searches and you wouldn't really mind where the files are stored in Alfresco. The simplest being the parent folder (=quiz) and sub-folder (=questions) approach, I went with that, created one parent folder for now and noted its “uuid” (the last part of the “nodeRef“).
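
In case you need to retrieve that “uuid“ again later without going through Share, the nodes REST-API can resolve a path for you. A small sketch, assuming a hypothetical “Sites/trivia/documentLibrary/TriviaGame“ location for the parent folder (adapt it to wherever you created yours):

base_url="https://alf-trivia.dbi-services.com"
auth=$(echo -n admin:xxxx | base64)   # replace xxxx with the real password
# "-root-" is the repository root (Company Home); relativePath is resolved from there
curl -k -s -X GET "${base_url}/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-?relativePath=Sites/trivia/documentLibrary/TriviaGame" \
  -H "Authorization: Basic ${auth}" \
  -H "Accept: application/json" | jq -r ".entry.id"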

That concludes the first part of this blog. It's basically a 10-15 minute setup to have the skeleton needed to play the game on the repository side. The second part of this blog can be found here and the third part here.

L’article Alfresco – Use Alfresco for a Trivia Game – Repository skeleton est apparu en premier sur dbi Blog.

DBVISIT StandbyMP for PostgreSQL – 3 – Switchovers

Mon, 2023-11-06 01:09

In the last two posts (here and here) we’ve installed and configured DBVISIT StandbyMP for PostgreSQL and created a simple primary -> replica setup. Most of the replication setup was GUI driven and almost nothing had to be done manually, except for the creation of a user and configuration of pg_hba.conf. Currently we have the primary replicating to the replica and the StandbyMP management console managing this configuration:

One advantage of such a setup is that the replica can take over the role of the primary and vice versa, e.g. when patching is required on one of the nodes. In StandbyMP this is called a “Graceful Switchover”, so let's try to do this and see what happens:

This takes a few seconds and the instances will have switched roles:

The same works of course in the other direction.
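
If you want to double-check the roles from the command line after the switchover (in addition to what the management console shows), a quick query against each instance tells you which one is in recovery. A small sketch, to be run locally on each node:

# returns "f" on the current primary and "t" on the replica
psql -c "select pg_is_in_recovery();"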

If you prefer a graphical user interface for setting up and operating a primary -> replica configuration, then this could be an option for you. It is really easy to set up and you don't have to touch the command line.

L’article DBVISIT StandbyMP for PostgreSQL – 3 – Switchovers est apparu en premier sur dbi Blog.

What is the maximum number of columns for a table in PostgreSQL?

Sun, 2023-11-05 07:08

In our PostgreSQL DBA Essentials workshop we’re also letting the attendees know about the limits of PostgreSQL. Those limits are documented but one of those limits might not be very clear. The limit I am talking about is the maximum number of columns which can be in a table. In the documentation you’ll find this: “The maximum number of columns for a table is further reduced as the tuple being stored must fit in a single 8192-byte heap page. For example, excluding the tuple header, a tuple made up of 1600 int columns would consume 6400 bytes and could be stored in a heap page, but a tuple of 1600 bigint columns would consume 12800 bytes and would therefore not fit inside a heap page. Variable-length fields of types such as text, varchar, and char can have their values stored out of line in the table’s TOAST table when the values are large enough to require it. Only an 18-byte pointer must remain inside the tuple in the table’s heap. For shorter length variable-length fields, either a 4-byte or 1-byte field header is used and the value is stored inside the heap tuple.”

Depending on how you read this, it might or might not be clear. This is how I read it: you cannot create a table which contains data types which in total would exceed 8160 bytes, which is the block size minus the tuple header. Let's see if this is true or if I understood something wrong.

Here is a little bash script which generates a table consisting of n columns of a specific data type. There is no error handling in the script, so the data type needs to exist:

postgres@debian12-pg:/home/postgres/ [pgdev] cat gen_tab.sh 
#!/bin/bash
AMOUNT_OF_COLS=$1
DATA_TYPE=$2
STATEMENT="create table t ( "
for (( i=1 ; i<${AMOUNT_OF_COLS} ; i++ )); 
do
  STATEMENT+=" col${i} ${DATA_TYPE},"
done
STATEMENT+="col${AMOUNT_OF_COLS} ${DATA_TYPE} );"
echo ${STATEMENT}

When we create a table containing 1600 int columns, does that work?

postgres@debian12-pg:/home/postgres/ [pgdev] ./gen_tab.sh 1600 int > a.sql
postgres@debian12-pg:/home/postgres/ [pgdev] psql -f a.sql
CREATE TABLE

This works just fine and is consistent with the documentation. What happens if we do the same but use a bigint instead of an int column? The way I read the documentation, this should not work:

postgres@debian12-pg:/home/postgres/ [pgdev] psql -c "drop table t"
DROP TABLE
postgres@debian12-pg:/home/postgres/ [pgdev] ./gen_tab.sh 1600 bigint > a.sql
postgres@debian12-pg:/home/postgres/ [pgdev] psql -f a.sql
CREATE TABLE

This works fine as well and I was a bit surprised by that. What the documentation is really saying is that a row which consumes more than 8160 bytes will fail on insert or update. Here is another simple bash script to test this:

postgres@debian12-pg:/home/postgres/ [pgdev] cat ./insert_data.sh 
#!/bin/bash
AMOUNT_OF_COLS=$1
DATA=$2
STATEMENT="insert into t values ( "
for (( i=1 ; i<${AMOUNT_OF_COLS} ; i++ )); 
do
  STATEMENT+=" ${DATA},"
done
STATEMENT+="${DATA} );"
echo ${STATEMENT}

Using this against the table with the 1600 bigint columns will fail:

postgres@debian12-pg:/home/postgres/ [pgdev] ./insert_data.sh 1600 1 > b.sql
postgres@debian12-pg:/home/postgres/ [pgdev] psql -f b.sql
psql:b.sql:1: ERROR:  row is too big: size 12824, maximum size 8160

Doing the same against a table with 1600 int columns will succeed:

postgres@debian12-pg:/home/postgres/ [pgdev] psql -c "drop table t"
DROP TABLE
postgres@debian12-pg:/home/postgres/ [pgdev] ./gen_tab.sh 1600 int > a.sql
postgres@debian12-pg:/home/postgres/ [pgdev] psql -f a.sql
CREATE TABLE
postgres@debian12-pg:/home/postgres/ [pgdev] ./insert_data.sh 1600 1 > b.sql
postgres@debian12-pg:/home/postgres/ [pgdev] psql -f b.sql 
INSERT 0 1

So, when it comes to the maximum number of columns for a table, it is not only about the data types of the columns. It is also about the content of the rows coming in.
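
If you want to see how much space a row actually consumes, pg_column_size() on the whole row expression gives a good indication. A quick sketch against the int table from above (the reported size includes some header overhead, so expect a value slightly above the 6400 bytes of raw data):

psql -c "select pg_column_size(t.*) from t limit 1;"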

Btw: Deleted columns also count against the limit of 1600. Consider this:

postgres@debian12-pg:/home/postgres/ [pgdev] psql -c "drop table t"
DROP TABLE
postgres@debian12-pg:/home/postgres/ [pgdev] ./gen_tab.sh 1600 int > a.sql
postgres@debian12-pg:/home/postgres/ [pgdev] psql -f a.sql
CREATE TABLE

This created a table with 1600 columns. Let’s drop two and add another one:

postgres=# alter table t drop column col1600;
ALTER TABLE
postgres=# alter table t drop column col1599;
ALTER TABLE
postgres=# alter table t add column xxx int;
ERROR:  tables can have at most 1600 columns
postgres=# 

Even if we dropped two columns, we cannot add another one afterwards.
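
The dropped columns are still tracked in the catalog, which is why they keep counting against the limit. You can see them flagged in pg_attribute:

psql -c "select count(*) from pg_attribute where attrelid = 't'::regclass and attisdropped;"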

L’article What is the maximum number of columns for a table in PostgreSQL? est apparu en premier sur dbi Blog.

Documentum – Unable to install an IndexAgent due to RSA/FIPS libraries

Fri, 2023-11-03 08:51

A couple of years ago at a new customer, I was asked to install a new xPlore IndexAgent on an existing Documentum 16.4 environment. Unfortunately, that installation didn't succeed, and I ended up with the following install logs:

[xplore@ds-0 ~]$ cd $XPLORE_HOME/setup/indexagent/logs
[xplore@ds-0 logs]$
[xplore@ds-0 logs]$ cat install.log
09:33:02,618  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: Index Agent
09:33:02,619  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 16.4.0000.0197
09:33:02,619  INFO [main]  -
09:33:02,647  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
09:33:11,929  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - setApplicationServer sharedDfcLibDir is:$XPLORE_HOME/dfc
09:33:11,930  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - getFileFromResource for templates/appserver.properties
09:33:12,006  INFO [main] com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions - starting DctmActions
09:33:14,381  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - user name =
09:33:14,382  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - The batch file: $XPLORE_HOME/temp/installer/wildfly/dctm_tmpcmd0.sh exist? false
09:33:14,383  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - The user home is : /home/xplore
09:33:14,386  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - Executing temporary batch file: $XPLORE_HOME/temp/installer/wildfly/dctm_tmpcmd0.sh for running: $JAVA_HOME/bin/java -cp $WILDFLY_HOME/modules/system/layers/base/emc/documentum/security/main/dfc.jar:$WILDFLY_HOME/modules/system/layers/base/emc/documentum/security/main/aspectjrt.jar:$WILDFLY_HOME/modules/system/layers/base/emc/documentum/security/main/DctmUtils.jar com.documentum.install.appserver.utils.DctmCreatePasswordHash xxxx
09:33:15,046  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - Deploying to Group Indexagent_REPO2... IndexAgent (IndexAgent.war): does not exist!
09:33:15,057  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/dfc.properties
09:33:15,065  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/log4j.properties
09:33:15,071  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/logback.xml
09:33:15,080  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/indexagent.xml
09:33:15,092  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/web.xml
09:33:15,102  INFO [main] com.documentum.install.shared.actions.DiActionCopyFilesTarget - copying $XPLORE_HOME/setup/indexagent/tools/templates/aclreplication.sh.template to $XPLORE_HOME/setup/indexagent/tools/aclreplication_for_REPO2.sh
09:33:15,120  INFO [main] com.documentum.install.shared.actions.DiActionCopyFilesTarget - copying $XPLORE_HOME/setup/indexagent/tools/templates/ftintegrity.sh.template to $XPLORE_HOME/setup/indexagent/tools/ftintegrity_for_REPO2.sh
09:33:15,130  INFO [main] com.documentum.install.shared.actions.DiActionSetPermissionTarget - performing chmod 755 $XPLORE_HOME/setup/indexagent/tools/aclreplication_for_REPO2.sh
09:33:15,137  INFO [main] com.documentum.install.shared.actions.DiActionSetPermissionTarget - performing chmod 755 $XPLORE_HOME/setup/indexagent/tools/ftintegrity_for_REPO2.sh
09:33:15,141  INFO [main] com.documentum.install.shared.actions.DiActionResolveMacrosTarget - performing macro resolution on $XPLORE_HOME/setup/indexagent/tools/aclreplication_for_REPO2.sh
09:33:15,153  INFO [main] com.documentum.install.shared.actions.DiActionResolveMacrosTarget - performing macro resolution on $XPLORE_HOME/setup/indexagent/tools/ftintegrity_for_REPO2.sh
09:33:15,166  INFO [main] com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions - Finished DctmActions.
09:33:15,571  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/dfc.keystore
09:33:15,945 ERROR [main] com.documentum.fc.client.security.impl.IdentityManager - [DFC_SECURITY_IDENTITY_INIT] no identity initialization or incomplete identity initialization
com.documentum.fc.common.DfCriticalException:
        at com.documentum.fc.client.security.internal.CreateIdentityCredential$MultiFormatPKIKeyPair.<init>(CreateIdentityCredential.java:734)
        at com.documentum.fc.client.security.internal.CreateIdentityCredential.<init>(CreateIdentityCredential.java:98)
        at com.documentum.fc.client.security.impl.InitializeKeystoreForDfc.execute(InitializeKeystoreForDfc.java:46)
        at com.documentum.fc.client.security.internal.KeystoreMgr.initForDfcWithDefaultPolicy(KeystoreMgr.java:71)
        at com.documentum.fc.client.security.impl.IdentityManager.initFromKeystore(IdentityManager.java:139)
        at com.documentum.fc.client.security.impl.IdentityManager.<init>(IdentityManager.java:41)
        at com.documentum.fc.impl.RuntimeContext.<init>(RuntimeContext.java:52)
        at com.documentum.fc.impl.RuntimeContext.<clinit>(RuntimeContext.java:185)
        at com.documentum.fc.client.DfClient.<clinit>(DfClient.java:772)
        at com.documentum.com.DfClientX.getLocalClient(DfClientX.java:43)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.createSessionManager(SessionController.java:43)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.getSessionManager(SessionController.java:20)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.getSession(SessionController.java:61)
        at com.documentum.server.impl.fulltext.indexagent.install.DocbaseSetup.<init>(DocbaseSetup.java:71)
        at com.documentum.install.indexagent.services.Docbase.<init>(Docbase.java:27)
        at com.documentum.install.indexagent.services.Docbase.<init>(Docbase.java:22)
        at com.documentum.install.indexagent.services.Docbase.createFulltextObjectsUsingMacros(Docbase.java:72)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.configureFulltextDocbaseObjects(ConfigIndexAgentActions.java:367)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.createIndexAgent(ConfigIndexAgentActions.java:107)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.install(ConfigIndexAgentActions.java:50)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.Installer.install(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
Caused by: java.security.NoSuchAlgorithmException: A JSAFE_SecureRandom object of HMACDRBG256 is not available on any of the devices. (Java)
        at com.rsa.jsafe.JSAFE_SecureRandom.a(Unknown Source)
        at com.rsa.jsafe.JSAFE_SecureRandom.getInstance(Unknown Source)
        at com.documentum.fc.client.security.internal.CreateIdentityCredential$MultiFormatPKIKeyPair.<init>(CreateIdentityCredential.java:601)
        ... 34 more
09:33:15,948  INFO [main] com.documentum.fc.impl.RuntimeContext - DFC Version is 16.4.0000.0185
09:33:15,982  INFO [Timer-2] com.documentum.fc.client.impl.bof.cache.ClassCacheManager$CacheCleanupTask - [DFC_BOF_RUNNING_CLEANUP] Running class cache cleanup task
09:33:17,069  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
09:33:17,087 ERROR [main] com.documentum.fc.client.security.impl.IdentityManager - [DFC_SECURITY_IDENTITY_CREATION] failure on creation of identity: 'bad bundle'
DfException:: THREAD: main; MSG: [DFC_SECURITY_IDENTITY_BUNDLE_FAIL] could not create identity bundle because identity initialization failed; ERRORCODE: ff; NEXT: null
        at com.documentum.fc.client.security.impl.IdentityManager.createIdentityBundle(IdentityManager.java:188)
        at com.documentum.fc.client.security.impl.IdentityManager.getIdentityBundle(IdentityManager.java:51)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.authenticate(DocbaseConnection.java:413)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.open(DocbaseConnection.java:131)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.<init>(DocbaseConnection.java:100)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.<init>(DocbaseConnection.java:60)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionFactory.newDocbaseConnection(DocbaseConnectionFactory.java:26)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionManager.createNewConnection(DocbaseConnectionManager.java:192)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnectionManager.getDocbaseConnection(DocbaseConnectionManager.java:122)
        at com.documentum.fc.client.impl.session.SessionFactory.newSession(SessionFactory.java:23)
        at com.documentum.fc.client.impl.session.PrincipalAwareSessionFactory.newSession(PrincipalAwareSessionFactory.java:44)
        at com.documentum.fc.client.impl.session.PooledSessionFactory.newSession(PooledSessionFactory.java:49)
        at com.documentum.fc.client.impl.session.SessionManager.getSessionFromFactory(SessionManager.java:134)
        at com.documentum.fc.client.impl.session.SessionManager.newSession(SessionManager.java:72)
        at com.documentum.fc.client.impl.session.SessionManager.getSession(SessionManager.java:191)
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.connect(ModuleManager.java:397)
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.init(ModuleManager.java:352)
        at com.documentum.fc.client.impl.bof.classmgmt.ModuleManager.getInstance(ModuleManager.java:43)
        at com.documentum.fc.client.security.impl.DfcIdentityPublisher.<init>(DfcIdentityPublisher.java:44)
        at com.documentum.fc.client.security.internal.RegistrationMgr.register(RegistrationMgr.java:34)
        at com.documentum.fc.impl.RuntimeContext.<clinit>(RuntimeContext.java:195)
        at com.documentum.fc.client.DfClient.<clinit>(DfClient.java:772)
        at com.documentum.com.DfClientX.getLocalClient(DfClientX.java:43)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.createSessionManager(SessionController.java:43)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.getSessionManager(SessionController.java:20)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.getSession(SessionController.java:61)
        at com.documentum.server.impl.fulltext.indexagent.install.DocbaseSetup.<init>(DocbaseSetup.java:71)
        at com.documentum.install.indexagent.services.Docbase.<init>(Docbase.java:27)
        at com.documentum.install.indexagent.services.Docbase.<init>(Docbase.java:22)
        at com.documentum.install.indexagent.services.Docbase.createFulltextObjectsUsingMacros(Docbase.java:72)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.configureFulltextDocbaseObjects(ConfigIndexAgentActions.java:367)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.createIndexAgent(ConfigIndexAgentActions.java:107)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.install(ConfigIndexAgentActions.java:50)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.Installer.install(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
09:33:17,373  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/dfc.keystore
09:33:17,395 ERROR [main] com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions - Failed to configure fulltext repository objects.
java.lang.Exception: java.lang.ExceptionInInitializerError
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.configureFulltextDocbaseObjects(ConfigIndexAgentActions.java:372)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.createIndexAgent(ConfigIndexAgentActions.java:107)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.install(ConfigIndexAgentActions.java:50)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.Installer.install(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
        at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
        at com.zerog.ia.installer.Main.main(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.zerog.lax.LAX.launch(Unknown Source)
        at com.zerog.lax.LAX.main(Unknown Source)
Caused by: java.lang.ExceptionInInitializerError
        at com.documentum.fc.client.DfClient.<clinit>(DfClient.java:772)
        at com.documentum.com.DfClientX.getLocalClient(DfClientX.java:43)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.createSessionManager(SessionController.java:43)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.getSessionManager(SessionController.java:20)
        at com.documentum.server.impl.fulltext.indexagent.install.SessionController.getSession(SessionController.java:61)
        at com.documentum.server.impl.fulltext.indexagent.install.DocbaseSetup.<init>(DocbaseSetup.java:71)
        at com.documentum.install.indexagent.services.Docbase.<init>(Docbase.java:27)
        at com.documentum.install.indexagent.services.Docbase.<init>(Docbase.java:22)
        at com.documentum.install.indexagent.services.Docbase.createFulltextObjectsUsingMacros(Docbase.java:72)
        at com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions.configureFulltextDocbaseObjects(ConfigIndexAgentActions.java:367)
        ... 17 more
Caused by: java.lang.NullPointerException
        at com.documentum.fc.client.security.impl.IpAndRcHelper.getRegistration(IpAndRcHelper.java:439)
        at com.documentum.fc.client.security.impl.DfcIdentityPublisher.getRegistration(DfcIdentityPublisher.java:475)
        at com.documentum.fc.client.security.impl.DfcIdentityPublisher.publish(DfcIdentityPublisher.java:134)
        at com.documentum.fc.client.security.impl.DfcIdentityPublisher.publish(DfcIdentityPublisher.java:102)
        at com.documentum.fc.client.security.internal.RegistrationMgr.register(RegistrationMgr.java:34)
        at com.documentum.fc.impl.RuntimeContext.<clinit>(RuntimeContext.java:195)
        ... 27 more
09:33:17,396  INFO [main]  - The INSTALLER_UI value is SILENT
09:33:17,396  INFO [main]  - The common.installLocation value is $XPLORE_HOME
09:33:17,396  INFO [main]  - The indexagent.configMode.create value is 1
09:33:17,396  INFO [main]  - The indexagent.configMode.upgrade value is 0
09:33:17,396  INFO [main]  - The indexagent.configMode.delete value is 0
09:33:17,396  INFO [main]  - The indexagent.configMode.create.migration value is 0
09:33:17,397  INFO [main]  - The indexagent.ess.host value is ds-0.ds.namespace.svc.cluster.local
09:33:17,397  INFO [main]  - The indexagent.ess.port value is 9300
09:33:17,397  INFO [main]  - The indexagent.name value is Indexagent_REPO2
09:33:17,397  INFO [main]  - The indexagent.FQDN value is ds-0.ds.namespace.svc.cluster.local
09:33:17,397  INFO [main]  - The indexagent.instance.port value is 9220
09:33:17,397  INFO [main]  - The indexagent.instance.password value is ******
09:33:17,397  INFO [main]  - The indexagent.docbase.name value is REPO2
09:33:17,397  INFO [main]  - The indexagent.docbase.user value is dmadmin
09:33:17,397  INFO [main]  - The indexagent.docbase.password value is ******
09:33:17,398  INFO [main]  - The indexagent.connectionBroker.host value is cs.namespace.svc.cluster.local
09:33:17,398  INFO [main]  - The indexagent.connectionBroker.port value is 1489
09:33:17,398  INFO [main]  - The indexagent.globalRegistryRepository.name value is gr_REPO
09:33:17,398  INFO [main]  - The indexagent.globalRegistryRepository.user value is dm_bof_registry
09:33:17,398  INFO [main]  - The indexagent.globalRegistryRepository.password value is ******
09:33:17,398  INFO [main]  - The indexagent.storage.name value is default
09:33:17,398  INFO [main]  - The indexagent.local_content_area value is /data/ds-0/data/export/ia/Indexagent_REPO2
09:33:17,398  INFO [main]  - The common.installOwner.password value is ******
09:33:17,398  INFO [main]  - The indexagent.delete.objects value is 1
09:33:17,398  INFO [main]  - The indexagent.STANDBY_FULLTEXT_USER value is dm_fulltext_index_user_02
09:33:17,398  INFO [main]  - The indexagent.STANDBY_FULLTEXT_INDEX_NAME value is REPO2_ftindex_02
09:33:17,399  INFO [main]  - The indexagent.STANDBY_FULLTEXT_ENGINE_NAME value is DSearch Fulltext Engine Configuration 02
09:33:17,399  INFO [main]  - The env PATH value is: /usr/xpg4/bin:$JAVA_HOME/bin:$JAVA_HOME/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/xplore/.local/bin:/home/xplore/bin:/sbin
09:33:17,445  WARN [main] com.documentum.install.indexagent.installanywhere.actions.RegisterIndexAgentWithWatchdog - Stopping watchdog.
09:33:17,445  WARN [main] com.documentum.install.indexagent.installanywhere.actions.RegisterIndexAgentWithWatchdog - Registering ds-0.ds.namespa_9220_IndexAgent with watchdog.
09:33:17,793  INFO [main] com.emc.ess.watchdog.config.registry.impl.EssMultiWatchdogConfiguration - Begin to add SingleWatchdogConfig with hostname: ds-0.ds.namespace.svc.cluster.local
09:33:17,795  INFO [main] com.emc.ess.watchdog.config.EssWatchdogRegistration - Successfully createdcom.emc.ess.watchdog.common.impl.tasks.SendFailureNotificationTask
09:33:17,795  INFO [main] com.emc.ess.watchdog.config.EssWatchdogRegistration - Successfully created IndexAgent task ds-0.ds.namespa_9220_IndexAgent_HeartBeat for application instance ds-0.ds.namespa_9220_IndexAgent
09:33:17,888  INFO [main] com.emc.ess.watchdog.config.EssWatchdogRegistration - Successfully added configuration for  {IndexAgent,ds-0.ds.namespa_9220_IndexAgent} under watchdog host ds-0.ds.namespace.svc.cluster.local
09:33:17,888  WARN [main] com.documentum.install.indexagent.installanywhere.actions.RegisterIndexAgentWithWatchdog - Registration succeeded.
[xplore@ds-0 logs]$

As you probably know, xPlore 16.4 is now quite old, and it was still a version that came out-of-the-box with the Oracle JDK 8. The issue above is that the DFC Keystore couldn't be initialized/created for this new IndexAgent, and the reason for that is shown in the log file as “NoSuchAlgorithmException: A JSAFE_SecureRandom object of HMACDRBG256 is not available“. HMACDRBG256 is the default Pseudo-Random Number Generator (PRNG) algorithm for FIPS, so that made me think about the cryptoj and other JARs that are needed for RSA/FIPS.

Therefore, the first thing I did was check the current Java version as well as the status of the RSA/FIPS libraries:

[xplore@ds-0 logs]$ cd $JAVA_HOME/jre/lib/ext
[xplore@ds-0 ext]$
[xplore@ds-0 ext]$ java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
[xplore@ds-0 ext]$
[xplore@ds-0 ext]$ ls -ltr
total 27792
-rw-r----- 1 xplore xplore 18493706 Jun 18  2020 jfxrt.jar
-rw-r----- 1 xplore xplore    69304 Jun 18  2020 zipfs.jar
-rw-r----- 1 xplore xplore   280139 Jun 18  2020 sunpkcs11.jar
-rw-r----- 1 xplore xplore   280999 Jun 18  2020 sunjce_provider.jar
-rw-r----- 1 xplore xplore    60361 Jun 18  2020 sunec.jar
-rw-r----- 1 xplore xplore  2024157 Jun 18  2020 nashorn.jar
-rw-r----- 1 xplore xplore  1179462 Jun 18  2020 localedata.jar
-rw-r----- 1 xplore xplore    44516 Jun 18  2020 jaccess.jar
-rw-r----- 1 xplore xplore     8286 Jun 18  2020 dnsns.jar
-rw-r----- 1 xplore xplore  3860563 Jun 18  2020 cldrdata.jar
-rw-r----- 1 xplore xplore     1269 Jun 18  2020 meta-index
[xplore@ds-0 ext]$

As you can see, it looked like someone updated/patched the Oracle JDK 8 (I believe the default that comes with xPlore 16.4 is 1.8.0_152 and not 1.8.0_261), but while doing so, only the base JARs were put in place and not the RSA/FIPS related ones. So, it looks like something was missed in the patching steps. These additional JARs aren't really needed for the runtime, but they are when installing a new IndexAgent because of the DFC Keystore creation, which probably explains why the customer didn't see any issues in the few months after the JDK upgrade.

I quickly installed an OOTB xPlore 16.4 environment (with the OOTB JDK 8u152) to be able to list all the “custom” JARs present, so that I could take them and restore them into the environment with the patched JDK. This is the list of JARs (OOTB + “custom”):

[xplore@ds-0 ext]$ ls -ltr
total 29700
-rw-r----- 1 xplore xplore 18513457 Sep 14  2017 jfxrt.jar
-rw-r----- 1 xplore xplore    68965 Sep 14  2017 zipfs.jar
-rw-r----- 1 xplore xplore   248726 Sep 14  2017 sunpkcs11.jar
-rw-r----- 1 xplore xplore   273952 Sep 14  2017 sunjce_provider.jar
-rw-r----- 1 xplore xplore    41672 Sep 14  2017 sunec.jar
-rw-r----- 1 xplore xplore  2023991 Sep 14  2017 nashorn.jar
-rw-r----- 1 xplore xplore      746 Sep 14  2017 meta-index
-rw-r----- 1 xplore xplore  1179101 Sep 14  2017 localedata.jar
-rw-r----- 1 xplore xplore    44516 Sep 14  2017 jaccess.jar
-rw-r----- 1 xplore xplore     8286 Sep 14  2017 dnsns.jar
-rw-r----- 1 xplore xplore  3860502 Sep 14  2017 cldrdata.jar
-rw-r----- 1 xplore xplore     8046 Oct 15 09:41 util.jar
-rw-r----- 1 xplore xplore   421576 Oct 15 09:41 sslj.jar
-rw-r----- 1 xplore xplore   505314 Oct 15 09:41 jcm.jar
-rw-r----- 1 xplore xplore   506533 Oct 15 09:41 jcmFIPS.jar
-rw-r----- 1 xplore xplore   325610 Oct 15 09:41 jcmandroidfips.jar
-rw-r----- 1 xplore xplore  1548393 Oct 15 09:41 cryptojcommon.jar
-rw-r----- 1 xplore xplore    70096 Oct 15 09:41 cryptojce.jar
-rw-r----- 1 xplore xplore   727472 Oct 15 09:41 certj.jar
[xplore@ds-0 ext]$

As you can see above, there are 11 OOTB JARs inside a Java 8u152 and xPlore, during its installation, deploys an additional 8 JARs for RSA/FIPS. Therefore, it looks like the person that upgraded the JDK forgot to restore these 8 JARs into the new JDK folder. For information, this is documented in the xPlore Patch Notes under the “Upgrading Bundled JDK 8” section.
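
The fix itself is then just a matter of copying these 8 JARs back into the “ext“ folder of the patched JDK. A minimal sketch, assuming the JARs were saved somewhere before the upgrade (the backup location below is hypothetical; take them from the old JDK folder or any backup you still have):

backup_dir="/tmp/rsa_fips_jars_backup"   # hypothetical location of the 8 saved RSA/FIPS JARs
for jar in util.jar sslj.jar jcm.jar jcmFIPS.jar jcmandroidfips.jar cryptojcommon.jar cryptojce.jar certj.jar; do
  cp -p "${backup_dir}/${jar}" "${JAVA_HOME}/jre/lib/ext/"
  chmod 640 "${JAVA_HOME}/jre/lib/ext/${jar}"   # match the rw-r----- permissions of the other JARs
done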

I restored the 8 “custom” JARs related to RSA/FIPS into the upgraded JDK 8u261 and then started the xPlore IndexAgent installation again, which succeeded this time:

[xplore@ds-0 ~]$ cd $JAVA_HOME/jre/lib/ext
[xplore@ds-0 ext]$
[xplore@ds-0 ext]$ ls -ltr
total 29740
-rw-r----- 1 xplore xplore 18493706 Jun 18  2020 jfxrt.jar
-rw-r----- 1 xplore xplore    69304 Jun 18  2020 zipfs.jar
-rw-r----- 1 xplore xplore   280139 Jun 18  2020 sunpkcs11.jar
-rw-r----- 1 xplore xplore   280999 Jun 18  2020 sunjce_provider.jar
-rw-r----- 1 xplore xplore    60361 Jun 18  2020 sunec.jar
-rw-r----- 1 xplore xplore  2024157 Jun 18  2020 nashorn.jar
-rw-r----- 1 xplore xplore  1179462 Jun 18  2020 localedata.jar
-rw-r----- 1 xplore xplore    44516 Jun 18  2020 jaccess.jar
-rw-r----- 1 xplore xplore     8286 Jun 18  2020 dnsns.jar
-rw-r----- 1 xplore xplore  3860563 Jun 18  2020 cldrdata.jar
-rw-r----- 1 xplore xplore     1269 Jun 18  2020 meta-index
-rw-r----- 1 xplore xplore     8046 Oct 15 10:20 util.jar
-rw-r----- 1 xplore xplore   421576 Oct 15 10:20 sslj.jar
-rw-r----- 1 xplore xplore   505314 Oct 15 10:20 jcm.jar
-rw-r----- 1 xplore xplore   506533 Oct 15 10:20 jcmFIPS.jar
-rw-r----- 1 xplore xplore   325610 Oct 15 10:20 jcmandroidfips.jar
-rw-r----- 1 xplore xplore  1548393 Oct 15 10:20 cryptojcommon.jar
-rw-r----- 1 xplore xplore    70096 Oct 15 10:20 cryptojce.jar
-rw-r----- 1 xplore xplore   727472 Oct 15 10:20 certj.jar
[xplore@ds-0 ext]$
[xplore@ds-0 ext]$ cd $XPLORE_HOME/setup/indexagent/logs
[xplore@ds-0 logs]$
[xplore@ds-0 logs]$ cat install.log
10:50:18,085  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: Index Agent
10:50:18,085  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 16.4.0000.0197
10:50:18,085  INFO [main]  -
10:50:18,126  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
10:50:27,513  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - setApplicationServer sharedDfcLibDir is:$XPLORE_HOME/dfc
10:50:27,514  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - getFileFromResource for templates/appserver.properties
10:50:27,596  INFO [main] com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions - starting DctmActions
10:50:32,428  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - user name =
10:50:32,430  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - The batch file: $XPLORE_HOME/temp/installer/wildfly/dctm_tmpcmd0.sh exist? false
10:50:32,430  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - The user home is : /home/xplore
10:50:32,435  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - Executing temporary batch file: $XPLORE_HOME/temp/installer/wildfly/dctm_tmpcmd0.sh for running: $JAVA_HOME/bin/java -cp $WILDFLY_HOME/modules/system/layers/base/emc/documentum/security/main/dfc.jar:$WILDFLY_HOME/modules/system/layers/base/emc/documentum/security/main/aspectjrt.jar:$WILDFLY_HOME/modules/system/layers/base/emc/documentum/security/main/DctmUtils.jar com.documentum.install.appserver.utils.DctmCreatePasswordHash xxxx
10:50:32,867  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - Server DctmServer_Indexagent_REPO2 already exists!
10:50:32,868  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - Deploying to Group Indexagent_REPO2... IndexAgent (IndexAgent.war): does not exist!
10:50:32,872  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/dfc.properties
10:50:32,879  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/log4j.properties
10:50:32,886  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/logback.xml
10:50:32,893  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/indexagent.xml
10:50:32,904  INFO [main] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/web.xml
10:50:32,914  INFO [main] com.documentum.install.shared.actions.DiActionCopyFilesTarget - copying $XPLORE_HOME/setup/indexagent/tools/templates/aclreplication.sh.template to $XPLORE_HOME/setup/indexagent/tools/aclreplication_for_REPO2.sh
10:50:32,931  INFO [main] com.documentum.install.shared.actions.DiActionCopyFilesTarget - copying $XPLORE_HOME/setup/indexagent/tools/templates/ftintegrity.sh.template to $XPLORE_HOME/setup/indexagent/tools/ftintegrity_for_REPO2.sh
10:50:32,949  INFO [main] com.documentum.install.shared.actions.DiActionSetPermissionTarget - performing chmod 755 $XPLORE_HOME/setup/indexagent/tools/aclreplication_for_REPO2.sh
10:50:32,955  INFO [main] com.documentum.install.shared.actions.DiActionSetPermissionTarget - performing chmod 755 $XPLORE_HOME/setup/indexagent/tools/ftintegrity_for_REPO2.sh
10:50:32,962  INFO [main] com.documentum.install.shared.actions.DiActionResolveMacrosTarget - performing macro resolution on $XPLORE_HOME/setup/indexagent/tools/aclreplication_for_REPO2.sh
10:50:32,978  INFO [main] com.documentum.install.shared.actions.DiActionResolveMacrosTarget - performing macro resolution on $XPLORE_HOME/setup/indexagent/tools/ftintegrity_for_REPO2.sh
10:50:32,989  INFO [main] com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions - Finished DctmActions.
10:50:33,385  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/dfc.keystore
10:50:34,002  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential$MultiFormatPKIKeyPair - generated RSA (2,048-bit strength) mutiformat key pair in 328 ms
10:50:34,028  INFO [main] com.documentum.fc.client.security.internal.CreateIdentityCredential - certificate created for DFC <CN=dfc_ZpcFYVkXHfBSo51C2rp7HXSbxbwa,O=EMC,OU=Documentum> valid from Fri Oct 15 10:45:33 UTC 2021 to Mon Oct 13 10:50:33 UTC 2031:

10:50:34,032  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/dfc.keystore
10:50:34,055  INFO [main] com.documentum.fc.client.security.impl.InitializeKeystoreForDfc - [DFC_SECURITY_IDENTITY_INITIALIZED] Initialized new identity in keystore, DFC alias=dfc, identity=dfc_ZpcFYVkXHfBSo51C2rp7HXSbxbwa
10:50:34,056  INFO [main] com.documentum.fc.client.security.impl.AuthenticationMgrForDfc - identity for authentication is dfc_ZpcFYVkXHfBSo51C2rp7HXSbxbwa
10:50:34,057  INFO [main] com.documentum.fc.impl.RuntimeContext - DFC Version is 16.4.0000.0185
10:50:34,078  INFO [Timer-2] com.documentum.fc.client.impl.bof.cache.ClassCacheManager$CacheCleanupTask - [DFC_BOF_RUNNING_CLEANUP] Running class cache cleanup task
10:50:35,346  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
10:50:35,402  INFO [main] com.documentum.fc.client.security.internal.AuthenticationMgr - new identity bundle <dfc_ZpcFYVkXHfBSo51C2rp7HXSbxbwa  1634295035      ds-0.ds.namespace.svc.cluster.local           YnmJpnz48aXW8QuEkgQm/eHbq9hCgJnUGCxdjmPsUJzEnEr5nSwk3KPQ2WUo5ihIuylu2ihLt5o8mTpsEoIBt177lMnAvtU+nbBtB0ItTnD4Bt1M83IQR+t5JAQSYF7Yk7WYCiAADEcKWno6/JiEgkYM0qjU5HwR0w/zmxDSf2330KImqYRPLQtEdCoW8Z3+SSs2/xJdrcki27of9CKH5BXBaXpGpqFzBCYYy76C+SL5+GUR8srU2Q9d4qo5GrlaLJecyWw1Oy3ZZbxDX/nzn7PmBOemkS8rqHZK5msTW9Z8G7Lt6WfXv6w3qumAYJhJCXnXtQEd0TG2liM5XWGqVg==>
10:50:35,689  INFO [main] com.documentum.fc.client.security.impl.JKSKeystoreUtilForDfc - keystore file name is $WILDFLY_HOME/server/DctmServer_Indexagent_REPO2/deployments/IndexAgent.war/WEB-INF/classes/dfc.keystore
10:50:35,734  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: false
10:50:38,098  INFO [main] com.documentum.fc.client.privilege.impl.PublicKeyCertificate - stored certificate for CN
10:50:38,262  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling in gr_REPO a new record with this persistent certificate:
-----BEGIN CERTIFICATE-----
MIIDHzCCAgcCEBVLERfLzAbZ/gi5w7PVnM4wDQYJKoZIhvcNAQELBQAwTjETMBEG
A1UECwwKRG9jdW1lbnR1bTEMMAoGA1UECgwDRU1DMSkwJwYDVQQDDCBkZmNfWlhT
YnhrWHBjRllWMnJwN0hIZkJTbzUxQ2J3YTAeFw0yMTEwMTUxMDQ1MzNaFw0zMTEw
tvbnbJm2IAOE1a1w/dK+NTltmB4QvOoVAvOkaiuTN6qMmi4zCQRjDssaxfdnn9xV
K9lj+v+lXdjlhkCg8PGIssjaAd+zxiVPK9+so52pt2U9rwSk/ASXZqpa3sERO0ua
CJtlPyHuCXHVB6yrwwgnkyWO0PFyfWTi4DUMKJpQCjXGR17+EkfS2u5dGtEyeiLt
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCiBaGp0Wh0hp3/xptFspRkPKaDoOB3
phrVNQqiJT3CiqJgFabiIEGS3AOkm85372RaS4ZrqUh++P5LEkflFZuRzvgmeLMj
HAH5VPSKM8QE3VrVQo17X1msfq4HeRXpUYPk7iL9HPoDSrL7J+kphTPrAgMBAAEw
DQYJKoZIhvcNAQELBQADggEBAB4pqKL0J7C63/IyJGXN/jZXaN8Fg2bz5bYythgI
MTMxMDUwMzNaME4xEzARBgNVBAsMCkRvY3VtZW50dW0xDDAKBgNVBAoMA0VNQzEp
MCcGA1UEAwwgZGZjX1pYU2J4a1hwY0ZZVjJycDdISGZCU281MUNid2EwggEiMA0G
ZwF0p0Io6oDVXjK921c7TovGz8SdxVARSR/LG7EAXYAqkvXkXMkow65BonvBPZcE
Y6oYUq7OrtWTWd+yOryZ+y+TwG0P2cDd5nErCSoiuNlJ+MvhvJ/fraknJsr9/iGD
rZvZpAT0aRAKwKaj65RlK6Pj43B6rPlouD1yS6Kgb5KpgOCuK/HxpcseUsmNDh5v
JRD05r0VxD6b3nMqRP3E7XPr3hbEbq79R6qF8aaF+mm4SZNSEgm1Ps2ruzchwShe
Xn+DBv7XA0xCOmHbLvLe/tLlNKH8GPCuqcbcSUTTCmyE8/A=
-----END CERTIFICATE-----
10:50:38,266  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - found client registration: false
10:50:38,454  INFO [main] com.documentum.fc.client.security.impl.IpAndRcHelper - filling a new registration record for dfc_ZpcFYVkXHfBSo51C2rp7HXSbxbwa
10:50:38,478  INFO [main] com.documentum.fc.client.security.impl.DfcIdentityPublisher - [DFC_SECURITY_GR_REGISTRATION_PUBLISH] this dfc instance is now published in the global registry gr_REPO
10:50:39,542  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
10:50:45,788  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
10:50:46,549  INFO [main] com.documentum.fc.client.impl.connection.docbase.DocbaseConnection - Object protocol version 2
10:50:47,848  INFO [main] com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions - storage name is set to: default
10:50:49,292  INFO [main] com.documentum.install.indexagent.installanywhere.actions.ConfigIndexAgentActions - creating $XPLORE_HOME/installinfo/instances/indexagent
10:50:49,332  WARN [main] com.documentum.install.indexagent.installanywhere.actions.RegisterIndexAgentWithWatchdog - Stopping watchdog.
10:50:49,332  WARN [main] com.documentum.install.indexagent.installanywhere.actions.RegisterIndexAgentWithWatchdog - Registering ds-0.ds.namespa_9220_IndexAgent with watchdog.
10:50:49,568  INFO [main] com.emc.ess.watchdog.config.registry.impl.EssMultiWatchdogConfiguration - Begin to add SingleWatchdogConfig with hostname: ds-0.ds.namespace.svc.cluster.local
10:50:49,568  INFO [main] com.emc.ess.watchdog.config.EssWatchdogRegistration - Successfully added configuration for  {IndexAgent,ds-0.ds.namespa_9220_IndexAgent} under watchdog host ds-0.ds.namespace.svc.cluster.local
10:50:49,568  WARN [main] com.documentum.install.indexagent.installanywhere.actions.RegisterIndexAgentWithWatchdog - Registration succeeded.
[xplore@ds-0 logs]$

Long story short, whenever you want to patch something, always check the documentation to make sure you aren't forgetting anything! Even if the system can start and run properly after the patching, you might cause issues for other people in the future, as was the case here ;).

The article Documentum – Unable to install an IndexAgent due to RSA/FIPS libraries first appeared on the dbi Blog.

Add TDE to Oracle Database and upload keys to OKV

Thu, 2023-11-02 08:35
Introduction

TDE database encryption is one of the most widely used Oracle database security features. Once TDE is configured, the wallet must be backed up: if the wallet is lost, nobody can recover the database. Also, for security reasons, it is not a good idea to store the wallet together with the database RMAN backups.

Oracle Key Vault (OKV) was created to address these main points: storing the database encryption keys in a safe place and avoiding keeping the keys on disk.

In this post I will show how to configure TDE on the database from scratch, how to upload the wallet keys to OKV and how to configure the database to use these keys.

I chose to do the full configuration using the OKV RESTful API, so it can be scripted and implemented on a new server.

The passwords are not hidden in order to easily identify which password is used in which case.

So let’s start…

The steps
  • Configure TDE at the database level.
  • Configure the OKV RESTful API.
  • Add the endpoint server to OKV.
  • Upload the database keys to OKV.
  • Migrate from local key usage to OKV.
Main configuration

Oracle Key Vault IP: 172.168.0.41
Oracle Key Vault version: 21.7
Database Server IP: 172.168.1.128
Database name: CDB01
Database version: 21c

Configure TDE at database level

Create the directories

First let’s create some directories to keep the database wallet.

[oracle@okvcli oracle]$ export WALLET_DIR=/u01/app/oracle/wallet

[oracle@okvcli oracle]$ mkdir -p ${WALLET_DIR}/tde

[oracle@okvcli oracle]$ mkdir -p ${WALLET_DIR}/tde_seps

[oracle@okvcli oracle]$ mkdir -vp ${WALLET_DIR}/okv

[oracle@okvcli oracle]$ echo "export WALLET_DIR=/u01/app/oracle/wallet" >> $HOME/.bashrc
Set the TDE parameters
[oracle@okvcli oracle]$ . oraenv
ORACLE_SID = [CDB01] ?

[oracle@okvcli oracle]$ sqlplus / as sysdba

SQL> alter system set wallet_root = '/u01/app/oracle/wallet' scope=spfile;

SQL> alter system set tablespace_encryption_default_algorithm = 'AES256';

SQL> alter system set encrypt_new_tablespaces='ALWAYS';

SQL> shutdown immediate; 

SQL> startup; 

SQL> alter system set tde_configuration = "KEYSTORE_CONFIGURATION=FILE";
Wallet creation
SQL> select a.con_id, b.name, a.wrl_type, 
        a.wrl_parameter, a.status, 
        a.wallet_type 
from v$encryption_wallet a, 
         v$containers b 
where a.con_id=b.con_id 
order by a.con_id;

    CON_ID NAME       WRL_TYPE     WRL_PARAMETER                        STATUS                   WALLET_TYPE
---------- ---------- ------------ ------------------------------------ ------------------------------ ------------
     1     CDB$ROOT   FILE          /u01/app/oracle/wallet/tde/          NOT_AVAILABLE               UNKNOWN
     2     PDB$SEED   FILE                                               NOT_AVAILABLE               UNKNOWN
     3     PDB1       FILE                                               NOT_AVAILABLE               UNKNOWN
Create the keystore at CDB level
SQL> administer key management create keystore identified by "Hello123"; 

keystore altered.

SQL> administer key management add secret 'Hello123' for client 'TDE_WALLET' to local auto_login keystore '/u01/app/oracle/wallet/tde_seps';

keystore altered.

SQL> administer key management set keystore open identified by external store container=all;

keystore altered.

select a.con_id, b.name, a.wrl_type, 
       a.wrl_parameter, a.status, 
       a.wallet_type 
from v$encryption_wallet a, 
    v$containers b 
where a.con_id=b.con_id 
order by a.con_id;

    CON_ID NAME       WRL_TYPE     WRL_PARAMETER                        STATUS                WALLET_TYPE
---------- ---------- ------------ ------------------------------------ --------------------- ------------
     1     CDB$ROOT   FILE          /u01/app/oracle/wallet/tde/          OPEN_NO_MASTER_KEY    PASSWORD
     2     PDB$SEED   FILE                                               OPEN_NO_MASTER_KEY    PASSWORD
     3     PDB1       FILE                                               OPEN_NO_MASTER_KEY    PASSWORD
Create the master encryption key (MEK) at CDB and PDB level
SQL> administer key management set key using tag 'CDB1: Initial MEK' identified by external store with backup container=current;

Keystore altered.

SQL> select a.con_id, 
   b.name, a.wrl_type, a.wrl_parameter, a.status 
from v$encryption_wallet a, 
   v$containers b 
where a.con_id=b.con_id order by a.con_id;

CON_ID  NAME       WRL_TYPE      WRL_PARAMETER                    STATUS
---------- ---------- ------------ ------------------------------ ------------------------------
     1     CDB$ROOT   FILE          /u01/app/oracle/wallet/tde/   OPEN
     2     PDB$SEED   FILE                                        OPEN
     3     PDB1       FILE                                        OPEN_NO_MASTER_KEY

SQL> alter session set container=pdb1;

Session altered

SQL> administer key management set key using tag 'PDB1: Initial MEK' identified by external store with backup container=current;

Keystore altered.

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> select a.con_id, 
   b.name, a.wrl_type, a.wrl_parameter, a.status 
from v$encryption_wallet a, 
   v$containers b 
where a.con_id=b.con_id order by a.con_id;

CON_ID  NAME       WRL_TYPE      WRL_PARAMETER                    STATUS
---------- ---------- ------------ ------------------------------ ------------------------------
     1     CDB$ROOT   FILE          /u01/app/oracle/wallet/tde/   OPEN
     2     PDB$SEED   FILE                                        OPEN
     3     PDB1       FILE                                        OPEN
Create autologin wallet
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> administer key management create local auto_login keystore from keystore '/u01/app/oracle/wallet/tde' identified by "Hello123";

keystore altered.
Configure the OKV RESTful service on the Oracle database server
[oracle@okvcli bin]$ export JAVA_HOME=/u01/app/oracle/product/21.0.0/dbhome_1/jdk

[oracle@okvcli bin]$ echo "export JAVA_HOME=/u01/app/oracle/product/21.0.0/dbhome_1/jdk" >> $HOME/.bashrc

[oracle@okvcli oracle]$ mkdir /u01/app/oracle/okvapi

[oracle@okvcli oracle]$ cd okvapi/

[oracle@okvcli okvapi]$ curl -O -k https://172.168.0.41:5695/okvrestclipackage.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 3865k  100 3865k    0     0   107M      0 --:--:-- --:--:-- --:--:--  107M

[oracle@okvcli okvapi]$ unzip okvrestclipackage.zip
Archive:  okvrestclipackage.zip
   creating: lib/
   creating: bin/
  inflating: bin/okv
  inflating: bin/okv.bat
   creating: conf/
  inflating: conf/okvrestcli.ini
  inflating: conf/okvrestcli_logging.properties
  inflating: lib/okvrestcli.jar

[oracle@okvcli okvapi]$ rm okvrestclipackage.zip

[oracle@okvcli ~]$ export OKV_HOME=/u01/app/oracle/okvapi

[oracle@okvcli ~]$ echo "export OKV_HOME=/u01/app/oracle/okvapi" >> $HOME/.bashrc

[oracle@okvcli ~]$ cat $OKV_HOME/conf/okvrestcli.ini
[Default]
log_property=$OKV_HOME/conf/okvrestcli_logging.properties
server=172.168.0.41
okv_client_config=$WALLET_DIR/okv/conf/okvclient.ora
user=admin
client_wallet=$OKV_HOME/wallet

Edit the $OKV_HOME/bin/okv file and uncomment the line export OKV_RESTCLI_CONFIG=$OKV_RESTCLI_DIR/conf/okvrestcli.ini
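
If you want to script this step too, a sed one-liner along these lines should work (a sketch only, assuming the line is commented out with a single leading #):

[oracle@okvcli okvapi]$ sed -i 's|^# *\(export OKV_RESTCLI_CONFIG=\)|\1|' $OKV_HOME/bin/okv
[oracle@okvcli okvapi]$ grep OKV_RESTCLI_CONFIG $OKV_HOME/bin/okv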

Create the wallet for the OKV RESTful API

As for the okvrestcli.ini file I used the admin user (see user=admin from okvrestcli.ini) to configure the wallet. The password is the one defined in OKV for the admin user.

[oracle@okvcli okvapi]$ mkdir $OKV_HOME/wallet

[oracle@okvcli okvapi]$ $OKV_HOME/bin/okv admin client-wallet add --client-wallet $OKV_HOME/wallet --wallet-user admin
Password: ****** 
{
  "result" : "Success"
}
Test the correct installation of the OKV RESTful API
[oracle@okvcli conf]$ $OKV_HOME/bin/okv
{
  "restCLIVersion" : "21.7.0.0.0"
}

[oracle@okvcli okvapi]$  $OKV_HOME/bin/okv server info get
{
  "result" : "Success",
  "value" : {
    "caCertificateExpirationDate" : "2026-10-19 09:52:32",
    "cpuCores" : "4",
    "deploymentType" : "Standalone",
    "diskInGB" : "3652",
    "fraInGB" : "20",
    "memoryInKB" : "30504316",
    "serverCertificateExpirationDate" : "2024-10-19 09:57:11",
    "serverTime" : "2023-11-02 10:40:47",
    "version" : "21.7.0.0.0"
  }
}

At this moment the OKV RESTful API is communicating with the OKV server, so we can use the API to add the endpoint to OKV.

Add the endpoint to OKV.

First we can create a wallet in OKV for the endpoint. This step is optional. The name of the wallet is DBCDB01_WLT.

[oracle@okvcli okvapi]$ $OKV_HOME/bin/okv manage-access wallet create --wallet DBCDB01_WLT
{
  "result" : "Success"
}

Now create the endpoint in the OKV

[oracle@okvcli okvapi]$ $OKV_HOME/bin/okv admin endpoint create --endpoint okvcli_host --description "$HOSTNAME, $(hostname -i)" --type ORACLE_DB --platform LINUX64
{
  "result" : "Success"
}

In OKV the added endpoint is REGISTERED, as it is not enrolled yet:

As we created a wallet for this endpoint, let's define it as its default wallet:

[oracle@okvcli okvapi]$ $OKV_HOME/bin/okv manage-access wallet set-default --wallet DBCDB01_WLT  --endpoint okvcli_host
{
  "result" : "Success"
}

Enroll the endpoint:

[oracle@okvcli okvapi]$ echo $ORACLE_HOME
/u01/app/oracle/product/21.0.0/dbhome_1

[oracle@okvcli okvapi]$ echo $ORACLE_BASE
/u01/app/oracle

[oracle@okvcli okvapi]$ $OKV_HOME/bin/okv admin endpoint provision --endpoint okvcli_host --location /u01/app/oracle/wallet/okv --auto-login FALSE
Enter Oracle Key Vault endpoint password: OKV_cli_123
{
  "result" : "Success"
}

The --location parameter must be the database wallet_root parameter path plus okv. Into this path OKV downloads the okvutil client files needed by the database to communicate with the OKV server:

[oracle@okvcli okvapi]$ ls /u01/app/oracle/wallet/okv
bin  conf  csdk  jlib  lib  log  ssl

The okvclient.ora file from the /u01/app/oracle/wallet/okv/conf path can be referenced in the $OKV_HOME/conf/okvrestcli.ini file (okv_client_config=/u01/app/oracle/wallet/okv/conf/okvclient.ora) in order to use the RESTful API for this database.

Notice that I typed the password visibly at the prompt so you can identify it in the next steps.

At OKV server level the endpoint is enrolled now.

Let’s test if the endpoint okvutil is working:

[oracle@okvcli ~]$ cd /u01/app/oracle/wallet/okv/bin

[oracle@okvcli bin]$ ./okvutil list
Enter Oracle Key Vault endpoint password:  OKV_cli_123
Unique ID                               Type            Identifier
3E32A151-D656-4E43-ADE8-629B11B8B4C9	Template	Default template for OKVCLI_HOST

At this moment TDE is configured on the database, the database server is able to communicate with the OKV server, and the endpoint is enrolled in OKV.

The next step is to upload the TDE keys to OKV.

Upload TDE key to OKV.

The first password is the wallet password (Hello123), the second password is the endpoint password (OKV_cli_123).

[oracle@okvcli bin]$ ./okvutil upload -t WALLET -l /u01/app/oracle/wallet/tde  -g DBCDB01_WLT -v 4
okvutil version 21.7.0.0.0
Endpoint type: Oracle Database
Configuration file: /u01/app/oracle/wallet/okv/conf/okvclient.ora
Server: 172.168.0.41:5696
Standby Servers:
Uploading from /u01/app/oracle/wallet/tde
Enter source wallet password:  Hello123
No auto-login wallet found, password needed
Enter Oracle Key Vault endpoint password: OKV_cli_123
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY
Trying to connect to 172.168.0.41:5696 ...
Connected to 172.168.0.41:5696.
ORACLE.SECURITY.DB.ENCRYPTION.AZXyYcD7Y0+Rv5tK3Y2lrT8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KM.ENCRYPTION.AZXyYcD7Y0+Rv5tK3Y2lrT8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KT.ENCRYPTION.AZXyYcD7Y0+Rv5tK3Y2lrT8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KB.ENCRYPTION.
ORACLE.SECURITY.ID.ENCRYPTION.
ORACLE.SECURITY.KT.ENCRYPTION.AezKLH/ZbU8ev0KGkyVh9XAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KM.ENCRYPTION.AezKLH/ZbU8ev0KGkyVh9XAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.AezKLH/ZbU8ev0KGkyVh9XAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY.0928CAFC93BBEE6CE0638001A8AC296F

Uploaded 2 TDE keys
Uploaded 0 SEPS entries
Uploaded 0 other secrets
Uploaded 4 opaque objects

Uploading private key
Uploading certificate request
Uploading trust points

Uploaded 1 private keys
Uploaded 1 certificate requests
Uploaded 0 user certificates
Uploaded 0 trust points

Upload succeeded

[oracle@okvcli bin]$ ./okvutil list
Enter Oracle Key Vault endpoint password: OKV_cli_123
Unique ID                               Type            Identifier
F1C1537B-970E-4F09-BF41-6E5525D92E28	Opaque Object	TDE Wallet Metadata
A2FE6566-A68E-5217-89B0-19370EF78066	Symmetric Key	TDE Master Encryption Key: TAG CDB1: Initial MEK
F3C5C8F1-5DBA-4FF2-BF28-4EC3E8606772	Opaque Object	TDE Wallet Metadata
3E77A59D-DB88-4F26-BF7C-5449137B46FE	Opaque Object	TDE Wallet Metadata
C910D912-0031-5776-AE3F-43967710B8DB	Symmetric Key	TDE Master Encryption Key: TAG PDB1: Initial MEK
0ABA58D9-7295-4F12-BFBF-F13382476355	Opaque Object	TDE Wallet Metadata
492260F0-113C-4F07-BFCA-25360F0173BC	Private Key	Private Key
4F1FC5D9-2C87-4F56-BF4B-6B3A5D730C6E	Opaque Object	Certificate Request
3E32A151-D656-4E43-ADE8-629B11B8B4C9	Template	Default template for OKVCLI_HOST

At database level we are still in the FILE keystore mode (local on the server).

SQL> select wrl_type, status, con_id from v$encryption_wallet;

WRL_TYPE         STATUS CON_ID
---------------- ------ ------
FILE             OPEN       1
FILE             OPEN       2
FILE             OPEN       3

At OKV server level the wallet DBCDB01_WLT contains our keys and the OKVCLI_HOST endpoint has access to the wallet:

Migrate the FILE local wallet to OKV
SQL> show con_id

CON_ID
------------------------------
1

SQL> alter system set tde_configuration = "KEYSTORE_CONFIGURATION=OKV|FILE" ;

SQL> administer key management set encryption key identified BY "OKV_cli_123"  migrate using "Hello123"  WITH BACKUP;

keystore altered.

The parameter "KEYSTORE_CONFIGURATION=OKV|FILE" means that the database will get the encryption key from OKV and the auto_login file cwallet.sso from local disk (/u01/app/oracle/wallet/tde)

Remove the auto_login file from the tde directory in order to create the new auto_login file:

[oracle@okvcli tde]$ cd /u01/app/oracle/wallet/tde

[oracle@okvcli tde]$ rm cwallet.sso

Create the auto_login file for OKV access:

SQL> administer key management add secret 'OKV_cli_123' for client 'OKV_PASSWORD' to local auto_login keystore '/u01/app/oracle/wallet/tde';

keystore altered.

SQL> select wrl_type, status, con_id from v$encryption_wallet;

WRL_TYPE    STATUS    CON_ID
----------- -------- --------
FILE        OPEN      1
OKV         OPEN      1
FILE        OPEN      2
OKV         OPEN      2
FILE        OPEN      3
OKV         OPEN      3

At this step the database uses OKV for the encryption key and the local file for auto_login. In the wallet directory /u01/app/oracle/wallet/tde, only the cwallet.sso file is needed from now on.

Final test

The final test consists of creating an encrypted tablespace with a table inside, then cleaning the local wallet (ewallet.p12 files) and finally restarting the database.

If everything is correctly configured, no password should be asked for, the CDB and PDB(s) must open, and the encrypted tablespace and the table must be accessible.

Let's clean the /u01/app/oracle/wallet/tde directory (it is a good practice to start by making a backup in another directory, just in case):

[oracle@okvcli tde]$ cd /u01/app/oracle/wallet/tde

[oracle@okvcli tde]$ mkdir bck

[oracle@okvcli tde]$ mv ewallet_* bck

[oracle@okvcli tde]$ ls 
cwallet.sso

Create an encrypted (ENCR_TBS) tablespace and add a table (ENCR_TABLE):

SQL> alter session set container=pdb1;

Session altered.

SQL> create tablespace encr_tbs;

SQL> select tablespace_name, encrypted from dba_tablespaces where tablespace_name='ENCR_TBS';

TABLESPACE_NAME 	       ENC
------------------------------ ---
ENCR_TBS		       YES

SQL> create table encr_table(c number) tablespace encr_tbs;

Table created.

SQL> insert into encr_table values(1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from encr_table;

	 C
----------
	 1

SQL> alter session set container=CDB$ROOT;

Session altered 

Finally restart the database and try to select the table:

SQL> shutdown immediate; 

SQL> startup; 

SQL> show pdbs;

   CON_ID  CON_NAME    OPEN MODE   RESTRICTED
---------- ----------- ----------- ---------- 
     2     PDB$SEED    READ ONLY    NO
     3     PDB1        READ WRITE   NO

SQL> alter session set container=pdb1;

Session altered.

SQL> select * from encr_table;

	 C
----------
	 1

Conclusion

Using OKV to store the TDE encryption keys is the preferred solution. The RESTful API provided by OKV is complete and easy to configure. It is a good practice to script all the commands to add the endpoints and configure the database, for instance as a post-provisioning script when a new VM is created.

The article Add TDE to Oracle Database and upload keys to OKV first appeared on the dbi Blog.

Combining Powerful Cilium Features Together In Kubernetes

Wed, 2023-11-01 08:23

My colleagues and I have played with Cilium from Isovalent for a while now and we have successfully deployed it as a simple CNI (Container Network Interface), along with Hubble, on several production Kubernetes clusters of our customers.

However, over the past year there have been several very interesting new features released. Smartly, Isovalent gives you the opportunity to play with each new feature in the excellent labs they have now put together in this original map quest. You follow the instructions to learn how the feature works and earn a badge (for the main features) if you solve the final challenge. Be careful, it is very addictive: as soon as you earn your first badge you will want to collect all of them!

Simplifying a Kubernetes cluster architecture

Cilium just became a graduated CNCF project and I wanted to try combining several of its features to simplify our traditional Kubernetes cluster architecture for future projects.

If you are using an on-premises Kubernetes cluster that hosts applications reachable from outside of the cluster, then you are using an Ingress Controller and a load balancer (probably MetalLB with a Layer 2 configuration). This is the traditional architecture we deploy, and with Cilium it is now possible to replace these two components with a combination of the following features: Ingress Controller, LoadBalancer IPAM and L2 Service Announcement.

I've done my tests with Minikube as, at this stage, I just wanted to see how the combined configuration would work. I haven't yet seen examples of these 3 features used together, so if you are also interested in optimizing the architecture of your Kubernetes cluster with Cilium, read on!

Overview of the configuration

If you are like me, you like to have some visuals of what we are doing. So I start by giving a summary of my configuration that combines the 3 Cilium features (their names are in bold in the diagram below). The colored rectangle boxes show the important parameters of the objects, and when they are of the same color, this is how the link is made between the objects.

I've used the application "testcontainers/helloworld" that I want to expose outside of my cluster. So from my laptop I should be able to reach it, and this is my goal for this first stage of testing.

Each feature brings its own custom object (CRD) and you can see in the diagram above the configuration of the object for each of the 3 features. The service object with the type LoadBalancer is automatically created when the Ingress Controller feature is enabled. We then just associate a deployment of our "testcontainers/helloworld" application with this service by using the selector/labels association.

Minikube configuration

Let's start from the beginning, here is my Minikube configuration for testing these features. I've started a new Minikube cluster with 3 nodes and no CNI, and I like to create a separate profile, called here cluster-cilium, to separate that cluster from the other cluster tests I'm doing.

$ minikube start --nodes 3 --network-plugin=cni --cni=false --memory=4096 -p cluster-cilium

When my cluster is up and running, I install Cilium 1.14.2 and activate all the features and parameters I need for my testing. I also like to install Cilium in a separate namespace, called here cilium:

$ kubectl create ns cilium
$ cilium install --version 1.14.2 --set kubeProxyReplacement=strict --set ingressController.enabled=true --set ingressController.loadbalancerMode=dedicated --set kubeProxyReplacement=true --set l2announcements.enabled=true --set l2announcements.leaseDuration="3s" --set l2announcements.leaseRenewDeadline="1s" --set l2announcements.leaseRetryPeriod="500ms" --set l2podAnnouncements.enabled=true --set l2podAnnouncements.interface="eth0" --set externalIPs.enabled=true -n cilium
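
Before going further, it is worth checking that Cilium is up and running; the Cilium CLI provides a status command for that (here pointing to the cilium namespace used above):

$ cilium status --wait -n cilium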
Combined features configuration

When my cluster is ready, I can check that the LoadBalancer service is automatically created with an http and https nodePort. For this example I've edited this service and removed the https port to avoid any confusion. The http nodePort 31814 was automatically created but that value could be changed. For my testing I won't use it:

$ kubectl get svc -n cilium
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
cilium-ingress-basic-ingress   LoadBalancer   10.107.176.240   <pending>      80:31814/TCP   16h

You can see that this service doesn't have any external IP address yet (the status shows pending). To provide one, we need to create a CiliumLoadBalancerIPPool object and define a cidr range of IPs to use as external IP addresses. Note that /30 (so 2 IP addresses) is the minimum range you can use; it is not possible to use just 1 IP address in a pool. We then set a serviceSelector to match the label blue (in this example), so each LoadBalancer service that has that label will take an IP address from this pool if one is available. If all IP addresses are already taken, the EXTERNAL-IP field of the service will stay in the pending state. Below is the CiliumLoadBalancerIPPool configuration:

apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "pool-blue"
spec:
  cidrs:
  - cidr: "10.0.0.0/30"
  serviceSelector:
    matchLabels:
      color: blue

You’ll then need to add the blue label to the LoadBalancer service

$ kubectl edit svc -n cilium cilium-ingress-basic-ingress

Just add the label color: blue and you’ll then see the following:

$ kubectl get svc -n cilium
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
cilium-ingress-basic-ingress   LoadBalancer   10.107.176.240   10.0.0.1      80:31814/TCP   16h

Now our service has an external IP Address that comes from the pool-blue we have defined previously.
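
For reference, instead of opening the editor, the same label can be added with a single kubectl command (equivalent to the manual edit above):

$ kubectl label svc -n cilium cilium-ingress-basic-ingress color=blue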

We can create our deployment by applying the following yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: testciliuml2
  namespace: cilium
spec:
  replicas: 2
  selector:
    matchLabels:
      name: testcontainers
  template:
    metadata:
      labels:
        name: testcontainers
    spec:
      containers:
      - name: testciliuml2
        image: testcontainers/helloworld
        env:
        - name: DELAY_START_MSEC
          value: "2000"

You can see we use the label name: testcontainers for this deployment. We then need to edit our LoadBalancer service again to associate it with that deployment label (you can also see it in the diagram above if you want to check):

  selector:
    name: testcontainers
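
If you prefer to avoid the interactive edit, the selector can presumably be set in one command as well; note that it replaces the whole selector of the service with the label shown above:

$ kubectl set selector svc cilium-ingress-basic-ingress name=testcontainers -n cilium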

Finally we can configure an ingress object to link it to our LoadBalancer service by using the yaml file below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
  namespace: cilium
spec:
  ingressClassName: cilium
  rules:
  - http:
      paths:
      - backend:
          service:
            name: cilium-ingress-basic-ingress
            port:
              number: 80
        path: /
        pathType: Prefix

Note that the ingressClassName field uses the value cilium. This instructs Kubernetes to use Cilium as the Ingress controller for this resource. We can check the state of our ingress:

$ kubectl get ing -n cilium
NAME            CLASS    HOSTS   ADDRESS    PORTS   AGE
basic-ingress   cilium   *       10.0.0.1   80      30d

You can recognize the external IP Address of our LoadBalancer service.

Test of connection to the LoadBalancer service

To fully test the flow from our laptop through the ingress and down to the pod, we would need the external IP address to be reachable from my laptop. In production you will not use this external IP address directly but a URL with a domain name that a DNS will resolve to this external IP address.

Here I will just test the connection to the LoadBalancer service with kubectl port-forward to reach port 80 of this service, as it would be reached by our ingress rule:

$ kubectl port-forward svc/cilium-ingress-basic-ingress -n cilium 8080:80 &

I can then reach my application from my laptop by using the URL http://localhost:8080
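
A quick smoke test from a terminal confirms the application answers (the port-forward started above must still be running):

$ curl -s http://localhost:8080 | head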

As a conclusion, I've found the configuration of all these features pretty easy to bind together. It is then very simple to provide a complete Kubernetes architecture with only Cilium features to reach my application from outside of the cluster.

The next step will be to test it in a real Kubernetes cluster with some failover scenarios, but it all looks very promising to me!

The article Combining Powerful Cilium Features Together In Kubernetes first appeared on the dbi Blog.

APEX/ORDS 23.x – DispatcherNotFoundException

Tue, 2023-10-31 16:54

The aim of this blog is to look into an Oracle REST Data Services (ORDS) 23.x error raised when setting it up for APEX 23.1.

This follows some experience at a customer who wanted to move to the latest version of ORDS with APEX 23.1, in late 2023, and ran into an issue.

If you have encountered an error similar to the one below when trying to run ORDS 23.x standalone for APEX 23.1, you will find a resolution and more details in this blog:

Context

– APEX 23.1 is installed first on Oracle DB 19.16 at PDB level without error
– ORDS 23.3 binaries are installed on Oracle Linux 8 from the yum package provided by Oracle ($ORDS_CONFIG=/etc/ords/config and $ORDS_BASE=/usr/local/bin)

ORDS was installed with following command line:
$/usr/local/bin/ords --config $ORDS_CONFIG install
ORDS: Release 23.3 Production on Fri Oct 27 09:38:20 2023
Copyright (c) 2010, 2023, Oracle.
Configuration:
/etc/ords/config/The configuration folder /etc/ords/config does not contain any configuration files.

Oracle REST Data Services - Interactive Install

Enter a number to select the type of installation
[1] Install or upgrade ORDS in the database only
[2] Create or update a database pool and install/upgrade ORDS in the database
[3] Create or update a database pool only
Choose [2]: 2
Enter a number to select the database connection type to use
[1] Basic (host name, port, service name)
[2] TNS (TNS alias, TNS directory)
[3] Custom database URL
Choose [1]: 2
Enter the TNS location: /u01/app/oracle/product/network/admin
Enter a number to select the TNS Network alias to use
[1] PDB1 …SERVICE_NAME=pdb1)))
[2] PDB2 …SERVICE_NAME=pdb2)))
Choose [1]: 1
Provide database user name with administrator privileges.
Enter the administrator username: ORDS_INSTALL_ADMIN
Enter the database password for ORDS_INSTALL_ADMIN:
Connecting to database user: ORDS_INSTALL_ADMIN url:
jdbc:oracle:thin:@PDB1?TNS_ADMIN=/u01/app/oracle/product/network/admin

Retrieving information.
Connecting to database user: ORDS_PUBLIC_USER url:
jdbc:oracle:thin:@PDB1?TNS_ADMIN=/u01/app/oracle/product/network/admin
Enter a number to select additional feature(s) to enable:
[1] Database Actions (Enables all features)
[2] REST Enabled SQL and Database API
[3] REST Enabled SQL
[4] Database API
[5] None
Choose [1]: 1
Enter a number to configure and start ORDS in standalone mode
[1] Configure and start ORDS in standalone mode
[2] Skip
Choose [1]: 1
Enter a number to select the protocol
[1] HTTP
[2] HTTPS
Choose [1]: 1
Enter the HTTP port [8080]: 8080
Enter the APEX static resources location: /u01/app/oracle/product/apex/images

As per a standard installation, ORDS was installed after APEX and all Database Actions features were deployed.

Issue

Unfortunately ORDS is providing a 404 error when trying to access APEX from the URL:
http://mywebappserver:8080/ords/apex

Also when accessing the ORDS landing page with following URL, APEX is unavailable:
http://mywebappserver:8080/ords/_landing

Investigation

ORDS returns 404 for different kinds of errors as no handler is defined by default. Also, looking into the Jetty log files does not show any error details. The only option left, in order to get more information about the error behind the scenes, is to print debug information to the screen by asking ORDS to return details. This is enabled by updating the ORDS configuration file pool.xml for the database connection to APEX, in the folder $ORDS_CONFIG/databases/default, adding the entry debug.printDebugToScreen as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!--?xml version="1.0" encoding="UTF-8"?-->

<properties>
<comment>Saved on Fri Oct 27 09:41:18 UTC 2023</comment>
  <entry key="db.connectionType">tns</entry>
  <entry key="db.tnsAliasName">PDB1</entry>
  <entry key="db.tnsDirectory">/u01/app/oracle/product/network/admin</entry>
  <entry key="db.username">ORDS_PUBLIC_USER</entry>
  <entry key="feature.sdw">true</entry>
  <entry key="restEnabledSql.active">true</entry>
  <entry key="debug.printDebugToScreen">true</entry>
  <entry key="security.requestValidationFunction">ords_util.authorize_plsql_gateway</entry>
</properties>
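
Note that the same entry can presumably be added with the ORDS command line instead of editing pool.xml by hand, following the same config syntax used later in this post:
ords --config $ORDS_CONFIG config --db-pool default set debug.printDebugToScreen true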

This requires a restart of ORDS and provides further details when accessing the apex URL in the web browser:

The Java stack trace provided shows a DispatcherNotFoundException, which may not be very specific or helpful at first glance. At least it gives a hint that ORDS is not finding a way to dispatch the request towards the requested apex target.

Resolution

When checking the configuration file pool.xml and comparing it to a previous installation (with ORDS 22.x), it turns out that one entry is missing: plsql.gateway.mode, which needs to be set to the value proxied and added as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!--?xml version="1.0" encoding="UTF-8"?-->

<properties>
<comment>Saved on Fri Oct 27 09:41:18 UTC 2023</comment>
  <entry key="db.connectionType">tns</entry>
  <entry key="db.tnsAliasName">PDB1</entry>
  <entry key="db.tnsDirectory">/u01/app/oracle/product/network/admin</entry>
  <entry key="db.username">ORDS_PUBLIC_USER</entry>
  <entry key="feature.sdw">true</entry>
  <entry key="restEnabledSql.active">true</entry>
  <entry key="plsql.gateway.mode">proxied</entry>
  <entry key="security.requestValidationFunction">ords_util.authorize_plsql_gateway</entry>
</properties>

This can also be achieved with the following command:
ords --config $ORDS_CONFIG config --db-pool default set plsql.gateway.mode proxied
The proxied mode is required as requests from ORDS will connect through the DB connection pool using ORDS_PUBLIC_USER, whoever the end user connecting to APEX is.
More information about proxy users can be found on AskTOM.
After restarting ORDS in standalone mode, APEX is finally accessible.

This was tested with ORDS 23.2.3 with same result as well as with ORDS 23.1.

Conclusion

I hope this blog will help you finalize your ORDS 23.x standalone configuration for use as the APEX 23.x gateway, since both are linked according to the documentation.
It would be good to know if there is a specific reason for not setting that parameter as in previous versions. If so, the documentation should also be updated accordingly. JeffSmith, feel free to provide some feedback.
Enjoy APEX and ORDS!

The article APEX/ORDS 23.x – DispatcherNotFoundException first appeared on the dbi Blog.

Rancher – Enable RKE2 node autoscaling

Tue, 2023-10-31 06:30

In Rancher, if you make use of node drivers to provision your RKE2 nodes, you can also make use of the cluster autoscaler. In this article, I will show you how simple it is to configure the autoscaler for your cluster.


Rancher official documentation: Rancher
RKE2 official documentation: RKE2

Cluster autoscaler

Cluster autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:

  • There are pods that failed to run in the cluster due to insufficient resources.
  • There are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.

For more details: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler

Get started

As stated previously, as long as you use node drivers for your RKE2 cluster, the cluster autoscaler should work. There are multiple cloud providers for the Cluster Autoscaler; in our case, we will use the Rancher one.

Cluster autoscaler will use the API of Rancher to adjust the size of the cluster.

RKE2 cluster

We will provision an RKE2 cluster with node drivers. We want at least two pools of machines, one for the control plane and another one for workers that will be autoscaled automatically.

  • Control plane pool: 1 machine (control plane and etcd)
  • Worker pool: 1 machine (worker)

This section will show an example for AWS.

Let’s start an RKE2 cluster on AWS with node drivers.

For the IAM configuration, creation of access key, and cloud credentials, please follow the prerequisite provided by Rancher: https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/launch-kubernetes-with-rancher/use-new-nodes-in-an-infra-provider/create-an-amazon-ec2-cluster#prerequisites

Once you have a cloud credential for AWS, you can create the cluster as follows.

Be sure to select RKE2.

Create at least two pools, one for the control plane, and another one for the workers. Adapt the configuration (region, instance, VPC, etc.).

Then set up the RKE2 configuration as you wish; for our demo, we are using version 1.26.

Now that we have the following RKE2 cluster:

  • Control plane pool: 1 machine (control plane and etcd)
  • Worker pool: 1 machine (worker)

We will see how to install and configure the autoscaler.

Deployment of configuration

To enable the cluster autoscaler, we need to deploy it into the cluster with the correct configuration.

First, we need to set up the service account and roles in the RKE2 cluster.

Deploy the following:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
      - "namespaces"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes","csistoragecapacities","csidrivers"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create","list","watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system

You can copy-paste and import it easily with the Rancher web interface.
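
If you prefer the command line over the Rancher web interface, saving the manifest above to a file and applying it with kubectl does the same job (the file name below is just an example):

kubectl apply -f cluster-autoscaler-rbac.yaml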

Then to allow the autoscaler to directly call Rancher and adjust the size of the cluster via API call, we will need to get the following:

  • Rancher URL
  • Rancher token
  • Cluster name
  • Rancher CA certificate

This information will be inserted into the following YAML.

All those resources are needed to monitor and communicate with Rancher to scale the cluster.

---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: autoscaler-config
    namespace: kube-system
  data:
    config.yaml: |
      # rancher server credentials
      url: <rancher-url>
      token: <rancher-token>
      # name and namespace of the clusters.provisioning.cattle.io resource on the
      # rancher server
      clusterName: <cluster-name>
      clusterNamespace: fleet-default
---
  apiVersion: v1
  data:
    ca-certificates.crt: |
      -----BEGIN CERTIFICATE-----
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    namespace: kube-system
    name: rancher-ca
---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: cluster-autoscaler
    namespace: kube-system
    labels:
      app: cluster-autoscaler
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: cluster-autoscaler
    template:
      metadata:
        labels:
          app: cluster-autoscaler
        annotations:
          prometheus.io/scrape: 'true'
          prometheus.io/port: '8085'
      spec:
        priorityClassName: system-cluster-critical
        securityContext:
          runAsNonRoot: true
          runAsUser: 65534
          fsGroup: 65534
          seccompProfile:
            type: RuntimeDefault
        serviceAccountName: cluster-autoscaler
        tolerations:
          - effect: NoSchedule
            operator: "Exists"
            key: node-role.kubernetes.io/control-plane
          - effect: NoExecute
            operator: "Exists"
            key: node-role.kubernetes.io/etcd
        nodeSelector:
          node-role.kubernetes.io/control-plane: 'true'
        containers:
          - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.1
            name: cluster-autoscaler
            resources:
              limits:
                cpu: 100m
                memory: 600Mi
              requests:
                cpu: 100m
                memory: 600Mi
            command:
              - ./cluster-autoscaler
              - --v=4
              - --stderrthreshold=info
              - --cloud-provider=rancher
              - --cloud-config=/mnt/config.yaml
              - --skip-nodes-with-local-storage=false
            volumeMounts:
              - name: ssl-certs
                mountPath: /etc/ssl/certs
                readOnly: true
              - name: autoscaler-config
                mountPath: /mnt/config.yaml
                subPath: config.yaml
                readOnly: true
            imagePullPolicy: "Always"
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                  - ALL
              readOnlyRootFilesystem: true
        volumes:
          - name: autoscaler-config
            configMap:
              name: autoscaler-config
          - name: ssl-certs
            configMap:
              name: rancher-ca

Let's first add the URL and name of the cluster. In my demo they are "https://<your-domain-name>" and "rke2-aws".

Let’s get the CA certificate directly from the browser:

Copy the content of the file into the YAML.
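
If you prefer the command line to the browser, openssl can display the certificate chain served by Rancher so you can copy the CA certificate from there (replace the host name with your own Rancher URL):

openssl s_client -connect rancher.example.com:443 -showcerts </dev/null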

Generate a token in Rancher. I am using the admin account and I select no scope.

Proceed to deploy the YAML file into the cluster.

Now there is a final step: specify which pools to scale and the minimum and maximum number of nodes for those pools.

Enable autoscaling on the cluster

Now that the resources are ready, the last step to enable the autoscaling on the cluster is to modify the YAML of the cluster.

Find the machinePools to autoscale, add the following annotations, and save it.

machineDeploymentAnnotations:
  cluster.provisioning.cattle.io/autoscaler-min-size: "1"
  cluster.provisioning.cattle.io/autoscaler-max-size: "3"
Testing

Our current cluster has two machines.

Let’s run the following deployment and see how it affects our cluster.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
            memory: 1024Mi
          requests:
            cpu: 1000m
            memory: 1024Mi

Do not hesitate to modify the number of replicas of the deployment to force your cluster to scale up if needed. In this cluster, the deployment cannot create all the replicas due to insufficient resources.
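
For example, scaling the deployment up can be done with a single command (the replica count here is arbitrary; adjust the namespace if you deployed it elsewhere):

kubectl scale deployment nginx --replicas=10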

We can see that the cluster has been scaled up automatically.

Once the provisioning of the new worker nodes is complete, the deployment should be good.

Now let’s scale down the replicas of the deployment and wait 10 minutes.

While waiting, you can also monitor the logs of the autoscaler to verify that it is indeed monitoring the cluster.
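
For instance, the following command streams the autoscaler logs (the deployment name and namespace come from the manifest deployed earlier):

kubectl logs -n kube-system deployment/cluster-autoscaler -f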

The cluster should have been automatically scaled down.

Conclusion

Cluster autoscaling is a great feature that greatly enhances your cluster health by adding more nodes when needed. As you can see, for an RKE2 cluster on Rancher, it is quite fast and easy to set up!

The article Rancher – Enable RKE2 node autoscaling first appeared on the dbi Blog.

Share Gitlab agent to deploy microservices projects

Mon, 2023-10-30 11:05

In a microservice application, the code for each service has its own git repository. However, how do you deploy without duplicating the GitLab agents?

In GitLab, you can share an agent between several projects to deploy the application on a Kubernetes cluster!

Define and configure the main project

The main project contains the agent for deploying the application. It is installed in the usual way, without any difference. To declare the agent, see the article: Connect a Kubernetes cluster to GitLab

To authorize other projects to access the agent, you must configure the main project:

  • In the project repository, create a folder .gitlab/agents
  • In .gitlab/agents, add a directory with the same name as the agent you want to share. (You can see the list of agents in Operate > Kubernetes Cluster)
  • In the previous folder, create a file named config.yaml
The config.yaml file

To allow access to a group of projects, add this configuration to the config.yaml:

ci_access:
  groups:
    - id: path/to/my_group

With this code, all the projects in my_group can use the agent to deploy to the Kubernetes cluster.

To allow access to individual projects, define this configuration in the config.yaml:

ci_access:
  projects:
    - id: path/to/my_project
    - id: path/to/my_second_project

This code allows specific projects to use the agent. Note that all authorized projects must be listed one by one.

Using the agent from an authorized project

Projects or groups authorized by the main project can therefore use the agent to deploy. For more information about deployment, see the article : Deploy an application with GitLab CI/CD

To target the main project agent, set the right path when defining the context in the before_script:

deploy:
  before_script:
  - kubectl config use-context path/to/main/project:agent-name
  script:
  - kubectl get pods   # placeholder step; replace with your actual deployment commands (kubectl apply, helm, etc.)

The path used in "kubectl config use-context" corresponds to the main project path, followed by a colon and the name of the agent in the main project.
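
As an illustration with hypothetical names, if the main project lives at mygroup/infrastructure and the shared agent is called k8s-agent, the context line would look like this:

  - kubectl config use-context mygroup/infrastructure:k8s-agent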

Conclusion

In conclusion, it is perfectly possible to share an agent between several projects. This is particularly useful when an application is split into several git projects. As a result, maintenance is simpler because there’s only one agent to keep up to date.

The article Share Gitlab agent to deploy microservices projects first appeared on the dbi Blog.
