Yann Neuhaus

dbi services technical blog

Is it possible to switch a service between 2 pluggable databases?

Sun, 2021-05-02 11:33
Introduction

In an Oracle database, services have multiple purposes:

  • identifying groups of users prior to authenticating them
  • enabling/disabling access to a database for particular groups
  • managing preferred nodes in a RAC configuration
  • redirecting users to the primary after a switchover/failover when using Data Guard or Dbvisit Standby
  • redirecting read-only connections to the standby database with the Active Data Guard option

A customer asked me if services could be used to switch between PDBs without modifying the client's connection, for a very specific use case. Let's try to find out.

Initializing the test environment

Let’s create 2 PDBs. The goal is to share a service that can alternatively run on the first PDB or on the second one:

create pluggable database JDU1 admin user JDU identified by JDU2021 ROLES=(DBA) DEFAULT TABLESPACE data
DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 8G TEMPFILE REUSE;

create pluggable database JDU2 admin user JDU identified by JDU2021 ROLES=(DBA) DEFAULT TABLESPACE data
DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 8G TEMPFILE REUSE;


alter pluggable database jdu1 open;

alter pluggable database jdu2 open;

Let’s check current running services on these 2 PDBs:

alter session set container=JDU1;

select name from v$active_services;
NAME
----------------------------------------------------------------
jdu1


alter session set container=JDU2;

select name from v$active_services;

NAME
----------------------------------------------------------------
jdu2
Create a “shared” service and test it

Let’s go to the first container and create the “shared” service:

alter session set container=JDU1;

execute DBMS_SERVICE.CREATE_SERVICE (service_name => 'JDU_SVC', network_name => 'JDU_SVC', failover_method => 'BASIC', failover_type => 'SELECT', failover_retries => 1800, failover_delay => 1);

exec dbms_service.start_service (service_name => 'JDU_SVC');

select name from v$active_services;

NAME
----------------------------------------------------------------
jdu1
JDU_SVC

Now, let’s also try to create the service inside the other PDB:

alter session set container=JDU2;

execute DBMS_SERVICE.CREATE_SERVICE (service_name => 'JDU_SVC', network_name => 'JDU_SVC', failover_method => 'BASIC', failover_type => 'SELECT', failover_retries => 1800, failover_delay => 1);

*
ERROR at line 1:
ORA-44303: service name exists
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 21
ORA-06512: at "SYS.DBMS_SERVICE", line 316
ORA-06512: at line 1

It does not work because the service definition seems to be global at the CDB level. Let's try to start it:

exec dbms_service.start_service (service_name => 'JDU_SVC');


*
ERROR at line 1:
ORA-44786: Service operation cannot be completed.
ORA-06512: at "SYS.DBMS_SERVICE", line 76
ORA-06512: at "SYS.DBMS_SERVICE", line 483
ORA-06512: at line 1

It does not work because it’s already started on the first PDB. Let’s stop it on the first PDB:

alter session set container=JDU1;

exec dbms_service.stop_service (service_name => 'JDU_SVC');

And start it on the second PDB:

alter session set container=JDU2;

exec dbms_service.start_service (service_name => 'JDU_SVC');

*
ERROR at line 1:
ORA-44786: Service operation cannot be completed.
ORA-06512: at "SYS.DBMS_SERVICE", line 76
ORA-06512: at "SYS.DBMS_SERVICE", line 483
ORA-06512: at line 1

No, it doesn’t work.

Let’s now remove the service from the first PDB and create and start it on the second PDB:

alter session set container=JDU1;

exec dbms_service.delete_service (service_name => 'JDU_SVC');

PL/SQL procedure successfully completed.

alter session set container=JDU2;

Session altered.

execute DBMS_SERVICE.CREATE_SERVICE (service_name => 'JDU_SVC', network_name => 'JDU_SVC', failover_method => 'BASIC', failover_type => 'SELECT', failover_retries => 1800, failover_delay => 1);

exec dbms_service.start_service (service_name => 'JDU_SVC');

select name from v$active_services;

NAME
----------------------------------------------------------------
JDU_SVC
jdu2
Findings

A service is global and cannot exist in more than one PDB. That's quite obvious, but I thought it was possible to declare it in multiple PDBs and start it in only one of them.

Note that at the CDB level, querying CDB_SERVICES gives you an overview of the services and their associated PDBs:

col name for a30
col pdb for a30
select name, pdb from cdb_services;

NAME                           PDB
------------------------------ ------------------------------
SYS$BACKGROUND                 CDB$ROOT
SYS$USERS                      CDB$ROOT
CDBSYR01.test.ch               CDB$ROOT
CDBSYR01XDB                    CDB$ROOT
CDBSYR01_SITE1.test.ch         CDB$ROOT
CSYR01AXDB                     CDB$ROOT
CSYR01A.test.ch                CDB$ROOT
CFRD01AXDB                     CDB$ROOT
CFRD01A.test.ch                CDB$ROOT
JDU1                           JDU1
JDU2                           JDU2
JDU_SVC                        JDU2

The last question I've been asking: is it possible to stop the service from the CDB root? No, you'll have to switch to the correct container to do that (see below).

conn / as sysdba

exec dbms_service.stop_service (service_name => 'JDU_SVC');

*
ERROR at line 1:
ORA-44786: Service operation cannot be completed.
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 91
ORA-06512: at "SYS.DBMS_SERVICE", line 519
ORA-06512: at line 1
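
The working counterpart is simply to switch to the PDB that currently owns the service (here JDU2) and stop it from there, as already shown above:

alter session set container=JDU2;

exec dbms_service.stop_service (service_name => 'JDU_SVC');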
Conclusion

A service is not manageable from the CDB root, although it is a global mechanism. If you want to use services to switch from one PDB to another, you'll have to script it (see the sketch below) because it's not a built-in feature.
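
Based on what we have seen, such a switch script would boil down to something like this (a sketch using the names of this test case; it has to run with the privileges to change the session container and to manage services):

alter session set container=JDU1;
exec dbms_service.stop_service (service_name => 'JDU_SVC');
exec dbms_service.delete_service (service_name => 'JDU_SVC');

alter session set container=JDU2;
exec dbms_service.create_service (service_name => 'JDU_SVC', network_name => 'JDU_SVC');
exec dbms_service.start_service (service_name => 'JDU_SVC');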

But would you really use this kind of script? I'm not sure it would be a good idea. If the script fails for some reason, your application will use a database that may not be the expected one, without being aware of it. Quite dangerous in my opinion.


An example of ORA-01152: file … was not restored from a sufficiently old backup

Fri, 2021-04-30 10:00
By Franck Pachot



Oracle Error: 
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below 
ORA-01152: file ... was not restored from a sufficiently old backup 
ORA-01110: data file ...

This error is one of the most misunderstood recovery errors. It can happen in different cases, but I have here a simple example to reproduce it, with some comments and queries to see the state. I run an example to avoid a long theory, but let's put the main media recovery concepts in a few words. [Note that even if my idea was to put only a few words… it is longer than expected, so skip to the example if that's what you were expecting]

Except when you are doing direct-path inserts, the database files (datafiles) are written from the buffer cache, by the database writer, asynchronously, without caring about transactions. At any point in time, in a database that is opened, you may have committed changes for which the changed block is not yet written to the datafiles. You may also have changes written to the datafiles which are not committed yet or will be rolled back later. You may also have only some parts of an atomic change. There are points (checkpoints) where the database ensures that what has been changed in memory is written back to the datafiles. Those points are important as they mark the start of the redo log stream that is required to ensure that the changes done in memory can be applied (recovered) to the datafiles in case of a database crash. The controlfile keeps track of this, but the checkpoint information (time, SCN, redo threads,…) is also written to the datafile headers so that recovery is possible even if the current controlfile is lost, or restored from a backup.

This is for instance failure. But the stream of redo may also be used for media recovery, where you had to restore a datafile from a previous backup, in order to bring the datafile to the current state. Then the datafile header shows a checkpoint from a previous point in time (before the last checkpoint of the current state) and recovery is possible because the redo from before the last checkpoint has also been backed up as archived logs. And finally, there's also the possibility that all datafiles are from past checkpoints, because you want to do Point-In-Time Recovery (PITR), either by restoring all datafiles from a previous backup and recovering up to this PIT only, or by using flashback database to bring them back, which recovers to this PIT as well. In those cases, you will open a new incarnation of the database, like a branch in the timeline, and OPEN RESETLOGS to explicitly tell Oracle that you know that your datafile state is different from the current state, as known in the controlfile and the online redo logs (which will then be discarded).

However, even if you have this possibility, with a datafile PIT state that does not match the controlfile one, there are some conditions that must be met in order to open the database without inconsistencies. Basically, the datafiles must be consistent among themselves. Because table data are in some datafiles, indexes may be in others, metadata in another one, undo vectors elsewhere, and the same for foreign key parent tables… So even if all recovery is correct (no changes lost thanks to redo log recovery) the database may refuse an OPEN RESETLOGS. And that's basically what ORA-01152 tells you: your recovery is ok for each file, but the point you recovered to is not globally the same consistent state.

So, there are two major pieces of information that are checked by Oracle when opening the database. One is about the consistency of each datafile and the other is about the consistency of the database as a whole. When the database is opened, there may be some changes that are from after the checkpoint recorded in the datafile header, because dbwriter continuously does its job. This is known as the datafile being “fuzzy”. Only when the database is closed is the fuzzy bit cleared, to say that all blocks of the datafile are consistent with the checkpoint time. That's for per-datafile consistency. And in order to leave a clean state, closing the database also does a last checkpoint so that all datafiles are consistent. This can be opened without the need to apply any redo log, given that you want to get the database at the same point in time as when it was closed. But once closed, you can do things that Oracle doesn't know about, like restoring the files from a previous backup, even from a hot backup where the files were fuzzy. So, in any case, when Oracle opens the database it checks the datafile headers, as if it had not been cleanly closed.


SQL> host mkdir -p /u90/{close,open}/{before,after}

I create directories to put a backup of a datafile. I'll back up the datafile in the open or mount state (to have fuzzy and non-fuzzy backups), and from two points in time ('before' and 'after' the state I want to open resetlogs).


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1692588               1692038            1691878                1692586

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ              CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ _______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    30-apr-2021 10:38:02        8196 1692038             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     05-aug-2019 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     05-aug-2019 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     05-aug-2019 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL           NO     YES    30-apr-2021 10:38:02        8196 1692038             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL           NO     YES    30-apr-2021 10:38:02           4 1692038             183 0

11 rows selected.

I'll run those queries each time. They show the checkpoint SCN of my database, from the controlfile, and from my datafile headers. The FUZZY=YES column tells me that the database is opened, which means that there are changes in the datafiles that were written after the checkpoint. This is also visible with the flag 4 in FHSTA (or 8196, because 8192 is another flag for the SYSTEM datafiles). There are files that are not fuzzy even if the database is opened, because their tablespaces are read-only, PDB$SEED in this example. You can see that their checkpoint time is from a long time ago because they haven't been opened read-write since then. As they are not fuzzy, and checkpointed at the same SCN, they are consistent. And as they have been read-only since then, Oracle knows that they don't need any recovery. I think we have a clue about this with the RECOVER column being null.


SQL> alter database begin backup;

Database altered.

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/open/before

SQL> alter database end backup;

Database altered.

I've taken a hot backup of this datafile. The backup mode ensures that recovery will be possible, but the file is still online, and fuzzy, with the db writer writing to it. So the header still shows it as fuzzy and with the last checkpoint SCN.
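
As a side note (this query is not part of the original run), while a tablespace is between BEGIN BACKUP and END BACKUP its datafiles are reported as ACTIVE in V$BACKUP:

SQL> select file#, status, change#, time from v$backup;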


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.

I’ve closed my database cleanly and started it in mount, which means not opened.


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1692886            1692000                1692860

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
          natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

11 rows selected.

After a clean shutdown, no files are fuzzy and all files were checkpointed at the same time: SCN 1692886, which we see in v$database and in v$datafile_header. All consistent. You can see that the PDB datafiles have a slightly earlier SCN, but this is because the PDBs are closed before the database is. It is exactly the same as for the read-only PDB$SEED: the checkpoint is consistent within the container but earlier than the database's, and the RECOVER column is null.


SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/close/before

I've taken another backup of my datafile here, now in a non-fuzzy state (like a cold backup).


SQL> create restore point now guarantee flashback database;
Restore point NOW created.

I'm taking a snapshot of my database here as I'll come back to this point. This PIT that I call 'now' is the point where I'll try to restore the datafile from backups taken before (what I just did) or after (what I'm going to do before reverting back to this snapshot).
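
As a quick check (not in the original transcript), the guaranteed restore point and its SCN can be verified in V$RESTORE_POINT:

SQL> select name, scn, guarantee_flashback_database, time from v$restore_point;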


SQL> alter database open;
Database altered.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1693832               1692889            1692000                1692999

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
               natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    2021-04-30 11:03:00        8196 1692889             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    2021-04-30 11:03:00           4 1692889             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

11 rows selected.

SQL> alter system checkpoint;

System CHECKPOINT altered.

SQL> alter database begin backup;

Database altered.

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/open/after

SQL> alter database end backup;

Same as I did before, a hot backup of my datafile, but from a later point in time.


Database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694252            1692000                1693891


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     NO     2021-04-30 11:05:43        8192 1694252             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     NO     2021-04-30 11:05:43           0 1694252             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0

SQL> host cp /u02/oradata/CDB19/users01CDB19.dbf /u90/close/after

And finally a cold backup from a later point in time.


SQL> host ls -l /u90/{close,open}/{before,after}
/u90/close/after:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:07 users01CDB19.dbf

/u90/close/before:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:00 users01CDB19.dbf

/u90/open/after:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 11:05 users01CDB19.dbf

/u90/open/before:
total 5140
-rw-r-----. 1 oracle oinstall 5251072 Apr 30 10:55 users01CDB19.dbf

I have 4 backups, from before and after, and in a clean or fuzzy state.


SQL> flashback database to restore point now;

Flashback succeeded.

Now back to my snapshot so that my current state is after the ‘before’ backup and before the ‘after’ backup. Sorry for this bad description of it, time travel is never easy to explain.


SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694252            1692000                1692886


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39        8192 1692776             183 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 10:56:39           0 1692776             183 0
11 rows selected.

So, here we are, in the closed (mount) state. No files are open, no files are fuzzy. The checkpoint time is consistent – we will check only the CDB$ROOT now as we know the other containers were checkpointed earlier when closed. So the CDB$ROOT checkpoint is at 10:56:41, SCN 1692886, which matches the controlfile SCN. I could OPEN RESETLOGS this database without any recovery, but that's not what I want to show.


SQL> host cp /u90/open/before/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I restored the datafile from the previous hot backup (older than my current state, and fuzzy)


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
  2* natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  YES    2021-04-30 10:53:23           1 1692602             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:11:00        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:11:00           0 1692776               1 0

11 rows selected.

The file header shows the fuzzy state (FUZ=YES) and this means that Oracle needs to apply some redo, starting from the checkpoint SCN 1692602, until it brings the file to a consistent state, at least past the END BACKUP, to the end of fuzziness.


SQL> alter database open resetlogs;

Error starting at line : 1 in command -
alter database open resetlogs
Error report -
ORA-01195: online backup of file 7 needs more recovery to be consistent
ORA-01110: data file 7: '/u02/oradata/CDB19/users01CDB19.dbf'
01195. 00000 -  "online backup of file %s needs more recovery to be consistent"
*Cause:    An incomplete recovery session was started, but an insufficient
           number of logs were applied to make the file consistent. The
           reported file is an online backup which must be recovered to the
           time the backup ended.
*Action:   Either apply more logs until the file is consistent or
           restore the database files from an older backup and repeat recovery.
SQL>

I'll not do this recovery; I'm just showing the error message. This is ORA-01195, which tells you that you need more recovery to clear the fuzziness.


SQL> host cp /u90/close/before/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I’ve restored from the cold backup here. So no fuzzy flag in the header.


SQL> alter database open resetlogs;

Database altered.

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
  2* natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL           NO     YES    2021-04-30 11:48:42        8196 1692890               1 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL           NO     YES    2021-04-30 11:48:42           4 1692890               1 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:45        8192 1693089               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:45           0 1693089               1 0

11 rows selected.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

    OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
_____________ ______________ _____________________ __________________ ______________________
READ WRITE           1694039               1692890                  0                1693104

From this cold backup, I was able to OPEN RESETLOGS, because this cold backup was taken when the database was closed, so all files are consistent.

This was to show the ORA-01195 meaning: a datafile needs to be recovered to be consistent by itself.

Now we will see the consistency with the other datafiles by restoring a backup from the future.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               520093696 bytes
Database Buffers           1610612736 bytes
Redo Buffers                  7639040 bytes
Database mounted.
SQL> flashback database to restore point now;

Flashback succeeded.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694381                  0                1692886

SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0

11 rows selected.

As my OPEN RESETLOGS was successful, I flash back the database again to go to the same point in time for my experiment.


SQL> host cp /u90/close/after/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf

I’ve restored the cold backup (not fuzzy) but from a checkpoint that happened after my current state.


SQL> select * from (select file#,name,substr(status,1,3) sta,error err,recover rec,fuzzy fuz,checkpoint_time checkpoint from v$datafile_header)
     natural join (select hxfil file#, fhsta, fhscn, fhrba_seq, fhafs from x$kcvfhall);

   FILE#                                             NAME    STA    ERR    REC    FUZ             CHECKPOINT    FHSTA      FHSCN    FHRBA_SEQ    FHAFS
________ ________________________________________________ ______ ______ ______ ______ ______________________ ________ __________ ____________ ________
       1 /u02/oradata/CDB19/system01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41        8192 1692886             183 0
       2 /u02/oradata/CDB19/pdbseed/system01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21        8192 1276435              17 0
       3 /u02/oradata/CDB19/sysaux01CDB19.dbf             ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       4 /u02/oradata/CDB19/pdbseed/sysaux01CDB19.dbf     ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       5 /u02/oradata/CDB19/undotbs01CDB19.dbf            ONL                  NO     2021-04-30 10:56:41           0 1692886             183 0
       6 /u02/oradata/CDB19/pdbseed/undotbs01CDB19.dbf    ONL                  NO     2019-08-05 17:03:21           0 1276435              17 0
       7 /u02/oradata/CDB19/users01CDB19.dbf              ONL                  NO     2021-04-30 11:05:43           0 1694252             183 0
       8 /u02/oradata/CDB19/PDB/system01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35        8192 1692776               1 0
       9 /u02/oradata/CDB19/PDB/sysaux01CDB19.dbf         ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      10 /u02/oradata/CDB19/PDB/undotbs01CDB19.dbf        ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0
      11 /u02/oradata/CDB19/PDB/users01.dbf               ONL                  NO     2021-04-30 11:48:35           0 1692776               1 0

11 rows selected.

SQL> select open_mode,current_scn,checkpoint_change#,archive_change#,controlfile_change# from v$database;

   OPEN_MODE    CURRENT_SCN    CHECKPOINT_CHANGE#    ARCHIVE_CHANGE#    CONTROLFILE_CHANGE#
____________ ______________ _____________________ __________________ ______________________
MOUNTED                   0               1694381                  0                1692886

You can see the datafile is not fuzzy but has a checkpoint at 11:05:43, SCN 1694252, whereas all the other datafiles of this container, and the controlfile, are at 10:56:41, SCN 1692886. My file is from a state in the future of the other ones.


SQL> alter database open resetlogs;

Error starting at line : 1 in command -
alter database open resetlogs
Error report -
ORA-01152: file 7 was not restored from a sufficiently old backup
ORA-01110: data file 7: '/u02/oradata/CDB19/users01CDB19.dbf'
01152. 00000 -  "file %s was not restored from a sufficiently old backup "
*Cause:    An incomplete recovery session was started, but an insufficient
           number of logs were applied to make the database consistent. This
           file is still in the future of the last log applied. The most
           likely cause of this error is forgetting to restore the file
           from a backup before doing incomplete recovery.
*Action:   Either apply more logs until the database is consistent or
           restore the database file from an older backup and repeat recovery.

Here is ORA-01152, and the message may be misleading because there can be several reasons. Maybe the problem is the file mentioned, because you restored it from a backup that is too recent compared to the others and to the point in time you want to open resetlogs. Or maybe it was not restored at all, and it is the current datafile that remains there because you forgot to restore one file. Or maybe you want to go to a further point in time by recovering the other datafiles up to the same PIT as this one.

I'll not go further here, this blog post is already too long. Of course, I would get the same error if I restored the fuzzy backup from the future. When you encounter this error, think about the point in time you want to recover to. Either you have the right PIT, and then you need to restore a backup of this datafile from before this point in time. Or maybe you want to recover to a further point in time to reach the state of this datafile. The error message supposes you have recovered to the right point in time but didn't restore the right file.
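
In terms of commands, and with the files used in this demo, the two options roughly look like this (a sketch, not executed here):

-- Option 1: keep the target PIT and restore an older, consistent copy of file 7
-- (this is what was shown to work earlier in this post)
SQL> host cp /u90/close/before/users01CDB19.dbf /u02/oradata/CDB19/users01CDB19.dbf
SQL> alter database open resetlogs;

-- Option 2: keep the restored file and recover all the other datafiles, with the
-- required redo, up to at least its checkpoint SCN (1694252), before opening resetlogs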

I tried to summarize this situation in a tweet:

ORA-01152 is misleading.
The problem may be with the other files.
When I see:
ORA-01152: file #n was not restored from a sufficiently old backup
I think of #n as Marty McFly telling the other datafiles:
Guess you guys aren't ready for that yet…

(is it clear or should I blog?)

— Franck Pachot (@FranckPachot) April 30, 2021



SQL Server CPU limitation for Express and Standard Edition

Wed, 2021-04-28 07:22
Introduction

When performing an audit on a SQL Server VM, the first thing the system administrator told me was that there's no issue with the CPU on this box: the monitoring shows it's always around 50% usage.

Problem

Here is the Task Manager:

The average CPU usage on this VM is indeed 50% but half of the cores are stuck at 100%.
There are two SQL Server instances installed on this server but they are both Express Edition.

Compute capacity limits by Edition

As stated in the documentation the compute capacity limit for SQL Server Express Edition is the lesser of 1 socket or 4 logical cores.

So what happens is that the VM is configured with 2 sockets and 4 cores. Both instances are limited to 1 socket and therefore to 2 cores.
One way to confirm this is to look at the sys.dm_os_sys_info DMV.
We can see that the scheduler_count is 2. Each scheduler is mapped to an individual processor (CPU) as described in the Thread and Task Architecture Guide.
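On recent versions, the socket and core layout seen by SQL Server can be checked with a simple query (a sketch; socket_count and cores_per_socket are not available on older releases):

SELECT cpu_count, scheduler_count, socket_count, cores_per_socket
FROM sys.dm_os_sys_info;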

The same information can also be retrieved from the Error Log with the following query.

EXEC sys.xp_readerrorlog 0, 1, N'detected', N'socket';
Solution

To improve this situation, a solution that would not impact the availability of the service is to set a processor affinity mask, as it does not require a restart of the service (see the sketch below).
Each instance could be configured to use 2 different cores so the workload would be balanced over the 4 cores.
I would not recommend doing this as the result is still sub-optimal.
The best option here is to re-configure the VM with a single socket so both instances could use 4 cores.
This is what we have done for this VM. Now the instances can use 4 schedulers and the load is spread across all cores.
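For reference, the processor affinity alternative mentioned above would be set with something like this on the first instance (CPU numbers are illustrative for this 4-core VM; the second instance would use CPU = 2 TO 3):

ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 1;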

Conclusion

There are some CPU limitations with non-Enterprise Editions of SQL Server.
Make sure you are aware of them and configure your VMs accordingly or you may face unexpected CPU contention on your instances.

PS:

Just before publishing this post, I identified the same issue on another VM. This one has several SQL Server instances installed with Standard Edition.
The Standard Edition is limited to the lesser of 4 sockets or 24 cores. The VM is configured with 8 cores, but there are also 8 sockets, so only 4 cores can be used by SQL Server.
Reconfiguring the VM with a maximum of 4 sockets will again allow the use of all cores.

SQL Server: Get email notifications for Error Log entries

Fri, 2021-04-23 06:43
Introduction

In a previous blog post, I wrote a T-SQL script reading Audit files to send emails with valuable information about instance changes.
In this one, you will see how to get nice emails for critical errors or specific events occurring on the instance, using the Error Log.

A common best practice for any DBA is to configure SQL Server Agent Alerts for errors with severity 17 to 25.
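
Such alerts are typically created with msdb.dbo.sp_add_alert and tied to an operator with sp_add_notification. Here is a minimal sketch for one severity level (the alert name and the operator are assumptions):

EXEC msdb.dbo.sp_add_alert
	  @name = N'Severity 17 - Insufficient Resources'
	, @severity = 17
	, @delay_between_responses = 60;

EXEC msdb.dbo.sp_add_notification
	  @alert_name = N'Severity 17 - Insufficient Resources'
	, @operator_name = N'DBA_Team'
	, @notification_method = 1;	-- 1 = email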

Alerts are also often set for less critical errors like “Login failed” (severity level 14).
Emails sent from Alerts are very basic and look like this:

All these errors are written to the SQL Server Error Log file.

The text of these error messages that are logged into the Error Log can be found in the sys.messages view.

select message_id, severity, text
from sys.messages 
where (severity between 18 and 21
   or severity = 14)
  and is_event_logged = 1 
  and language_id = 1033
order by severity;

This query returns 174 rows on SQL Server version 2019.

The error log files can be read using the stored procedure sp_readerrorlog.
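
Its parameters are the log file number (0 = current log), the log type (1 = SQL Server, 2 = SQL Server Agent) and up to two optional search strings, for example:

EXEC sp_readerrorlog 0, 1, N'Login failed';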

Searching in the Error Log in SQL

The aim of the T-SQL script I am writing now is to replace Alerts with a job running every 5 minutes on my instance that will send me emails based on some messages I will define.

Knowing all the information we’ve seen before, I can build a table variable with the text that I want to look for in the ErrorLog, whether it is critical errors or informational messages.

DECLARE @errorlog_definition TABLE(
	error_category varchar(150)
	, error_pattern varchar(1000)
)
insert into @errorlog_definition
	values ('Database Write Latency', '%I/O requests taking longer than%seconds to complete%')
		,('Database Write Latency', '%cleaned up%bufs with%in%ms%for db%')
		,('Database Write Latency', '%average%')
		,('Database Write Latency', '%last target outstanding:%avgWriteLatency%')
		,('Database Write Error Disk Full', 'Could not allocate%')
		,('Database Login Failure', '%Login Failed%')
		,('SQL Server starting', 'SQL Server is starting%')

The first column is the type of event I want to be emailed about and will be used in the email title.

The current error log entries are inserted in another table variable.

insert into @errorlog
	exec sp_readerrorlog 0

Now, using a CROSS APPLY, I can get only the Error Log entries matching my message patterns.

select e.LogDate, e.ProcessInfo, e.Text, c.error_category
from @errorlog AS e
	cross apply (
		select *
		from @errorlog_definition AS d
		where e.Text like d.error_pattern
	) AS c

I managed to get only the Error Log entries I want. Entries are categorized, and I decided not to send just one email every time the job runs, but one email per category.

I did this with cursors. Here is the whole script if you want to try it.

Script
--	Job frequency
DECLARE @CheckPeriodInMinute int = 5

--	Variables
DECLARE @colldate datetime         
      , @object varchar(1024)
      , @HTML_footer varchar(max)  
      , @HTML_header varchar(max)  
      , @HTML varchar(max)         
	  , @HTML_part varchar(max)    

DECLARE @errorlog TABLE(
	  LogDate datetime
	, ProcessInfo varchar(32)
	, Text varchar(max)
)
DECLARE @notifiable_errors table(
	  LogDate varchar(19)
	, ProcessInfo varchar(32)
	, Text varchar(1024)
	, error_category varchar(150)
)

SELECT @HTML_header='
<head>
<style type="text/css">
table {border: 1px solid #1C6EA4;background-color: #EEEEEE;width: 100%;text-align: left;border-collapse: collapse;}
table td, table th {border: 1px solid #AAAAAA;padding: 3px 2px;}
table tbody td {font-size: 13px;}
table thead {background: #1C6EA4;border-bottom: 2px solid #444444;}
table thead th {font-size: 15px;font-weight: bold;color: #FFFFFF;border-left: 2px solid #D0E4F5;}
table thead th:first-child {border-left: none;}
h1      {font:bold 16pt Arial,Helvetica,Geneva,sans-serif; color:#336699; background-color:White; border-bottom:1px solid #cccc99; margin-top:15pt; margin-bottom:0pt; padding:0px 0px 0px 0px;}
</style>
</head>
<body>
<a name="top"></a>'

set @HTML_footer='<p><i>Executed on '+convert(varchar,getdate(),120)+' from server "'+isnull(convert(varchar(128),@@SERVERNAME),'')+'"</i></p>'

--	----
DECLARE @errorlog_definition TABLE(
	error_category varchar(150)
	, error_pattern varchar(1000)
)
insert into @errorlog_definition
	values ('Database Write Latency', '%I/O requests taking longer than%seconds to complete%')
		,('Database Write Latency', '%cleaned up%bufs with%in%ms%for db%')
		,('Database Write Latency', '%average%')
		,('Database Write Latency', '%last target outstanding:%avgWriteLatency%')
		,('Database Write Error Disk Full', 'Could not allocate%')
		,('Database Login Failure', '%Login Failed%')
		,('SQL Server starting', 'SQL Server is starting%')

insert into @errorlog
	exec sp_readerrorlog 0

-- Get Error Log entries matching pattern (like)
insert into @notifiable_errors
	select e.LogDate, e.ProcessInfo, e.Text, c.error_category
	from @errorlog AS e
		cross apply (
			select *
			from @errorlog_definition AS d
			where e.Text like d.error_pattern
		) AS c
	where LogDate > DATEADD(MINUTE, -@CheckPeriodInMinute, GETDATE())

-- If any rows to process
if @@ROWCOUNT>0
begin

	DECLARE @logdate datetime
	DECLARE @processInfo varchar(32)
	DECLARE @Text varchar(MAX)
	DECLARE @error_category varchar(150)

	DECLARE category_cursor CURSOR FOR 
		select distinct error_category
		from @notifiable_errors

	OPEN category_cursor  
	FETCH NEXT FROM category_cursor INTO @error_category

	WHILE @@FETCH_STATUS = 0  
	BEGIN
		-- Loops 1 time per category
		
		-- Email Object + HTML Table header
		SELECT @object = @error_category+' - Last '+convert(varchar(50),@CheckPeriodInMinute)+' min ('+convert(varchar(max),@@servername)+')'
		SELECT @HTML=@HTML_header+'<h1>'+@error_category+' (last '+convert(varchar(50),@CheckPeriodInMinute)+' min)</h1>'
		
		SELECT @HTML=@HTML+'
		<table>
		<tr><th>LogDate</th><th>ProcessInfo</th><th>Text</th></tr>
		'
		--	----
		--	Cursor: Get all entries for the current category
		--	Create HTML Table rows
		--	----
		DECLARE error_cursor CURSOR FOR 
			select LogDate, ProcessInfo, Text
			from @notifiable_errors
			where error_category = @error_category

		OPEN error_cursor  
		FETCH NEXT FROM error_cursor INTO @logdate, @processInfo, @Text
		WHILE @@FETCH_STATUS = 0  
		BEGIN
			-- HTML Table rows
			select @HTML_part = '<tr><td>'+isnull(convert(varchar,@logdate,120),'')+'</td><td>'+isnull(@processInfo,'')+'</td><td>'+isnull(@Text,'')+'</td></tr>'
			set @HTML=@HTML+@HTML_part+'
			'

			FETCH NEXT FROM error_cursor INTO @logdate, @processInfo, @Text;
		END
		CLOSE error_cursor  
		DEALLOCATE error_cursor 
		
		-- HTML Table end + send email
		set @HTML=@HTML+'</table>
		'+@HTML_footer
	
		EXEC msdb.dbo.sp_send_dbmail
		  @profile_name = 'DBA_Profile'
		, @subject= @object 
		, @recipients = 'sql@DBA.local'
		, @body_format='html'
		, @body = @HTML
	
		FETCH NEXT FROM category_cursor INTO @error_category;
	END
	CLOSE category_cursor  
	DEALLOCATE category_cursor 
end
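
To run this check every 5 minutes, as mentioned above, the script can be wrapped in a SQL Server Agent job. Here is a minimal scheduling sketch (the job, step and schedule names are assumptions, and @command would contain the script above):

EXEC msdb.dbo.sp_add_job @job_name = N'DBA - ErrorLog notifications';

EXEC msdb.dbo.sp_add_jobstep @job_name = N'DBA - ErrorLog notifications'
	, @step_name = N'Check ErrorLog'
	, @subsystem = N'TSQL'
	, @database_name = N'msdb'
	, @command = N'/* paste the script above here */';

EXEC msdb.dbo.sp_add_schedule @schedule_name = N'Every 5 minutes'
	, @freq_type = 4, @freq_interval = 1
	, @freq_subday_type = 4, @freq_subday_interval = 5;

EXEC msdb.dbo.sp_attach_schedule @job_name = N'DBA - ErrorLog notifications'
	, @schedule_name = N'Every 5 minutes';

EXEC msdb.dbo.sp_add_jobserver @job_name = N'DBA - ErrorLog notifications';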

 

Email examples

Here are some emails sent by this Job.

On SQL Server start:
On failed login:

Conclusion

The drawback of this approach is that I have to know exactly what I want to get emailed about. If I don’t use Agent Alerts I might miss some critical errors.
What is nice is that I can look for informational messages and not only errors.
I hope you found this interesting and it gave you some ideas on how to get information from your SQL Server instances.

 


Pass a variable to a trigger in PostgreSQL

Thu, 2021-04-22 16:14
By Franck Pachot

With Oracle there are many ways to set a state in the session: context variables, set with DBMS_SESSION.SET_CONTEXT and retrieved with SYS_CONTEXT, or package global variables, or global or private temporary tables with ON COMMIT PRESERVE ROWS,…

How would you do it in PostgreSQL? I'm taking an example from a Yugabyte Slack question, as the Yugabyte SQL layer is the PostgreSQL one.


CREATE TABLE employees (
  employee_no integer PRIMARY KEY,
  name text,
  department text
);

INSERT INTO employees (employee_no, name, department) VALUES 
(1221, 'John Smith', 'Marketing'),
(1222, 'Bette Davis', 'Sales'),
(1223, 'Lucille Ball', 'Operations'),
(1224, 'John Zimmerman', 'Sales');

CREATE TABLE mgr_table (
  mgr_id integer references employees,
  employee_no integer references employees,
  primary key(mgr_id, employee_no)
);

insert into mgr_table values(1223,1222);

We have an employee/manager table example:


CREATE TABLE employee_dept_changes (
  employee_no integer NOT NULL references employees,
  name text,
  department text,
  changed_on TIMESTAMP(6) NOT NULL,
  changed_by integer
);

This table is there to log the changes when an employee is transferred to another department. You can see that we log the values, but also the context about who is doing the transfer, because this operation is allowed only to a manager.

Ideally, all database operations are encapsulated in a microservice. The application calls a stored procedure that does all the logic: update the manager and log the change. And this procedure has all context. However, the idea here is to do this logging through a trigger. The trigger has access to the table values, but how can we pass the context of who is doing this change? Without context variables, or package variables, this requires something else.

If you look at the Stack Overflow thread for this question, you will see Frits Hoogland's and Bryn Llewellyn's answers with workarounds and recommendations. I'm just adding here a possibility (with the same recommendation: don't have procedures calling SQL calling triggers, but all logic encapsulated in procedural code calling SQL).


ALTER TABLE employees ADD COLUMN
  "_trigger_state_mgr_id" integer; --> this is added to pass the state variable

I’m adding a column to my table. This will not take any storage but will be used only to pass some values to the trigger


CREATE OR REPLACE FUNCTION record_dept_changes()
RETURNS TRIGGER AS
$$
BEGIN
 IF NEW.department <> OLD.department
  THEN INSERT INTO employee_dept_changes(employee_no, name, department, changed_on, changed_by)
   VALUES(OLD.employee_no, OLD.name, OLD.department, now(), NEW."_trigger_state_mgr_id"  --> passing the state to the added column
   );
 END IF;
 NEW."_trigger_state_mgr_id" := null; --> setting null not to persist anything (or maybe a real "last_update_by" column makes sense?)
 RETURN NEW;
END;
$$
LANGUAGE 'plpgsql';

The trigger logs the change into the “changes” table, reading this added column value. But I set this value to null once used because I don’t want to waste any storage (disk and memory) for it.


CREATE TRIGGER dept_changes
BEFORE UPDATE ON employees
FOR EACH ROW
EXECUTE PROCEDURE record_dept_changes();

In PostgreSQL the trigger calls the procedure, which has access to the NEW record values.


CREATE OR REPLACE PROCEDURE transfer_employee(integer, integer, text)
LANGUAGE plpgsql
AS $$
BEGIN
  -- IF employee reports to mgr, allow mgr to transfer person
  IF EXISTS (SELECT employee_no FROM mgr_table where mgr_id = $1 and employee_no = $2)
  THEN
   UPDATE employees
   SET department = $3, "_trigger_state_mgr_id" = $1 --> passing the state through the column
   WHERE employee_no = $2;
   COMMIT;
  END IF;
END;
$$;

Here is my procedure that implements the transfer service. In addition to checking if the manager is allowed to do the transfer, and doing the update, I set the additional column with the context value.

This column is declared in the metadata, is used by the update statement, but is discarded by the trigger. If PostgreSQL had a feature like the Oracle invisible column, I would set this column as invisible because I don't want to see it in SELECT * or insert into it. I can get the same effect with a view on top of the table.
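
As an illustration, a minimal sketch of such a view (the name employees_v is just an example, not from the original post) could be:

-- Hypothetical view hiding the technical column from SELECT * and from inserts
CREATE VIEW employees_v AS
 SELECT employee_no, name, department FROM employees;

The application would then read from employees_v, while only the transfer_employee procedure (and the trigger) touch the base table and its technical column.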

In summary:

  • you may have fewer features in open source software than in commercial ones, but this probably means that there's something to do differently in the application design. Commercial vendors can implement anything when important customers pay for it. An open source community will not add more complexity to work around a problem that should be addressed by the application design. Don't call SQL DML with trigger magic behind it. Call a procedure that has all the logic, including all DML (main change and audit) and verifications
  • workarounds are valid as long as you document them (what they do, and why) and verify all side effects (like setting it to null afterwards). There are often many workarounds, so choose the one that makes the most sense. Here, maybe the "_trigger_state_mgr_id" will become a real column to persist one day, if there's a need to store the latest manager who assigned the employee to a department

Cet article Pass a variable to a trigger in PostgreSQL est apparu en premier sur Blog dbi services.

SQL Server: How to delete the msdb backup history kindly

Thu, 2021-04-22 14:46

This week I noticed some slowness on queries related to the backup history. The instance had a huge msdb database (more than 10GB in size); the backup history had never been purged.

I already wrote a post on this topic and provided a small PowerShell script using dbatools to clean the backup history.
The issue with sp_delete_backuphistory or Remove-DbaDbBackupRestoreHistory is that the operation is done as one transaction and this could badly impact the msdb transaction log file.

Remember you can have a look at the code behind a system stored procedure using sp_helptext.

use msdb
go
exec sp_helptext sp_delete_backuphistory

The code for sp_delete_backuphistory looks like this:

CREATE   PROCEDURE sp_delete_backuphistory  
  @oldest_date datetime  
AS  
BEGIN  
  SET NOCOUNT ON  
  
  DECLARE @backup_set_id TABLE      (backup_set_id INT)  
  DECLARE @media_set_id TABLE       (media_set_id INT)  
  DECLARE @restore_history_id TABLE (restore_history_id INT)  
  
  INSERT INTO @backup_set_id (backup_set_id)  
  SELECT DISTINCT backup_set_id [...]
  
  INSERT INTO @media_set_id (media_set_id)  
  SELECT DISTINCT media_set_id [...] 
  
  INSERT INTO @restore_history_id (restore_history_id)  
  SELECT DISTINCT restore_history_id [...]  
  
  BEGIN TRANSACTION  
  
  DELETE FROM msdb.dbo.backupfile  
  WHERE backup_set_id IN (SELECT backup_set_id  
                          FROM @backup_set_id)  
  IF (@@error > 0)  
    GOTO Quit  
  
  DELETE FROM msdb.dbo.backupfilegroup  
  WHERE backup_set_id IN (SELECT backup_set_id  
                          FROM @backup_set_id)  
  IF (@@error > 0)  
    GOTO Quit  
  
  [...]

In order to clean up the backup history while minimizing the impact on the size of the transaction log file, the operation should be done in small chunks.

Here is the small T-SQL script that will do the job:

use msdb
go

declare @retentionDate datetime = DATEADD(MONTH, -3, getdate())
declare @oldest_date datetime = (select min(backup_start_date) from msdb.dbo.backupset)

while (@oldest_date  < @retentionDate)
begin
	print 'sp_delete_backuphistory ' + CAST(@oldest_date AS varchar)
	exec msdb.dbo.sp_delete_backuphistory @oldest_date

	--	Delete by 2 weeks increments
	set @oldest_date = DATEADD(WEEK, 2, @oldest_date)

	checkpoint
end

exec msdb.dbo.sp_delete_backuphistory @retentionDate
go

 

The sp_delete_backuphistory procedure is called in 2-week increments from the oldest entry in msdb.dbo.backupset up to the retention date, which I set to 3 months.
An important point here is the checkpoint command, which performs a manual checkpoint and helps keep the impact on the transaction log size to a minimum when the database is in the SIMPLE recovery model. You can read more about this technique in this great article from Aaron Bertrand.
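
If you want to verify the impact, a quick look at the msdb recovery model and the current transaction log usage, before and after the purge, could look like this (standard system views and DBCC command, shown here just as a simple example):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'msdb';

-- Returns the log size and the percentage of log space used for each database, including msdb
DBCC SQLPERF(LOGSPACE);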

 

If you don’t already have one, I would recommend creating a Job on your SQL Server instance that will call sp_delete_backuphistory weekly or fortnightly. This can be a standard Job on all your servers.

Here is the script for such Job with its schedule. The retention is set to 6 months of backup history.

USE [msdb]
GO

BEGIN TRANSACTION
DECLARE @ReturnCode INT = 0;

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'DBA_MAINTENANCE_BACKUP_HISTORY', 
		@enabled=1, 
		@notify_level_eventlog=0, 
		@notify_level_email=0, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=0, 
		@description=N'Purge backup history', 
		@category_name=N'Database Maintenance', 
		@owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'DBA_MAINTENANCE_BACKUP_HISTORY_STEP', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'declare @retentionDate datetime = DATEADD(MONTH, -6, getdate());
exec msdb.dbo.sp_delete_backuphistory @retentionDate;', 
		@database_name=N'msdb', 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'WeeklySchedule_Sunday_10pm', 
		@enabled=1, 
		@freq_type=8, 
		@freq_interval=1, 
		@freq_subday_type=1, 
		@freq_subday_interval=0, 
		@freq_relative_interval=0, 
		@freq_recurrence_factor=1, 
		@active_start_date=20210422, 
		@active_end_date=99991231, 
		@active_start_time=220000, 
		@active_end_time=235959, 
		@schedule_uid=N'f41b6a99-3d5d-49d7-871b-5e8a9ac2c0eb'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

Cet article SQL Server: How to delete the msdb backup history kindly est apparu en premier sur Blog dbi services.

Maven: Could not find or load main class org.codehaus.plexus.classworlds.launcher.Launcher

Thu, 2021-04-22 12:58

Working on Jenkins and Artifactory, I have often seen options suggesting the Maven tool, but what is Maven's purpose? How can we install and use it on a Windows system? During this installation you may encounter some issues.

Introduction

Maven is a tool that can be used for building and managing any Java-based project (in our case we will use it with Jenkins).
Maven builds a project using its project object model (POM) and a set of plugins.
More info about Apache Maven is available on their site:
https://maven.apache.org/index.html
We will now see how to install it and avoid installation issues.

Installation
  • Prerequisites

  • Install Java

You must have Java installed before installing Maven:
https://www.oracle.com/java/technologies/javase-downloads.html
Once installed, you can verify your Java version with java -version.

  • Install Maven

Maven installation link
https://maven.apache.org/install.html
The installation of Apache Maven is a simple process of extracting the archive and adding the `bin` folder with the `mvn` command to the `PATH`.

  • Download the archive listed on the first line of the download table

  • Create a folder named Maven and uncompress the archive into it

Issue encountered when launching a Maven command

When going into the bin folder and running the command mvn -v, we get this error:

C:\Users\nso>cd C:\Program Files\maven\apache-maven-3.8.1\bin

C:\Program Files\maven\apache-maven-3.8.1\bin>java -version
java version "1.8.0_271"
Java(TM) SE Runtime Environment (build 1.8.0_271-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.271-b09, mixed mode)

C:\Program Files\maven\apache-maven-3.8.1\bin>mvn -v
Error: Could not find or load main class org.codehaus.plexus.classworlds.launcher.Launcher

The reason is that the environment variable values are not correctly set.

  • Go to the environment variable options on your machine

  • Add the Java home variable (JAVA_HOME)

  • Add the Maven environment variable

  • Define the Maven home

  • Check if the variables are correctly set for JAVA and MAVEN (a command-line sketch is shown after this list)

  • Test the command again to check if Maven is correctly installed and the issue is fixed
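
If you prefer the command line to the GUI for these variables, a minimal sketch could look like the following (the installation paths are only examples matching the versions shown above, adapt them to your system):

rem Sketch only: adjust the paths to your actual Java and Maven installation folders
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_271"
setx MAVEN_HOME "C:\Program Files\maven\apache-maven-3.8.1"
rem Then add %JAVA_HOME%\bin and %MAVEN_HOME%\bin to the Path variable (GUI or setx)
rem Open a new command prompt so the new variables are visible, then verify:
echo %JAVA_HOME%
echo %MAVEN_HOME%
mvn -v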

Conclusion

Here we are: Maven is correctly installed on your Windows host and you can now use it for your Jenkins builds and associate it with Artifactory!

Cet article Maven: Could not find or load main class org.codehaus.plexus.classworlds.launcher.Launcher est apparu en premier sur Blog dbi services.

SQL Server: Replace the old SCOM login by the new one

Thu, 2021-04-22 01:07

It's always hard to replace a login that has roles and permissions in every database, like the service account for SCOM.
In the previous version of SCOM, you created a service account (dbi\scom) to access the SQL Server and check the health of the databases.
Now it's possible to use the System Center Operations Manager Health Service (NT SERVICE\HealthService).
To do this change on all SQL Servers, I wrote a script.

I created the script to replace the old SCOM login with the new one in 5 steps:

  • Step 1: kill sessions with the old scom login
  • Step 2: drop the old user scom in all databases
  • Step 3: drop server role scom in the instance
  • Step 4: Drop the old scom login
  • Step 5: Set the new scom account
Step 1: kill sessions with the old scom login

Before dropping the login, you need to be sure that it has no more connections to the instance.
Most of the time, the service will be stopped, but this script ensures that no process is connected.

/*Step 1: kill sessions with the old scom login*/
DECLARE @LoginToDrop sysname SET @LoginToDrop='dbi\scom';
DECLARE @SessionToKill nvarchar(max) SET @SessionToKill = ''; 

SELECT @SessionToKill=@SessionToKill + 'Kill '+ CAST(session_id as nvarchar(10)) +';' 
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1 and login_name=@LoginToDrop;
PRINT (@SessionToKill) 
EXEC(@SessionToKill)

 

Step 2: drop the old user scom in all databases

The second part of the script goes through all databases and drops the database user.

/*Step 2: drop user scom in all databases sessions with the old scom session */
DECLARE @DropDatabaseUserAndRole nvarchar(max);
SET @DropDatabaseUserAndRole = ''; 
SELECT @DropDatabaseUserAndRole = @DropDatabaseUserAndRole  + ' USE ' + QUOTENAME(db.name) + '; DROP USER ['+@LoginToDrop+'];' 
  FROM sys.databases db  
  LEFT JOIN sys.dm_hadr_availability_replica_states hadrstate ON db.replica_id = hadrstate.replica_id  
  WHERE db.database_id != 2 AND db.user_access = 0 AND db.state = 0 AND db.is_read_only = 0 AND (hadrstate.role = 1 or hadrstate.role IS NULL); 
PRINT (@DropDatabaseUserAndRole ) 
EXEC(@DropDatabaseUserAndRole ) 
GO

 

Step 3: drop server role scom in the instance

This step is optional and only needed if you created a server role for the scom login. In this script, the first part drops all members from the role, and then the role itself is dropped.

/*Step 3: drop server role scom in the instance*/
USE [master]
GO
DECLARE @RoleName sysname;
set @RoleName = N'scom';
IF @RoleName <> N'public' and (select is_fixed_role from sys.server_principals where name = @RoleName) = 0
BEGIN
    DECLARE @RoleMemberName sysname
    DECLARE Member_Cursor CURSOR FOR
    select [name]
    from sys.server_principals
    where principal_id in (select member_principal_id from sys.server_role_members where role_principal_id in 
           (select principal_id FROM sys.server_principals where [name] = @RoleName  AND type = 'R' ))
    OPEN Member_Cursor;
    FETCH NEXT FROM Member_Cursor into @RoleMemberName
    DECLARE @SQL NVARCHAR(4000)
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @SQL = 'ALTER SERVER ROLE '+ QUOTENAME(@RoleName,'[') +' DROP MEMBER '+ QUOTENAME(@RoleMemberName,'[')
        EXEC(@SQL)
        FETCH NEXT FROM Member_Cursor
        into @RoleMemberName
    END;
    CLOSE Member_Cursor;
    DEALLOCATE Member_Cursor;
END
DROP SERVER ROLE [scom]
GO

In this script, I use a cursor; it's not the most elegant way but it's efficient…

Step 4: Drop the old scom login

After these 3 steps, I can now drop the scom login:

/*Step 4: Drop the scom login*/
USE [master]
GO
DROP LOGIN [dbi\scom]
GO

 

Step 5: Set the new scom account

Now it's time to add the new one.
The new service account is the System Center Operations Manager Health Service (NT SERVICE\HealthService).
After adding the login to the instance, I create the role SCOM_HealthService in each database and add this login as a member.
I also grant the server permissions "VIEW ANY DATABASE", "VIEW SERVER STATE" and "VIEW ANY DEFINITION", plus special permissions in the master and msdb databases.

/*Step 5: Set the new scom account*/
USE [master];
SET NOCOUNT ON;
DECLARE @accountname sysname = 'NT SERVICE\HealthService';
-- Create the server role and grant instance level permissions
CREATE SERVER ROLE [SCOM_HealthService];
GRANT VIEW ANY DATABASE TO [SCOM_HealthService];
--GRANT ALTER ANY DATABASE TO [SCOM_HealthService]; --Required only for SCOMDB tasks
GRANT VIEW ANY DEFINITION TO [SCOM_HealthService];
GRANT VIEW SERVER STATE TO [SCOM_HealthService];
DECLARE @createLoginCommand nvarchar(200);
SET @createLoginCommand = ' CREATE LOGIN '+ QUOTENAME(@accountname) +' FROM WINDOWS WITH DEFAULT_DATABASE=[master];'
EXEC(@createLoginCommand);
-- Add the login to the user defined server role 
EXEC sp_addsrvrolemember @loginame = @accountname, @rolename = 'SCOM_HealthService' 
-- Add the login and database role to each database
DECLARE @createDatabaseUserAndRole nvarchar(max) SET @createDatabaseUserAndRole = ''; 
SELECT @createDatabaseUserAndRole = @createDatabaseUserAndRole + ' USE ' + QUOTENAME(db.name) + '; CREATE USER ' + QUOTENAME(@accountname) + ' FOR LOGIN ' + QUOTENAME(@accountname) + '; CREATE ROLE [SCOM_HealthService]; EXEC sp_addrolemember @rolename = ''SCOM_HealthService'', @membername = '+ QUOTENAME(@accountname) + '' 
  FROM sys.databases db  
  LEFT JOIN sys.dm_hadr_availability_replica_states hadrstate ON db.replica_id = hadrstate.replica_id  
  WHERE db.database_id != 2 AND db.user_access = 0 AND db.state = 0 AND db.is_read_only = 0 AND (hadrstate.role = 1 or hadrstate.role IS NULL); 
EXEC(@createDatabaseUserAndRole) 
-- Add database specific permissions to database role
USE [master]; 
GRANT EXECUTE ON sys.xp_readerrorlog TO [SCOM_HealthService]
GRANT SELECT ON sys.database_mirroring_witnesses TO [SCOM_HealthService];
USE [msdb]; 
GRANT SELECT ON [dbo].[sysjobschedules] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[sysschedules] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[sysjobs_view] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_primary_databases] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_secondary_databases] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_monitor_history_detail] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_monitor_secondary] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_monitor_primary] TO [SCOM_HealthService]; 
GRANT EXECUTE ON [dbo].[sp_help_job] TO [SCOM_HealthService]; 
GRANT EXECUTE ON [dbo].[sp_help_jobactivity] TO [SCOM_HealthService]; 
EXEC sp_addrolemember @rolename='SQLAgentReaderRole', @membername='SCOM_HealthService';
EXEC sp_addrolemember @rolename='PolicyAdministratorRole', @membername='SCOM_HealthService';

 
 

Global Script

To finish my post, I give you also the global script:

/*Step 1: kill sessions with the old scom login*/
DECLARE @LoginToDrop sysname SET @LoginToDrop='dbi\scom';
DECLARE @SessionToKill nvarchar(max) SET @SessionToKill = ''; 

SELECT @SessionToKill=@SessionToKill + 'Kill '+ CAST(session_id as nvarchar(10)) +';' 
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1 and login_name=@LoginToDrop;
PRINT (@SessionToKill) 
EXEC(@SessionToKill) 

/*Step 2: drop user scom in all databases sessions with the old scom session */
DECLARE @DropDatabaseUserAndRole nvarchar(max);
SET @DropDatabaseUserAndRole = ''; 
SELECT @DropDatabaseUserAndRole = @DropDatabaseUserAndRole  + ' USE ' + QUOTENAME(db.name) + '; DROP USER ['+@LoginToDrop+'];' 
  FROM sys.databases db  
  LEFT JOIN sys.dm_hadr_availability_replica_states hadrstate ON db.replica_id = hadrstate.replica_id  
  WHERE db.database_id != 2 AND db.user_access = 0 AND db.state = 0 AND db.is_read_only = 0 AND (hadrstate.role = 1 or hadrstate.role IS NULL); 
PRINT (@DropDatabaseUserAndRole ) 
EXEC(@DropDatabaseUserAndRole ) 
GO

/*Step 3: drop server role scom in the instance*/

USE [master]
GO

DECLARE @RoleName sysname
set @RoleName = N'scom'

IF @RoleName <> N'public' and (select is_fixed_role from sys.server_principals where name = @RoleName) = 0
BEGIN
    DECLARE @RoleMemberName sysname
    DECLARE Member_Cursor CURSOR FOR
    select [name]
    from sys.server_principals
    where principal_id in ( 
        select member_principal_id 
        from sys.server_role_members 
        where role_principal_id in (
            select principal_id
            FROM sys.server_principals where [name] = @RoleName  AND type = 'R' ))

    OPEN Member_Cursor;

    FETCH NEXT FROM Member_Cursor
    into @RoleMemberName

    DECLARE @SQL NVARCHAR(4000)
        
    WHILE @@FETCH_STATUS = 0
    BEGIN
        
        SET @SQL = 'ALTER SERVER ROLE '+ QUOTENAME(@RoleName,'[') +' DROP MEMBER '+ QUOTENAME(@RoleMemberName,'[')
        EXEC(@SQL)
        
        FETCH NEXT FROM Member_Cursor
        into @RoleMemberName
    END;

    CLOSE Member_Cursor;
    DEALLOCATE Member_Cursor;
END
DROP SERVER ROLE [scom]
GO

/*Step 4: Drop the scom login*/
USE [master]
GO
DROP LOGIN [dbi\scom]
GO


/*Step 5: Set the new scom account*/
USE [master];
SET NOCOUNT ON;
DECLARE @accountname sysname = 'NT SERVICE\HealthService';
-- Create the server role and grant instance level permissions
CREATE SERVER ROLE [SCOM_HealthService];
GRANT VIEW ANY DATABASE TO [SCOM_HealthService];
--GRANT ALTER ANY DATABASE TO [SCOM_HealthService]; --Required only for SCOMDB tasks
GRANT VIEW ANY DEFINITION TO [SCOM_HealthService];
GRANT VIEW SERVER STATE TO [SCOM_HealthService];
DECLARE @createLoginCommand nvarchar(200);
SET @createLoginCommand = ' CREATE LOGIN '+ QUOTENAME(@accountname) +' FROM WINDOWS WITH DEFAULT_DATABASE=[master];'
EXEC(@createLoginCommand);
-- Add the login to the user defined server role 
EXEC sp_addsrvrolemember @loginame = @accountname, @rolename = 'SCOM_HealthService' 
-- Add the login and database role to each database
DECLARE @createDatabaseUserAndRole nvarchar(max) SET @createDatabaseUserAndRole = ''; 
SELECT @createDatabaseUserAndRole = @createDatabaseUserAndRole + ' USE ' + QUOTENAME(db.name) + '; CREATE USER ' + QUOTENAME(@accountname) + ' FOR LOGIN ' + QUOTENAME(@accountname) + '; CREATE ROLE [SCOM_HealthService]; EXEC sp_addrolemember @rolename = ''SCOM_HealthService'', @membername = '+ QUOTENAME(@accountname) + '' 
  FROM sys.databases db  
  LEFT JOIN sys.dm_hadr_availability_replica_states hadrstate ON db.replica_id = hadrstate.replica_id  
  WHERE db.database_id != 2 AND db.user_access = 0 AND db.state = 0 AND db.is_read_only = 0 AND (hadrstate.role = 1 or hadrstate.role IS NULL); 
EXEC(@createDatabaseUserAndRole) 
-- Add database specific permissions to database role
USE [master]; 
GRANT EXECUTE ON sys.xp_readerrorlog TO [SCOM_HealthService]
GRANT SELECT ON sys.database_mirroring_witnesses TO [SCOM_HealthService];
USE [msdb]; 
GRANT SELECT ON [dbo].[sysjobschedules] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[sysschedules] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[sysjobs_view] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_primary_databases] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_secondary_databases] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_monitor_history_detail] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_monitor_secondary] TO [SCOM_HealthService]; 
GRANT SELECT ON [dbo].[log_shipping_monitor_primary] TO [SCOM_HealthService]; 
GRANT EXECUTE ON [dbo].[sp_help_job] TO [SCOM_HealthService]; 
GRANT EXECUTE ON [dbo].[sp_help_jobactivity] TO [SCOM_HealthService]; 
EXEC sp_addrolemember @rolename='SQLAgentReaderRole', @membername='SCOM_HealthService';
EXEC sp_addrolemember @rolename='PolicyAdministratorRole', @membername='SCOM_HealthService';

 
I hope this script can help you with any migration of a service account that is present in all databases.

Cet article SQL Server: Replace the old SCOM login by the new one est apparu en premier sur Blog dbi services.

Documentum – SSL Certificate based secure communication setup

Sun, 2021-04-18 12:10

Around four years ago, I did a few presentations, here, in Switzerland about “Security & Documentum”. In there, I talked about a lot of different subjects related to both security and Documentum (you guessed it…) like: ciphers, SHA, FIPS & JCE, Documentum & DFC connect mode (characteristics, ciphers, protocols, encryptions, limitations), Documentum & DFC encryptions in transit and at rest (AEK/DBK/LTK/FSK/FEK, TCS, CS Lockbox, D2 Lockbox vs D2 Keystore, Passwords encryption and decryption), and some other topics (HTTPS on WebLogic, JBoss/WildFly, Tomcat, best practices for security, LDAPS support, FIPS 140-2 support and compliance).

 

“Why the hell are you talking about presentations you gave 4 years ago?”. Good question, thank you! This presentation was really dense so all I could do was just put an example of the configuration files needed for real SSL Certificate based secure communication but not how exactly to reach this point. I talked about this configuration in several blogs already but never took the time to explain/show it from A to Z. So, that’s what I’m going to do here because without being able to create the SSL Certificate and trust stores, you will probably have some trouble to really configure Documentum to use the real-secure mode (in opposition to the default-secure, which is using anonymous and therefore not fully secure).

 

In this blog, I will use self-signed SSL Certificate only. It is possible to use CA signed SSL Certificate, the only thing it would change is that you would need to set the trust chain into the different trust stores instead of the self-signed SSL Certificate. This has pros and cons however… This means it is easier to automate because a CA trust chain is a public SSL Certificate and therefore in case you are in a CI/CD infrastructure, you can easily create the needed Documentum trust stores from anywhere (any pods, any containers, any VMs, aso…). However, that also means that anybody with access to this trust chain can potentially create the needed files used by a DFC Client to talk to your Docbroker and Repositories. That might or might not be a problem for you so I will let you decide on that. On the other hand, using a self-signed SSL Certificate makes it more difficult to gain access to the certificates (unless you are storing it in a public and open location of course) but at the same time, this complicates a little bit the setup for remote DFC Clients since you will need to share, somehow, the Docbroker and Repositories certificates in order to create a trust store for the DFC Clients.

 

I split the steps into different sections: one global definition of parameters & passwords and then one section per component. Please note that for the DFC Client section, I used the JMS. The same steps can be applied to any DFC Client; you will just need access to the needed input files. Please make sure that all components are shut down when you start the configuration: it will be easier to spot real errors if you don't have hundreds of expected ones in the middle, caused by clients still trying to use the non-secure (or default-secure) modes. Alright, enough blabbering, let's start with the setup.

 

I. Global setup/parameters

All the files needed for the Docbroker and Repositories setup need to be put into the $DOCUMENTUM/dba/secure/ folder, so all the commands will be executed there directly. I defined some environment variables that will be used by all the commands. The read commands will simply ask you to enter the needed password and press enter; doing that stores the password into the environment variable (lb_pp, b_pwd, s_pwd and d_pwd). If you aren't using any Lockbox (deprecated since Documentum 16.7), just ignore the Lockbox part.

cd $DOCUMENTUM/dba/secure/
lb_name="lockbox.lb"
aek_name="CSaek"
b_name="docbroker"
s_name="contentserver"
d_name="dfc"

read -s -p "  ----> Please enter the ${lb_name} passphrase: " lb_pp

read -s -p "  ----> Please enter the ${b_name} related password: " b_pwd

read -s -p "  ----> Please enter the ${s_name} related password: " s_pwd

read -s -p "  ----> Please enter the ${d_name} related password: " d_pwd

echo "
Lockbox passphrase entered: ${lb_pp}
Broker password entered: ${b_pwd}
Server password entered: ${s_pwd}
DFC password entered: ${d_pwd}"

 

II. Docbroker setup – SSL Server only

In this section, we will create the certificate for the Docbroker, create the needed keystore (it needs to be PKCS12) and encrypt the keystore password. If you aren’t using any Lockbox, in the “dm_encrypt_password” command, just remove the two parameters related to it (and its associated value/password) and remove the “crypto_lockbox” from the Docbroker.ini file (or whatever the name of your file is).

openssl req -x509 -days 1096 -newkey rsa:2048 -keyout ${b_name}.key -out ${b_name}.crt -subj "/C=CH/ST=Jura/L=Delemont/O=dbi services/OU=IT/CN=${b_name}" -passout pass:"${b_pwd}"

openssl pkcs12 -export -out ${b_name}.p12 -inkey ${b_name}.key -in ${b_name}.crt -name ${b_name} -descert -passin pass:"${b_pwd}" -passout pass:"${b_pwd}"

dm_encrypt_password -lockbox "${lb_name}" -lockboxpassphrase "${lb_pp}" -keyname "${aek_name}" -encrypt "${b_pwd}" -file ${b_name}.pwd

cp $DOCUMENTUM/dba/Docbroker.ini $DOCUMENTUM/dba/Docbroker.ini.orig

echo "[DOCBROKER_CONFIGURATION]
secure_connect_mode=secure
crypto_keyname=${aek_name}
crypto_lockbox=${lb_name}
keystore_file=${b_name}.p12
keystore_pwd_file=${b_name}.pwd" > $DOCUMENTUM/dba/Docbroker.ini

 

At this point, you can start the Docbroker and it should start only on the secure port, without errors. If there are still clients up & running, you will probably face a lot of handshake failure errors… It is possible to define the list of ciphers to use in the Docbroker.ini file (cipherlist=xxx:yyy:zzz) but if you do so, please make sure that all the SSL Clients (Repository and DFC Clients alike) that will talk to it support these ciphers as well.
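
As an illustration only, such a cipherlist line in the Docbroker.ini could look like the following. The cipher names are standard OpenSSL cipher suite names given as an assumption; which ones are actually supported depends on your Documentum and OpenSSL versions, so validate them in your environment:

cipherlist=AES256-GCM-SHA384:AES128-GCM-SHA256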

 

III. Repository setup – SSL Server and SSL Client

In this section, we will create the certificate for the Repositories (each repo can have its own if you prefer), create the needed keystore (it needs to be PKCS12), create the needed trust store (it needs to be PKCS7) and encrypt the keystore password. If you aren’t using any Lockbox, in the “dm_encrypt_password” command, just remove the two parameters related to it (and its associated value/password). In case you have several Lockbox and AEK Key, you might want to retrieve their names from the server.ini directly (inside the loop) and then use these to encrypt the password, for each Repository, independently.

openssl req -x509 -days 1096 -newkey rsa:2048 -keyout ${s_name}.key -out ${s_name}.crt -subj "/C=CH/ST=Jura/L=Jura/O=dbi services/OU=IT/CN=${s_name}" -passout pass:"${s_pwd}"

openssl pkcs12 -export -out ${s_name}.p12 -inkey ${s_name}.key -in ${s_name}.crt -name ${s_name} -descert -passin pass:"${s_pwd}" -passout pass:"${s_pwd}"

dm_encrypt_password -lockbox "${lb_name}" -lockboxpassphrase "${lb_pp}" -keyname "${aek_name}" -encrypt "${s_pwd}" -file ${s_name}.pwd

openssl crl2pkcs7 -nocrl -certfile ${b_name}.crt -outform der -out ${s_name}-trust.p7b

for s_ini in $(ls $DOCUMENTUM/dba/config/*/server.ini); do
  cp ${s_ini} ${s_ini}.orig
  sed -i --follow-symlinks "/keystore_file/d" ${s_ini}
  sed -i --follow-symlinks "/keystore_pwd_file/d" ${s_ini}
  sed -i --follow-symlinks "/truststore_file/d" ${s_ini}
  sed -i --follow-symlinks "/cipherlist/d" ${s_ini}
  sed -i --follow-symlinks "/^crypto_keyname/a \truststore_file = ${s_name}-trust.p7b" ${s_ini}
  sed -i --follow-symlinks "/^crypto_keyname/a \keystore_pwd_file = ${s_name}.pwd" ${s_ini}
  sed -i --follow-symlinks "/^crypto_keyname/a \keystore_file = ${s_name}.p12" ${s_ini}
done

 

At this point, you can start the different Repositories and they should start and project themselves to the Docbroker. However, the AgentExec will probably still fail to start properly because it uses the global dfc.properties of the Documentum Server, which hasn't been updated yet. So you might want to configure the global dfc.properties before starting the Repositories. It is possible to define the list of ciphers to use in the server.ini file (cipherlist=xxx:yyy:zzz) but if you do so, please make sure that all the SSL Clients (DFC Clients) that will talk to them and the SSL Servers (Docbroker) they talk to support these ciphers as well.

 

IV. DFC Clients setup (JMS, IndexAgent, DA, D2, …) – SSL Client only

In this section, we will create the needed trust store (it needs to be JKS) and encrypt the trust store password. Regarding the password encryption, this command will work on any DFC Client; you will just need to add the dfc.jar to the classpath (for example on xPlore: -cp "$XPLORE_HOME/dfc/dfc.jar") if you aren't executing it on a Documentum Server, as shown after the block below.

openssl x509 -outform der -in ${b_name}.crt -out ${b_name}.der

openssl x509 -outform der -in ${s_name}.crt -out ${s_name}.der

$JAVA_HOME/bin/keytool -importcert -keystore ${d_name}-trust.jks -file ${b_name}.der -alias ${b_name} -noprompt -storepass ${d_pwd}

$JAVA_HOME/bin/keytool -importcert -keystore ${d_name}-trust.jks -file ${s_name}.der -alias ${s_name} -noprompt -storepass ${d_pwd}

d_pwd_enc=$($JAVA_HOME/bin/java com.documentum.fc.tools.RegistryPasswordUtils ${d_pwd})

cp $DOCUMENTUM/config/dfc.properties $DOCUMENTUM/config/dfc.properties.orig
sed -i '/dfc.session.secure_connect_default/d' $DOCUMENTUM/config/dfc.properties
sed -i '/dfc.security.ssl.use_existing_truststore/d' $DOCUMENTUM/config/dfc.properties
sed -i '/dfc.security.ssl.truststore/d' $DOCUMENTUM/config/dfc.properties
sed -i '/dfc.security.ssl.truststore_password/d' $DOCUMENTUM/config/dfc.properties

echo "dfc.session.secure_connect_default=secure
dfc.security.ssl.use_existing_truststore=false
dfc.security.ssl.truststore=$DOCUMENTUM/dba/secure/${d_name}-trust.jks
dfc.security.ssl.truststore_password=${d_pwd_enc}" >> $DOCUMENTUM/config/dfc.properties
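
For reference, on a remote DFC Client like xPlore, the same password encryption command would simply add the classpath mentioned above (the $XPLORE_HOME location is an assumption for your environment):

d_pwd_enc=$($JAVA_HOME/bin/java -cp "$XPLORE_HOME/dfc/dfc.jar" com.documentum.fc.tools.RegistryPasswordUtils ${d_pwd})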

 

This is technically the global dfc.properties of a Documentum Server and not really the JMS one but I assume almost everybody in the world is just including this one (using #include) for the dfc.properties of the JMS (ServerApps, acs, bpm, …), to avoid duplication of generic parameters/configurations at multiple locations and just manage them globally.

 

At this point, you can start the DFC Client and it should be able to communicate with the Docbroker and with the Repositories. As said before, if you already started the Repository, you might want to make sure that the AgentExec is running and if not, maybe restart the Repositories quickly.

 

Some final remarks on the SSL Certificate based secure configuration of Documentum:

  • Other Content Servers & Docbrokers (HA part) must re-use the exact same keystores (and therefore trust store as well in the end). Files must be sent to all other hosts and re-used exactly in the same way
  • Other DFC clients can use newly created files but in the end, they will contain the exact same content (either the self-signed Docbroker and Repositories certificates or the CA-signed trust chain)… Therefore, files can be sent to all DFC clients and re-used exactly in the same way as well
  • After the initial generation, you don’t need any of the key, crt or der files anymore so you can remove them for security reasons:
    • rm ${b_name}.key ${b_name}.crt ${b_name}.der ${s_name}.key ${s_name}.crt ${s_name}.der
  • I didn't describe everything in full length here; there are a bunch of other things and limitations to know before going in that direction, so you will probably want to read the documentation carefully

 

Cet article Documentum – SSL Certificate based secure communication setup est apparu en premier sur Blog dbi services.

Documentum – Configuration of an IDS Target Memory/RAM usage on Windows

Sun, 2021-04-18 10:10

A few months ago, I had to work on a Windows Server to set up an IDS Target. The installation and configuration of the target weren't that different compared to a Linux host, so it wasn't difficult at all (if you ignore some strange behavior like the one described here for example). But there was one point about which I was a little bit skeptical: how do you configure the IDS Target Memory/RAM assignment for its JVM? On Linux, it's very easy since the IDS Target configuration creates some start/stop scripts in which you can easily find the Java commands executed. Therefore, changing the JVM memory is just a matter of adding the usual Xms/Xmx parameters there…

 

Unfortunately, on Windows, IDS will set up a service automatically and this service uses a .exe file, which you therefore cannot modify in any way. OpenText (or rather EMC before) could have used a cmd or ps1 script to call the Java command, similarly to Linux, or even used a java.ini file somewhere, but that's not the case.

 

By default, the JVM will probably use something like 256MB of RAM. The exact value will depend on the Java version and potentially on your server as well (how much RAM the host has). There are a lot of blogs or posts already on how to check how much memory is used by the JVM by default but for a quick reference, you can check that with something like:

# Linux:
java -XX:+PrintFlagsFinal -version | grep HeapSize

# Windows:
java -XX:+PrintFlagsFinal -version | findstr HeapSize

 

Having 256MB of RAM for the IDS Target might be sufficient if the number of files to transfer is rather "small". However, at some point, you might end up facing an OutOfMemory error, most probably whenever the IDS Target tries to open the properties.xml file from the previous full-sync or directly during the initial full-sync. If the file is too big (bigger than the memory of the JVM), it will probably end up with an OOM and your synchronization will fail.

 

Therefore, how do you increase the default IDS Target JVM settings on Windows? It’s actually not that complicated but you will need to update the registry directly:

  • Open regedit on the target Windows Server
  • Navigate to (that’s an example with secure IDS on port 2787, your path might be different):
    • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\OpenText Documentum IDS Target_secure_2787\Env
  • Double click on the registry value named "Values" inside this key
  • Update the "jvmoptions" definition (around the end normally) to add the Xms and Xmx parameters like:
    • from: "jvmoptions=-Dfile.encoding=UTF-8"
    • to: "jvmoptions=-Dfile.encoding=UTF-8 -Xms2g -Xmx4g"
  • Restart the IDS Target Service (a scripted version of these steps is sketched after this list)
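
If you have several targets to update, a small PowerShell sketch of these steps could look like the one below. It assumes that the "Values" registry entry is a multi-string (REG_MULTI_SZ) and that the service name can be found from its display name, so check both on your servers before using it:

# Sketch only: adjust the IDS Target name/port to match your installation
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\OpenText Documentum IDS Target_secure_2787\Env'

# Read the multi-string "Values" entry and append Xms/Xmx to the jvmoptions line
$values = (Get-ItemProperty -Path $key -Name Values).Values
$values = $values -replace 'jvmoptions=-Dfile\.encoding=UTF-8', 'jvmoptions=-Dfile.encoding=UTF-8 -Xms2g -Xmx4g'
Set-ItemProperty -Path $key -Name Values -Value $values

# Restart the IDS Target service (check the exact name with Get-Service first)
Get-Service | Where-Object { $_.DisplayName -like '*IDS Target*' } | Restart-Service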

 

 

With that, the IDS Target should now be allowed to use up to 4GB of RAM, which should hopefully give you enough room for a proper synchronization without OutOfMemory errors.

 

Cet article Documentum – Configuration of an IDS Target Memory/RAM usage on Windows est apparu en premier sur Blog dbi services.

DynamoDB / Aurora: sparse and partial indexes

Thu, 2021-04-15 14:50
By Franck Pachot

In a previous post I tried to build a glossary about Amazon DynamoDB terms that look like relational database terms, but with a different technical meaning. Here is more about it. If you work with AWS Databases and frequently switch between DynamoDB and Aurora, or other RDS databases, you may be confused by the same terms used for different meanings.

An index is a redundant structure that is maintained by the database to provide faster and ordered access when querying a small part of the table. Basically, rather than scanning a table, or a partition, reading all values, and filtering afterwards, you can access a small part of it that needs little filtering and sorting afterwards. This small part is a subset of rows and columns, or items and attributes.

Let's take an example in Aurora with PostgreSQL compatibility in order to explain what a covering index and a partial index are in the relational database vocabulary.


postgres=> \c postgres://reader:NWDMCE5xdipIjRrp@hh-pgsql-public.ebi.ac.uk:5432/pfmegrnargs

pfmegrnargs=> \! rm /var/tmp/rna.csv
pfmegrnargs=> \copy rna to '/var/tmp/rna.csv' csv header
COPY

pfmegrnargs=> \! du -h /var/tmp/rna.csv
24G     /var/tmp/rna.csv

I've downloaded the RNA table, from @RNAcentral, to a local CSV of about 25 GB.


postgres=> \c postgres://postgres:postgres@database-1.cluster-cvlvfe1jv6n5.eu-west-1.rds.amazonaws.com:5432/postgres
You are now connected to database "postgres" as user "postgres".

I've created an RDS Aurora cluster with PostgreSQL compatibility (a small db.r6g.large: 2 vCPU, 16GB RAM)


postgres=> CREATE TABLE rna (id int8 null, upi varchar(30) not null, "timestamp" timestamp null, userstamp varchar(60) null, crc64 bpchar(16) null, len int4 null, seq_short varchar(4000) null, seq_long text null, md5 varchar(64) null, constraint rna_pkey primary key (upi));
CREATE TABLE

Here is the same table as the source, but without any index except the primary key


postgres=> \copy rna from '/var/tmp/rna.csv' csv header
COPY
postgres=> vacuum rna;
VACUUM

This loaded the 35 million rows from the CSV.

I’ll query on a small time range, the latest rows from this year:


postgres=> explain (analyze,verbose,buffers) select upi from rna where timestamp > date '2021-01-01';

                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..3179976.11 rows=254424 width=14) (actual time=38927.822..43965.577 rows=407181 loops=1)
   Output: upi
   Workers Planned: 2
   Workers Launched: 2
   Buffers: shared hit=1265727 read=1702515
   I/O Timings: read=36.660
   ->  Parallel Seq Scan on public.rna  (cost=0.00..3153533.71 rows=106010 width=14) (actual time=38924.249..43688.330 rows=135727 loops=3)
         Output: upi
         Filter: (rna."timestamp" > '2021-01-01'::date)
         Rows Removed by Filter: 11722942
         Buffers: shared hit=1265727 read=1702515
         I/O Timings: read=36.660
         Worker 0: actual time=38908.360..43841.438 rows=120508 loops=1
           Buffers: shared hit=463939 read=561349
           I/O Timings: read=13.017
         Worker 1: actual time=38936.741..43374.967 rows=172038 loops=1
           Buffers: shared hit=377612 read=564988
           I/O Timings: read=11.924
 Planning Time: 0.112 ms
 Execution Time: 44001.385 ms

Without any index, there's no other choice than scanning the whole table (or partition if it were partitioned). This is long, but automatically parallelized, so it depends on your instance shape and I/O throughput


postgres=> create index demo_index on rna(timestamp);
CREATE INDEX

This creates an index on the timestamp column, the one that I use in my where clause.


postgres=> explain (analyze,verbose,buffers) select upi from rna where timestamp > date '2021-01-01';

                                                               QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using demo_index on public.rna  (cost=0.56..236038.54 rows=254424 width=14) (actual time=0.018..237.076 rows=407181 loops=1)
   Output: upi
   Index Cond: (rna."timestamp" > '2021-01-01'::date)
   Buffers: shared hit=34667 read=28
 Planning Time: 0.172 ms
 Execution Time: 260.792 ms

This is an Index Scan: the time range in my where clause is transformed to an index range scan: use the B-Tree structure to go to the first page for this value and follow the links to the next pages until the end value is reached. Those index entries have a reference to the page in the table where the row is, to get the selected column "upi". This is ok here (3500 pages read for 400000 rows) thanks to the good clustering, but it could be worse. If this is a critical use case, we can do better.


postgres=> create index demo_covering_index on rna(timestamp, upi);
CREATE INDEX

That’s another index where I added the “upi” column to the index.


postgres=> explain (analyze,verbose,buffers) select upi from rna where timestamp > date '2021-01-01';

                                                                     QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------------------------
 Index Only Scan using demo_covering_index on public.rna  (cost=0.56..9488.99 rows=254424 width=14) (actual time=0.017..57.300 rows=407181 loops=1)
   Output: upi
   Index Cond: (rna."timestamp" > '2021-01-01'::date)
   Heap Fetches: 0
   Buffers: shared hit=4464
 Planning Time: 0.182 ms
 Execution Time: 80.701 ms

Now, the same access to the index leaves doesn't have to fetch from the table because the "upi" column is also stored there. This is an Index Only Scan. In database vocabulary, this index is called a covering index. And it has read only 4000 buffers, with more chances to get them from a cache hit.

The DynamoDB term for this is projection. In relational databases, the projection is the operation that filters a subset of the columns from a query. So, exactly the opposite of this one, where we store (not filter) a superset (not a subset) of the columns in the index.


create index demo_partial_index on rna(timestamp, upi) where timestamp > date '2021-01-01';

That's another index with a where clause. This is called a partial index in SQL databases: not all rows are indexed. Here, for example, a reason can be the row lifecycle. For fresh data, we query on specific dates. For older data, we query larger ranges where a scan is better (partitioned by year for example).


postgres=> explain (analyze,verbose,buffers) select upi from rna where timestamp > date '2021-01-01';

                                                                    QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------------------
 Index Only Scan using demo_partial_index on public.rna  (cost=0.42..8856.78 rows=254424 width=14) (actual time=0.016..52.719 rows=407181 loops=1)
   Output: upi
   Heap Fetches: 0
   Buffers: shared hit=4462
 Planning Time: 0.244 ms
 Execution Time: 76.214 ms

Note that I'm running the same query here. In relational databases, the indexes are transparently maintained and used. You don't have to query them explicitly. You always query a logical view (the table or a view on it) and the query planner will find the best access path for it. Here, because the index is smaller, the access is cheaper, and this one has been chosen. So, in addition to the Index Only Scan, we benefit from partial indexing. The difference is not huge here for a range scan because a B-Tree index is very efficient already. But for larger tables, the depth of the partial index is smaller. The important thing is that updates on rows that are out of this partial index do not have the overhead of index maintenance.

The DynamoDB term for this, especially in the case of WHERE … IS NOT NULL, is sparse index. In database concepts, a sparse index is something different, but that's difficult to explain with PostgreSQL where indexes are always dense.

If you take the same example with AWS Aurora with MySQL compatibility, you will not have the same possibilities. MySQL doesn't have partial indexes. But if you look at the primary key, you will not see an index, because InnoDB actually stores the table in an index structure (like a covering index extended to all columns). Because it is physically ordered, you don't need an additional index structure. The leaves of the B-Tree are the table and the branches are the index. This index is not dense: you don't need to index each entry. Only the first value of each table page (leaf block here) is sufficient because everything is ordered. This is what is called a sparse index in databases. Sparse indexes are possible only with primary indexes, the ones that can define the physical organization of the table. But DynamoDB uses the term "sparse" for secondary indexes only, where an attribute can be nonexistent (primary key attributes are mandatory) in an item, and then not indexed, in the same way as what relational databases call partial indexes.

RDBMS and NoSQL, often presented as opposites, have many similarities: you can store JSON documents in RDS, hash partition those tables on their primary key, and you have a NoSQL data structure. And you can query the NoSQL databases with an API that looks like SQL (PartiQL for example). And I think that the same converged data platform could be used by both APIs, in order to de-correlate the microservices isolation from the distributed infrastructure. Actually the storage designs of Aurora and DynamoDB have some similarities. The big difference is in what NoSQL calls "eventual consistency" but that's for a next post. The difference of vocabulary is really misleading and it starts with NoSQL articles using the term "SQL" as an umbrella for strong typing, relational modeling, declarative query language, ACID properties,… So, the most important thing is to understand the concepts behind those terms. In summary:

  • A projection in RDS is restricting the columns that are read by the SELECT
  • A projection in DynamoDB is adding more columns to the index to avoid a table access per item
  • A covering index in RDS is adding more columns to the index, with no need to sort on them, and to avoid a table access
  • A partial index in RDS is maintaining index entries only for a subset of the rows, to get a smaller B-Tree
  • A sparse index in DynamoDB is partially indexing by bypassing secondary index entries for nonexistent attributes
  • A sparse index in RDS avoids dense primary index entries thanks to a physically ordered table

Cet article DynamoDB / Aurora: sparse and partial indexes est apparu en premier sur Blog dbi services.

K8s on Windows/VirtualBox

Mon, 2021-04-12 08:02
By Franck Pachot

This is a little demo, easy to copy-paste, if you want to play with Kubernetes on your laptop. And, not a simple “Hello World” but a real database running here, able to scale up and down with full availability.

Install Virtualbox

I use Oracle VirtualBox because I'm a big fan of Oracle products, especially when they are good and free. However, you can use Hyper-V (then just skip this paragraph and use --driver=hyperv instead of --driver=virtualbox)

I have VirtualBox installed. You can install it for free from https://www.virtualbox.org/wiki/Downloads

Virtualization must be enabled in the BIOS:

Microsoft Windows [Version 10.0.19042.906]
(c) Microsoft Corporation. All rights reserved.

C:\Users\Franck> systeminfo

...
Hyper-V Requirements:      VM Monitor Mode Extensions: Yes
                           Virtualization Enabled In Firmware: Yes
                           Second Level Address Translation: Yes
                           Data Execution Prevention Available: Yes

C:\WINDOWS\system32>bcdedit

...
hypervisorlaunchtype    Off

Even if these are the Hyper-V requirements, they are the same for VirtualBox, and hypervisorlaunchtype set to Off allows VirtualBox to run.

Install Chocolatey

I'll use the Chocolatey software manager to install Minikube; here is how to install it from a PowerShell window:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

(run this on a PowerShell window as Administrator)

Microsoft Windows [Version 10.0.19042.906]
(c) Microsoft Corporation. All rights reserved.

C:\Users\Franck>choco
Chocolatey v0.10.15
Please run 'choco -?' or 'choco <command> -?' for help menu.

C:\Users\Franck>
Install Minikube

Minikube is a tool to run a single-node Kubernetes cluster on a laptop:

choco install minikube

(run this on a ‘DOS’ window as Administrator)

The minikube command will be used to interact with the VM (start, stop, tunnel…), and the kubectl command to interact with Kubernetes.

Install Helm

Helm is an open source package manager for Kubernetes, used to install applications.

choco install kubernetes-helm

(run this on a ‘DOS’ window as Administrator)

I'll use Helm to install the package I want to run. I work with databases, and I'll install one for this demo. Even if Kubernetes was initially designed to scale stateless applications, it has evolved with stateful sets and persistent storage. However, it may not be the right platform for a monolithic database. So, I'll run YugabyteDB, an open-source cloud-native distributed database, which makes more sense in an environment that can scale by adding pods, and where replication maintains the availability when a node goes down, or is added.

Testing with a database: YugabyteDB

The installation on a local cluster is documented:
https://docs.yugabyte.com/latest/quick-start/create-local-cluster/kubernetes/

This creates a cluster with no replication (replication factor 1) which is ok for a local lab on a laptop.

I'll do something a bit different, trying RF3 (the replication factor should be an odd number as there's a quorum election for the leader). This means more pods, and I'll limit the memory and CPU so that it can run on any laptop. I'm doing this on my Surface Pro, which has 2 cores, 4 Intel threads, and 16GB RAM.

First add the Yugabyte Helm charts repository

helm repo add yugabytedb https://charts.yugabyte.com

Here is the current version:


C:\Users\Franck> helm repo update
C:\Users\Franck> helm search repo yugabytedb/yugabyte

NAME                    CHART VERSION   APP VERSION     DESCRIPTION
yugabytedb/yugabyte     2.5.3           2.5.3.1-b10     YugabyteDB is the high-performance distributed ...

You can download the chart if you want to have a look at it (helm repo update & helm fetch yugabytedb/yugabyte)

Start

Now I can start the Minikube node:

minikube start --driver=virtualbox --cpus=4 --memory 8192 --disk-size 30g 

This creates and starts a VirtualBox VM. It seems that whatever --cpus value I pass, it creates a 2 vCPU VM, which is not sufficient for an RF3 cluster (I need 3 yb-masters and at least 3 yb-tservers).

I'll update the settings with VBoxManage to set 4 vCPUs (the maximum number of threads on my Surface Pro), capping them at 80% to leave some capacity for the host OS.

minikube stop

"C:\Program Files\Oracle\VirtualBox\VBoxManage" modifyvm minikube --cpus 4 --cpuexecutioncap 80

minikube start

Of course you can do that with the VirtualBox GUI.

Create the distributed DB

I create a namespace for my Yugabyte universe:

kubectl create namespace franck-yb

Checking them:

C:\Users\Franck> kubectl get namespaces

NAME                   STATUS   AGE
default                Active   7m54s
franck-yb              Active   24s
kube-node-lease        Active   7m55s
kube-public            Active   7m55s
kube-system            Active   7m55s
kubernetes-dashboard   Active   9s

All that is also visible from the web console that is started with minikube dashboard, as this opens a localhost port to display the Kubernetes dashboard.

I can start a YugabyteDB universe on a Kubernetes cluster with a simple helm install yugabyte yugabytedb/yugabyte --namespace franck-yb but the defaults are too large for my laptop:

  • 2 vCPU per pod. I’ll set 0.5 vCPU to run on my 4 vCPU VM.
  • 4GiB per Tserver and 2GiB per master. I'll set 1GiB. (this will set --memory_limit_hard_bytes to 85% with --default_memory_limit_to_ram_ratio=0.85)

Here it is with my settings:

helm install yugabyte yugabytedb/yugabyte --namespace franck-yb --set resource.master.requests.cpu=0.5,resource.master.requests.memory=1Gi,storage.master.count=1,storage.master.size=2Gi,resource.tserver.requests.cpu=0.5,resource.tserver.requests.memory=1Gi,storage.tserver.count=1,storage.tserver.size=2Gi,replicas.master=3,replicas.tserver=3,storage.ephemeral=true

I've defined storage.ephemeral=true here and that's probably not what you want for a database in general. But, first, this is a lab on my laptop, running in a VirtualBox VM that I can snapshot. And I use a database that is replicated (replication factor 3), so one node out of 3 can be lost, storage included, and a new node can be added later, with no data loss, as the replicas will be populated from the other nodes.

If you want no replication, fewer servers, but persistent storage, just change the last 3 settings to replicas.master=1,replicas.tserver=1,storage.ephemeral=false

With low resources this may take a few seconds:

C:\Users\Franck> kubectl --namespace franck-yb get pods

NAME           READY   STATUS              RESTARTS   AGE
yb-master-0    0/1     ContainerCreating   0          1m1s
yb-master-1    0/1     ContainerCreating   0          1m1s
yb-master-2    0/1     ContainerCreating   0          1m
yb-tserver-0   0/1     ContainerCreating   0          1m1s
yb-tserver-1   0/1     ContainerCreating   0          1m1s
yb-tserver-2   0/1     ContainerCreating   0          1m1s

The stateful sets take care of the order (masters are started before tservers) and the network addresses (which must be known before they are created).

You can ssh to the VM to check what is running:

minikube ssh
stty columns 120

(The tty columns were not set correctly.) But everything is also accessible from the Kubernetes web console with `minikube dashboard` and choosing the right namespace.

When you want to clean up all this you can:

helm delete yugabyte -n franck-yb
minikube delete

The last command even deletes the VirtualBox VM.

But before cleaning it up I check that I can connect to my distributed database:

kubectl exec --namespace franck-yb -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.franck-yb

This runs the ysqlsh command line for the YugabyteDB SQL API through a kubectl container exec.

But, as this SQL API is fully compatible with PostgreSQL I can just forward the YSQL port from one of the table servers:

kubectl --namespace franck-yb port-forward svc/yb-tservers 5433:5433

With this I can connect with psql, DBeaver, or any PostgreSQL client as YugabyteDB is fully postgres-compatible:

C:\Users\fpa> psql -h localhost -p 5433 -U yugabyte

psql (12.6, server 11.2-YB-2.5.3.1-b0)
Type "help" for help.

yugabyte=# create table demo ( k int primary key, v int ) split into 3 tablets;
CREATE TABLE
yugabyte=# insert into demo select generate_series,42 from generate_series(1,1000);
INSERT 0 1000
yugabyte=# select count(*) from demo;
 count
------------
  1000
(1 row)

You see the version: this YugabyteDB version 2.5.3 is compatible with PostgreSQL 11.2, and I have created a demo table sharded over 3 tablets, with 1000 rows.

I can check the tablets from the web console, exposed by the yb-master-ui service. Here are all exposed services:

C:\Users\Franck> kubectl --namespace franck-yb get services

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
yb-master-ui         LoadBalancer   10.105.151.9     <pending>     7000:31447/TCP                                                               7m42s
yb-masters           ClusterIP      None             <none>        7000/TCP,7100/TCP                                                            7m42s
yb-tserver-service   LoadBalancer   10.100.217.206   <pending>     6379:31842/TCP,9042:32404/TCP,5433:31626/TCP                                 7m42s
yb-tservers          ClusterIP      None             <none>        9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP   7m42s

There’s no external IP in Minikube but it is easy to tunnel the UI (port 7000 of the master):

kubectl --namespace franck-yb port-forward svc/yb-master-ui 7000:7000

Then I can access the YugabyteDB GUI on http://localhost:7000 and this is where the following screenshots come from.

scale-down / scale-up

The big advantage of Kubernetes is that a node can be killed and another created, scaling the cluster down and up. That’s possible with a stateless application, but also with a database, when it is a distributed, shared-nothing one. And it stays available thanks to a replication factor higher than one.

All 3 nodes are up:

My table is hash-sharded into 3 tablets, with one leader on each node and followers on 2 other nodes:

I scale down the tablet server stateful-set to 2 nodes:

C:\Users\Franck> kubectl scale -n franck-yb statefulset yb-tserver --replicas=2
statefulset.apps/yb-tserver scaled

The yb-tserver-2 is considered temporarily unavailable, still with 3 tablets but no leaders:

The follower that was in yb-tserver-0 has been elected as the new leader for this tablet:

At that point, my demo table is still fully available. And each of the remaining nodes has a full copy of the data.

After a minute, the node is definitively considered dead:

Now I scale up to 3 nodes again:

C:\Users\Franck> kubectl scale -n franck-yb statefulset yb-tserver --replicas=3
statefulset.apps/yb-tserver scaled

The node is detected and rebalancing starts:

The tablet leaders distribution is quickly re-balanced to the 3 pods:

With 1000 rows this is very fast of course.

As I defined my minikube VM with only 4 vCPUs, I cannot scale up the yb-tserver set to one additional pod. But, and please remember this is a lab, I can kill one of the yb-masters to leave room for one additional pod:

kubectl scale -n franck-yb statefulset yb-master --replicas=2
kubectl scale -n franck-yb statefulset yb-tserver --replicas=4

The failure of one master is detected, but thanks to RF3 the system is still available:

The new node has taken over some of the followers:

There is no need to elect a leader in this new node:

Of course, if I create a new table it will be sharded with leaders on all nodes:

yugabyte=# create table demo2 as select * from demo;
SELECT 1000


There are 32 tablets for this table because I didn’t specify the SPLIT INTO clause and the default is 8 per server (--ysql_num_shards_per_tserver=8) here.

ephemeral storage?

I used non-persistent storage for two reasons. First, Minikube has some limitations when compared with a real Kubernetes cluster, and I’m not sure how I can run a multi-node RF3 cluster with persistent volumes and easily scale it up and down. But the main reason is that this is a lab, running on VirtualBox, and I can save the state at this VBox level.

Here is how I stop the VirtualBox VM keeping the disk + memory state, and start it up again:

C:\Users\fpa> "C:\Program Files\Oracle\VirtualBox\VBoxManage" controlvm minikube savestate
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

C:\Users\fpa> "C:\Program Files\Oracle\VirtualBox\VBoxManage" startvm minikube --type headless
Waiting for VM "minikube" to power on...
VM "minikube" has been successfully started.

No data was lost here; my demo table is still there because, from the Kubernetes perspective, the pods were not stopped, just paused by the hypervisor.

In summary: you can play, on a 2-core laptop, with scaling a distributed database up and down, running its nodes on Kubernetes pods. Feel free to follow me and discuss this on Twitter. Databases in microservices and stateless containers are a hot topic today.

During the week-end, I played with #kubernetes on my laptop and here is a blog post:https://t.co/SIz5tUiKeM – Not a "Hello World" example. A real RDBMS scaling on containers #minikube #k8s @virtualbox @Windows @Yugabyte

— Franck Pachot (@FranckPachot) April 12, 2021

Cet article K8s on Windows/VirtualBox est apparu en premier sur Blog dbi services.

SQL Server connectivity issue – troubleshoot TLS configuration

Thu, 2021-04-08 11:20

In this blog post, I will share a case of troubleshooting a connectivity issue with SQL Server.

The problem occurred in a migration context. The SQL Server databases (in versions 2008 to 2014) from multiple applications were consolidated on a new server with SQL Server 2019. Application servers were also migrated to new VMs.
The new servers are hosted on a brand new infrastructure managed by a third-party provider, about which my customer has very little knowledge.

Problem

Before I actually connected and had a look at the servers, the only information I had, as is often the case, was:

The application server can’t connect to the new SQL Server.

My first thought when it comes to a client connection problem is firewall blocking, a network configuration issue on the MSSQL server, or even a simple authentication problem.

So, in a screen share session, I asked my customer (who is not a DBA) to show me exactly what he was doing so I could understand the issue.

What he did was to open the ODBC Data Sources tool on the application server to try a connection to the SQL Server instance on the remote MSSQL server.
Several attempts prompted different errors, but it mostly looked like this:

The keywords are “ConnectionOpen (SECDoClientHandshake())” and “SSL Security Error”.

Troubleshooting

So the first thing I did was to check the connectivity between both servers on the TCP port the SQL Server instance is listening on.
After I had excluded any firewall or SQL Server configuration issue, I used my best Google-fu and started to check the TLS configuration.
So I went to the Windows Registry to look for any keys under

HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\

Here is what I found on both the application and the SQL server:

These keys do not exist by default on Windows. When created as shown in the screenshot they disable TLS 1.0 and TLS 1.1 protocols and implicitly force the use of TLS 1.2.
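If you want to reproduce these checks quickly, here is a minimal PowerShell sketch (the server name MSSQLSRV01 and port 1433 are placeholders, adjust them to your environment):

# Check TCP connectivity from the application server to the SQL Server port
Test-NetConnection -ComputerName MSSQLSRV01 -Port 1433

# List the SCHANNEL protocol keys and their values on each server
Get-ChildItem -Recurse 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\'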

So TLS 1.2 needs to be used. SQL Server 2019 of course supports TLS 1.2 so the issue is certainly client-side.

The problem here is the use of the SQL Server ODBC driver version 10.00.14393, which does not support TLS 1.2.

To help with troubleshooting, you can use this kind of simple query to get information about the current connections on a SQL Server instance. I’ve used it to compare the results of the client_interface_name column on different servers.

SELECT   
    c.session_id, c.net_transport, c.encrypt_option,   
    c.auth_scheme, s.host_name, s.program_name,   
    s.client_interface_name, s.login_name, s.nt_domain,   
    s.nt_user_name, s.original_login_name, c.connect_time,   
    s.login_time   
FROM sys.dm_exec_connections AS c  
	JOIN sys.dm_exec_sessions AS s  
		ON c.session_id = s.session_id;


Also, the Microsoft SQL Server Tiger Team published a PowerShell script to check if driver updates are required to use TLS 1.2.
It has not been updated lately so I’m not sure if it’s still useful, but it’s a good starting point if you want to automate the check of your SQL client versions.

Resolution

Installing and using the latest ODBC driver for SQL Server on the application server allows for a TLS 1.2 connection to SQL Server.
I hope this can help you troubleshoot this kind of connectivity error.

Cet article SQL Server connectivity issue – troubleshoot TLS configuration est apparu en premier sur Blog dbi services.

temBoard on SLES15 – 2 – The agent

Wed, 2021-04-07 08:16

In the last post we’ve installed the temBoard repository and the UI. As this alone is not so much fun, we’ll be installing the temBoard agent on another SLES 15 machine in this post. We’ll end up with one PostgreSQL instance registered in the temBoard repository and constantly monitored. While the repository and the UI required Python 2.7, this is no longer required for the temBoard agent. Let’s go.

What I’ve already prepared, is the PostgreSQL instance we want the agent to monitor:

postgres@localhost:/u02/pgdata/13/PG1/ [PG1] psql -l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

As pip3 is already available we can go straight ahead and install the temBoard agent:

postgres@localhost:/u02/pgdata/13/PG1/ [PG1] sudo sudo pip3 install temboard-agent
Collecting temboard-agent
  Downloading https://files.pythonhosted.org/packages/58/6e/3d5c772587e48b17cc6fa5a1569daab0d879845828c792f16ecec49564ae/temboard_agent-7.6-py2.py3-none-any.whl (115kB)
    100% |████████████████████████████████| 122kB 1.5MB/s 
pygobject 3.34.0 requires pycairo>=1.11.1, which is not installed.
Installing collected packages: temboard-agent
Successfully installed temboard-agent-7.6
You are using pip version 10.0.1, however version 21.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

First issue, but easy to fix:

postgres@localhost:/u02/pgdata/13/PG1/ [PG1] sudo zypper in cairo-devel
postgres@localhost:/u02/pgdata/13/PG1/ [PG1] sudo pip3 install pycairo
Collecting pycairo
  Using cached pycairo-1.20.0.tar.gz (344 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Building wheels for collected packages: pycairo
  Building wheel for pycairo (PEP 517) ... done
  Created wheel for pycairo: filename=pycairo-1.20.0-cp36-cp36m-linux_x86_64.whl size=246164 sha256=e25a94a3a0801e25db8eee9b1080d793b7a6dd313b12b011e56888509e411362
  Stored in directory: /root/.cache/pip/wheels/09/91/ec/f970f7c0cd3fba778c7150409181074fdfc2af42148486561d
Successfully built pycairo
Installing collected packages: pycairo
Successfully installed pycairo-1.20.0

Once again:

postgres@localhost:/u02/pgdata/13/PG1/ [PG1] sudo sudo pip3 install temboard-agent
Requirement already satisfied: temboard-agent in /usr/lib/python3.6/site-packages (7.6)
postgres@localhost:/u02/pgdata/13/PG1/ [PG1] which temboard-agent
/usr/bin/temboard-agent
postgres@localhost:/u02/pgdata/13/PG1/ [PG1] temboard-agent --version
Traceback (most recent call last):
  File "/usr/bin/temboard-agent", line 7, in <module>
    from temboardagent.scripts.agent import main
  File "/usr/lib/python3.6/site-packages/temboardagent/scripts/agent.py", line 12, in <module>
    from ..cli import Application
  File "/usr/lib/python3.6/site-packages/temboardagent/cli.py", line 5, in <module>
    from .postgres import Postgres
  File "/usr/lib/python3.6/site-packages/temboardagent/postgres.py", line 8, in <module>
    import psycopg2.extensions
ModuleNotFoundError: No module named 'psycopg2'

For connecting to PostgreSQL psycopg2 must be there as well:

postgres@localhost:/u02/pgdata/13/PG1/ [PG1] sudo pip3 install psycopg2-binary
Collecting psycopg2-binary
  Downloading psycopg2_binary-2.8.6-cp36-cp36m-manylinux1_x86_64.whl (3.0 MB)
     |████████████████████████████████| 3.0 MB 1.3 MB/s 
Installing collected packages: psycopg2-binary
Successfully installed psycopg2-binary-2.8.6
postgres@localhost:/u02/pgdata/13/PG1/ [PG1] temboard-agent --version
 INFO: Starting temboard-agent 7.6.
7.6

The agent is ready, let’s proceed with the configuration. As with the UI in the last post, there is a configuration script:

postgres@localhost:/u02/pgdata/13/PG1/ [PG1] ls -l /usr/share/temboard-agent/
total 24
-rwxr-xr-x 1 root root 7318 Apr  1 11:32 auto_configure.sh
-rwxr-xr-x 1 root root  682 Apr  1 11:32 purge.sh
drwxr-xr-x 1 root root  276 Apr  1 11:32 quickstart
-rwxr-xr-x 1 root root  674 Apr  1 11:32 restart-all.sh
-rw-r--r-- 1 root root 2843 Apr  1 11:32 temboard-agent.conf
-rw-r--r-- 1 root root  176 Apr  1 11:32 temboard-agent.logrotate

Again, make sure that psql can be executed and all the environment variables match your environment:

postgres@localhost:/u02/pgdata/13/PG1/ [PG1] sudo su -
localhost:~ $ export ETCDIR=/u01/app/postgres/local/dmk/etc/
localhost:~ $ export VARDIR=/u01/app/postgres/local/dmk/bin
localhost:~ $ export LOGDIR=/u01/app/postgres/local/dmk/log
localhost:~ $ export SYSUSER=postgres
localhost:~ $ export PGHOST=/tmp
localhost:~ $ export PATH=/u01/app/postgres/product/13/db_2/bin:$PATH
localhost:~ $ sudo -Eu postgres psql 
could not change directory to "/root": Permission denied
psql (13.2)
Type "help" for help.

postgres=# 

Once you’ve that ready run the configuration:

localhost:~ $ /usr/share/temboard-agent/auto_configure.sh
FQDN is not properly configured. Set agent hostname with TEMBOARD_HOSTNAME env var..
Failure. See /var/log/temboard-agent-auto-configure.log for details.

Ok, this error message is clear:

localhost:~ $ export TEMBOARD_HOSTNAME=sles15tbagent.it.dbi-services.com
localhost:~ $ /usr/share/temboard-agent/auto_configure.sh
Using hostname sles15tbagent.it.dbi-services.com.
Configuring for PostgreSQL user postgres.
Configuring for cluster on port 5432.
Configuring for cluster at /u02/pgdata/13/PG1.
Configuring temboard-agent in /u01/app/postgres/local/dmk/etc//13/pg5432/temboard-agent.conf .
Saving auto-configuration in /u01/app/postgres/local/dmk/etc//13/pg5432/temboard-agent.conf.d/auto.conf
Configuring temboard-agent to run on port 2345.
Enabling systemd unit temboard-agent@13-pg5432.service.

Success. You can now start temboard-agent using:

    systemctl start temboard-agent@13-pg5432.service

For registration, use secret key deb91ab1322007536e2e8284b26bdb5c .
See documentation for detailed instructions.

As we had trouble with the systemd unit file in the last post, let’s have a look at it:

localhost:~ $ systemctl status temboard-agent@13-pg5432.service
● temboard-agent@13-pg5432.service - PostgreSQL Remote Control Agent 13/pg5432
   Loaded: loaded (/usr/lib/systemd/system/temboard-agent@.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
localhost:~ $ cat /usr/lib/systemd/system/temboard-agent@.service
[Unit]
Description=PostgreSQL Remote Control Agent %I
After=network.target postgresql@%i.service
AssertPathExists=/etc/temboard-agent/%I/temboard-agent.conf

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/bin/env SYSTEMD=1 temboard-agent -c /etc/temboard-agent/%I/temboard-agent.conf

[Install]
WantedBy=multi-user.target

This again points to the wrong configuration file; ours is here:

localhost:~ $ cat /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf
#
#   T E M B O A R D   A G E N T   C O N F I G U R A T I O N
#
# This file contains all possible settings of temBoard agent, with defaults for
# quickstart. temBoard agent overrides values of this file with .conf files
# found in directory named after this filename, with .d suffix. e.g.
# temboard-agent.conf.d/auto.conf is likely to override some values of this
# file.
#

[temboard]
# Bind port.
port = 2345
# Bind address.
address = 192.168.22.211
# User & password file.
users = users
# Plugins in use.
plugins = ["monitoring", "dashboard", "pgconf", "administration", "activity", "maintenance", "statements"]
# SSL: certificat file path (.pem)
ssl_cert_file = /u01/app/postgres/local/dmk/etc/13/pg5432/server.pem
# SSL: private key file path (.key)
ssl_key_file = /u01/app/postgres/local/dmk/etc/13/pg5432/server.key
# Hostname must be an unique and valid FQDN : e.g. db1.mydomain.foo
# If you leave this empty, then the system wide hostname will be used
# Note : `localhost` is not a correct value
hostname = sles15tbagent
# Key
key = SECRETKEYTOBECHANGED

[postgresql]
# Unix socket path.
host = /tmp
# PG port number.
port = 5432
# Super-user name.
user = postgres
# Super-user password.
# password =
# Default database.
dbname = postgres
# Instance name.
instance = main

[logging]
# Available methods for logging: stderr, syslog or file
method = stderr
# Syslog facility.
# facility = local0
# Log destination, should be /dev/log for syslog on Linux system.
# When using file logging method, this is referencing the log file path.
# destination = /var/log/temboard-agent/temboard-agent.log
# Log level, can be: DEBUG, INFO, WARNING, ERROR or CRITICAL.
level = INFO

### PLUGINS ###
[dashboard]
# Dashboard plugin part
# Interval, in second, between each run of the process collecting
# data used to render the dashboard. Default: 2
# scheduler_interval = 2
# Number of record to keep. Default: 150
# history_length = 150

[monitoring]
# Monitoring plugin part.
# DB name list (comma separator) to supervise. * for all.
dbnames = *
# List of probes to run, comma separator, * for all.
# Available probes: bgwriter,blocks,btree_bloat,cpu,db_size,filesystems_size,heap_bloat,loadavg,locks,memory,process,replication_connection,replication_lag,sessions,tblspc_size,temp_files_size_delta,wal_files,xacts
# when monitoring a standby server, 'wal_file' probe shouldn't be loaded.
probes = *
# Interval, in second, between each run of the process executing
# the probes. Default: 60
# scheduler_interval = 60

[administration]
# External command used for start/stop PostgreSQL.
# This commands actually works on Debian jessie.
pg_ctl = '/usr/lib/postgresql/9.4/bin/pg_ctl %s -D /var/lib/postgresql/9.4'

[statements]
# Statements plugin part.
# DB name hosting pg_stat_statements view (the one where the extension has
# been created with "CREATE EXTENSION")
dbname = postgres

Once more, adjust the systemd service file (I am creating a static one here):

localhost:~ $ cat /usr/lib/systemd/system/temboard-agent@.service
[Unit]
Description=PostgreSQL Remote Control Agent %I
After=network.target postgresql@%i.service
AssertPathExists=/u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/bin/env SYSTEMD=1 temboard-agent -c /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf

[Install]
WantedBy=multi-user.target
localhost:~ $ mv /usr/lib/systemd/system/temboard-agent@.service /usr/lib/systemd/system/temboard-agent.service
localhost:~ $ systemctl daemon-reload
localhost:~ $ systemctl status temboard-agent.service
● temboard-agent.service - PostgreSQL Remote Control Agent 
   Loaded: loaded (/usr/lib/systemd/system/temboard-agent.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Before starting it with systemd, let’s try to start it as the postgres user. But before we do that, we need to create a user which will be used for authentication later on:

localhost:~ $ su - postgres
postgres@localhost:/home/postgres/ [pg132] temboard-agent-adduser -c /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf
 INFO: Starting temboard-agent-adduser 7.6.
ERROR: Invalid temboard_ssl_key_file from config: /etc/ssl/private/temboard-agent-13-pg5432.key: File not found...
CRITI: Failed to load configuration.
postgres@localhost:/home/postgres/ [pg132] sudo ls -l /etc/ssl/private/temboard-agent-13-pg5432.key
-rw------- 1 root root 1704 Apr  1 11:46 /etc/ssl/private/temboard-agent-13-pg5432.key

Again the issue with the key file, so:

postgres@localhost:/home/postgres/ [pg132] sudo mv /etc/ssl/private/temboard-agent-13-pg5432.key /u01/app/postgres/local/dmk/etc/
postgres@localhost:/home/postgres/ [pg132] sudo chown postgres:postgres /u01/app/postgres/local/dmk/etc/temboard-agent-13-pg5432.key
postgres@localhost:/home/postgres/ [pg132] cat /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf | grep key
# SSL: private key file path (.key)
ssl_key_file = /u01/app/postgres/local/dmk/etc/temboard-agent-13-pg5432.key
key = SECRETKEYTOBECHANGED

Once more:

postgres@localhost:/home/postgres/ [pg132] temboard-agent-adduser -c /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf
 INFO: Starting temboard-agent-adduser 7.6.
ERROR: Invalid temboard_ssl_key_file from config: /etc/ssl/private/temboard-agent-13-pg5432.key: File not found...
CRITI: Failed to load configuration.

It seems the location of the key file is somehow hard-coded, so work around it like this:

postgres@localhost:/home/postgres/ [pg132] sudo ln -s /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.key /etc/ssl/private/temboard-agent-13-pg5432.key
postgres@localhost:/home/postgres/ [pg132] sudo chmod 755 /etc/ssl/private
postgres@localhost:/home/postgres/ [pg132] ls /etc/ssl/private/temboard-agent-13-pg5432.key
/etc/ssl/private/temboard-agent-13-pg5432.key
postgres@localhost:/home/postgres/ [pg132] temboard-agent-adduser -c /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf
 INFO: Starting temboard-agent-adduser 7.6.
Username: admin
Password: 
Retype password: 
Done.

The user/password combination will be written to the users file:

postgres@localhost:/home/postgres/ [pg132] cat /u01/app/postgres/local/dmk/etc/13/pg5432/users 
admin:xiet7KLumux50Q2dmV/FBue2zqtexElavkVAKnEYN/rw2YLIn51TQsXb3u8FPm4wiuHUTtEjjvBIrtBPAgRMsA==

That should be all and we should be able to start the agent:

postgres@localhost:/home/postgres/ [pg132] temboard-agent --config /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf 
 INFO: Starting temboard-agent 7.6.
 INFO: Found plugin monitoring = temboardagent.plugins.monitoring:MonitoringPlugin.
 INFO: Found plugin dashboard = temboardagent.plugins.dashboard:DashboardPlugin.
 INFO: Found plugin pgconf = temboardagent.plugins.pgconf:PgConfPlugin.
...
 INFO: Starting .
 INFO: Starting .
 INFO: Starting .

Looks fine, let’s try with systemd:

sles15tbagent:/home/postgres $ systemctl start temboard-agent.service
sles15tbagent:/home/postgres $ systemctl status temboard-agent.service
● temboard-agent.service - PostgreSQL Remote Control Agent 
   Loaded: loaded (/usr/lib/systemd/system/temboard-agent.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-04-03 14:42:21 CEST; 4s ago
 Main PID: 9299 (temboard-agent)
    Tasks: 5
   CGroup: /system.slice/temboard-agent.service
           ├─9299 temboard-agent: 13/pg5432: main process
           ├─9301 temboard-agent: 13/pg5432: worker pool
           └─9302 temboard-agent: 13/pg5432: scheduler

sles15tbagent:/home/postgres $ ps -ef | grep temboard
postgres  9299     1  0 14:42 ?        00:00:00 temboard-agent: 13/pg5432: main process
postgres  9301  9299  0 14:42 ?        00:00:00 temboard-agent: 13/pg5432: worker pool
postgres  9302  9299  0 14:42 ?        00:00:00 temboard-agent: 13/pg5432: scheduler
root      9804  9249  0 14:44 pts/1    00:00:00 grep --color=auto temboard

The final step is to register the agent to the WebUI:

postgres@sles15tbagent:/home/postgres/ [PG1] temboard-agent-register -c /u01/app/postgres/local/dmk/etc/13/pg5432/temboard-agent.conf --host $(hostname --fqdn) --port 2345 --groups default https://192.168.22.210:8888
 INFO: Starting temboard-agent-register 7.6.
 INFO: Getting system & PostgreSQL informations from the agent (https://sles15tbagent:2345/discover) ...
Login at https://192.168.22.210:8888 ...
 Username: admin
 Password: 
Registering instance/agent to https://192.168.22.210:8888 ...
Done.

Heading over to the WebUI confirms that the agent is registered and the PostgreSQL instance is monitored:

So, even if there are no pre-built packages for your distribution, it should be quite easy to get temBoard up and running.

Cet article temBoard on SLES15 – 2 – The agent est apparu en premier sur Blog dbi services.

SQL Server – Contained Database

Thu, 2021-04-01 08:45
Introduction

You may already have encountered the case where an application offers the possibility to create and maintain SQL database users and, consequently, the associated SQL Server instance logins. That’s of course not really what a DBA likes: losing control of the logins that are created on the instance he is responsible for. But sometimes you have to deal with such a situation, especially when a customer asks you to enable this.
It then gets more complicated when the database of that application is part of an Always On availability group. You can imagine what happens in case of a failover: all the logins created within the application will not be present on the secondary replicas, and all connections to the new primary replica will fail.

We had a sync process in place to make sure that the logins were available on all replicas. That’s a good solution, but the users could also change their password through the application, in which case the process to sync the logins becomes more complicated. Thus, before starting a long development process, I wanted to experiment and test the containment option, which presumably could be a good fit in such a case.

In fact, when experimenting with how the containment option works, it turns out that the creation of the database users must be done differently, and therefore if your application is not aware of this option it won’t solve your problem.

Set the Containment option for your database

The first thing you have to do is to configure your SQL Server instance to enable contained database authentication.

For that you have to change the “contained database authentication” configuration option.


--###Scripts:
---------------------------------------------------------------------
execute sp_configure 'contained database authentication',1
Reconfigure
---------------------------------------------------------------------

Caution: if you are in an always on availability group situation, of course do not forget to do the same on all the replicas otherwise you will face some problems after a primary switch.

Then you have to change your database to enable this option.

--###Script:
---------------------------------------------------------------------
USE [master]
GO
ALTER DATABASE [DB_Contained] SET CONTAINMENT = PARTIAL WITH NO_WAIT
GO
---------------------------------------------------------------------

You can do the same using your database properties GUI in SSMS.

Create the user on your database level

Now that the option is enabled, if you want to create a new database user, you will see that a new option appears: "SQL User with password".

You can create your database user using the following script:

--###Script:
-----------------------------------------------------------
USE [DB_Contained]
GO
CREATE USER [user_with_pwd] WITH PASSWORD=N'**********'
GO
-----------------------------------------------------------

You can also create a domain account:

--###Script:
-----------------------------------------------------------
USE [DB_Contained]
GO
CREATE USER [YourDomain\YourAccount] WITHOUT LOGIN
GO
-----------------------------------------------------------

Be aware that in this case the database user will be shown with an associated login name, but that login will not be present at the instance level and you will not see it listed in the server logins.
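To double-check which users are contained ones, you can query the authentication type of each database principal, for example:

--###Script:
-----------------------------------------------------------
USE [DB_Contained]
GO
SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
GO
-----------------------------------------------------------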

Connect to your database

Remember that with a contained database the login is done at the database level, therefore you must specify the catalog (the database) to which you want to establish the connection, otherwise the connection will fail.
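For example, with sqlcmd (server name and password below are placeholders):

sqlcmd -S YourServer -U user_with_pwd -P "YourPassword" -d DB_Contained -Q "SELECT DB_NAME() AS current_db, ORIGINAL_LOGIN() AS login_name;"

Without the -d DB_Contained option, the authentication is attempted at the instance level and fails.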

Conclusion

This could be a nice solution to have a portable database and make it independent of the instance it is located on, without having to worry about copying the logins when it is moved to another server.
But of course it must fit your case.

Cet article SQL Server – Contained Database est apparu en premier sur Blog dbi services.

temBoard on SLES15 – 1 – The WebUI

Thu, 2021-04-01 03:29

During our last SwissPUG online meeting there was a presentation about monitoring PostgreSQL with temBoard. A question that came up afterwards was: how to install that on SLES 15, as there are only packages for Debian and CentOS/RHEL? As temBoard is written in Python, you can also install it using pip. That requires a bit more work compared to the package method, but it is perfectly fine. The only downside (but this is true for the packaged version as well) is that you need Python 2.7 for the WebUI, and this is already end of life. So, let’s give it a try on a fresh SLES 15 minimal installation.

As mentioned just before we’ll start with a minimal installation, as this is intended to be a step by step guide:

sles15webui:~ $ cat /etc/os-release 
NAME="SLES"
VERSION="15-SP2"
VERSION_ID="15.2"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP2"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15:sp2"

The minimal installation of SLES 15 comes with Python 3.6.13 and this is not what we need:

sles15webui:~ $ python3 --version
Python 3.6.13

For getting Python 2.7 onto the system, the corresponding repository needs to be enabled:

sles15webui:~ $ SUSEConnect --product sle-module-python2/15.2/x86_64
Registering system to SUSE Customer Center

Updating system details on https://scc.suse.com ...

Activating sle-module-python2 15.2 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system

Having that ready, we can install the requirements for temBoard:

sles15webui:~ $ zypper in python2 python2-pip
Refreshing service 'Basesystem_Module_15_SP2_x86_64'.
Refreshing service 'Python_2_Module_15_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_SP2_x86_64'.
Refreshing service 'Server_Applications_Module_15_SP2_x86_64'.
...
sles15webui:~ $ python2 --version
Python 2.7.18

temBoard requires a PostgreSQL instance up and running as a repository database, and SLES 15 comes with PostgreSQL 13 already packaged:

sles15webui:~ $ zypper search postgres | grep 13
  | postgresql13                 | Basic Clients and Utilities for PostgreSQL                              | srcpackage
  | postgresql13                 | Basic Clients and Utilities for PostgreSQL                              | package
  | postgresql13-contrib         | Contributed Extensions and Additions to PostgreSQL                      | package
  | postgresql13-devel           | PostgreSQL client development header files and libraries                | package
  | postgresql13-docs            | HTML Documentation for PostgreSQL                                       | package
  | postgresql13-plperl          | The PL/Tcl, PL/Perl, and  PL/Python procedural languages for PostgreSQL | package
  | postgresql13-plpython        | The PL/Python Procedural Languages for PostgreSQL                       | package
  | postgresql13-pltcl           | PL/Tcl Procedural Language for PostgreSQL                               | package
  | postgresql13-server          | The Programs Needed to Create and Run a PostgreSQL Server               | package
  | postgresql13-server-devel    | PostgreSQL server development header files and utilities                | package

Either use that to install PostgreSQL and create a cluster, or install PostgreSQL from source code. I’ll skip these steps here and my PostgreSQL instance is already running:

postgres@sles15webui:/home/postgres/ [tb] psql
psql (13.2)
Type "help" for help.

postgres=# select version();
                                      version                                       
------------------------------------------------------------------------------------
 PostgreSQL 13.2 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 7.5.0, 64-bit
(1 row)
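If you prefer the SLES packages for the repository instance, a minimal sketch could look like this (the binary path and data directory are assumptions, adjust them to your system):

sles15webui:~ $ zypper in postgresql13-server postgresql13-contrib
# adjust the binary path to where the package installs PostgreSQL 13 on your system
sles15webui:~ $ sudo -u postgres /usr/lib/postgresql13/bin/initdb -D /u02/pgdata/13/TB
sles15webui:~ $ sudo -u postgres /usr/lib/postgresql13/bin/pg_ctl -D /u02/pgdata/13/TB -l /u02/pgdata/13/TB/startup.log start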

The installation of temBoard is quite simple; all you have to do is this:

postgres@sles15webui:/home/postgres/ [tb] sudo pip2 install temboard psycopg2-binary
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Collecting temboard
...
    Running setup.py install for tornado ... done
  WARNING: The script mako-render is installed in '/usr/local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The script alembic is installed in '/usr/local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The scripts temboard and temboard-migratedb are installed in '/usr/local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed Mako MarkupSafe alembic backports-abc configparser contextlib2 futures importlib-metadata pathlib2 psycopg2-binary python-dateutil python-editor scandir singledispatch sqlalchemy temboard tornado zipp

As you can see from the last lines above, temBoard gets installed into “/usr/local/bin”:

postgres@sles15webui:/home/postgres/ [tb] ls /usr/local/bin/
alembic  mako-render  temboard  temboard-migratedb

The other important directory is "/usr/local/share/temboard/", which contains all the scripts:

postgres@sles15webui:/home/postgres/ [tb] ls /usr/local/share/temboard/
auto_configure.sh  create_repository.sh  purge.sh  quickstart  sql

The “auto_configure.sh” script is the one which does all the work. You can either go with the default or adjust the parameters as you like, e.g.:

postgres@sles15webui:/home/postgres/ [tb] sudo su -
sles15webui:~ $ export ETCDIR=/u01/app/postgres/local/dmk/etc
sles15webui:~ $ export VARDIR=/u01/app/postgres/local/dmk/bin
sles15webui:~ $ export LOGDIR=/u01/app/postgres/local/dmk/log
sles15webui:~ $ export LOGFILE=/u01/app/postgres/local/dmk/log/temboard-auto-configure.log

The auto configuration script needs to connect to your PostgreSQL instance, so make sure that this works before starting:

sles15webui:~ $ export PATH=/u01/app/postgres/product/13/db_2/bin:$PATH
sles15webui:~ $ export PGHOST=/tmp
sles15webui:~ $ sudo -Eu postgres psql
could not change directory to "/root": Permission denied
psql (13.2)
Type "help" for help.

If that works just execute the script:

sles15webui:~ $ /usr/local/share/temboard/auto_configure.sh
Creating system user temBoard.
Configuring temboard in /u01/app/postgres/local/dmk/etc.
Generating self-signed certificate.
Creating Postgres user, database and schema.
Failure. See /u01/app/postgres/local/dmk/log/temboard-auto-configure.log for details.

… and that fails. Checking the log file, the reason is that “temboard-migratedb” is not found:

CREATE DATABASE
./create_repository.sh: line 17: temboard-migratedb: command not found
./create_repository.sh: line 18: temboard-migratedb: command not found

The easy fix is to update the create_repository.sh script:

if ! /usr/local/bin/temboard-migratedb check ; then
    /usr/local/bin/temboard-migratedb upgrade
    psql="psql -aw --set ON_ERROR_STOP=on --pset pager=off"
    if [ -n "${DEV-}" ] ; then
        $psql -f $SQLDIR/dev-fixture.sql
    fi
fi

Running the auto configuration once more, and it succeeds:

sles15webui:~ $ /usr/local/share/temboard/auto_configure.sh
Configuring temboard in /u01/app/postgres/local/dmk/etc.
Creating Postgres user, database and schema.

Success. You can now start temboard using:

    systemctl start temboard

Remember to replace default admin user!!!

A systemd service was created automatically:

sles15webui:~ $ systemctl status temboard
● temboard.service - temBoard Web UI
   Loaded: loaded (/usr/local/lib/systemd/system/temboard.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
sles15webui:~ $ cat /usr/local/lib/systemd/system/temboard.service
[Unit]
Description=temBoard Web UI
After=network.target

[Service]
Type=simple
User=temboard
Group=temboard
ExecStart=/usr/bin/env SYSTEMD=1 temboard -c /etc/temboard/temboard.conf

[Install]
WantedBy=multi-user.target

The issue with that service is that the configuration file does not exist:

sles15webui:~ $ cat /etc/temboard/temboard.conf
cat: /etc/temboard/temboard.conf: No such file or directory

Because we’ve set the environment variables before doing the temBoard installation, our configuration file is here:

sles15webui:~ $ cat /u01/app/postgres/local/dmk/etc/temboard.conf 
# Configuration initiated by /usr/local/share/temboard/auto_configure.sh on Thu Apr  1 09:17:58 CEST 2021
#
# See https://temboard.rtfd.io/ for details on configuration
# possibilities.

[temboard]
ssl_cert_file = /etc/ssl/certs/temboard.pem
ssl_key_file = /etc/ssl/private/temboard.key
cookie_secret = d11bcef48a3f6bc6de09cb3e39b88b99b2fae98b4d3dbfe501019efdcac681044be29eed3907b0cbe12dc64d49923f50f262b33fe1d154cf9b8609f2770edaef
home = /u01/app/postgres/local/dmk/bin

[repository]
host = /tmp
port = 5432
user = temboard
password = 4ebe5cd9715718780702420436cbd268
dbname = temboard

[logging]
method = stderr
level = INFO

[monitoring]
# purge_after = 365

[statements]
# purge_after = 7

Let’s try to start temBoard using this configuration manually as root:

sles15webui:~ $ /usr/bin/env SYSTEMD=1 temboard -c /u01/app/postgres/local/dmk/etc/temboard.conf
 INFO: Starting temboard 7.6.
 INFO: Running on SLES 15-SP2.
 INFO: Using Python 2.7.18 (/usr/bin/python2).
 INFO: Using Psycopg2 2.8.6 (dt dec pq3 ext lo64), Tornado 5.1.1 and SQLAlchemy 1.4.4
Loaded plugin 'activity'.
Loaded plugin 'dashboard'.
Loaded plugin 'monitoring'.
Loaded plugin 'pgconf'.
...

This works and accessing the UI over https://[IP]:8888 works as well:

So, we need to fix the systemd unit file to use the correct temBoard configuration file like this:

sles15webui:~ $ cat /usr/local/lib/systemd/system/temboard.service
[Unit]
Description=temBoard Web UI
After=network.target

[Service]
Type=simple
User=temboard
Group=temboard
ExecStart=/usr/bin/env SYSTEMD=1 temboard -c /u01/app/postgres/local/dmk/etc/temboard.conf

[Install]
WantedBy=multi-user.target
sles15webui:~ $ systemctl daemon-reload

Let’s try to start with systemd:

sles15webui:~ $ systemctl start temboard
sles15webui:~ $ systemctl status temboard
● temboard.service - temBoard Web UI
   Loaded: loaded (/usr/local/lib/systemd/system/temboard.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2021-04-01 09:49:17 CEST; 3s ago
  Process: 28629 ExecStart=/usr/bin/env SYSTEMD=1 temboard -c /u01/app/postgres/local/dmk/etc/temboard.conf (code=exited, status=1/FAILURE)
 Main PID: 28629 (code=exited, status=1/FAILURE)

Apr 01 09:49:16 sles15webui systemd[1]: Started temBoard Web UI.
Apr 01 09:49:17 sles15webui env[28629]:  INFO: Starting temboard 7.6.
Apr 01 09:49:17 sles15webui env[28629]: ERROR: Invalid temboard_ssl_key_file from config: /etc/ssl/private/temboard.key: File not found...
Apr 01 09:49:17 sles15webui env[28629]: CRITI: Failed to load configuration.
Apr 01 09:49:17 sles15webui systemd[1]: temboard.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 09:49:17 sles15webui systemd[1]: temboard.service: Unit entered failed state.
Apr 01 09:49:17 sles15webui systemd[1]: temboard.service: Failed with result 'exit-code'.

Next issue: the key file cannot be found (the service uses the temboard user to start temboard). The file actually is there, but the temboard user does not have access to it:

sles15webui:~ $ ls -la /etc/ssl/private/temboard.key
-rw------- 1 root root 1704 Apr  1 08:29 /etc/ssl/private/temboard.key

As I did not want to open up permissions on the SSL configuration in /etc, I’ve moved the key over:

sles15webui:~ $ mv /etc/ssl/private/temboard.key /u01/app/postgres/local/dmk/etc/
sles15webui:~ $ chown temboard:temboard /u01/app/postgres/local/dmk/etc/temboard.key 

Adjusted the temboard configuration with the new location of the key file:

sles15webui:~ $ cat /u01/app/postgres/local/dmk/etc/temboard.conf | grep key
ssl_key_file = /u01/app/postgres/local/dmk/etc/temboard.key

Started again:

sles15webui:~ $ systemctl start temboard
sles15webui:~ $ systemctl status temboard
● temboard.service - temBoard Web UI
   Loaded: loaded (/usr/local/lib/systemd/system/temboard.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2021-04-01 09:54:50 CEST; 2s ago
  Process: 28792 ExecStart=/usr/bin/env SYSTEMD=1 temboard -c /u01/app/postgres/local/dmk/etc/temboard.conf (code=exited, status=1/FAILURE)
 Main PID: 28792 (code=exited, status=1/FAILURE)

Apr 01 09:54:50 sles15webui env[28792]:     now=datetime.utcnow()
Apr 01 09:54:50 sles15webui env[28792]:   File "/usr/local/lib/python2.7/site-packages/temboardui/toolkit/tasklist/sqlite3_engine.py", line 253, in recover
Apr 01 09:54:50 sles15webui env[28792]:     raise StorageEngineError("Could not recover tasks.")
Apr 01 09:54:50 sles15webui env[28792]: StorageEngineError: Could not recover tasks.
Apr 01 09:54:50 sles15webui env[28792]: temboardui version is 7.6.
Apr 01 09:54:50 sles15webui env[28792]: This is a bug!
Apr 01 09:54:50 sles15webui env[28792]: Please report traceback to https://github.com/dalibo/temboard/issues! Thanks!
Apr 01 09:54:50 sles15webui systemd[1]: temboard.service: Main process exited, code=exited, status=1/FAILURE
Apr 01 09:54:50 sles15webui systemd[1]: temboard.service: Unit entered failed state.
Apr 01 09:54:50 sles15webui systemd[1]: temboard.service: Failed with result 'exit-code'.

… and it fails again. Let’s try to start manually with the temBoard user:

sles15webui:~ $ su - temboard
This account is currently not available.
sles15webui:~ $ usermod -s /bin/bash temboard
sles15webui:~ $ su - temboard
attempt to write a readonly database
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/temboardui/toolkit/tasklist/sqlite3_engine.py", line 244, in recover
    (st_aborted, datetime_to_epoch(now), st_doing)
OperationalError: attempt to write a readonly database
Unhandled error:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/temboardui/toolkit/app.py", line 298, in entrypoint
    retcode = self.main(argv, environ)
  File "/usr/local/lib/python2.7/site-packages/temboardui/__main__.py", line 395, in main
    self.scheduler.apply_config()
  File "/usr/local/lib/python2.7/site-packages/temboardui/__main__.py", line 143, in apply_config
    super(SchedulerService, self).apply_config()
  File "/usr/local/lib/python2.7/site-packages/temboardui/toolkit/taskmanager.py", line 724, in apply_config
    self.scheduler.setup_task_list()
  File "/usr/local/lib/python2.7/site-packages/temboardui/toolkit/taskmanager.py", line 482, in setup_task_list
    self.task_list.recover()
  File "/usr/local/lib/python2.7/site-packages/temboardui/toolkit/taskmanager.py", line 196, in recover
    now=datetime.utcnow()
  File "/usr/local/lib/python2.7/site-packages/temboardui/toolkit/tasklist/sqlite3_engine.py", line 253, in recover
    raise StorageEngineError("Could not recover tasks.")
StorageEngineError: Could not recover tasks.
temboardui version is 7.6.
This is a bug!
Please report traceback to https://github.com/dalibo/temboard/issues! Thanks!

The issue is this: "attempt to write a readonly database". As there is no information about where that database (it seems to be SQLite) is located, let’s use strace to spot the location:

temboard@sles15webui:~> strace -f /usr/bin/env SYSTEMD=1 temboard -c /u01/app/postgres/local/dmk/etc/temboard.conf 

Digging through the strace output, the database is this one: /u01/app/postgres/local/dmk/bin/server_tasks.db. Because we started as root before, this file has the wrong permissions (this is not the fault of temBoard):

sles15webui:~$  ls -la /u01/app/postgres/local/dmk/bin/server_tasks.db
-rw-r--r-- 1 root root 12288 Apr  1 09:45 /u01/app/postgres/local/dmk/bin/server_tasks.db

Let’s fix this and try again:

sles15webui:~ $ chown temboard:temboard /u01/app/postgres/local/dmk/bin/server_tasks.db
sles15webui:~ $ systemctl status temboard
● temboard.service - temBoard Web UI
   Loaded: loaded (/usr/local/lib/systemd/system/temboard.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-04-01 10:15:22 CEST; 3s ago
 Main PID: 1064 (temboard)
    Tasks: 5
   CGroup: /system.slice/temboard.service
           ├─1064 temboard: web
           ├─1071 temboard: worker pool
           └─1072 temboard: scheduler

Apr 01 10:15:22 sles15webui env[1064]: Loaded plugin 'pgconf'.
Apr 01 10:15:22 sles15webui env[1064]: Loaded plugin 'maintenance'.
Apr 01 10:15:22 sles15webui env[1064]: Loaded plugin 'statements'.
Apr 01 10:15:22 sles15webui env[1064]: temBoard database is up-to-date.
Apr 01 10:15:22 sles15webui env[1064]: Starting web.
Apr 01 10:15:22 sles15webui env[1064]: Serving temboardui on https://0.0.0.0:8888
Apr 01 10:15:22 sles15webui env[1064]: Starting scheduler.
Apr 01 10:15:22 sles15webui env[1064]: Starting worker pool.
Apr 01 10:15:23 sles15webui env[1064]: Starting collector scheduler worker.
Apr 01 10:15:23 sles15webui env[1064]: End of collector scheduler worker.

Now we are fine: temBoard is started with the correct user via systemd, and the login (admin/admin) works as well:

This was kind of a brute force method for getting it installed, but if you plan that well, it should not be an issue if you follow these steps:

  • Prepare your own self-signed certificate and make sure the temBoard user can access it (see the openssl sketch below)
  • Prepare your own systemd configuration file
  • Prepare your own temBoard configuration file
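For the first point, something like this can be used to generate the certificate and key in the locations referenced by temboard.conf (the subject is just an example):

sles15webui:~ $ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -subj "/CN=sles15webui" \
      -keyout /u01/app/postgres/local/dmk/etc/temboard.key \
      -out /etc/ssl/certs/temboard.pem
sles15webui:~ $ chown temboard:temboard /u01/app/postgres/local/dmk/etc/temboard.key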

In the next post we’ll install the temBoard agent on another SLES 15 machine, and actually want to monitor a PostgreSQL instance.

Cet article temBoard on SLES15 – 1 – The WebUI est apparu en premier sur Blog dbi services.

Clone a Terabyte Database in Seconds

Thu, 2021-04-01 01:57
Introduction

Have you ever cloned a database using RMAN?
Have you ever gotten the advice not to clone during daytime, to avoid degrading performance on the entire storage or network?
For a proof of concept, we wanted to evaluate a Pure Storage array in terms of feasibility and speed of database cloning by using storage-based snapshot technology.
First, I want to mention that Pure Storage produces storage devices in several sizes, all based on NVMe devices together with a high-bandwidth connection and data compression.
All in all a "fast as hell" storage, but this is not enough. There is additionally a snapshot technology which enables a snapshot copy of a volume in no time, regardless of its size.
We wanted to see if it works as well for cloning Oracle databases.

Starting point

We set up:
• Two virtual machines on two ESX hosts, running RHEL 7.9
• Two sets of disk-groups to store the database files.
• A source database, running on machine 1, Oracle Release 19.10.
• Machine 2 prepared for a target database to be created by cloning

We had to install only very few extras on the virtual machines:
• Install the RPMs required by Pure Storage
• Configure multipathing
• Update UDEV rules

About the Storage

The GUI makes it comfortable to create volumes and assign them to groups and "Protection Groups".

Connect the volumes to the hosts.

In vSphere Client, attach the volumes to the VMs

Configure the volumes on Linux OS

configure multipath

vi /etc/multipath.conf
multipaths {
 multipath { wwid 3624a9370F9C23FA49CFC431C00013CD5  alias  oradbi1-data }
 multipath { wwid 3624a9370F9C23FA49CFC431C00013CD7  alias  oradbi1-fra  }
 multipath { wwid 3624a9370F9C23FA49CFC431C00013CD6  alias oradbi1-redo  }
}

create filesystems on the volumes

mkfs.ext4 /dev/mapper/oradbi2-data
mkfs.ext4 /dev/mapper/3624a9370f9c23fa49cfc431c00013cda
mkfs.ext4 /dev/mapper/3624a9370f9c23fa49cfc431c00013cd9
mkfs.ext4 /dev/mapper/3624a9370f9c23fa49cfc431c00013cd8

enter the volumes in fstab
vi /etc/fstab

/dev/mapper/3624a9370f9c23fa49cfc431c00013cd5 /u02/oradata             ext4  defaults  0 0
/dev/mapper/3624a9370f9c23fa49cfc431c00013cd6 /u03/oradata             ext4  defaults  0 0
/dev/mapper/3624a9370f9c23fa49cfc431c00013cd7 /u90/fast_recovery_area  ext4  defaults  0 0

And finally mount them

mount -a

Now you can clone the database.

On the target server, shut down the database and unmount the volumes:
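Something like this, with the mount points defined in fstab above:

sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit

umount /u02/oradata
umount /u03/oradata
umount /u90/fast_recovery_area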

Create a snapshot

You can either use the GUI or the CLI.
In the web console, navigate to Storage -> Protection Groups.
Select the Protection Group you want to "snapshot".
Select "create" from the overflow menu.
Provide a "speaking" name for your first snapshot.

Next you copy the volume snapshots to their target counterparts:

That’s it !


On the CLI it looks even simpler:

Only two commands required: “purepgroup snap” and “purevol copy”
That’s all !
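To give an idea, the calls look roughly like this (protection group, suffix and volume names below are placeholders, and the exact options may differ depending on your Purity version, so check the purepgroup snap and purevol copy help):

purepgroup snap --suffix clone1 oradbi1-pg
purevol copy --overwrite oradbi1-pg.clone1.oradbi1-data oradbi2-data
purevol copy --overwrite oradbi1-pg.clone1.oradbi1-redo oradbi2-redo
purevol copy --overwrite oradbi1-pg.clone1.oradbi1-fra oradbi2-fra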

You can mount the copied volumes now and start the database on the target side:
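Again nothing special, roughly:

mount -a

sqlplus / as sysdba
SQL> startup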

You have probably noticed that the copy is nothing else than a copy.

That means your target DB still has the same name as the source DB.


Don’t worry! Changing the name and the DBID of a database will be covered next week.
See my blog "Rename your DB clone completely".


Cet article Clone a Terabyte Database in Seconds est apparu en premier sur Blog dbi services.

SQL Server: Create a Shared Storage for your Failover Cluster LAB

Wed, 2021-03-31 06:18

I’m currently working on the migration of a Failover Clustered SSAS instance from Windows Server 2012 to Windows Server 2019.

The context is quite complex and in order to choose the right migration scenario I need to play with it on my LAB environment first.
I usually work with AlwaysOn architectures and it’s not often that I need to set up an FCI with its Cluster Shared Volume.

As a reminder, a Cluster Shared Volume is a shared disk containing an NTFS volume that is made accessible for read and write operations by all nodes within a Windows Server Failover Cluster.

In this blog post, I will go through the steps I used to create an iSCSI Cluster Shared Volume that will be used to install a SQL Server Failover Cluster Instance (of SSAS) on my Hyper-V LAB.

Windows Server vSAN VM

First, I created a new VM on my Hyper-V server (my laptop) with Windows Server 2019. You can get a 180-day free trial from the Evaluation Center.
This is a standard VM I named "vSAN". It will host my iSCSI Disks and Targets.
I could have used my Domain Controller VM too as this is just a LAB, but I prefer to have a dedicated VM.

WIN2012VM1 and WIN2012VM2 are the Windows Server 2012 nodes of my Windows Failover Cluster.

iSCSI Server

On vSAN, we first need to install the iSCSI Target Server feature. You can do it with the Server Manager interface or with PowerShell. I’ll do everything with the GUI for this post.

For information with PowerShell:

Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools
iSCSI Disk

Still in Server Manager, in "File and Storage Services / iSCSI", click on "Create an iSCSI virtual disk".

Choose the location where the new disk will be stored. It could be somewhere in C:. I select my larger S: drive.

Enter a name, choose a disk size. I configure my disk to be Dynamically expanding.

 

iSCSI Target

In the following wizard panel, choose "create" then enter an iSCSI Target name.

 

On the "Access Server" tab, click "Add" and enter the client VMs that will access the iSCSI Target.
You can add them by IP address or by DNS name like I did. Make sure to use the FQDN.

On the next tab, do not enable CHAP.

We are done on the iSCSI Target side. We have a Disk and its Target ready. As you can see, the status is "Not connected" for now.

iSCSI initiators configuration

Now, on each node of my Windows Failover Cluster, I need to configure the iSCSI initiator to use the Target we just created.

On WIN2012VM1 and WIN2012VM2, in Server Manager click on “iSCSI initiator”.

The iSCSI initiator Properties box should appear.
On Target I enter “vSAN” and click “Quick Connect”.

In the Discovery tab, you can see the Port used by iSCSI is 3260. You might need to add rules in your Firewall to allow TCP traffic on this port.
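If so, a rule like this one (the rule name is just an example) opens the port with PowerShell:

New-NetFirewallRule -DisplayName "iSCSI Target TCP 3260" -Direction Inbound -Protocol TCP -LocalPort 3260 -Action Allow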

Click “Auto Configure” on the “Volumes and Devices” tab.

Configure Disk

The iSCSI Target and initiators are now configured.
Looking at the disk on one of the WFC nodes I can see an Offline 10GB disk of bus Type iSCSI.
Let’s initialize it and create a partition. This can be done on one of the nodes only.

Add Disk to Cluster

With the Failover Cluster Manager, I can now add my disk to the Cluster and use it later for my Clustered Instance of SQL Server Engine or SSAS.

That’s it. The new disk is now an Available Storage for my Windows Failover Cluster.

I hope this blog post can help you set up your lab and play with Failover Cluster Instances.

Cet article SQL Server: Create a Shared Storage for your Failover Cluster LAB est apparu en premier sur Blog dbi services.

Oracle Java Virtual Machine included in Oracle Database license ?

Fri, 2021-03-26 06:05
Introduction

A customer of mine asked me a licensing question regarding the usage of Oracle Java (OJVM) in an Oracle database. Basically, this customer has an Oracle Database Standard Edition license and would like to use the Oracle JVM in order to load Java classes used by the PL/SQL code executed in this database. The customer would like to ensure that his Oracle Database Standard Edition gives him the right to execute Java classes in his PL/SQL code.

Sources of information

The question, even if it looks obvious to some people, wasn’t so obvious for me. The fact is that I wasn’t really able to find the information in the Oracle documentation nor in a note on My Oracle Support. I started to ask some of our internal specialists, such as Pascal Brand and Franck Pachot, in order to get some pieces of an answer. I use this opportunity to thank both of them for their feedback ;-).

Both provided me with some valuable input:

Their understanding was that the Java License is included in Oracle Database Standard Edition according to the following documentation:

Oracle Database Standard Edition 2 On-Premise SE2 Oracle Database Standard Edition 2 includes features necessary to develop workgroup, department-level, and Web applications.

In addition, they provided me with some MOS (My Oracle Support) notes where it was more clearly explained, such as note 1557737.1, as well as a link to the Java FAQ where the answer was even more explicit.

“If you are an Oracle Customer with a supported Oracle product which requires Java SE, you continue to have access to Oracle Java updates, as required by your Oracle product, for the use of supported Oracle products, at no additional cost.”

Finally, in order to be sure of my answer to the customer, I opened the following Service Request:

Q:
=======
A question regarding Oracle Database Standard Edition and Java.

We would like to use Oracle JVM in order to load Java classes in our PL/SQL code executed by a standard edition database. We would like to ensure that the Oracle Database Standard Edition gives us the right to execute Java Classes in PL/SQL code.

Our understanding is that the Java License is included in Oracle Database Standard Edition according to the following documentation:
https://docs.oracle.com/en/database/oracle/oracle-database/19/dblic/Licensing-Information.html#GUID-AB354617-6614-487E-A022-7FC9A5A08472

As far as we do understand we will however have to use the Java version provided by the Oracle Database located in the ORACLE_HOME. According to https://blogs.oracle.com/java-platform-group/oracle-java-se-releases-faq :
“If you are an Oracle Customer with a supported Oracle product which requires Java SE, you continue to have access to Oracle Java updates, as required by your Oracle product, for the use of supported Oracle products, at no additional cost.”

May you please confirm that our understanding is correct, meaning that with Oracle Database Standard Edition, the Oracle Java is included and therefore we can load java classes in PL/SQL code without additional license ?

The answer from the support was crystal clear:

A:
======
Yes, your understanding correct. Here in Oracle there are different Java products.
If you are going to use JAVAVM (Embedded JVM in Database) then DB license is enough. The embedded Database JVM is used to develop and load java source/ classes in database under custom schema.
These java stored procedures are wrapped with pl/sql and invoked by user.
ref: https://docs.oracle.com/en/database/oracle/oracle-database/19/jjdev/Java-application-strategy.html#GUID-492B0C81-24A3-4551-A151-BC0DCE23C802

Thank you.
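
To make this a bit more concrete, here is a minimal sketch (with hypothetical names, not taken from the customer’s code) of what loading a Java class into the database and wrapping it with PL/SQL looks like:

-- Load and compile a Java source directly in the database (hypothetical example)
CREATE OR REPLACE AND COMPILE JAVA SOURCE NAMED "Hello" AS
public class Hello {
    public static String greet(String name) {
        return "Hello " + name;
    }
}
/

-- PL/SQL call specification wrapping the static Java method
CREATE OR REPLACE FUNCTION hello_greet (p_name IN VARCHAR2) RETURN VARCHAR2
AS LANGUAGE JAVA
NAME 'Hello.greet(java.lang.String) return java.lang.String';
/

-- The Java code is then invoked like any other PL/SQL function
SELECT hello_greet('dbi services') FROM dual;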

Conclusion

Following these inputs, I was able to inform the customer that even with Oracle Database Standard Edition he is fully allowed to load Java classes in his PL/SQL code. If you have doubts regarding the usage of Java with Oracle products, you will find some interesting information in MOS note 1557737.1. I hope this small post will help some of you.

This article Oracle Java Virtual Machine included in Oracle Database license? appeared first on Blog dbi services.

Working with multipass, a VM manager

Thu, 2021-03-25 11:47
Introduction

What if you need to quickly launch virtual machine instances running Ubuntu for development and testing purposes?
Multipass from Canonical can be the right tool for this. You can even customize the instances during launch by using cloud-init, in order to simulate a small cloud deployment from your laptop or desktop.
So, in this short blog post, I will talk about this tool and demonstrate how easy it is to use.

What is Multipass?

Multipass is a lightweight virtual machine (VM) manager and can be the ideal way for developers to launch VMs.
It can be installed on Linux, macOS or Windows.
Multipass uses KVM on Linux, HyperKit on macOS and Hyper-V on Windows, and supports cloud-init metadata.
It allows you to launch a fresh Ubuntu environment with a single command, and it is also possible to simulate a small cloud deployment.

Installation

I’m now going to show you how to install Multipass locally on my MacBook, but you can also install the tool on Windows or Linux.
You have two options to achieve this: brew (https://brew.sh/), the package manager, or the macOS installer.
As brew is already installed, let’s do it with brew.
(⎈ |docker-desktop:default)sme@Said-MacBook-Pro-3 K8S % brew install --cask multipass
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> Updated Formulae
Updated 1 formula.
==> Downloading https://github.com/CanonicalLtd/multipass/releases/download/v1.6.2/multipass-1.6.2+mac-Darwin.pkg
Already downloaded: /Users/sme/Library/Caches/Homebrew/downloads/64f65f71508526f4215f89bec1a12bbc130d408d2bd22f9460fc792fe5aa09f6--multipass-1.6.2+mac-Darwin.pkg
==> Installing Cask multipass
==> Running installer for multipass; your password may be necessary.
Package installers may write to any location; options such as `--appdir` are ignored.
Password:
installer: Package name is multipass
installer: Installing at base path /
installer: The install was successful.
🍺  multipass was successfully installed!

Let’s check the version:
(⎈ |docker-desktop:default)sme@Said-MacBook-Pro-3 ~ % multipass version
multipass 1.6.2+mac
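
For completeness, on Linux the installation is typically done via snap instead of brew (a minimal sketch, assuming snapd is available on your distribution):

sudo snap install multipass
multipass version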

Environment setup

To provision and customize the virtual machines and simulate a small cloud deployment, I will use cloud-init from Canonical.
Let’s create the 3 VMs.
(⎈ |docker-desktop:default)sme@Said-MacBook-Pro-3 K8S % curl https://raw.githubusercontent.com/tigera/ccol1/main/control-init.yaml | multipass launch -n control -m 2048M 20.04 --cloud-init -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6325 100 6325 0 0 11115 0 --:--:-- --:--:-- --:--:-- 11115
Launched: control
(⎈ |docker-desktop:default)sme@Said-MacBook-Pro-3 K8S % curl https://raw.githubusercontent.com/tigera/ccol1/main/node1-init.yaml | multipass launch -n node1 20.04 --cloud-init -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6481 100 6481 0 0 13789 0 --:--:-- --:--:-- --:--:-- 13760
Launched: node1
(⎈ |docker-desktop:default)sme@Said-MacBook-Pro-3 K8S % curl https://raw.githubusercontent.com/tigera/ccol1/main/node2-init.yaml | multipass launch -n node2 20.04 --cloud-init -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6481 100 6481 0 0 12487 0 --:--:-- --:--:-- --:--:-- 12463
Launched: node2
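
The files passed with --cloud-init are regular cloud-init YAML. Just as an illustration (a hypothetical, much simpler example than the tigera files used above), such a file could look like this:

#cloud-config
# hypothetical minimal example: refresh the package index, install curl
# and create an extra user at first boot
package_update: true
packages:
  - curl
users:
  - default
  - name: demo
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

It would then be passed to a new instance with something like: multipass launch -n demo 20.04 --cloud-init my-init.yaml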

Depending on which platform you are on, you may need to start the VMs after they have been launched.
On my desktop, they were already running.
(⎈ |docker-desktop:default)sme@Said-MacBook-Pro-3 K8S % multipass list
Name State IPv4 Image
control Running 198.19.0.1 Ubuntu 20.04 LTS
node1 Running 198.19.0.2 Ubuntu 20.04 LTS
node2 Running 198.19.0.3 Ubuntu 20.04 LTS

Otherwise, start them as follows:
(⎈ |docker-desktop:default)sme@Said-MacBook-Pro-3 K8S % multipass start --all
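To log on to an instance, Multipass provides the shell subcommand, for example for the control node:
multipass shell control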
To check and validate that the deployment was successful, log on to the first VM and run:
ubuntu@control:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady 5m12s v1.19.3
node2 NotReady 6m33s v1.19.3
control NotReady master 3m12s v1.19.3

As you can notice above, the status is “NotReady”; this is because the CNI (Container Network Interface) plugin is not installed yet.

Conclusion

That’s all you have to do to install the Multipass VM manager and deploy virtual machines.
With this tool you can rapidly test your latest developments, so give it a chance. In a next blog post, I will use this environment and show how to install Calico, one of the most popular CNI plugins.
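
Finally, as a quick reference, a few other Multipass commands are handy for managing such a lab (a short, non-exhaustive list):

multipass info control      # show details (state, IP, image, resources) of an instance
multipass stop node1        # stop an instance
multipass delete node1      # mark an instance for deletion
multipass purge             # permanently remove deleted instances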

This article Working with multipass, a VM manager appeared first on Blog dbi services.
