Yann Neuhaus

dbi services technical blog

Installing the cdata ODBC drivers for Excel

Thu, 2022-01-13 11:06

This article is part of a series that includes SQLite, PostgreSQL, Firebird, Oracle RDBMS, Microsoft SQL Server, HSQLDB, MongoDB, and MariaDB. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here (article to be completed). Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a Debian v11 (bullseye).
Like MongoDB, Excel is not a relational data source, but the ODBC API will allow its data to be accessed using SQL statements. The idea is to access spreadsheet files as if they were relational databases: each sheet is a table, each line a table’s row, and each column a table’s column. The first row of a sheet contains the column headers. This is quite self-evident; see the screenshot below.
As the Excel sheet is the database, there is no server software to install but only the ODBC drivers.
Let’s create a spreadsheet with the data sample from the site. We will just query an existing database, e.g. the PostgreSQL one if the instructions here were applied, and extract the content of the tables, as shown below:

$ psql sampledb
sampledb=> \a
Output format is unaligned.
sampledb=> select * from regions;
region_id|region_name
1|Europe
2|Americas
3|Asia
4|Middle East and Africa
(4 rows)

sampledb=> select * from countries;
country_id|country_name|region_id
AR|Argentina|2
AU|Australia|3
BE|Belgium|1
BR|Brazil|2
...
(29 rows)

and so on with the other tables locations, departments, jobs, employees and dependents.
We will simply create one spreadsheet using LibreOffice Calc or Excel, with one tab per table named after it, and paste the data from the above output with the column headers as the first line. Be sure to select the ‘|’ character as the column separator and to remove spurious lines and columns. This step is manual and a bit tedious, but it’ll only take a few minutes.
The image below shows what the spreadsheet may look like with some data in it:

If using isql to extract the data into a tabular format, spurious blanks are added, e.g.:

$ isql -v mysqldb debian debian
SQL> select * from regions;
+------------+--------------------------+
| region_id  | region_name              |
+------------+--------------------------+
| 1          | Europe                   |
| 2          | Americas                 |
| 3          | Asia                     |
| 4          | Middle East and Africa   |
+------------+--------------------------+
SQLRowCount returns 4

SQL> select * from countries;
+-----------+-----------------------------------------+------------+
| country_id| country_name                            | region_id  |
+-----------+-----------------------------------------+------------+
| AR        | Argentina                               | 2          |
| AU        | Australia                               | 3          |
| BE        | Belgium                                 | 1          |
...
| ZM        | Zambia                                  | 4          |
| ZW        | Zimbabwe                                | 4          |
+-----------+-----------------------------------------+------------+
SQLRowCount returns 29
29 rows fetched

These leading and trailing blanks must be removed: they don’t belong to the data but were added by isql so that the table displays nicely. The following gawk script can be used to that effect:

for t in {regions,countries,locations,departments,jobs,employees,dependents}; do echo "select * from $t;" | isql -v mysqldb debian debian | gawk -v table_name=$t 'BEGIN {
   print "table", table_name
   # skip the isql banner; in batch mode the "SQL> " prompt is printed on the
   # same line as the top border of the result table;
   do {
      getline
   } while (0 == index($0, "SQL>"))
   # derive the column widths from that border line and build FIELDWIDTHS:
   # a 1-character separator, then alternating column width and separator;
   n = split ($0, t, "+")
   delete t[1]
   FIELDWIDTHS = "1"
   for (i = 2; i < n; i++)
      FIELDWIDTHS = FIELDWIDTHS sprintf(" %d 1", length(t[i]))
   FIELDWIDTHS = FIELDWIDTHS " 1"
   bHeader = 0
}
{
   # the border after the header row is skipped, the closing one stops the job;
   bHeader++
   if (index($0, "+-"))
      if (2 == bHeader)
         next
      else
         exit
   # data fields sit at the even positions; trim the padding blanks around them;
   for (i = 2; i < NF; i += 2) {gsub(/(^ +)|( +$)/, "", $i); printf("%s|", $i)}
   printf "\n"
}
END {
   printf "\n"
}';
done
Output:
table regions
region_id|region_name|
1|Europe|
2|Americas|
3|Asia|
4|Middle East and Africa|

table countries
country_id|country_name|region_id|
AR|Argentina|2|
...
etc...

If no prior database with the sample data is available, just save the data from here into a text file, say populate-tables.sql, and save the following gawk script as sql2flat.awk:

# convert sql statements into csv;
# INSERT INTO regions(region_id,region_name) VALUES (1,'Europe');
# becomes
#    1|Europe
# INSERT INTO locations(location_id,street_address,postal_code,city,state_province,country_id) VALUES (1400,'2014 Jabberwocky Rd','26192','Southlake','Texas','US');
# becomes
#    1400|2014 Jabberwocky Rd|26192|Southlake|Texas|US
# sql comes from here for example: https://www.sqltutorial.org/wp-content/uploads/2020/04/postgresql-data.txt;
# Usage:
#   gawk -v sep=separator -f sql2flat.awk data.sql
# Example:
#   gawk -v sep="|" -f sql2flat.awk populate-tables.sql | tee populate-tables.csv
{
   if (match($0, /INSERT INTO ([^)]+)\(.+\) VALUES \((.+)\)/, m)) {
      if (m[1] != ex_table) {
         ex_table = m[1]
         print "# tab", m[1] ":"
      }
      # split the VALUES list on commas, but keep quoted values together,
      # e.g. 'Magdalen Centre, The Oxford Science Park' must stay one field;
      n = 0
      S = m[2]
      while (match(S, /(([^,']+)|('[^']*')),?/, res)) {
         data[++n] = res[1]
         S = substr(S, RSTART + RLENGTH)
      }
      for (i = 1; i <= n; i++) {
         gsub(/'|"/, "", data[i])
         printf("%s%s", i > 1 ? sep : "", data[i])
      }
      printf "\n"
   }
}

and run it on the data file:

gawk -v sep="|" -f sql2flat.awk populate-tables.sql

Snippet of output:

# tab regions:
1|Europe
2|Americas
3|Asia
4|Middle East and Africa
# tab countries:
AR|Argentina|2
AU|Australia|3
BE|Belgium|1
BR|Brazil|2
...
# tab locations:
1400|2014 Jabberwocky Rd|26192|Southlake|Texas|US
1500|2011 Interiors Blvd|99236|South San Francisco|California|US
...
# tab jobs:
1|Public Accountant|4200.00|9000.00
2|Accounting Manager|8200.00|16000.00
3|Administration Assistant|3000.00|6000.00
...
# tab departments:
1|Administration|1700
2|Marketing|1800
...
# tab employees:
100|Steven|King|steven.king@sqltutorial.org|515.123.4567|1987-06-17|4|24000.00|NULL|9
101|Neena|Kochhar|neena.kochhar@sqltutorial.org|515.123.4568|1989-09-21|5|17000.00|100|9
102|Lex|De Haan|lex.de haan@sqltutorial.org|515.123.4569|1993-01-13|5|17000.00|100|9
...
# tab dependents:
1|Penelope|Gietz|Child|206
2|Nick|Higgins|Child|205
3|Ed|Whalen|Child|200
...

Next, copy and paste the above output into the respective sheets’ tabs.
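Alternatively, this copy-paste step can be scripted; below is a minimal Python sketch using the third-party openpyxl module (an assumption: it is not used elsewhere in this series and must be installed first, e.g. with pip install openpyxl). It reads the ‘# tab’ markers and ‘|’-separated rows produced by sql2flat.awk and creates one sheet per table; the column headers must still be added as the first row of each sheet, as the awk output does not include them:

# build_workbook.py, a hypothetical helper; assumes openpyxl is installed
import openpyxl

wb = openpyxl.Workbook()
wb.remove(wb.active)                    # drop the default empty sheet
ws = None
with open("populate-tables.csv") as f:
    for line in f:
        line = line.rstrip("\n")
        if line.startswith("# tab "):   # e.g. "# tab regions:" starts a new sheet
            ws = wb.create_sheet(title=line[6:].rstrip(":"))
        elif line and ws is not None:   # one '|'-separated record per row
            ws.append(line.split("|"))
wb.save("SampleWorkbook.xlsx")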
Once editing is complete, save the file as SampleWorkbook.xlsx (for compatibility, select Excel as the output format if using LibreOffice Calc) and keep a copy of it somewhere in case it becomes corrupted during the tests.
We will use the commercial ODBC drivers provided by cdata (https://www.cdata.com/odbc/); other vendors are e.g. Easysoft (https://www.easysoft.com/developer/interfaces/odbc/index.html) and Devart (https://www.devart.com/odbc/).
Download the drivers from https://www.cdata.com/drivers/excel/odbc/ by following the instructions on the screen, and install them as root:

# dpkg -i ExcelODBCDriverforUnix.deb 

Run the licensing script and follow the on-screen instructions:

# cd /opt/cdata/cdata-odbc-driver-for-excel/bin/
# ./install-license.x64 

*************************************************************************
Please provide your name and email to install a trial license.

To install a full version license pass your product key as a parameter.
For instance: ./install-license MY_PRODUCT_KEY

Please refer to the documentation for additional details.
*************************************************************************

Name: debian
Email: ...
Installing TRIAL license...
Downloading license data...
Verifying license data...
License installation succeeded.

Check the system-wide installed drivers:

# odbcinst -q -d
...
[CData ODBC Driver for Excel]

# odbcinst -q -d -n "CData ODBC Driver for Excel"
[CData ODBC Driver for Excel]
Description=CData ODBC Driver for Excel 2021
Driver=/opt/cdata/cdata-odbc-driver-for-excel/lib/libexcelodbc.x64.so
UsageCount=1
Driver64=/opt/cdata/cdata-odbc-driver-for-excel/lib/libexcelodbc.x64.so
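As a quick cross-check from Python, the pyodbc module (used later in this article anyway) can list the drivers registered in odbcinst.ini:

import pyodbc

# the list should now include 'CData ODBC Driver for Excel'
print(pyodbc.drivers())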

Check the system-wide DSN in /etc/odbc.ini:

# odbcinst -q -s
...
[CData-Excel-Source]

# odbcinst -q -s -n "CData-Excel-Source"
[CData-Excel-Source]
Driver=CData ODBC Driver for Excel
AWS Access Key=
AWS Region=NORTHERNVIRGINIA
AWS Role ARN=
AWS Secret Key=
Azure Access Key=
Azure Shared Access Signature=
Azure Storage Account=
Azure Tenant=
MFA Serial Number=
MFA Token=
Password=
Share Point Edition=SharePointOnline
SSL Mode=AUTOMATIC
User=

Copy the newly inserted DSN into debian’s own ~/.odbc.ini file and append the URI setting as follows:

$ vi ~/.odbc.ini
...
[CData-Excel-Source]
Driver=CData ODBC Driver for Excel
AWS Access Key=
AWS Region=NORTHERNVIRGINIA
AWS Role ARN=
AWS Secret Key=
Azure Access Key=
Azure Shared Access Signature=
Azure Storage Account=
Azure Tenant=
MFA Serial Number=
MFA Token=
Password=
Share Point Edition=SharePointOnline
SSL Mode=AUTOMATIC
User=
URI=/home/debian/odbc4gawk/SampleWorkbook.xlsx

As debian, check its DSN so far:

$ odbcinst -q -s
[mysqlitedb]
[mysqldb]
[myfbdb]
[myfbdb_Devart]
[mypostgresqldb]
[OracleODBC-21]
[mymssqlserverdb]
[myhsqldb]
[mymongodb]
[DEVART_MONGODB]
[DEVART_FIREBIRD]
[CData-Excel-Source]

$ odbcinst -q -s -n "CData-Excel-Source"
[CData-Excel-Source]
Driver=CData ODBC Driver for Excel
AWS Access Key=
AWS Region=NORTHERNVIRGINIA
AWS Role ARN=
AWS Secret Key=
Azure Access Key=
Azure Shared Access Signature=
Azure Storage Account=
Azure Tenant=
MFA Serial Number=
MFA Token=
Password=
Share Point Edition=SharePointOnline
SSL Mode=AUTOMATIC
User=
URI=/media/sf_customers/dbi/odbc4gawk/SampleWorkbook.xlsx

Try a connection to the spreadsheet via the ODBC Driver Manager’s isql tool:

$ isql -v "CData-Excel-Source"
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> help
+-----------+-------------+---------------+------------+--------------------------------------------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME    | TABLE_TYPE | REMARKS                                    |
+-----------+-------------+---------------+------------+--------------------------------------------+
| CData     | Excel       | regions       | TABLE      | Retrieve data from the "regions" sheet.    |
| CData     | Excel       | countries     | TABLE      | Retrieve data from the "countries" sheet.  |
| CData     | Excel       | locations     | TABLE      | Retrieve data from the "locations" sheet.  |
| CData     | Excel       | department    | TABLE      | Retrieve data from the "department" sheet. |
| CData     | Excel       | employees     | TABLE      | Retrieve data from the "employees" sheet.  |
| CData     | Excel       | dependent     | TABLE      | Retrieve data from the "dependent" sheet.  |
| CData     | Excel       | jobs          | TABLE      | Retrieve data from the "jobs" sheet.       |
+-----------+-------------+---------------+------------+--------------------------------------------+
SQLRowCount returns -1
7 rows fetched

Interesting how the driver presents the data dictionary.
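The same catalog can be queried programmatically through the ODBC metadata functions; here is a minimal pyodbc sketch, assuming the DSN configured above:

import pyodbc

# list the sheets the driver exposes as tables, via the ODBC catalog API
cnxn = pyodbc.connect(DSN="CData-Excel-Source")
for t in cnxn.cursor().tables():
    print(t.table_cat, t.table_schem, t.table_name, t.table_type, t.remarks)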
Run the test query:

SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+----------------------------+-------------+------------+-------------------------------------------+----------------------+
| country_name               | country_id  | country_id | street_address                            | city                 |
+----------------------------+-------------+------------+-------------------------------------------+----------------------+
| United Kingdom             | UK          | UK         | 8204 Arthur St                            | London               |
| United Kingdom             | UK          | UK         | Magdalen Centre, The Oxford Science Park  | Oxford               |
| United States of America   | US          | US         | 2014 Jabberwocky Rd                       | Southlake            |
| United States of America   | US          | US         | 2011 Interiors Blvd                       | South San Francisco  |
| United States of America   | US          | US         | 2004 Charade Rd                           | Seattle              |
| China                      | CN          |            |                                           |                      |
+----------------------------+-------------+------------+-------------------------------------------+----------------------+
SQLRowCount returns -1
6 rows fetched

The join was performed as expected.
Let’s now see how the driver behaves when used with the python module pyodbc:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.

import pyodbc

# connect using the DSN;
cnxn = pyodbc.connect(DSN='CData-Excel-Source')
# or:
# connect directly using the DRIVER definition, no DSN;
cnxn = pyodbc.connect('DRIVER={CData ODBC Driver for Excel};URI=/media/sf_customers/dbi/odbc4gawk/SampleWorkbook.xlsx')

cursor = cnxn.cursor()      
cursor.execute("""SELECT
               c.country_name,
               c.country_id,
               l.country_id,
               l.street_address,
               l.city
       FROM
               countries c
       LEFT JOIN locations l ON l.country_id = c.country_id
       WHERE
               c.country_id IN ('US', 'UK', 'CN')""") 

row = cursor.fetchone() 
while row:
     print (row) 
     row = cursor.fetchone()

Output:
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre - The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
('China', 'CN', None, None, None)

Excel spreadsheets are now accessible via ODBC under the debian account.
Admittedly, Excel sheets are neither natural nor reliable data sources, for too many reasons to mention here (though they have other advantages), but it is quite impressive, almost magical, to query them using SQL rather than through some low-level cell-oriented API!
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB

The article Installing the cdata ODBC drivers for Excel appeared first on Blog dbi services.

Installing the ODBC drivers for MongoDB

Thu, 2022-01-13 10:57

This article is part of a series that includes SQLite, PostgreSQL, Firebird, Oracle RDBMS, Microsoft SQL Server, HSQLDB, Excel, and MariaDB. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here (article to be completed). Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a Debian v11 (bullseye).
MongoDB is a NoSQL database, but the ODBC API will hide this fact and allow its data to be accessed using SQL statements. This shows that, with the appropriate ODBC drivers doing the translation, any data source can be accessed using SQL. Not all native operations are rendered correctly of course (notably SELECT’s JOIN clauses, see below), but the main CRUD operations, i.e. INSERT, SELECT, UPDATE and DELETE, are all available.
As root, install MongoDB Community Edition and its command-line tool from the official package repositories:

# apt install mongodb
# apt install mongodb-mongosh

Follow the instructions here to configure and start mongodb.
A systemd service has been created. Still as root, start the service as shown below:

# ulimit -n 64000
# chown -R mongodb:mongodb /var/lib/mongodb
# systemctl stop mongod
# systemctl start mongod
# systemctl status mongod
● mongod.service - MongoDB Database Server
     Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
     Active: active (running) since Tue 2021-09-21 09:27:47 CEST; 3s ago
       Docs: https://docs.mongodb.org/manual
   Main PID: 38200 (mongod)
     Memory: 9.8M
        CPU: 26ms
     CGroup: /system.slice/mongod.service
             └─38200 /usr/bin/mongod --config /etc/mongod.conf

Check the MongoDB server availability by connecting to the server through the mongosh tool documented here:

# mongosh
Current Mongosh Log ID:	6149ba4e6d2a951c4e869dac
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000
Using MongoDB:		5.0.3
Using Mongosh:		1.0.6
test> quit

The data sample site does not provide scripts for creating and populating MongoDB collections. There are two ways to work around this:
1. use isql and ODBC to feed SQL statements to MongoDB;
2. translate the SQL statements into javascript ones suitable for MongoDB.
At this stage, the ODBC drivers are not installed yet and therefore isql cannot work, so we will select the 2nd alternative. The script below will do just that. As a MongoDB database is schema-less, there is no need to create the collections beforehand; they will be implicitly created when populated with documents. Any sample populating script will do, so we’ll take the one for mysql here and save it to the file data4mongodb.sql. The script pop-mongodb.awk will convert the SQL INSERT statements into MongoDB statements. Here it is:

# format of the input lines, e.g:
#   INSERT INTO regions(region_id,region_name) VALUES (1,'Europe');
# or:
#   INSERT INTO locations(location_id,street_address,postal_code,city,state_province,country_id) VALUES (2500,'Magdalen Centre, The Oxford Science Park','OX9 9ZB','Oxford','Oxford','UK');
# Output:
#    db.regions.insert({
#    region_id: 1,
#    region_name: "Europe"
#    })
# or:
#    db.locations.insert({
#    location_id: 2500,
#    street_address: "Magdalen Centre, The Oxford Science Park",
#    postal_code: "OX9 9ZB",
#    city: "Oxford",
#    state_province: "Oxford",
#    country_id: "UK"
#    })
# Usage:
#    gawk -v Apostrophe="'" -f pop-mongodb.awk data4mongodb.sql > data4mongodb.json
BEGIN {
   ApoRE="(([^,]+)|(" Apostrophe "([^" Apostrophe "]+)" Apostrophe ")),?" # i.e. (([^,]+)|('([^']+)')),?
   print "use sampledb"
}
{
   if (!$0 || match($0, /^\/\*/)) next

   match($0, /INSERT INTO ([^(]+)\(([^)]+)\)/, stmt)
   table_name = stmt[1]
   if (!bemptied[table_name]) {
      print "db." table_name ".deleteMany({})"
      bemptied[table_name] = 1
   }

   nb_columns = split(stmt[2], columns, ",")

   nb = match($0, /VALUES \(([^)]+)\)/, stmt)
   S = stmt[1]
   nb_values = 0
   while (match(S, ApoRE, res)) {
      values[++nb_values] = res[1]
      S = substr(S, RLENGTH + 1)
   }

   print "db." table_name ".insert({"
   for (i = 1; i <= nb_columns; i++) {
      if ("NULL" == values[i])
         values[i] = "null"
      gsub(Apostrophe, "\"", values[i])
      print columns[i] ": " values[i] (i < nb_columns ? "," : "")
   }
   print "})"
   printf "\n"
}

Invoke it:

gawk -v Apostrophe="'" -f pop-mongodb.awk data4mongodb.sql > data4mongodb.json

Example of input:

INSERT INTO regions(region_id,region_name) VALUES (1,'Europe');
...
INSERT INTO countries(country_id,country_name,region_id) VALUES ('AR','Argentina',2);
...
INSERT INTO locations(location_id,street_address,postal_code,city,state_province,country_id) VALUES (1400,'2014 Jabberwocky Rd','26192','Southlake','Texas','US');
...
INSERT INTO jobs(job_id,job_title,min_salary,max_salary) VALUES (1,'Public Accountant',4200.00,9000.00);
...
INSERT INTO departments(department_id,department_name,location_id) VALUES (1,'Administration',1700);
...
INSERT INTO employees(employee_id,first_name,last_name,email,phone_number,hire_date,job_id,salary,manager_id,department_id) VALUES (100,'Steven','King','steven.king@sqltutorial.org','515.123.4567','1987-06-17',4,24000.00,NULL,9);
...
INSERT INTO dependents(dependent_id,first_name,last_name,relationship,employee_id) VALUES (1,'Penelope','Gietz','Child',206);
...

and their corresponding generated json statements:

use sampledb
db.regions.deleteMany({})
db.regions.insert({
region_id: 1,
region_name: "Europe",
})
...
db.countries.deleteMany({})
db.countries.insert({
country_id: "AR",
country_name: "Argentina",
region_id: 2,
})
...
db.locations.deleteMany({})
db.locations.insert({
location_id: 1400,
street_address: "2014 Jabberwocky Rd",
postal_code: "26192",
city: "Southlake",
state_province: "Texas",
country_id: "US",
})
...
db.jobs.deleteMany({})
db.jobs.insert({
job_id: 1,
job_title: "Public Accountant",
min_salary: 4200.00,
max_salary: 9000.00,
})
...
db.departments.deleteMany({})
db.departments.insert({
department_id: 1,
department_name: "Administration",
location_id: 1700,
})
...
db.employees.deleteMany({})
db.employees.insert({
employee_id: 100,
first_name: "Steven",
last_name: "King",
email: "steven.king@sqltutorial.org",
phone_number: "515.123.4567",
hire_date: "1987-06-17",
job_id: 4,
salary: 24000.00,
manager_id: null,
department_id: 9,
})
...
db.dependents.deleteMany({})
db.dependents.insert({
dependent_id: 1,
first_name: "Penelope",
last_name: "Gietz",
relationship: "Child",
employee_id: 206,
})

To be sure the populating step is idempotent, the collections are emptied each time the script is run.
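As an aside, the same idempotent load could be performed from Python with the third-party pymongo module (an assumption: it is not part of this setup and would need to be installed first, e.g. with pip); a minimal sketch for the regions collection:

from pymongo import MongoClient

db = MongoClient("mongodb://127.0.0.1:27017").sampledb
db.regions.delete_many({})   # empty the collection first, for idempotence
db.regions.insert_many([
    {"region_id": 1, "region_name": "Europe"},
    {"region_id": 2, "region_name": "Americas"},
    {"region_id": 3, "region_name": "Asia"},
    {"region_id": 4, "region_name": "Middle East and Africa"},
])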
From mongosh, populate the tables as shown:

mongosh < data4mongodb.json

Back in mongosh, check the data:

$ mongosh
test> show databases
admin       41 kB
config     111 kB
local     81.9 kB
sampledb   516 kB

test> use sampledb
switched to db sampledb

sampledb> show collections
countries
departments
dependents
employees
jobs
locations
regions

sampledb> db.regions.find()
[
  {
    _id: ObjectId("616eef8e230e4e4893edd45f"),
    region_id: 1,
    region_name: 'Europe'
  },
...
sampledb> db.countries.find()
[
  {
    _id: ObjectId("616eef8f230e4e4893edd463"),
    country_id: 'AR',
    country_name: 'Argentina',
    region_id: 2
  },
...
sampledb> db.locations.find()
[
  {
    _id: ObjectId("616eef91230e4e4893edd47c"),
    location_id: 1400,
    street_address: '2014 Jabberwocky Rd',
    postal_code: '26192',
    city: 'Southlake',
    state_province: 'Texas',
    country_id: 'US'
  },
...
sampledb> db.jobs.find()
[
  {
    _id: ObjectId("616eef92230e4e4893edd483"),
    job_id: 1,
    job_title: 'Public Accountant',
    min_salary: 4200,
    max_salary: 9000
  },
...
sampledb> db.departments.find()
[
  {
    _id: ObjectId("616eef94230e4e4893edd496"),
    department_id: 1,
    department_name: 'Administration',
    location_id: 1700
  },
...
sampledb> db.employees.find()
[
  {
    _id: ObjectId("616eef95230e4e4893edd4a1"),
    employee_id: 100,
    first_name: 'Steven',
    last_name: 'King',
    email: 'steven.king@sqltutorial.org',
    phone_number: '515.123.4567',
    hire_date: '1987-06-17',
    job_id: 4,
    salary: 24000,
    manager_id: null,
    department_id: 9
  },
...
sampledb> db.dependents.find()
[
  {
    _id: ObjectId("616eef9c230e4e4893edd4c9"),
    dependent_id: 1,
    first_name: 'Penelope',
    last_name: 'Gietz',
    relationship: 'Child',
    employee_id: 206
  },
...
sampledb> quit

The MongoDB ODBC drivers are available here. They are a modified version of the MySQL ODBC driver. However, they don’t work on my test machine because of a missing openssl v1.0.2 library, which was predictable: those drivers are several years old, while the test machine runs the latest Debian Linux v11. For this reason, a commercial replacement from Devart has been installed; it comes with a one-month evaluation period. Once registered, it can be downloaded and installed as root as follows:

# wget https://www.devart.com/odbc/mongodb/devartodbcmongo_amd64.deb
# apt install /home/debian/Downloads/devartodbcmongo_amd64.deb

Check the system-wide installed drivers:

# odbcinst -q -d
...
[MongoDB Unicode]
[MongoDB ANSI]
...
[Devart ODBC Driver for MongoDB]
# odbcinst -q -d -n "Devart ODBC Driver for MongoDB"
[Devart ODBC Driver for MongoDB]
Driver=/usr/local/lib/libdevartodbcmongo.so

Check the system-wide DSN in /etc/odbc.ini:

# odbcinst -q -s
...
[DEVART_MONGODB]
#  odbcinst -q -s -n "DEVART_MONGODB"
[DEVART_MONGODB]
Description=My MongoDB sample database
Driver=Devart ODBC Driver for MongoDB
Data Source=
Port=27017
Database=sampledb
User ID=
Password=
Client Library=
BSON Library=
Additional Servers=
Connection Options=

Copy the newly inserted DSN into debian’s ~/.odbc.ini file:

$ vi ~/.odbc.ini
...
[DEVART_MONGODB]
Description=My MongoDB sample database
Driver=Devart ODBC Driver for MongoDB
Data Source=
Port=27017
Database=sampledb
User ID=
Password=
Client Library=
BSON Library=
Additional Servers=
Connection Options=

As debian, check its DSN so far:

$ odbcinst -q -s
...
[DEVART_MONGODB]
...

$ odbcinst -q -s -n DEVART_MONGODB
[DEVART_MONGODB]
Description=My MongoDB sample database
Driver=Devart ODBC Driver for MongoDB
Data Source=
Port=27017
Database=sampledb
User ID=
Password=
Client Library=
BSON Library=
Additional Servers=
Connection Options=

Try a connection to the MongoDB database via ODBC:

isql -v DEVART_MONGODB
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> help
+-----------+-------------+-------------+------------+---------+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME  | TABLE_TYPE | REMARKS |
+-----------+-------------+-------------+------------+---------+
| sampledb  |             | countries   | TABLE      |         |
| sampledb  |             | dependents  | TABLE      |         |
| sampledb  |             | regions     | TABLE      |         |
| sampledb  |             | locations   | TABLE      |         |
| sampledb  |             | departments | TABLE      |         |
| sampledb  |             | jobs        | TABLE      |         |
| sampledb  |             | employees   | TABLE      |         |
+-----------+-------------+-------------+------------+---------+ 
SQLRowCount returns -1
7 rows fetched

It looks good. Run the test query now:

SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+--------------+-----------------+-----------------+----------------+------+
| country_name | country_id      | country_id_1    | street_address | city |
+--------------+-----------------+-----------------+----------------+------+
| China        | CN              |                 |                |      |
+--------------+-----------------+-----------------+----------------+------+ 
SQLRowCount returns -1
1 rows fetched

Only one row is returned because the left join is not implemented in the MongoDB/ODBC driver combo. In effect, joins are a necessity in relational databases due to data normalization; as MongoDB does not care about data duplication and works with complete, self-standing documents, no joins are necessary, although join-like queries can be simulated with aggregate() and $lookup, as illustrated by the following equivalent query:

db.countries.aggregate([
   {
      $match:{$or:[{"country_id" : "US"}, {"country_id" : "UK"}, {"country_id" : "CN"}]}
   },
   {
     $lookup:
       {
         from: "locations",
         localField: "country_id",
         foreignField: "country_id",
         as: "r"
       }
  }
])

with its result:

[
  {
    _id: ObjectId("616eef90230e4e4893edd469"),
    country_id: 'CN',
    country_name: 'China',
    region_id: 3,
    r: []
  },
  {
    _id: ObjectId("616eef91230e4e4893edd478"),
    country_id: 'UK',
    country_name: 'United Kingdom',
    region_id: 1,
    r: [
      {
        _id: ObjectId("616eef92230e4e4893edd480"),
        location_id: 2400,
        street_address: '8204 Arthur St',
        postal_code: null,
        city: 'London',
        state_province: null,
        country_id: 'UK'
      },
      {
        _id: ObjectId("616eef92230e4e4893edd481"),
        location_id: 2500,
        street_address: 'Magdalen Centre, The Oxford Science Park',
        postal_code: 'OX9 9ZB',
        city: 'Oxford',
        state_province: 'Oxford',
        country_id: 'UK'
      }
    ]
  },
  {
    _id: ObjectId("616eef91230e4e4893edd479"),
    country_id: 'US',
    country_name: 'United States of America',
    region_id: 2,
    r: [
      {
        _id: ObjectId("616eef91230e4e4893edd47c"),
        location_id: 1400,
        street_address: '2014 Jabberwocky Rd',
        postal_code: '26192',
        city: 'Southlake',
        state_province: 'Texas',
        country_id: 'US'
      },
      {
        _id: ObjectId("616eef91230e4e4893edd47d"),
        location_id: 1500,
        street_address: '2011 Interiors Blvd',
        postal_code: '99236',
        city: 'South San Francisco',
        state_province: 'California',
        country_id: 'US'
      },
      {
        _id: ObjectId("616eef91230e4e4893edd47e"),
        location_id: 1700,
        street_address: '2004 Charade Rd',
        postal_code: '98199',
        city: 'Seattle',
        state_province: 'Washington',
        country_id: 'US'
      }
    ]
  }
]

To ease the comparison, we save that output to the file mongodb.out and run the following commands to reformat it into a tabular presentation.
First, convert mongodb.out to correct json syntax:

$ gawk -v Apo="'" '{
   gsub(Apo, "\"", $0)
   if (match($1, /^_id/)) next
   print gensub(/^( +)([^:]+)(:.+$)/, "\\1\"\\2\"\\3", "g", $0)
}' mongodb.out > mongodb.json

Now, save the following python script as mongodb2tab.py:

$ cat - <<eop > mongodb2tab.py
import os
import json
with open("./mongodb.json") as json_file:
   mongodb_out = json.load(json_file)
widths = [25, 10, 10, 42, 10]
print(f"{'country_name': <{widths[0]}}  {'country_id': <{widths[1]}}  {'country_id': <{widths[2]}}  {'street_address': <{widths[3]}}  {'city': <{widths[4]}}")
for row in mongodb_out:
   if row['r']:
      for l in row['r']:
         print(f"{row['country_name']: <{widths[0]}}  {row['country_id']: <{widths[1]}}  {l['country_id']: <{widths[2]}}  {l['street_address']: <{widths[3]}}  {l['city']: <{widths[4]}}")
   else:
      print(f"{row['country_name']: <{widths[0]}}  {row['country_id']: <{widths[1]}}")
eop

Finally, execute it:

$ python3 mongodb2tab.py
Output:
country_name               country_id  country_id  street_address                              city      
China                      CN        
United Kingdom             UK          UK          8204 Arthur St                              London    
United Kingdom             UK          UK          Magdalen Centre, The Oxford Science Park    Oxford    
United States of America   US          US          2014 Jabberwocky Rd                         Southlake 
United States of America   US          US          2011 Interiors Blvd                         South San Francisco
United States of America   US          US          2004 Charade Rd                             Seattle   

This shows that the output of the MongoDB query simulating the left outer join was correct.
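The same pipeline can also be run directly from Python through the pymongo module (the same assumption as above: it must be installed first), which avoids the manual reformatting altogether; a minimal sketch:

from pymongo import MongoClient

db = MongoClient("mongodb://127.0.0.1:27017").sampledb
# same pipeline as the mongosh query above; $in replaces the equivalent $or list
pipeline = [
    {"$match": {"country_id": {"$in": ["US", "UK", "CN"]}}},
    {"$lookup": {
        "from": "locations",
        "localField": "country_id",
        "foreignField": "country_id",
        "as": "r",
    }},
]
for doc in db.countries.aggregate(pipeline):
    print(doc["country_name"], [l["city"] for l in doc["r"]])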
Let’s now test the DSN with pyodbc:

$ python3
import pyodbc 
cnxn = pyodbc.connect(DSN='DEVART_MONGODB')
cursor = cnxn.cursor()	
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('China', 'CN', None, None, None)

Here too, rows are missing, as expected.
MongoDB is now accessible from ODBC under the debian account, although not all SQL statements are fully supported, which is understandable with a NoSQL database.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
Excel

The article Installing the ODBC drivers for MongoDB appeared first on Blog dbi services.

Installing the ODBC drivers for Microsoft SQLServer for Linux

Thu, 2022-01-13 10:51

This article is part of a series that includes SQLite, PostgreSQL, Firebird, Oracle RDBMS, HSQLDB, MariaDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here (article to be completed). Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a Debian v11 (bullseye).
As root, follow the documentation here to get and install sqlserver express for Linux. Here are the needed steps:

# wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
# add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/20.04/mssql-server-2019.list)"
# apt-get install -y mssql-server
# /opt/mssql/bin/mssql-conf setup

A systemd service was set up and launched:

# systemctl status mssql-server --no-pager
● mssql-server.service - Microsoft SQL Server Database Engine
     Loaded: loaded (/lib/systemd/system/mssql-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-08-27 15:22:14 CEST; 15s ago
       Docs: https://docs.microsoft.com/en-us/sql/linux
   Main PID: 7795 (sqlservr)
      Tasks: 120
     Memory: 880.5M
        CPU: 5.983s
     CGroup: /system.slice/mssql-server.service
             ├─7795 /opt/mssql/bin/sqlservr
             └─7817 /opt/mssql/bin/sqlservr

Get and install the ODBC drivers for sqlserver:

# apt install tdsodbc
# apt install curl
# curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list
# apt-get update 
# apt-get install mssql-tools unixodbc-dev

Check that the driver has been added system-wide:

# odbcinst -q -d
...
[ODBC Driver 17 for SQL Server]

# odbcinst -q -d -n "ODBC Driver 17 for SQL Server"
[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.8.so.1.1

As debian, create the sampledb database using the native administration tool sqlcmd:

$ sqlcmd -S localhost -U SA -P admin2021!
1> CREATE DATABASE sampledb
2> go
1> SELECT Name from sys.Databases;
2> go
Name                                                                                                                            
-----------
master                                                                                                                          
tempdb                                                                                                                          
model                                                                                                                           
msdb                                                                                                                            
sampledb                                                                                                                        

(5 rows affected)

Populate the sampledb database; the table creation statements for sqlserver are available here and the ones to populate the tables here.
Save the files as create_tables_mssql.sql and populate_tables_mssql.sql respectively.
Execute the above SQL scripts still using the default command-line administration tool, sqlcmd:

$ sqlcmd -S localhost -U SA -P admin2021! -i create_tables_mssql.sql
$ sqlcmd -S localhost -U SA -P admin2021! -i populate_tables_mssql.sql

Let’s check the data:

cat - <<eot | sqlcmd -S localhost -U SA -P admin2021!
USE sampleDB
SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
eot
Changed database context to 'sampledb'.
country_name                             country_id country_id street_address                           city                 
---------------------------------------- ---------- ---------- ---------------------------------------- ---------------------
China                                    CN         NULL       NULL                                     NULL                          
United Kingdom                           UK         UK         8204 Arthur St                           London                        
United Kingdom                           UK         UK         Magdalen Centre, The Oxford Science Park Oxford                        
United States of America                 US         US         2014 Jabberwocky Rd                      Southlake                     
United States of America                 US         US         2011 Interiors Blvd                      South San Francisco           
United States of America                 US         US         2004 Charade Rd                          Seattle                       

(6 rows affected)

Configure an ODBC DSN by editing the user’s DSN file:

$ vi ~/.odbc.ini
...
[mymssqlserverdb]
Driver = ODBC Driver 17 for SQL Server
Server = localhost
#Port = 1433
User = SA
Password = admin2021!
Database = sampledb
Language = us_english
NeedODBCTypesOnly = 1

Check the DSN:

$ odbcinst -q -s
...
[mymssqlserverdb]

$ odbcinst -q -s -n mymssqlserverdb
[mymssqlserverdb]
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.8.so.1.1
Server=localhost
Port=
Database=sampledb
User=SA
Password=admin2021!
Language=us_english
NeedODBCTypesOnly=1

Try a connection to the mssql db via ODBC using the ODBC Driver Manager test tool, isql:

$ isql -v mymssqlserverdb SA admin2021!
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| country_name                            | country_id| country_id| street_address                          | city          |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| China                                   | CN        |           |                                         |               |
| United Kingdom                          | UK        | UK        | 8204 Arthur St                          | London        |
| United Kingdom                          | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford        |
| United States of America                | US        | US        | 2014 Jabberwocky Rd                     | Southlake     |
| United States of America                | US        | US        | 2011 Interiors Blvd                     | South San Francisco           |
| United States of America                | US        | US        | 2004 Charade Rd                         | Seattle       |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
SQLRowCount returns 6
6 rows fetched

Test the DSN with the python module pyodbc:

$ python3
import pyodbc 
cnxn = pyodbc.connect('DSN=mymssqlserverdb;UID=SA;PWD=admin2021!')
cursor = cnxn.cursor()	
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('China', 'CN', None, None, None)
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
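As an aside, pyodbc also supports parameter markers, which avoids embedding the literals in the statement; the same query could be written as in this sketch:

# same query with "qmark" parameter markers instead of string literals
sql = """SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city
FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id
WHERE c.country_id IN (?, ?, ?)"""
for row in cursor.execute(sql, "US", "UK", "CN"):
    print(row)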

Microsoft SQL Server is now fully accessible from any ODBC application under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
MariaDB
PostgreSQL
Oracle
MongoDB
Excel

The article Installing the ODBC drivers for Microsoft SQLServer for Linux appeared first on Blog dbi services.

Installing the ODBC drivers for HSQLDB

Thu, 2022-01-13 10:48

This article is part of a series that includes SQLite, PostgreSQL, Firebird, Oracle RDBMS, Microsoft SQL Server, MariaDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here (article to be completed). Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a Debian v11 (bullseye).
As user debian, follow the documentation here and here to get and install hsqldb for Linux. Here are the needed steps:

$ export workdir=~/odbc4gawk
$ mkdir $workdir
$ cd $workdir
$ wget https://sourceforge.net/projects/hsqldb/files/hsqldb/hsqldb_2_6/hsqldb-2.6.0.zip/download
$ mv download hsqldb-2.6.0.zip
$ unzip hsqldb-2.6.0.zip

As root, install a system-wide JDK from the official packages repository:

# apt install openjdk-17-jdk
# java -version
openjdk version "17-ea" 2021-09-14
OpenJDK Runtime Environment (build 17-ea+19-Debian-1)
OpenJDK 64-Bit Server VM (build 17-ea+19-Debian-1, mixed mode, sharing)

No special action is needed for the ODBC drivers because hsqldb, starting with v2.0, can use the PostgreSQL ones, a clever decision; why reinvent the wheel when a few modifications are enough to make HSQLDB compatible with mainstream ODBC drivers? Moreover, the choice of those drivers makes sense as PostgreSQL is here to stay. If needed, please refer to the article Installing the ODBC drivers for PostgreSQL for step-by-step instructions. Later, we will only have to create a DSN for the hsqldb database.
As debian, start the database server process and send it to the background:

$ cd $workdir/hsqldb-2.6.0/hsqldb/lib
$ java -cp ../lib/hsqldb.jar org.hsqldb.server.Server --database.0 file:sampledb --dbname.0 xdb &

Connect to the local hsqldb service using the hsqldb’s provided administration application:

$ java -jar ../lib/sqltool.jar --inlineRc=url=jdbc:hsqldb:localhost/runtime,user=sa

Note that the tool uses JDBC for the connection, the java counterpart to ODBC. From that tool, create the tables for hsqldb using the Oracle script here; save it into the text file create_tables_hsqldb.sql, edit it and change occurrences of NUMBER to INT and occurrences of VARCHAR2 to VARCHAR using your favorite text editor. Finally, execute it:

\i /home/debian/odbc4gawk/create_tables_hsqldb.sql
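As an aside, the NUMBER/VARCHAR2 edit can also be scripted instead of using a text editor; a minimal Python sketch performing the same blanket substitution (with the same limitations as doing it by hand):

# rewrite the Oracle DDL for HSQLDB: NUMBER -> INT, VARCHAR2 -> VARCHAR
import re

with open("create_tables_hsqldb.sql") as f:
    ddl = f.read()
ddl = re.sub(r"\bVARCHAR2\b", "VARCHAR", ddl)
ddl = re.sub(r"\bNUMBER\b", "INT", ddl)
with open("create_tables_hsqldb.sql", "w") as f:
    f.write(ddl)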

Populate the tables for hsqldb using the Oracle script here; save it into the text file populate_tables_hsqldb.sql and execute it unchanged.

\i /home/debian/odbc4gawk/populate_tables_hsqldb.sql

Check the data:

sql> SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
Output:
COUNTRY_NAME              COUNTRY_ID  COUNTRY_ID  STREET_ADDRESS                            CITY
------------------------  ----------  ----------  ----------------------------------------  -------------------
China                     CN                      [null]                                    [null]
United Kingdom            UK          UK          8204 Arthur St                            London
United Kingdom            UK          UK          Magdalen Centre, The Oxford Science Park  Oxford
United States of America  US          US          2014 Jabberwocky Rd                       Southlake
United States of America  US          US          2011 Interiors Blvd                       South San Francisco
United States of America  US          US          2004 Charade Rd                           Seattle

Fetched 6 rows.

The data are OK.
Configure an ODBC DSN by editing the user’s DSN file:

$ vi ~/.odbc.ini
...
[myhsqldb]
Description=My Postgresql sample database
Driver=PostgreSQL Unicode
Database=sampledb

Through the Driver entry, we have specified that the PostgreSQL ODBC driver is to be used.
Check the DSN:

$ odbcinst -q -s
...
[myhsqldb]

$ odbcinst -q -s -n myhsqldb
...
[myhsqldb]
Description=My Postgresql sample database
Driver=PostgreSQL Unicode
Database=sampledb

Try a connection to the hsqldb database via ODBC using the ODBC Driver Manager’s test tool, isql:

isql -v myhsqldb
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| country_name                            | country_id| country_id| street_address                          | city          |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| United States of America                | US        | US        | 2014 Jabberwocky Rd                     | Southlake     |
| United States of America                | US        | US        | 2011 Interiors Blvd                     | South San Francisco           |
| United States of America                | US        | US        | 2004 Charade Rd                         | Seattle       |
| United Kingdom                          | UK        | UK        | 8204 Arthur St                          | London        |
| United Kingdom                          | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford        |
| China                                   | CN        |           |                                         |               |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
SQLRowCount returns 6
6 rows fetched

It looks good. Now test the DSN from a python script using the pyodbc module:

$ python3
import pyodbc 
cnxn = pyodbc.connect(DSN='myhsqldb')
cursor = cnxn.cursor()	
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('China', 'CN', None, None, None)

Everything is OK. hsqldb is now fully accessible from ODBC under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB
Excel

The article Installing the ODBC drivers for HSQLDB appeared first on Blog dbi services.

Installing the ODBC drivers for Oracle RDBMS

Thu, 2022-01-13 10:48

This article is part of a series that includes SQLite, PostgreSQL, Firebird, MongoDB, Microsoft SQL Server, HSQLDB, Excel, and MariaDB. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here (article to be completed). Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a Debian v11 (bullseye).
As we already have a running remote Oracle instance, we don’t need to set one up; only the ODBC drivers need to be installed.
As user debian, get and install the ODBC drivers for Oracle (an account is needed), cf here.

$ id
uid=1000(debian) gid=1000(debian) groups=1000(debian)
$ cd ~/Downloads
$ wget https://download.oracle.com/otn_software/linux/instantclient/213000/instantclient-basiclite-linux.x64-21.3.0.0.0.zip
$ wget https://download.oracle.com/otn_software/linux/instantclient/213000/instantclient-sqlplus-linux.x64-21.3.0.0.0.zip
$ wget https://download.oracle.com/otn_software/linux/instantclient/213000/instantclient-odbc-linux.x64-21.3.0.0.0.zip
$ export workdir=~/odbc4gawk
$ mkdir $workdir
$ unzip instantclient-basiclite-linux.x64-21.3.0.0.0.zip -d ${workdir}/.
$ unzip instantclient-sqlplus-linux.x64-21.3.0.0.0.zip -d ${workdir}/.
$ unzip instantclient-odbc-linux.x64-21.3.0.0.0.zip -d ${workdir}/.

The instant client software is required; we also download and install Oracle’s native command-line administration tool sqlplus to populate the test schema.
Follow the installation instructions here.
Add the Instant Client path to the shared library path:

$ vi ~/.bashrc
export LD_LIBRARY_PATH=/home/debian/odbc4gawk/instantclient_21_3:$LD_LIBRARY_PATH

As there is a bug in the Oracle script odbc_update_ini.sh, a work-around is provided here.

$ cd $workdir/instantclient_21_3
$ mkdir etc
$ cp /etc/odbcinst.ini etc/.
$ cp ~/.odbc.ini etc/odbc.ini

Run the configuration script now:

$ ./odbc_update_ini.sh .

Oracle has updated the local copy of the odbcinst.ini file. Let’s copy it to /etc to make the changes system-wide:

$ sudo cp etc/odbcinst.ini /etc/.

Check the ODBC driver file /etc/odbcinst.ini:

$ odbcinst -q -d
...
[Oracle 21 ODBC driver]

Correct.
See here for further configuration of the ODBC driver.
Let’s test the connection to the remote db via the instant client:

$ cd ${workdir}/instantclient_21_3
$ ./sqlplus scott/tiger@'(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = db)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))'

SQL*Plus: Release 21.0.0.0.0 - Production on Mon Oct 18 19:34:07 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Last Successful login time: Sat Oct 16 2021 01:17:00 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> 

The remote database is available and reachable natively.
Still in the sqlplus session, let’s populate the schema scott with the sample data. As the sample’s date values assume a different format, let’s switch to it in the Oracle session and avoid formatting errors:

SQL> alter session set nls_date_format = 'yyyy-mm-dd';

Let’s download the table creation script from here and the data populating script from here:

wget https://www.sqltutorial.org/wp-content/uploads/2020/04/oracle.txt --output-document=oracle.sql
wget https://www.sqltutorial.org/wp-content/uploads/2020/04/oracle-data.txt --output-document=oracle-data.sql

Create the tables and load the data now:

@oracle
@oracle-data

-- test the query:
set pagesize 10000
set linesize 200
set tab off
col country_name format a25
col STREET_ADDRESS format a30
col city format a20
SQL> SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');

COUNTRY_NAME              CO CO STREET_ADDRESS                 CITY
------------------------- -- -- ------------------------------ --------------------
United States of America  US US 2014 Jabberwocky Rd            Southlake
United States of America  US US 2011 Interiors Blvd            South San Francisco
United States of America  US US 2004 Charade Rd                Seattle
United Kingdom            UK UK 8204 Arthur St                 London
United Kingdom            UK UK Magdalen Centre, The Oxford Sc Oxford
                                ience Park

China                     CN

6 rows selected.
SQL> quit

The test data are ready.
Let’s edit debian’s ODBC DSN definitions and add the settings below:

$ vi ~/.odbc.ini
...
[OracleODBC-21]
...
Driver=Oracle 21 ODBC driver
DSN=OracleODBC-21
...
ServerName=(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = db)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))
...
UserID=scott
Password=tiger

Check the DSN:

$ odbcinst -q -s -n "OracleODBC-21" 
[OracleODBC-21]
AggregateSQLType=FLOAT
Application Attributes=T
Attributes=W
BatchAutocommitMode=IfAllSuccessful
...
ServerName=(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = db)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl)))
...
UserID=scott
Password=tiger

Test it using the ODBC Driver Manager test tool, isql:

$ isql -v OracleODBC-21
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN')
Output:
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| COUNTRY_NAME                            | COUNTRY_ID| COUNTRY_ID| STREET_ADDRESS                          | CITY          |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| United States of America                | US        | US        | 2014 Jabberwocky Rd                     | Southlake     |
| United States of America                | US        | US        | 2011 Interiors Blvd                     | South San Francisco           |
| United States of America                | US        | US        | 2004 Charade Rd                         | Seattle       |
| United Kingdom                          | UK        | UK        | 8204 Arthur St                          | London        |
| United Kingdom                          | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford        |
| China                                   | CN        |           |                                         |               |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
SQLRowCount returns -1
6 rows fetched

The ODBC connection is OK. Test the DSN with the python ODBC module pyodbc:

$ python3
import pyodbc 
cnxn = pyodbc.connect('DSN=OracleODBC-21')
cursor = cnxn.cursor()	
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('China', 'CN', None, None, None)
>>> 

Oracle is now fully accessible via ODBC under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
MariaDB
PostgreSQL
MongoDB
Microsoft SQLServer for Linux
Excel

The article Installing the ODBC drivers for Oracle RDBMS appeared first on Blog dbi services.

Installing the ODBC drivers for MariaDB

Thu, 2022-01-13 10:42

This article is part of a series that includes SQLite, Postgresql, Firebird, Oracle RDBMS, Microsoft SQL Server, HSQLDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
The MariaDB ODBC drivers can be installed from the platform’s default package repositories. As the RDBMS is missing on this test environment, let’s install the MariaDB server (v10.5.11-1) too, along with the ODBC drivers (v3.1.9-1), as root:

# apt install mariadb-server
# apt install odbc-mariadb

A systemd service was set up and launched:

# systemctl status mariadb
● mariadb.service - MariaDB 10.5.11 database server
     Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2021-08-23 21:10:48 CEST; 5min ago
       Docs: man:mariadbd(8)
             https://mariadb.com/kb/en/library/systemd/
...

Check the ODBC driver file /etc/odbcinst.ini:

# odbcinst -q -d -n "MariaDB Unicode" 
[MariaDB Unicode]
Driver=libmaodbc.so
Description=MariaDB Connector/ODBC(Unicode)
Threading=0
UsageCount=1

Grant the necessary permissions to the test user debian using the provided native administration client, mysql:

# mysql
MariaDB [(none)]> GRANT CREATE, INSERT, SELECT, DELETE, UPDATE, DROP, ALTER ON *.* TO 'debian'@'localhost';
MariaDB [(none)]> exit;
Bye

As the user debian, create the sampledb database and its tables, and populate them using mysql again:

$ mysql --user=debian --password=debian
create database sampledb;
use sampledb;
-- create & populate the tables for mariadb by copying and pasting the statements from the above files;
-- test the data;
SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
Output:
+--------------------------+------------+------------+------------------------------------------+---------------------+
| country_name             | country_id | country_id | street_address                           | city                |
+--------------------------+------------+------------+------------------------------------------+---------------------+
| China                    | CN         | NULL       | NULL                                     | NULL                |
| United Kingdom           | UK         | UK         | 8204 Arthur St                           | London              |
| United Kingdom           | UK         | UK         | Magdalen Centre, The Oxford Science Park | Oxford              |
| United States of America | US         | US         | 2014 Jabberwocky Rd                      | Southlake           |
| United States of America | US         | US         | 2011 Interiors Blvd                      | South San Francisco |
| United States of America | US         | US         | 2004 Charade Rd                          | Seattle             |
+--------------------------+------------+------------+------------------------------------------+---------------------+
6 rows in set (0.001 sec)
\q

The data are OK. Configure ODBC by editing the user’s DSN file:

$ vi ~/.odbc.ini
[mysqldb]
Description=My mysql sample database
Driver=MariaDB Unicode
Database=sampledb

Check the DSN definition:

$ odbcinst -q -s -n mysqldb
[mysqldb]
Description=My mysql sample database
Driver=MariaDB Unicode
Database=sampledb
Socket=/var/run/mysqld/mysqld.sock

See the note at the end of this article for an explanation of the “Socket=…” line.
Test the DSN via isql:

$ isql mysqldb -v debian debian
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
+---------------------------+-----------+-----------+-----------------------------------------+---------------------+
| country_name              | country_id| country_id| street_address                          | city                |
+---------------------------+-----------+-----------+-----------------------------------------+---------------------+
| China                     | CN        |           |                                         |                     |
| United Kingdom            | UK        | UK        | 8204 Arthur St                          | London              |
| United Kingdom            | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford              |
| United States of America  | US        | US        | 2014 Jabberwocky Rd                     | Southlake           |
| United States of America  | US        | US        | 2011 Interiors Blvd                     | South San Francisco |
| United States of America  | US        | US        | 2004 Charade Rd                         | Seattle             |
+---------------------------+-----------+-----------+-----------------------------------------+---------------------+
SQLRowCount returns 6
6 rows fetched

The ODBC connection works fine. Test the DSN from a python script using the pyodbc module:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
import pyodbc 
cnxn = pyodbc.connect('DSN=mysqldb;UID=debian;PWD=debian')
cursor = cnxn.cursor()      
cursor.execute("""SELECT
...         c.country_name,
...         c.country_id,
...         l.country_id,
...         l.street_address,
...         l.city
... FROM
...         countries c
... LEFT JOIN locations l ON l.country_id = c.country_id
... WHERE
...         c.country_id IN ('US', 'UK', 'CN')""")

row = cursor.fetchone() 
while row:
...     print (row) 
...     row = cursor.fetchone()
... 
Output:
('China', 'CN', None, None, None)
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')

Everything is just fine.
Note: When the MariaDB database is restarted, an error message such as the one below from isql is displayed when connecting to a database as a non-root user:

$ isql -v mysqldb debian debian
[S1000][unixODBC][ma-3.1.7]Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
[ISQL]ERROR: Could not SQLConnect

or from pyodbc:

pyodbc.Error: ('HY000', "[HY000] [ma-3.1.7]Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) (2002) (SQLDriverConnect)")

If this happens, just make sure the line Socket=/var/run/mysqld/mysqld.sock is present in ~/.odbc.ini. Alternatively, though less convenient because it has to be redone each time the database is restarted, create the symlink below as root:

# ln -s /run/mysqld/mysqld.sock /tmp/mysql.sock
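
If editing the DSN file is not an option, the socket path can presumably also be passed in the connection string, since MariaDB Connector/ODBC understands the same Socket keyword there as in ~/.odbc.ini; a sketch with pyodbc:

import pyodbc

# assumption: the driver accepts the Socket keyword in the connection string
# just as it does in ~/.odbc.ini
cnxn = pyodbc.connect('DRIVER={MariaDB Unicode};Database=sampledb;'
                      'Socket=/var/run/mysqld/mysqld.sock;UID=debian;PWD=debian')
print(cnxn.cursor().execute('select count(*) from regions').fetchval())
cnxn.close()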

MariaDB is now fully accessible from any ODBC application under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
Firebird
HSQLDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB
Excel

The article Installing the ODBC drivers for MariaDB appeared first on Blog dbi services.

Installing the ODBC drivers for Firebird

Thu, 2022-01-13 10:31

This article is part of a series that includes SQLite, Postgresql, Microsoft SQLServer, Oracle RDBMS, HSQLDB, MariaDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
As root, install the Firebird RDBMS from the official repositories (the ODBC drivers will be installed separately below):

# apt install firebird3.0-server

A systemd service was set up and launched:

# systemctl status firebird3.0.service
● firebird3.0.service - Firebird Database Server ( SuperServer )
     Loaded: loaded (/lib/systemd/system/firebird3.0.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2021-10-18 15:00:25 CEST; 50min ago
    Process: 546 ExecStart=/usr/sbin/fbguard -daemon -forever (code=exited, status=0/SUCCESS)
   Main PID: 576 (fbguard)
      Tasks: 4 (limit: 4659)
     Memory: 10.1M
        CPU: 75ms
     CGroup: /system.slice/firebird3.0.service
             ├─576 /usr/sbin/fbguard -daemon -forever
             └─577 /usr/sbin/firebird

Oct 18 15:00:23 debian systemd[1]: Starting Firebird Database Server ( SuperServer )...
Oct 18 15:00:25 debian systemd[1]: Started Firebird Database Server ( SuperServer ).

The service runs as the newly created firebird account:

# ps -ef | grep firebird
firebird   28053       1  0 15:15 ?        00:00:00 /usr/sbin/fbguard -daemon -forever
firebird   28054   28053  0 15:15 ?        00:00:00 /usr/sbin/firebird

Create the symlink below so that the ODBC driver can find the Firebird client library under its legacy InterBase name, libgds.so:

# ln -s /usr/lib/x86_64-linux-gnu/libfbclient.so.3.0.7 /usr/lib/x86_64-linux-gnu/libgds.so

Get and install the binary ODBC drivers for Firebird:

# wget https://sourceforge.net/projects/firebird/files/firebird-ODBC-driver/2.0.5-Release/OdbcFb-LIB-2.0.5.156.amd64.gz/download
# gunzip OdbcFb-LIB-2.0.5.156.amd64.gz
# tar xvf OdbcFb-LIB-2.0.5.156.amd64
# cp libOdbcFb.so /usr/lib/x86_64-linux-gnu/odbc/.

Edit the odbcinst.ini file:

# vi /etc/odbcinst.ini
...
[Firebird]
Description=InterBase/Firebird ODBC
Driver=/usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
Threading=1
FileUsage=1
CPTimeout=
CPReuse=

Check the ODBC driver file /etc/odbcinst.ini:

# odbcinst -q -d
...
[Firebird]

# odbcinst -q -d -n Firebird
[Firebird]
Description=InterBase/Firebird ODBC
Driver=/usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
Setup=/usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so
Threading=1
FileUsage=1
CPTimeout=
CPReuse=

As debian, create the sampledb database using the native administrative tool isql-fb:

$ cd $workdir
$ mkdir firebird
$ cd firebird
$ isql-fb -user SYSDBA -password 'SYSDBA'

Use CONNECT or CREATE DATABASE to specify a database

SQL> create database "/home/debian/odbc4gawk/firebird/sampledb.fb" page_size 8192 user SYSDBA password 'SYSDBA';
SQL> commit;
SQL> quit;

As there are no sample scripts specifically for Firebird, use the PostgreSQL ones to create the tables, but first replace the “SERIAL” keyword in the “CREATE TABLE” statements with the equivalent Firebird syntax “INT GENERATED BY DEFAULT AS IDENTITY”. Use your favorite text editor or even sed for this. Save the file as create_tables_postgresql.sql. With the vi editor, this could be done as shown below:

$ vi create_tables_postgresql.sql
:1,$s/ SERIAL / INT GENERATED BY DEFAULT AS IDENTITY /g
:w create_tables_firebird.sql
:q
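
The same substitution can also be scripted; here is a minimal Python sketch equivalent to the vi commands above, assuming the same file names:

# replicate the :1,$s/ SERIAL / INT GENERATED BY DEFAULT AS IDENTITY /g
# substitution from the vi session above
with open('create_tables_postgresql.sql') as src:
    sql = src.read()
with open('create_tables_firebird.sql', 'w') as dst:
    dst.write(sql.replace(' SERIAL ', ' INT GENERATED BY DEFAULT AS IDENTITY '))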

Launch again isql-fb with a connection to the newly created database and copy/paste the statements from create_tables_firebird.sql as they are just a few of them:

$ isql-fb /home/debian/odbc4gawk/firebird/sampledb.fb -u sysdba -p sysdba
Database: /home/debian/odbc4gawk/firebird/sampledb.fb, User: SYSDBA
SQL> show tables;

There are no tables in this database
Paste the create table statements here.
Also, copy/paste the statements from here and append a final commit; to confirm the INSERT. Finally, check the data:

SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
Output:
COUNTRY_NAME                             COUNTRY_ID COUNTRY_ID STREET_ADDRESS                           CITY                           
======================================== ========== ========== ======================================== =====================
China                                    CN                                                                          
United Kingdom                           UK         UK         8204 Arthur St                           London                         
United Kingdom                           UK         UK         Magdalen Centre, The Oxford Science Park Oxford                         
United States of America                 US         US         2014 Jabberwocky Rd                      Southlake                      
United States of America                 US         US         2011 Interiors Blvd                      South San Francisco            
United States of America                 US         US         2004 Charade Rd                          Seattle                        
SQL> 

Configure an ODBC DSN by editing the user’s DSN file:

$ vi ~/.odbc.ini
...
[myfbdb]
Description=Firebird
Driver=Firebird
Dbname=/home/debian/odbc4gawk/firebird/sampledb.fb
User=SYSDBA
Password=SYSDBA
Role=
CharacterSet=UTF8
ReadOnly=No
NoWait=No

Check it:

$ odbcinst -q -s
...
[myfbdb]
...

$ odbcinst -q -s -n myfbdb
[myfbdb]
Description=Firebird
Driver=Firebird
Dbname=/home/debian/odbc4gawk/firebird/sampledb.fb
User=SYSDBA
Password=SYSDBA
Role=
CharacterSet=UTF8
ReadOnly=No
NoWait=No

Try a connection to the firebird db via ODBC:

$ isql -v myfbdb sysdba sysdba
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN');
Output:
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| country_name                            | country_id| country_id| street_address                          | city          |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
| China                                   | CN        |           |                                         |               |
| United Kingdom                          | UK        | UK        | 8204 Arthur St                          | London        |
| United Kingdom                          | UK        | UK        | Magdalen Centre, The Oxford Science Park| Oxford        |
| United States of America                | US        | US        | 2014 Jabberwocky Rd                     | Southlake     |
| United States of America                | US        | US        | 2011 Interiors Blvd                     | South San Francisco           |
| United States of America                | US        | US        | 2004 Charade Rd                         | Seattle       |
+-----------------------------------------+-----------+-----------+-----------------------------------------+---------------+
SQLRowCount returns 6
6 rows fetched

Test the DSN with pyodbc:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc 
>>> con = pyodbc.connect('DSN=myfbdb;UID=SYSDBA;PWD=SYSDBA;DBNAME=sampledb;')
Segmentation fault

Although isql has no problem with them, it looks like the official Firebird ODBC drivers don’t work well with pyodbc. The same behavior was noticed later with the gawk extension (see the article here …). For this reason, we tried the commercial drivers from Devart, available here. After installing them by following the clear and simple instructions and creating the DSN myfbdb_devart with those drivers, pyodbc worked fine.
After installation of those drivers, check the DSN definition:

$ odbcinst -q -s -n myfbdb_devart
[myfbdb_devart]
Description=Firebird
Driver=Devart ODBC Driver for Firebird
Database=/media/sf_customers/dbi/odbc4gawk/sampledb.fb
Port=3050
User=SYSDBA
Password=SYSDBA
Role=
CharacterSet=UTF8
ReadOnly=No
NoWait=No

And the drivers:

$ odbcinst -q -d -n "Devart ODBC Driver for Firebird"
[Devart ODBC Driver for Firebird]
Driver=/usr/local/lib/libdevartodbcfirebird.so

Retry the pyodbc module:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
import pyodbc 
cnxn = pyodbc.connect('DSN=myfbdb_devart;UID=SYSDBA;PWD=SYSDBA;DBNAME=sampledb;')
cursor = cnxn.cursor()      
cursor.execute("""SELECT
...         c.country_name,
...         c.country_id,
...         l.country_id,
...         l.street_address,
...         l.city
... FROM
...         countries c
... LEFT JOIN locations l ON l.country_id = c.country_id
... WHERE
...         c.country_id IN ('US', 'UK', 'CN')""")

row = cursor.fetchone() 
while row:
...     print (row) 
...     row = cursor.fetchone()
... 
Output:
('China', 'CN', None, None, None)
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')

The Devart drivers work fine. However, let’s try to recompile the official ODBC drivers for Firebird from their source code, available here. To compile them, follow these steps:

# apt install unixodbc-dev
# apt install firebird-dev

Once downloaded and untarred, move into the directory OdbcJdbc/Builds/Gcc.lin and compile the drivers:

$ export FBINCDIR=/usr/include/firebird
$ export FBLIBDIR=/usr/lib/x86_64-linux-gnu
$ make -e -d -f makefile.linux

The compiled libraries are put in OdbcJdbc/Builds/Gcc.lin/Release_x86_64. As root, install the shared library libOdbcFb.so in its usual sub-directory and make it readable for everyone:

# cp OdbcJdbc/Builds/Gcc.lin/Release_x86_64/libOdbcFb.so /usr/lib/x86_64-linux-gnu/odbc/.
# chmod a+r /usr/lib/x86_64-linux-gnu/odbc/libOdbcFb.so

With those recompiled drivers, isql still works and pyodbc still fails, but only while executing the SELECT statement. As shown later, the gawk extension has no problem anymore: it works with the Devart drivers as well as with the recompiled ones; thus, we still have the option of free drivers.
pyodbc aside, Firebird is now fully accessible from any ODBC application under the debian account.
Instructions for the other data sources can be accessed through the following links:
SQLite
HSQLDB
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB
Excel

The article Installing the ODBC drivers for Firebird appeared first on Blog dbi services.

Installing the ODBC drivers for PostgreSQL

Thu, 2022-01-13 10:22

This article is part of a series that includes SQLite, MariaDB, Firebird, Oracle RDBMS, Microsoft SQL Server, HSQLDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Refer to SQLite for installing the required ODBC Driver Manager.
The test system is a debian v11 (bullseye).
The postgresql drivers can be installed from the platform’s default package repositories using the following steps as root. As the postgresql service was missing on this test environment, let’s install it too:

# apt install postgresql

A systemd service has been created and started, check it:

# systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
     Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
     Active: active (exited) since Mon 2021-08-23 19:04:20 CEST; 1min 15s ago
   Main PID: 10528 (code=exited, status=0/SUCCESS)
      Tasks: 0 (limit: 4659)
     Memory: 0B
        CPU: 0
     CGroup: /system.slice/postgresql.service

Check its processes:

# ps -ef | grep postgres
postgres     682       1  0 Oct16 ?        00:00:02 /usr/lib/postgresql/13/bin/postgres -D /var/lib/postgresql/13/main -c config_file=/etc/postgresql/13/main/postgresql.conf
postgres     807     682  0 Oct16 ?        00:00:00 postgres: 13/main: checkpointer 
postgres     808     682  0 Oct16 ?        00:00:04 postgres: 13/main: background writer 
postgres     809     682  0 Oct16 ?        00:00:04 postgres: 13/main: walwriter 
postgres     810     682  0 Oct16 ?        00:00:01 postgres: 13/main: autovacuum launcher 
postgres     811     682  0 Oct16 ?        00:00:01 postgres: 13/main: stats collector 
postgres     812     682  0 Oct16 ?        00:00:00 postgres: 13/main: logical replication launcher

The processes run under the postgres account.
Install the ODBC drivers:

# apt install odbc-postgresql

Check this step:

# odbcinst -q -d
...
[PostgreSQL ANSI]
[PostgreSQL Unicode]

Two drivers have been installed, one for the ANSI character encoding and one for Unicode; check the Unicode one:

# odbcinst -q -d -n 'PostgreSQL Unicode'
[PostgreSQL Unicode]
Description=PostgreSQL ODBC driver (Unicode version)
Driver=psqlodbcw.so
Setup=libodbcpsqlS.so
Debug=0
CommLog=1
UsageCount=1

Verify that the default native administration tool, psql, is there:

# psql -V
psql (PostgreSQL) 13.3 (Debian 13.3-1)

Switch to the postgres account and create the database:

# su - postgres
$ createdb sampledb

Launch psql and create the tables for postgresql using the scripts here:
As those are small files, copying and pasting their content into psql will do just fine.
Also, populate the tables for postgresql using the data here and test:

$ psql sampledb
sampledb=# SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
Output:
       country_name       | country_id | country_id |               street_address               |        city         
--------------------------+------------+------------+--------------------------------------------+---------------------
 United States of America | US         | US         | 2014 Jabberwocky Rd                        | Southlake
 United States of America | US         | US         | 2011 Interiors Blvd                        | South San Francisco
 United States of America | US         | US         | 2004 Charade Rd                            | Seattle
 United Kingdom           | UK         | UK         | 8204 Arthur St                             | London
 United Kingdom           | UK         | UK         | Magdalen Centre, The Oxford Science Park   | Oxford
 China                    | CN         |            |                                            | 
(6 rows)

To test ODBC, first edit ~/.odbc.ini and add the postgresql database details:

$ vi ~/.odbc.ini 
[mypostgresqldb]
Description=My Postgresql sample database
Driver=PostgreSQL Unicode
Database=sampledb

Verify that the edit was successful:

$ odbcinst -q -s -n mypostgresqldb
[mypostgresqldb]
Description=My Postgresql sample database
Driver=PostgreSQL Unicode
Database=sampledb

Next, launch isql against that database:

$ export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/odbc:$LD_LIBRARY_PATH
$ isql -v mypostgresqldb
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> help
+------------------+-------------+-------------+------------+---------+
| TABLE_QUALIFIER  | TABLE_OWNER | TABLE_NAME  | TABLE_TYPE | REMARKS |
+------------------+-------------+-------------+------------+---------+
| sampledb         | public      | countries   | TABLE      |         |
| sampledb         | public      | departments | TABLE      |         |
| sampledb         | public      | dependents  | TABLE      |         |
| sampledb         | public      | employees   | TABLE      |         |
| sampledb         | public      | jobs        | TABLE      |         |
| sampledb         | public      | locations   | TABLE      |         |
| sampledb         | public      | regions     | TABLE      |         |
+------------------+-------------+-------------+------------+---------+
SQLRowCount returns 7
7 rows fetched
SQL> quit

The ODBC connectivity to postgresql is confirmed for the postgres account; this is fine for that user, but we want the database to be usable by the debian test account too. To this effect, run the following statements from within psql:

sampledb=# CREATE ROLE debian superuser;
CREATE ROLE
sampledb=# GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO debian;
GRANT
sampledb=# ALTER ROLE debian with login;
ALTER ROLE
sampledb=# \q

Back as root, become debian and test the accessibility of the sampledb:

# su - debian
$ psql sampledb
sampledb=# select count(*) from employees;
 count 
-------
    40
(1 row)

Check with isql:

$ isql -v mypostgresqldb
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN')
+---------------------------+-------------+-----------+-----------------------+----------------------+
| country_name              | country_id  | country_id| street_address        | city                 |
+---------------------------+-------------+-----------+-----------------------+----------------------+
| United States of America  | US          | US        | 2014 Jabberwocky Rd   | Southlake            |
| United States of America  | US          | US        | 2011 Interiors Blvd   | South San Francisco  |
| United States of America  | US          | US        | 2004 Charade Rd       | Seattle              |
| United Kingdom            | UK          | UK        | 8204 Arthur St        | London               |
| United Kingdom            | UK          | UK        | Magdalen Centre, The Oxford Science Park| Oxford               |
| China                     | CN          |           |                       |                      |
+---------------------------+-------------+-----------+-----------------------+----------------------+ 
SQLRowCount returns 6
6 rows fetched

Note that the order of the result set may differ in other data sources; the SQL standard does not define the result set’s order and the ORDER BY clause should be used to enforce one if needed.
Finally, let’s verify the ODBC connectivity through pyodbc:

$ python3
Python 3.9.2 (default, Feb 28 2021, 17:03:44) 
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.

import pyodbc

# connect directly using the DRIVER definition, no DSN;
cnxn = pyodbc.connect('DRIVER={PostgreSQL Unicode};Direct=True;Database=sampledb;String Types= Unicode')

# using the DSN is OK too:
# cnxn = pyodbc.connect('DSN=mypostgresqldb')
cursor = cnxn.cursor()      
cursor.execute("""SELECT
...         c.country_name,
...         c.country_id,
...         l.country_id,
...         l.street_address,
...         l.city
... FROM
...         countries c
... LEFT JOIN locations l ON l.country_id = c.country_id
... WHERE
...         c.country_id IN ('US', 'UK', 'CN')""") 

row = cursor.fetchone() 
while row:
...     print (row) 
...     row = cursor.fetchone()
... 
Output:
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('China', 'CN', None, None, None)

Note that we used the PostgreSQL Unicode driver, not the ANSI one because the latter gives the error below:

pyodbc.Error: ('07006', '[07006] Received an unsupported type from Postgres. (14) (SQLGetData)')

PostgreSQL databases can now be used locally by any ODBC client application, e.g. any python program with the pyodbc module, or a desktop application such as LibreOffice. In the case of python, many native modules for PostgreSQL are available, but they require ad hoc function calls, whereas ODBC lets one use the same statements with any database target, which simplifies the maintenance of ODBC applications.
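
As a small illustration of that portability, the test query used throughout this series can be wrapped in a function that takes only the DSN as a parameter; the very same code then runs unchanged against any of the DSNs defined in this series (a sketch):

import pyodbc

QUERY = """SELECT c.country_name, c.country_id, l.country_id,
                  l.street_address, l.city
           FROM countries c
           LEFT JOIN locations l ON l.country_id = c.country_id
           WHERE c.country_id IN ('US', 'UK', 'CN')"""

def run_query(dsn):
    # identical SQL and identical calls; only the DSN differs per data source
    cnxn = pyodbc.connect('DSN=' + dsn)
    for row in cnxn.cursor().execute(QUERY):
        print(row)
    cnxn.close()

run_query('mypostgresqldb')   # the DSN defined in this article
run_query('mysqlitedb')       # the SQLite DSN from the companion article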
Instructions for the other data sources can be accessed through the following links:
SQLite
HSQLDB
MariaDB
Firebird
Oracle
Microsoft SQLServer for Linux
MongoDB
Excel

The article Installing the ODBC drivers for PostgreSQL appeared first on Blog dbi services.

Installing the ODBC Driver Manager with SQLite on Linux

Thu, 2022-01-13 10:14

This article is part of a series that includes Firebird, Postgresql, Microsoft SQLServer, Oracle RDBMS, HSQLDB, MariaDB, MongoDB, and Excel. The goal is to set up a self-standing environment for testing an ODBC extension for gawk presented here to be completed. Excepting the present part which deals with the ODBC Driver manager’s installation which is a prerequisite, each part can be used independently from the others.
The test system is a debian v11 (bullseye).

Installation of the ODBC driver Manager

There are 2 main implementations of ODBC for Linux: UnixODBC (http://www.unixodbc.org/) and iODBC (http://www.iodbc.org); we picked the former for no particular reason.
The driver manager can be installed as root through the standard package management tool in Debian:

root@debian:~# apt-get install libodbc1
root@debian:~# apt install unixodbc 

root@debian:~# odbcinst -j
unixODBC 2.3.6
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /root/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8

The system-wide file /etc/odbcinst.ini stores the list of installed ODBC drivers and is maintained by root. It is the default one, but its location can be changed and set in the environment variable $ODBCINST.
Let’s create a working folder in user debian’s home directory, our test account:

$ id
uid=1000(debian) gid=1000(debian) groups=1000(debian)
$ export workdir=~/odbc4gawk
$ mkdir $workdir
$ cd $workdir

As most of the ODBC drivers have their shared libraries installed in /usr/lib/x86_64-linux-gnu/odbc, this path must be added to the $LD_LIBRARY_PATH environment variable of any user that needs ODBC, preferably in the ~/.bashrc file (if bash is the selected shell):

export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/odbc:$LD_LIBRARY_PATH

We are now ready to install the ODBC drivers for each data source of interest.

Installation of the SQLite ODBC driver

SQLite, along with its administrative tool sqlite3, is installed by the native package manager as root:

# apt install sqlite3
# apt install libsqliteodbc

Each time an ODBC driver is installed, it updates the file /etc/odbcinst.ini with driver-dependent information, which can later be checked using the above command odbcinst, e.g.:
List the installed ODBC drivers:

$ odbcinst -q -d
[SQLite3]

Query a particular driver:

$ odbcinst -q -d -n SQLite3
[SQLite3]
Description=SQLite3 ODBC Driver
Driver=libsqlite3odbc.so
Setup=libsqlite3odbc.so
UsageCount=1

As user debian, edit the file ~/.odbc.ini and add the SQLite data source’s details we will use:

$ cd $workdir 
$ mkdir sqlite
$ cd sqlite
$ vi ~/.odbc.ini
[mysqlitedb]
Description=My SQLite sample database
Driver=SQLite3
Database=/home/debian/odbc4gawk/sqlite/sampledb

Let’s check it:
List all the DSN currently defined in ~/.odbc.ini

$ odbcinst -q -s
[mysqlitedb]

List our new DSN:

$ odbcinst -q -s -n mysqlitedb
[mysqlitedb]
Description=My SQLite sample database
Driver=SQLite3
Database=/home/debian/odbc4gawk/sqlite/sampledb

which is the information we just entered above.

The ~/.odbc.ini file allows one to hide a driver’s private settings so that they don’t need to be specified later. When connecting to a data source, the name specified here between square brackets, the DSN, is enough. It is also possible to specify all the parameters of this section in-line when connecting programmatically, but then the code will need to be edited (and maybe recompiled) in case one of them has to be changed, so using a DSN is preferable.
The default file location is the current user’s home directory, but it can be changed and its full path name set in the environment variable $ODBCINI.
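
To illustrate the difference, here is a small pyodbc sketch contrasting the DSN-based connection with the fully in-line one; both use the driver name and database path registered above:

import pyodbc

# 1) via the DSN: all settings come from ~/.odbc.ini
cnxn = pyodbc.connect('DSN=mysqlitedb')

# 2) in-line: every parameter is spelled out in the code and must be edited
#    here whenever it changes
cnxn2 = pyodbc.connect('DRIVER=SQLite3;'
                       'Database=/home/debian/odbc4gawk/sqlite/sampledb')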

Let’s test the connection via ODBC using the command isql included with the ODBC driver manager:

$ isql mysqlitedb -v
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+

The ODBC driver for SQLite does work.

To populate a database, start sqlite3:

$ /usr/bin/sqlite3
SQLite version 3.34.1 2021-01-20 14:10:07
Enter ".help" for usage hints.
Connected to a transient in-memory database.

Use “.open FILENAME” to reopen a persistent database.

sqlite> .open sampledb

The file sampledb has been created in the directory specified in ~/.odbc.ini.
Go to the following links, copy the SQL statements and paste them into sqlite3: table creation and insert data.
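
Alternatively, to avoid the manual pasting, the two scripts can be loaded with Python’s built-in sqlite3 module; a sketch, where the script file names are assumptions for the files saved from the links above:

import sqlite3

# the database path is the one registered in ~/.odbc.ini
con = sqlite3.connect('/home/debian/odbc4gawk/sqlite/sampledb')
# hypothetical names for the scripts downloaded from the links above
for script in ('create_tables.sql', 'insert_data.sql'):
    with open(script) as f:
        con.executescript(f.read())
con.commit()
con.close()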

Still in sqlite3, check that the database has been populated by running the test SQL query listed above:

sqlite> SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN');
Output:
China|CN|||
United Kingdom|UK|UK|8204 Arthur St|London
United Kingdom|UK|UK|Magdalen Centre, The Oxford Science Park|Oxford
United States of America|US|US|2014 Jabberwocky Rd|Southlake
United States of America|US|US|2011 Interiors Blvd|South San Francisco
United States of America|US|US|2004 Charade Rd|Seattle

Now, quit sqlite3 and re-run the same query from isql, the ODBC-based utility:

$ isql -v mysqlitedb
SQL> SELECT c.country_name, c.country_id, l.country_id, l.street_address, l.city FROM countries c LEFT JOIN locations l ON l.country_id = c.country_id WHERE c.country_id IN ('US', 'UK', 'CN')
+---------------------------+-------------+-----------+---------------------------------------------+----------------------+
| country_name              | country_id  | country_id| street_address                              | city                 |
+---------------------------+-------------+-----------+---------------------------------------------+----------------------+
| China                     | CN          |           |                                             |                      |
| United Kingdom            | UK          | UK        | 8204 Arthur St                              | London               |
| United Kingdom            | UK          | UK        | Magdalen Centre, The Oxford Science Park    | Oxford               |
| United States of America  | US          | US        | 2014 Jabberwocky Rd                         | Southlake            |
| United States of America  | US          | US        | 2011 Interiors Blvd                         | South San Francisco  |
| United States of America  | US          | US        | 2004 Charade Rd                             | Seattle              |
+---------------------------+-------------+-----------+---------------------------------------------+----------------------+
SQLRowCount returns 0
6 rows fetched

Note: the output has been shrunk a bit to reduce horizontal scrolling.

Everything looks good so far. Let’s now install the pyodbc ODBC module for python as root:

# apt install pip
# needed to compile pyodbc:
# apt install unixodbc-dev
# pip install pyodbc

As debian, execute the following python script:

import pyodbc 
cnxn = pyodbc.connect('DSN=mysqlitedb')
cursor = cnxn.cursor()  
cursor.execute("""SELECT
        c.country_name,
        c.country_id,
        l.country_id,
        l.street_address,
        l.city
FROM
        countries c
LEFT JOIN locations l ON l.country_id = c.country_id
WHERE
        c.country_id IN ('US', 'UK', 'CN')""")
row = cursor.fetchone() 
while row:
    print (row) 
    row = cursor.fetchone()
Output:
('China', 'CN', None, None, None)
('United Kingdom', 'UK', 'UK', '8204 Arthur St', 'London')
('United Kingdom', 'UK', 'UK', 'Magdalen Centre, The Oxford Science Park', 'Oxford')
('United States of America', 'US', 'US', '2014 Jabberwocky Rd', 'Southlake')
('United States of America', 'US', 'US', '2011 Interiors Blvd', 'South San Francisco')
('United States of America', 'US', 'US', '2004 Charade Rd', 'Seattle')

Everything works as expected.
We have completed the steps to access an SQLite database via ODBC from isql and from the python module pyodbc. It was quite straightforward with that RDBMS. Once the gawk interface is completed, it should return the same output.
Turn now to the links below for the other data sources:
Firebird
HSQLDB
MariaDB
PostgreSQL
Oracle
Microsoft SQLServer for Linux
MongoDB
Excel

The article Installing the ODBC Driver Manager with SQLite on Linux appeared first on Blog dbi services.

Oracle 21C Security : diagnostics_control and enable diagnostics

Wed, 2022-01-12 10:47

Some debug-events and debug-actions are not safe and should be exposed to users with caution. In previous releases, privilege control for the usage of these diagnostics was not sufficient.

With Oracle 21c, regular users can be blocked from using these diagnostics to better support separation of duty.

Indeed, Oracle 21c brings a new mechanism to control debug-events and debug-actions issued through ALTER SESSION and/or ALTER SYSTEM. This mechanism is implemented by two new features:
- the ENABLE DIAGNOSTICS system privilege
- the DIAGNOSTICS_CONTROL initialization parameter

Let’s see how these features work.
As DIAGNOSTICS_CONTROL defaults to IGNORE, the default behavior is that every user who has the ALTER SESSION privilege can perform diagnostic tasks without error:

SQL> show parameter diagnostics_control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostics_control                  string      IGNORE
SQL>

Let’s consider the user EDGE with the following privileges

SQL>  select privilege from dba_sys_privs where GRANTEE='EDGE';

PRIVILEGE
----------------------------------------
CREATE TABLE
ALTER SESSION
CREATE SESSION

SQL>

The user EDGE can then execute the following statement:

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> show user
USER is "EDGE"
SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

Session altered.

SQL>

Let’s now set the DIAGNOSTICS_CONTROL to WARNING

SQL> alter system set diagnostics_control=WARNING;

System altered.

SQL> show parameter diagnostics_control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostics_control                  string      WARNING
SQL>

And let’s retry the SET EVENTS action with the user EDGE

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> show user
USER is "EDGE"
SQL>


SQL> ALTER SESSION SET EVENTS '1357 trace name context forever, level 2';

Session altered.

SQL>

The action is executed, but we get an entry in the database alert log:

2022-01-12T12:56:22.136454+01:00
PDB1(3):User 'EDGE' has set the following debug-event(s) on the event-group 'session':
PDB1(3):'1357 trace name context forever, level 2'
PDB1(3):To disable such messages, refer to documentation about parameter 'diagnostics_control'.

Now let’s set DIAGNOSTICS_CONTROL to ERROR

SQL> alter system set diagnostics_control=ERROR;

System altered.

SQL> show parameter diagnostics_control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostics_control                  string      ERROR
SQL>

And let’s try again the SET EVENTS action with the user EDGE

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> show user
USER is "EDGE"
SQL> ALTER SESSION SET EVENTS '1357 trace name context forever, level 2';
ERROR:
ORA-01031: insufficient privileges
ORA-49100: Failed to process event statement [1357 trace name context forever,
level 2]

SQL>

We can see that the statement is not executed and that we get an error.

We can conclude that, using the parameter DIAGNOSTICS_CONTROL, we can control which users can perform diag events. The Oracle documentation describes the three settings as follows:

ERROR: If a user who does not have the SYSDBA or ENABLE DIAGNOSTICS privilege attempts to enable a diagnostic, then the attempt will fail and an ORA-01031: insufficient privileges error appears.
WARNING: A user who does not have the SYSDBA or ENABLE DIAGNOSTICS privilege will be able to enable a diagnostic, but a warning message is written to the alert log.
IGNORE: The user can perform the diagnostic task without any error messages appearing. This setting is the default.

But what if I want to set the parameter to ERROR and still allow some non-DBA users to perform diag events? Just grant them the new system privilege ENABLE DIAGNOSTICS.
Let’s do a test.

SQL>  select privilege from dba_sys_privs where GRANTEE='EDGE';

PRIVILEGE
----------------------------------------
ENABLE DIAGNOSTICS
CREATE TABLE
ALTER SESSION
CREATE SESSION

SQL> show parameter diagnostics_control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostics_control                  string      ERROR
SQL>

And now let’s do a diag event

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> show user
USER is "EDGE"

SQL> ALTER SESSION SET EVENTS '1357 trace name context forever, level 2';

Session altered.

SQL>

Now that EDGE has the privilege ENABLE DIAGNOSTICS, he can perform diag events even though DIAGNOSTICS_CONTROL is set to ERROR.

Conclusion

This new feature can be very useful to limit the users who can perform diag actions. More information can be found in the Oracle documentation.

The article Oracle 21C Security : diagnostics_control and enable diagnostics appeared first on Blog dbi services.

PostgreSQL Flexible Server on Azure, Status

Mon, 2022-01-10 10:41

This blog is about the current status of the development on Microsoft’s side of the Azure Flexible Server for PostgreSQL.
The Flexible Server was announced “GA ready” on November 30th, after shifting the GA date several times.

But there are still some open requirements which are “Post GA”.

1. AD integration => post GA, promised for Q2 2022.
2. Customer managed keys => post GA, promised for Q2 2022.

Given the roughly six-month delay before reaching GA, it is to be expected that the post-GA features will slip by more or less the same amount.

On Microsoft’s Windows-based Single Server offering for PostgreSQL, all development has stopped; only bugs will be fixed.
The latest major PostgreSQL release available on Single Server is PostgreSQL 11; there is no PostgreSQL 12, 13 or 14.

The Single Server has many disadvantages, starting with the default collation US-Western 1252 and not ending with md5-only password encryption.
On the Single Server we found out that, with Azure services enabled to connect, pg_hba.conf ends up with about 11000 entries covering approximately 2.9 million IP addresses of the Microsoft universe.

Reading out pg_hba.conf in SQL is blocked on the Flexible Server, so we can’t check this there; my expectation is that it looks the same.
In my mind, using the Flexible Server within a private VNET is mandatory; that way there is no possibility for the Azure services to connect.

The Single Server offering is based on big Windows hosts running many instances; reading out postgresql.conf with show all; reveals ports in the 20000 range in use.
To bring the connectivity down to the PostgreSQL default port 5432, Microsoft placed a gateway in front of each instance, and there we observed that now and then the SSL connectivity breaks at these gateways.

Restarting the instance fixes this issue, but it is not part of the monitoring, so no alarm is raised when it happens.

The Flexible Server is based on Ubuntu 18.04 right now; during the customer preview, Microsoft was using the outdated Ubuntu 16.04 several times; hopefully we will not see this in the GA state.

The md5 password encryption can be overruled with scram-sha-256, which came with PostgreSQL 10 and, with the community packages, has been the default since PostgreSQL 13:
Azure Flexible Server for PostgreSQL scram-sha-256 workaround

Microsoft offers Compute V4 for the Flexible Server, based on Intel Cascade Lake Xeon CPUs:
Azure Flexible Server for PostgreSQL Compute V4

The issue here is that Cascade Lake is officially not supported by Ubuntu 18.04:
Ubuntu 18.04 CPU Compatibility

The observation was a performance drop using Compute V4:

V3 without replication:

latency average = 8.343 ms
tps = 479.469320 (including connections establishing)
tps = 479.735705 (excluding connections establishing)

V4 without replication:

latency average = 11.734 ms
tps = 340.893181 (including connections establishing)
tps = 341.094725 (excluding connections establishing)

Compute V5 (Ice Lake) is supported by Ubuntu 18.04 but is so far not offered for the Flexible Server by Microsoft; hopefully it will come.
The main performance issue on the Flexible Server is slow storage; an upgrade like “Ultra SSD” is promised for 2022, so development is still ongoing.

Microsoft is using synchronous replication, with all the pros and cons of synchronous replication.
One of the cons is a loss of performance:
V3 without replication:

latency average = 8.343 ms
tps = 479.469320 (including connections establishing)
tps = 479.735705 (excluding connections establishing)

V3 with replication:

latency average = 13.797 ms
tps = 289.910220 (including connections establishing)
tps = 290.048492 (excluding connections establishing)

This performance drop is expected when using synchronous replication.
The other cons are that geo-redundant replication is not possible and that the replica cannot be used read-only.

Microsoft is going to Linux, which is good news; we were able to discuss the issues with the product group and they are still working on them.
Sometimes it takes very long to fix issues, maybe a trait of big organisations in the background, but progress is still visible.

For now my preferred solution is our own virtual machines; many more options are possible there regarding compute, storage and the OS used.
By using our own Linux image with an optimized tuned.conf, community packages and the options Microsoft offers for virtual machines, it is possible to build an environment that is much faster, allows using our own SSL keys and certificates and, if needed, geo-redundant replication.

The article PostgreSQL Flexible Server on Azure, Status appeared first on Blog dbi services.

opatch hangs while patching OEM 13.4. concerning the log4j-vulnerability

Wed, 2021-12-29 16:27
By Clemens Bleile

To address the log4j-vulnerability I recently had to fix the Oracle Enterprise Manager 13.4 installation for a customer.

REMARK: Please check the following link for details concerning the log4j-vulnerability and how Oracle is affected:

alert-cve-2021-44228

and the following MOS Notes:

– Impact of December 2021 Apache Log4j Vulnerabilities on Oracle Products and Services (CVE-2021-44228, CVE-2021-45046) (Doc ID 2827611.1)
– Impact of December 2021 Apache Log4j Vulnerabilities on Oracle on-premises products (CVE-2021-44228, CVE-2021-45046) (Doc ID 2830143.1)
– Security Alert For CVE-2021-44228,CVE-2021-45046 & CVE-2021-45105 Patch Availability Document for Oracle Enterprise Manager Cloud Control (Doc ID 2828296.1)

REMARK: The link and the MOS Notes were correct at the time of writing the Blog. But this may have changed in the meantime.

According to the latter MOS Note, the following has to be patched on OEM 13.4:

1. Patch WLS to OCT 2021: Patch 33412599

unzip p33412599_122130_Generic.zip
cd 33412599
emctl stop oms -all
opatch apply
emctl start oms

REMARK: Patch 33691226 (12.2.1.3) is not applicable to EM Cloud Control 13c (13.4). See MOS Note “Security Alert CVE-2021-44228 / CVE-2021-45046 Patch Availability Document for Oracle WebLogic Server and Fusion Middleware (Doc ID 2827793.1)”.

2. Apply Patch 33672721 on OMS Middleware HOME (DB Plugin Home Patch)

unzip p33672721_134100_Generic.zip
cd 33672721
emctl stop oms
export PATH=$ORACLE_HOME/OMSPatcher:$PATH
omspatcher apply
emctl start oms

During the patching of WLS with the OCT 2021 patch, the opatch utility did hang:

/home/oracle/cbleile/log4j_patch_for_OEM/33412599/ [oms13c] opatch apply
...
OPatch detects the Middleware Home as "/u01/app/oracle/middleware134"

Verifying environment and performing prerequisite checks...

Using

ps -ef | grep opatch

I could see a hanging fuser-command. The reason for the hang was a stale NFS-mount, because the NFS-server was down. The same would have happened with the second patch and omspatcher. To be able to continue I could force the unmount of the stale NFS-mount with the command

umount -f -l /mnt/orarepo

The hanging opatch-command could be Ctrl-C’ed and the patching repeated; opatch then went through.

REMARK: To avoid the problem in the future it’s advised to soft-mount nfs as e.g. documented in this Blog.

The whole problem with opatch and fuser has also been documented here. In that Blog an alternative workaround has been provided by using a fake-fuser-command through the property_file option of opatch.

Finally, everything could be patched successfully and I verified that everything is OK with the Enterprise Manager 13.4 installation:

/home/oracle/ [oms13c] opatch lspatches | grep 33412599
33412599;WLS PATCH SET UPDATE 12.2.1.3.210929
/home/oracle/ [oms13c] opatch lspatches | grep 33672721
33672721;Fix for bug 33672721

The article opatch hangs while patching OEM 13.4. concerning the log4j-vulnerability appeared first on Blog dbi services.

Script to calculate the network latency between the application and the Oracle DB-server

Wed, 2021-12-29 15:13
By Clemens Bleile

Network latency is crucial for applications, especially if lots of “row by row” processing (some call it “slow by slow” processing) is used by the application, or if lots of data is transported between the database and the application. If there’s lots of data to transport, increasing the fetch size may help to improve performance and throughput; but this is not the aim of this blog. What I wanted to provide is a simple script to measure the network latency (network round-trip time) when running SQL against your database.

In today’s environment with the Cloud and Standby-DBs, it may easily happen that the application server suddenly is far away from the database server. This causes network latency to go up considerably. But what is the network latency exactly between my application server and my database server?

There’s a very good blog from Frits Hoogland describing network latency and how you may measure it in detail.

What I wanted to achieve, however, was to provide a simpler method to measure the network latency. The objective was to install sqlcl on the application server, run a SQL statement causing around 5000 network round trips, and calculate the network latency as

(“Elapsed Time” – “DB Time”) / “network round trips” = Network Latency
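
As a quick worked example with made-up numbers: if the run takes 60 s of elapsed time, of which 10 s is DB time, over 5000 round trips, the latency is (60000 ms - 10000 ms) / 5000 = 10 ms per round trip. The same arithmetic as a trivial Python helper:

def network_latency_ms(elapsed_ms, db_time_ms, round_trips):
    # ("Elapsed Time" - "DB Time") / "network round trips"
    return (elapsed_ms - db_time_ms) / round_trips

print(network_latency_ms(60000, 10000, 5000))  # 10.0 with the made-up numbers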

Using sqlcl provides the advantage that I just need to have Java on the application server as a prerequisite.

So the first step is to install sqlcl by downloading it from here and unzip it on the Application Server:

REMARK: Download a current version of sqlcl, because I’m using “set feedback only” (to hide the query output), which is not available in older versions.

$ unzip sqlcl-21.4.0.348.1716.zip
$ alias sqlcl="$PWD/sqlcl/bin/sql"
$ sqlcl /nolog

SQLcl: Release 21.4 Production on Wed Dec 29 18:55:47 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

SQL> quit
$ 

Here the setup-script for my tests:

$ cat setup_test_netlat_random.sql

connect sys/password@clemens-oradb/pdb1 as sysdba

set echo on
drop user test_netlat cascade;
create user test_netlat identified by WElcome_netlat_01;
grant dba to test_netlat;
grant select on v_$mystat to test_netlat;
grant select on v_$statname to test_netlat;
grant select on v_$SESS_TIME_MODEL to test_netlat;

connect test_netlat/WElcome_netlat_01@clemens-oradb/pdb1

drop table tlat purge;
create table tlat (id number, filler varchar2(200));
exec dbms_random.seed(0);
insert into tlat select rownum,dbms_random.string('L',dbms_random.value(100,100)) from all_objects where rownum <= 5000;
commit;
exec dbms_stats.gather_table_stats(user,'TLAT');

create or replace function net_roundtrips return number as
nrt number;
begin
   select mystat.value into nrt 
   from v$mystat mystat, v$statname statname
   where mystat.statistic# = statname.statistic#
   and statname.display_name = 'SQL*Net roundtrips to/from client';
   return nrt;
end;
/

create or replace function my_db_time_microsecs return number as
mydbtime number;
begin
   select value into mydbtime
   from V$SESS_TIME_MODEL 
   where sid=SYS_CONTEXT( 'USERENV', 'SID' ) 
   and stat_name='DB time';
   return mydbtime;
end;
/

set echo off

Just run it after adjusting it to your needs:

$ sqlcl /nolog

SQLcl: Release 21.4 Production on Wed Dec 29 20:37:13 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

SQL> @setup_test_netlat_random.sql
Connected.
SQL> drop user test_netlat cascade;

User TEST_NETLAT dropped.

SQL> create user test_netlat identified by WElcome_netlat_01;

User TEST_NETLAT created.

SQL> grant dba to test_netlat;

Grant succeeded.

SQL> grant select on v_$mystat to test_netlat;

Grant succeeded.

SQL> grant select on v_$statname to test_netlat;

Grant succeeded.

SQL> grant select on v_$SESS_TIME_MODEL to test_netlat;

Grant succeeded.

SQL> 
SQL> connect test_netlat/WElcome_netlat_01@clemens-oradb/pdb1
Connected.
SQL> 
SQL> -- drop table tlat purge;
SQL> create table tlat (id number, filler varchar2(200));

Table TLAT created.

SQL> exec dbms_random.seed(0);

PL/SQL procedure successfully completed.

SQL> insert into tlat select rownum,dbms_random.string('L',dbms_random.value(100,100)) from all_objects where rownum <= 5000;

5,000 rows inserted.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(user,'TLAT');

PL/SQL procedure successfully completed.

SQL> 
SQL> create or replace function net_roundtrips return number as
  2  nrt number;
  3  begin
  4     select mystat.value into nrt 
  5     from v$mystat mystat, v$statname statname
  6     where mystat.statistic# = statname.statistic#
  7     and statname.display_name = 'SQL*Net roundtrips to/from client';
  8     return nrt;
  9  end;
 10  /

Function NET_ROUNDTRIPS compiled

SQL> 
SQL> create or replace function my_db_time_microsecs return number as
  2  mydbtime number;
  3  begin
  4     select value into mydbtime
  5     from V$SESS_TIME_MODEL 
  6     where sid=SYS_CONTEXT( 'USERENV', 'SID' ) 
  7     and stat_name='DB time';
  8     return mydbtime;
  9  end;
 10  /

Function MY_DB_TIME_MICROSECS compiled

SQL> 
SQL> set echo off
SQL> 

I.e. I filled a table with 5000 rows. The filler column contains a 100-byte random string. In addition, I created two functions which return the current values of the session statistics ‘SQL*Net roundtrips to/from client’ and ‘DB time’ for my session.

Below the script for the network latency test (adjust it to meet your needs as well):

$ cat test_netlat.sql 
connect test_netlat/WElcome_netlat_01@clemens-oradb/pdb1
set echo on
var roundtrips_begin number;
var roundtrips_end number;
var time_begin number;
var time_end number;
var db_time_start number;
var db_time_end number;
set feed only arraysize 15
-- select filler from tlat ;
-- select filler from tlat ;
set arraysize 1
begin
   :roundtrips_begin := NET_ROUNDTRIPS;
   :db_time_start := MY_DB_TIME_MICROSECS;
   :time_begin := dbms_utility.get_time;
end;
/
 
select filler from tlat ;

begin
   :roundtrips_end := NET_ROUNDTRIPS;
   :db_time_end := MY_DB_TIME_MICROSECS;
   :time_end := dbms_utility.get_time;
end;
/

set serveroutput on feed off
exec dbms_output.put_line(to_char(:roundtrips_end - :roundtrips_begin)||' network round trips.');
exec dbms_output.put_line(to_char((:time_end - :time_begin)*10)||' ms elapsed time.');
exec dbms_output.put_line(to_char((:db_time_end - :db_time_start)/1000)||' ms DB time.');
exec dbms_output.put_line(to_char(round((((:time_end - :time_begin)*10)-((:db_time_end - :db_time_start)/1000))/(:roundtrips_end - :roundtrips_begin),3))||' ms latency per round trip.');
exec dbms_output.put_line('--> (Elapsed Time - DB Time) / network round trips');
set echo off

What I’m basically doing is getting the number of “SQL*Net roundtrips to/from client” and the “DB time” of my session before the run. In addition, I take the begin time with the dbms_utility.get_time function. With “arraysize 1” (1 row per fetch) I run a full table scan to get the 5000 rows, one by one over the network. After taking the statistics again after the run, I produce the following output:

– number of network round trips
– elapsed time of the query in ms
– DB time of the query in ms
– the calculated network latency: (Elapsed Time – DB Time) / network round trips

Here an example output:

$ sqlcl /nolog

SQLcl: Release 21.4 Production on Wed Dec 29 20:41:26 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

SQL> @test_netlat
Connected.
SQL> var roundtrips_begin number;
SQL> var roundtrips_end number;
SQL> var time_begin number;
SQL> var time_end number;
SQL> var db_time_start number;
SQL> var db_time_end number;
SQL> set feed only arraysize 15
SQL> -- select filler from tlat ;
SQL> -- select filler from tlat ;
SQL> set arraysize 1
SQL> begin
  2     :roundtrips_begin := NET_ROUNDTRIPS;
  3     :db_time_start := MY_DB_TIME_MICROSECS;
  4     :time_begin := dbms_utility.get_time;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select filler from tlat ;

5,000 rows selected. 

SQL> 
SQL> begin
  2     :roundtrips_end := NET_ROUNDTRIPS;
  3     :db_time_end := MY_DB_TIME_MICROSECS;
  4     :time_end := dbms_utility.get_time;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> set serveroutput on feed off
SQL> exec dbms_output.put_line(to_char(:roundtrips_end - :roundtrips_begin)||' network round trips.');
4953 network round trips.

SQL> exec dbms_output.put_line(to_char((:time_end - :time_begin)*10)||' ms elapsed time.');
18870 ms elapsed time.

SQL> exec dbms_output.put_line(to_char((:db_time_end - :db_time_start)/1000)||' ms DB time.');
876.625 ms DB time.

SQL> exec dbms_output.put_line(to_char(round((((:time_end - :time_begin)*10)-((:db_time_end - :db_time_start)/1000))/(:roundtrips_end - :roundtrips_begin),3))||' ms latency per round trip.');
3.633 ms latency per round trip.

SQL> exec dbms_output.put_line('--> (Elapsed Time - DB Time) / network round trips');
--> (Elapsed Time - DB Time) / network round trips

SQL> set echo off
SQL> 

I.e. I have an average network latency of 3.633 ms per round trip for my 100 bytes of data.
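
As a quick sanity check, the reported latency can be recomputed from the numbers above with the formula from the beginning of this post:

$ printf "%.3f\n" $(echo "(18870 - 876.625)/4953" | bc -l)
3.633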

With the tc-utility I can add a 5ms delay for packets crossing my network interface wlxe84e063dd27f on my Linux-Server:

% tc qdisc add dev wlxe84e063dd27f root netem delay 5ms
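
To verify that the delay rule is in place (and, after the tests, that it is gone again), the current queueing discipline of the interface can be listed:

% tc qdisc show dev wlxe84e063dd27f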

Then repeat the test:

SQL> set arraysize 1
SQL> begin
  2     :roundtrips_begin := NET_ROUNDTRIPS;
  3     :db_time_start := MY_DB_TIME_MICROSECS;
  4     :time_begin := dbms_utility.get_time;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> 
SQL> select filler from tlat ;

5,000 rows selected. 

SQL> 
SQL> begin
  2     :roundtrips_end := NET_ROUNDTRIPS;
  3     :db_time_end := MY_DB_TIME_MICROSECS;
  4     :time_end := dbms_utility.get_time;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> set serveroutput on feed off
SQL> exec dbms_output.put_line(to_char(:roundtrips_end - :roundtrips_begin)||' network round trips.');
4953 network round trips.

SQL> exec dbms_output.put_line(to_char((:time_end - :time_begin)*10)||' ms elapsed time.');
45050 ms elapsed time.

SQL> exec dbms_output.put_line(to_char((:db_time_end - :db_time_start)/1000)||' ms DB time.');
747.075 ms DB time.

SQL> exec dbms_output.put_line(to_char(round((((:time_end - :time_begin)*10)-((:db_time_end - :db_time_start)/1000))/(:roundtrips_end - :roundtrips_begin),3))||' ms latency per round trip.');
8.945 ms latency per round trip.

SQL> exec dbms_output.put_line('--> (Elapsed Time - DB Time) / network round trips');
--> (Elapsed Time - DB Time) / network round trips

The latency went up from 3.633 ms on the first run to 8.945 ms on the second run.

The results have been confirmed with the oratcptest-tool from MOS Note “Assessing and Tuning Network Performance for Data Guard and RMAN (Doc ID 2064368.1)”:

$ java -jar oratcptest.jar clemens-oradb -mode=sync -length=100 -duration=20s -interval=20s -port=1550
[Requesting a test]
   Message payload        = 100 bytes
   Payload content type   = RANDOM
   Delay between messages = NO
   Number of connections  = 1
   Socket send buffer     = (system default)
   Transport mode         = SYNC
   Disk write             = NO
   Statistics interval    = 20 seconds
   Test duration          = 20 seconds
   Test frequency         = NO
   Network Timeout        = NO
   (1 Mbyte = 1024x1024 bytes)

(21:00:52) The server is ready.
                    Throughput             Latency
(21:01:12)      0.027 Mbytes/s            3.901 ms
(21:01:12) Test finished.
          Socket send buffer = 131768 bytes
             Avg. throughput = 0.027 Mbytes/s
                Avg. latency = 3.901 ms

With the 5ms additional latency:

$ java -jar oratcptest.jar clemens-oradb -mode=sync -length=100 -duration=20s -interval=20s -port=1550
[Requesting a test]
   Message payload        = 100 bytes
   Payload content type   = RANDOM
   Delay between messages = NO
   Number of connections  = 1
   Socket send buffer     = (system default)
   Transport mode         = SYNC
   Disk write             = NO
   Statistics interval    = 20 seconds
   Test duration          = 20 seconds
   Test frequency         = NO
   Network Timeout        = NO
   (1 Mbyte = 1024x1024 bytes)

(21:02:05) The server is ready.
                    Throughput             Latency
(21:02:25)      0.011 Mbytes/s            9.293 ms
(21:02:25) Test finished.
          Socket send buffer = 131768 bytes
             Avg. throughput = 0.011 Mbytes/s
                Avg. latency = 9.293 ms

To cleanup just drop the DB-user test_netlat:

SQL> drop user test_netlat cascade;

and (if set) remove the network delay on the network card:

% tc qdisc del dev wlxe84e063dd27f root
Summary: If you plan to have a longer distance between the application server and the DB server (e.g. when going to the Cloud), then test the latency carefully. If there is a high number of network round trips between the application and the database, then chances are high that the application becomes MUCH slower after moving the servers away from each other. Here is how to get the statistics about network round trips in your database since startup:
SQL> select value from v$sysstat sysstat, v$statname statname
  2  where sysstat.statistic# = statname.statistic#
  3  and statname.display_name = 'SQL*Net roundtrips to/from client';

The article “Script to calculate the network latency between the application and the Oracle DB-server” first appeared on Blog dbi services.

MacOS: Reduce fan noise when using Microsoft Teams

Tue, 2021-12-28 16:55

The laptop I’m using for business is a MacBook Pro, model 2019 (2.6 GHz 6-Core Intel Core i7). I’m thankful to be able to choose the hardware and OS of my business laptop at dbi services. We have colleagues using laptops with Windows, some use laptops with Linux, and a considerable amount of people use MacBooks. However, what annoyed me with my laptop were MS Teams meetings, when the fan noise went up considerably. Many people have reported this several times to Apple and Microsoft, but the issue has not been fixed yet. Users of Microsoft Windows have also reported that the load on their systems sometimes goes up as well when they are in an MS Teams meeting. As tools like ZOOM or Webex do not have such an issue, it can be concluded that this issue is caused by the MS Teams software. Don’t get me wrong, MS Teams is good software, but I desperately searched for methods to get my fan noise under better control. The issue may be even worse when working with an external display – most people obviously use external displays in their home offices.

Googling this, one can find many suggestions on how to fix the issue, but most of them just recommend making sure no dust is in the laptop or suggest not putting too much load on the Mac, i.e. things most people check first, hence those suggestions are useless – especially because there is not much CPU load on my laptop when using MS Teams.

I tried many other things myself to fix this (like replacing a USB-3 cable to the external display with a DVI cable, as suggested by someone), but most of the recommendations did not help. One suggestion was to manually reduce/customize the fan speed with a tool like “Macs Fan Control”, but that caused the Mac to become so slow that it finally became unusable.

Here the 4 things which helped me to get this under better control:

1. Disable GPU hardware acceleration
Go to: Microsoft Teams -> Preferences -> General and, in the Application section, tick the checkbox:
Disable GPU hardware acceleration (requires restarting Teams)
As mentioned in that line you have to restart MS Teams after the change to disable GPU hardware acceleration.

2. Disable the Intel Turbo Boost
Install a tool like “Turbo Boost Switcher for Intel Macs” and disable Turbo Boost while using MS Teams.

3. Work with the external display “only” in your home office. I.e. using the MacBook built-in display and an external display together resulted in the fan being more active compared to running with the external display only and keeping the MacBook closed.

4. Some people also reported that the issue went away after using MS Teams Web (in the Browser) instead of the dedicated application.

With the above modifications my fan sometimes still starts when using MS Teams, but it is OK and does not become too noisy.

The article “MacOS: Reduce fan noise when using Microsoft Teams” first appeared on Blog dbi services.

Exadata Cloud at Customer: Considerations for Backup & Recovery

Tue, 2021-12-28 15:59

Working in a project where the customer migrates from an on-premises environment to Exadata Cloud at Customer provides insights about the decisions to take and what to consider when using the Exadata Cloud at Customer environment. In a series of blogs I want to talk about the different aspects when going to the Exadata Cloud at Customer (ExaCC).

Let’s start with the most important task a DBA is responsible for: Backup & Recovery.

REMARK: This blog is about DB backups only. Backing up other important data, e.g. your wallets with the TDE keys, is not covered here.

On the ExaCC we basically have 3 possibilities to make backups (and restore them):

1. Using the Console to create and maintain automatic backups
2. Configuring and customizing Backups with bkup_api
3. Schedule your own rman backups

REMARK: In general it’s advised to use the automatic features Oracle provides on the ExaCC out of the box. A good description on how to set this up with bkup_api has been provided here.

The first 2 methods are well documented by Oracle, but they may not be flexible enough:

– with method 1 a restore is possible only on CDB-level, not on PDB-level
– there’s no possibility to automatically call a script after the backup
– you are limited to the backup destinations NFS, Zero Data Loss Recovery Appliance, local Exadata Storage, public Cloud Object storage
– there is only a limited number of retention period options for the different destinations

I.e. for some customers the automatic backups are not the first choice. Hence it may be necessary to schedule backups manually on the ExaCC. The question is: How to do this? There are some restrictions:

– it’s not allowed to create a crontab as user oracle on ExaCC
– scheduling backups with the dbms_scheduler job-type backup_script requires additional scripts for monitoring and logging purposes. See MOS Note An Example to Schedule RMAN Backup Using 12c DBMS_SCHEDULER BACKUP_SCRIPT (Doc ID 2102623.1) on how to schedule backups from the DB-scheduler.

A method used in a project was to start backup scripts from a remote server through ssh. The idea was to get the least loaded primary node, ssh to that node, start the backup job with nohup in the background and exit. The host which submits the backups could e.g. be the Oracle Management Server (OMS), i.e. the Enterprise Manager/Cloud Control server.

To ease the setup, we used the DMK (Database Management Kit) Backup scripts provided by dbi services. The following backups should be scheduled per database:

– weekly inc0 to NFS
– daily inc1 to NFS
– archivelog backup every hour to NFS
– delete everything obsolete on NFS every day
– daily backup of backupsets to the tape library (all backupsets on NFS which haven’t been backed up to tape yet)
– once a month a longterm backup to the tape library
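
On the submitting host (e.g. the OMS server), where cron is available, this schedule could translate into a crontab like the following sketch. The wrapper script name, its options and the database name are assumptions for illustration, not the actual DMK syntax:

# hypothetical crontab on the submitting server for database PRODDB
00 22 * * 0   /u00/app/oracle/local/submit_backup.sh PRODDB inc0      # weekly inc0 to NFS
00 22 * * 1-6 /u00/app/oracle/local/submit_backup.sh PRODDB inc1      # daily inc1 to NFS
30 *  * * *   /u00/app/oracle/local/submit_backup.sh PRODDB arch      # hourly archivelog backup to NFS
00 06 * * *   /u00/app/oracle/local/submit_backup.sh PRODDB delobs    # daily deletion of obsolete backups on NFS
00 02 * * *   /u00/app/oracle/local/submit_backup.sh PRODDB tape      # daily backup of backupsets to the tape library
00 04 1 * *   /u00/app/oracle/local/submit_backup.sh PRODDB longterm  # monthly longterm backup to the tape library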

REMARK: One reason to run on ExaCC is data sovereignty. Hence in such environments you usually do not back up to the object storage in the public cloud.

The question then was how to connect to the DB servers, as ssh is open only for the fixed VM IP addresses, but not for e.g. the scan or VIP addresses. I.e. we have to be able to connect to the cluster even if a node is down. We resolved it by writing a simple procedure in the root container of each database which returns the node name and ORACLE_SID. By connecting to the DB with sqlplus over a DB service, we were sure to connect to a running node. The simple code fragment, which returns the host name and instance name (ORACLE_SID) of the running RAC instance, was:

create or replace procedure get_host_and_SID (connected_host out varchar2, connected_SID out varchar2) as
begin
   select host_name, instance_name into connected_host, connected_SID from v$instance;
end;
/

To get the host name and SID from the DB server, the following Bash script snippet can be used:

MYPARAMS=$(sqlplus -S ${DB_USER_GET_HOST_SID}/${PWD_GET_HOST_SID}@${DB_CONNECT_STRING} << EOF
set heading off lines 200 pages 0 feed off
var conn_host varchar2(64);
var conn_sid varchar2(16);
set serveroutput on
begin
   get_host_and_sid(:conn_host, :conn_sid);
   dbms_output.put_line(:conn_host||' '||:conn_sid);
end;
/
exit
EOF
)

RC=$?
if [ $RC -ne 0 ]
then
   echo "Getting params from DB failed."
   exit 1
fi

MYHOSTNAME=$(echo $MYPARAMS | cut -d " " -f1)
MYSID=$(echo $MYPARAMS | cut -d " " -f2)

And finally submit the backup-job:

# submitting job with nohup in the background:
ssh -t opc@$MYHOSTNAME "sudo su - oracle -c 'nohup ${LOCAL_DIR}/dmk_ha/bin/check_${LOWER_PRIM_OR_STANDBY}.ksh $MYSID ${LOCAL_DIR}/dmk_dbbackup/bin/dmk_rman.ksh -s $MYSID -t ${DMK_TEMPLATE} -u always >>/tmp/RMAN_BACKUP_${MYSID}.log 2>&1 &'" >> $LOGFILE

Summary: There are several methods to back up your DBs on an ExaCC system. If the methods provided by Oracle are not flexible enough, then you may write your own backup scripts. Keep in mind that an ExaCC system has some restrictions: you cannot easily schedule cron jobs or ssh to a VIP.

The article “Exadata Cloud at Customer: Considerations for Backup & Recovery” first appeared on Blog dbi services.

SQL Server: Cannot execute as the database principal “dbo” does not exist

Mon, 2021-12-27 10:31

At a customer this month, I saw a very big ErrorLog file (more than 1.5 GB).
When I opened the file, I noticed that the following message was being added every 5 seconds:

The activated proc ‘[dbo].[IdentityServerNotificationCleanup]’ running on queue ‘<database>.dbo.IdentityServerNotificationsQueue’ output the following:  ‘Cannot execute as the database principal because the principal “dbo” does not exist, this type of principal cannot be impersonated, or you do not have permission.’

My first reflex was to open SSMS and right-click on the database to see the properties, but I received this error message:

“Property Owner is not available for Database <database>”, which did not allow me to open the properties.

I opened a query window and listed all owner_sid values using the sys.databases view:

SELECT name, owner_sid FROM sys.databases

The result gave me two types of SID:

  • 0x01, which is the SQL login sa
  • 0x0105…, which indicates Windows authentication

To get more information, I used SUSER_SNAME to retrieve the name associated with the owner_sid:

SELECT name, owner_sid, SUSER_SNAME(owner_sid) as owner_name FROM sys.databases

As you can see in the result, the owner_name is NULL for a lot of databases.

To fix this situation, I will force the ‘sa’ account to be the owner.

I generate, for each affected database, the command using sp_changedbowner, adding a WHERE SUSER_SNAME(owner_sid) IS NULL clause to my query:

SELECT name, owner_sid,SUSER_SNAME(owner_sid) as owner_name,N'use ['+name+N'];EXEC sp_changedbowner ''sa'';' as Script_change_owner FROM sys.databases where SUSER_SNAME(owner_sid) is null
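
For a database named e.g. MyDB (a hypothetical name), the Script_change_owner column then contains a ready-to-run line like:

use [MyDB];EXEC sp_changedbowner 'sa';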

After applying the generated script, all databases have a valid owner again.

Finally, the message no longer appears in the errorlog every 5 seconds, and I can correctly read the file after recycling it.
And of course, now I can open the properties through SSMS without an error message.

I hope this blog and script can help you if you have the same error message…

 

The article “SQL Server: Cannot execute as the database principal ‘dbo’ does not exist” first appeared on Blog dbi services.

Upgrade AHF and TFA on an ODA

Wed, 2021-12-22 03:32

TFA (Trace File Analyzer) is part of AHF (Autonomous Health Framework). Those tools come preinstalled on the ODA (Oracle Database Appliance). As you might know, patching and upgrading normally always go through the ODA global Bundle patches. AHF, however, can be upgraded independently without any problem. In this blog I wanted to share with you how I upgraded TFA to the latest v21.4 version. The upgrade is performed with the root user. This version addresses CVE-2021-45105/CVE-2021-44228/CVE-2021-45046. As a reminder, the Apache Log4j vulnerabilities are covered by CVE-2021-44228 and CVE-2021-45046.

Check current version of TFA

First we can check if TFA is up and running and which version is currently used.

[root@ODA01 ~]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status
WARNING - TFA Software is older than 180 days. Please consider upgrading TFA to the latest version.

.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5388 | 5000 | 20.1.3.0.0 | 20130020200429161658 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'


As we can see, we are currently running TFA/AHF version 20.1.3.0.0.

Check running processes

We can also check the TFA running processes.

[root@ODA01 ~]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root      5388     1  0 Oct18 ?        02:55:07 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 ~]#


Check the location of AHF

It is important to check in which directory AHF is currently installed, in order to provide the appropriate directory in the upgrade command options. This way the setup script will see that a version is already installed and will offer to upgrade it. Otherwise, a separate new AHF installation will be performed.

[root@ODA01 ~]# cat /etc/oracle.ahf.loc
/opt/oracle/dcs/oracle.ahf


AHF is installed on the ODA in the /opt/oracle/dcs/oracle.ahf directory.

Backup of the current AHF version

Before doing any modification it is important to back up the current AHF version for a fallback if needed. I did a tar of the current installation directory.

[root@ODA01 ~]# cd /opt/oracle/dcs

[root@ODA01 dcs]# ls -ltrh
total 25M
drwxr-xr-x.  3 root   root     4.0K Jul  2  2019 rdbaas
drwxr-xr-x.  3 root   root     4.0K Jul  2  2019 scratch
drwx------   2 root   root     4.0K Jul  2  2019 dcsagent_wallet
drwxr-xr-x   2 root   root     4.0K Jul  4  2019 ft
drwxr-xr-x   2 root   root     4.0K Aug 11  2019 Inventory
drwxr-xr-x   4 root   root     4.0K May 17  2020 dcs-ui
-rwxr-xr-x   1 root   root     6.8K May 21  2020 configuredcs.pl
-rw-r--r--   1 root   root      25M May 21  2020 dcs-ui.zip
drwxr-xr-x   4 root   root     4.0K Sep  2  2020 repo
-rw-r--r--   1 root   root        0 Sep  2  2020 dcscontroller-stderr.log
-rw-r--r--   1 root   root     6.7K Sep  3  2020 dcscontroller-stdout.log
drwxr-xr-x   6 oracle oinstall  32K Sep  3  2020 commonstore
drwxr-xr-x  12 root   root     4.0K Sep  3  2020 oracle.ahf
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 agent
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 sample
drwxr-xr-x   4 root   root     4.0K Sep  3  2020 java
drwxr-xr-x.  3 root   root     4.0K Sep  3  2020 conf
drwxr-xr-x.  3 root   root     4.0K Sep  3  2020 dcscli
drwxr-xr-x.  2 root   root     4.0K Sep  3  2020 bin
drwx------.  5 root   root      20K Dec 21 00:00 log

[root@ODA01 dcs]# mkdir /root/backup_ahf_for_upgrade/

[root@ODA01 dcs]# tar -czf /root/backup_ahf_for_upgrade/oracle.ahf.20.1.3.0.0.tar ./oracle.ahf

[root@ODA01 dcs]# ls -ltrh /root/backup_ahf_for_upgrade
total 1.3G
-rw-r--r-- 1 root root 1.3G Dec 21 14:26 oracle.ahf.20.1.3.0.0.tar
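
If a fallback to 20.1.3 were ever needed, the idea would be to stop TFA, move the upgraded directory aside and extract the backup again. A sketch, not a tested rollback procedure:

[root@ODA01 ~]# /etc/init.d/init.tfa stop
[root@ODA01 ~]# cd /opt/oracle/dcs
[root@ODA01 dcs]# mv oracle.ahf oracle.ahf.broken
[root@ODA01 dcs]# tar -xzf /root/backup_ahf_for_upgrade/oracle.ahf.20.1.3.0.0.tar
[root@ODA01 dcs]# /etc/init.d/init.tfa start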


Download new AHF version

You can download the latest AHF version through the My Oracle Support portal. Download patch 30166242:
Patch 30166242: PLACEHOLDER – DOWNLOAD LATEST AHF (TFA and ORACHK/EXACHK)

I have created a directory on the ODA to upload the patch:

[root@ODA01 dcs]# mkdir /u01/app/patch/TFA


Upgrade AHF on the ODA

In this part we will see the procedure to upgrade AHF on the ODA. We first need to unzip the AHF-LINUX_v21.4.0.zip file and run ahf_setup. The installation script will recognise the existing 20.1.3 version and offer to upgrade it.

[root@ODA01 dcs]# cd /u01/app/patch/TFA

[root@ODA01 TFA]# ls -ltrh
total 394M
-rw-r--r-- 1 root root 393M Dec 21 10:16 AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# unzip -q AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# ls -ltrh
total 792M
-r-xr-xr-x 1 root root 398M Dec 20 19:28 ahf_setup
-rw-r--r-- 1 root root  384 Dec 20 19:30 ahf_setup.dat
-rw-r--r-- 1 root root 1.5K Dec 20 19:31 README.txt
-rw-r--r-- 1 root root  625 Dec 20 19:31 oracle-tfa.pub
-rw-r--r-- 1 root root 393M Dec 21 10:16 AHF-LINUX_v21.4.0.zip

[root@ODA01 TFA]# ./ahf_setup -ahf_loc /opt/oracle/dcs -data_dir /opt/oracle/dcs

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_214000_58089_2021_12_21-14_30_06.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 21.4.0 Build Date: 202112200745

AHF is already installed at /opt/oracle/dcs/oracle.ahf

Installed AHF Version: 20.1.3 Build Date: 202004291616

Do you want to upgrade AHF [Y]|N : Y

Upgrading /opt/oracle/dcs/oracle.ahf

Shutting down AHF Services
Stopped OSWatcher
Nothing to do !
Shutting down TFA
/etc/init.d/init.tfa: line 661: /sbin/stop: No such file or directory
. . . . .
Killing TFA running with pid 5388
. . .
Successfully shutdown TFA..

Starting AHF Services
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands


Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : N

AHF is successfully upgraded to latest version

.-----------------------------------------------------------------.
| Host      | TFA Version | TFA Build ID         | Upgrade Status |
+-----------+-------------+----------------------+----------------+
| ODA01     |  21.4.0.0.0 | 21400020211220074549 | UPGRADED       |
'-----------+-------------+----------------------+----------------'

Moving /tmp/ahf_install_214000_58089_2021_12_21-14_30_06.log to /opt/oracle/dcs/oracle.ahf/data/ODA01/diag/ahf/

[root@ODA01 TFA]#


Check new AHF version

We can check that the new version of AHF is 21.4.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl version

AHF version: 21.4.0


Check TFA running processes

We can check that TFA is up and running.

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root     61938     1 62 14:31 ?        00:01:36 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 TFA]#


After the upgrade script has completed, there might still be some TFA processes running in order to rebuild the inventory:

root     15469 15077  0 14:58 ?        00:00:00 sh -c /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl rediscover -mode full > /dev/null 2>&1
root     15470 15469  0 14:58 ?        00:00:00 /bin/sh /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl rediscover -mode full
root     15505 15500  0 14:58 ?        00:00:00 /bin/sh /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.tfa rediscover -mode full
root     15524 15505  1 14:58 ?        00:00:00 /u01/app/19.0.0.0/grid/perl/bin/perl /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.pl rediscover -mode full


Make sure all those processes have completed successfully (i.e. they do not exist anymore) before stopping AHF. Otherwise your inventory will end up with a STOPPED status.

Check status of AHF

We can check AHF status.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 61938 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'


No scheduler for any ID

orachk daemon is not running

[root@ODA01 TFA]#


TFA is running. No AHF scheduler. No orachk daemon.

Stop AHF and TFA

To check that all is working as expected, let’s stop AHF.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl stopahf

Stopping TFA from the Command Line
Stopped OSWatcher
Nothing to do !
Please wait while TFA stops
Please wait while TFA stops
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
TFA Stopped Successfully
Successfully stopped TFA..

orachk scheduler is not running


There is still one process left for TFA, the one from init.d:

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
root      4536     1  0 Oct18 ?        00:18:06 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
[root@ODA01 TFA]#


We are going to stop it:

[root@ODA01 TFA]# /etc/init.d/init.tfa stop
Stopping TFA from init for shutdown/reboot
Nothing to do !
TFA Stopped Successfully
Successfully stopped TFA..


And there are no more TFA processes up and running:

[root@ODA01 TFA]# ps -ef | grep -i tfa | grep -v grep
[root@ODA01 TFA]#


Start AHF
[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl startahf

Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands

INFO: Starting orachk scheduler in background. Details for the process can be found at /opt/oracle/dcs/oracle.ahf/data/ODA01/diag/orachk/compliance_start_211221_143845.log


We can check the TFA status:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status

.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 87371 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'


We can check the AHF status as well and see that the scheduler and the orachk daemon are now up and running:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.-------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+-------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 87371 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+-------+------+------------+----------------------+------------------'

------------------------------------------------------------

Master node = ODA01

orachk daemon version = 21.4.0

Install location = /opt/oracle/dcs/oracle.ahf/orachk

Started at = Tue Dec 21 14:38:57 CET 2021

Scheduler type = TFA Scheduler

Scheduler PID:  87371

------------------------------------------------------------
ID: orachk.autostart_client_oratier1
------------------------------------------------------------
AUTORUN_FLAGS  =  -usediscovery -profile oratier1 -dball -showpass -tag autostart_client_oratier1 -readenvconfig
COLLECTION_RETENTION  =  7
AUTORUN_SCHEDULE  =  3 2 * * 1,2,3,4,5,6
------------------------------------------------------------
------------------------------------------------------------
ID: orachk.autostart_client
------------------------------------------------------------
AUTORUN_FLAGS  =  -usediscovery -tag autostart_client -readenvconfig
COLLECTION_RETENTION  =  14
AUTORUN_SCHEDULE  =  3 3 * * 0
------------------------------------------------------------

Next auto run starts on Dec 22, 2021 02:03:00

ID:orachk.AUTOSTART_CLIENT_ORATIER1


We can also check the TFA processes:

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root     86989     1  0 14:38 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root     87371     1 19 14:38 ?        00:00:13 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
root     92789 87371 38 14:39 ?        00:00:00 /u01/app/19.0.0.0/grid/perl/bin/perl /opt/oracle/dcs/oracle.ahf/tfa/bin/tfactl.pl availability product Europe/Zurich
[root@ODA01 TFA]#


Stop AHF and TFA

We will stop AHF and TFA again.

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl stopahf

Stopping TFA from the Command Line
Nothing to do !
Please wait while TFA stops
Please wait while TFA stops
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
TFA Stopped Successfully
Successfully stopped TFA..

Stopping orachk scheduler ...
Removing orachk cache discovery....
No orachk cache discovery found.



Unable to send message to TFA



Removed orachk from inittab


Stopped orachk


AHF status checks:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf

TFA-00002 Oracle Trace File Analyzer (TFA) is not running


No scheduler for any ID

orachk daemon is not running


TFA status checks:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status
TFA-00002 Oracle Trace File Analyzer (TFA) is not running


Check the processes and stop the TFA init.d service:

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root     86989     1  0 14:38 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null

[root@ODA01 TFA]# /etc/init.d/init.tfa stop
Stopping TFA from init for shutdown/reboot
Nothing to do !
TFA Stopped Successfully
Successfully stopped TFA..

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
[root@ODA01 TFA]#


Restart only TFA

Finally, we only want to keep TFA up and running, without the AHF scheduler or the orachk daemon. So we are only going to start TFA.

[root@ODA01 TFA]# /etc/init.d/init.tfa start
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands


Final checks

TFA running processes:

[root@ODA01 TFA]#  ps -ef | grep -i tfa | grep -v grep
root      5344     1  0 14:43 ?        00:00:00 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
root      5732     1 77 14:43 ?        00:00:11 /opt/oracle/dcs/oracle.ahf/jre/bin/java -server -Xms512m -Xmx1024m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/opt/oracle/dcs/oracle.ahf/data/ODA01/diag/tfa -XX:ParallelGCThreads=5 oracle.rat.tfa.TFAMain /opt/oracle/dcs/oracle.ahf/tfa
[root@ODA01 TFA]#


TFA status:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5732 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'
[root@ODA01 TFA]#


AHF status:

[root@ODA01 TFA]# /opt/oracle/dcs/oracle.ahf/bin/ahfctl statusahf


.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port | Version    | Build ID             | Inventory Status |
+-----------+---------------+------+------+------------+----------------------+------------------+
| ODA01     | RUNNING       | 5732 | 5000 | 21.4.0.0.0 | 21400020211220074549 | COMPLETE         |
'-----------+---------------+------+------+------------+----------------------+------------------'


No scheduler for any ID

orachk daemon is not running

[root@ODA01 TFA]#


Cleanup

We can keep the previous AHF version backup for a few days, just in case, and remove it later.

The AHF installation files can be deleted:

[root@ODA01 ~]# cd /u01/app/patch/

[root@ODA01 patch]# ls -l TFA
total 810144
-rw-r--r-- 1 root root 411836201 Dec 21 10:16 AHF-LINUX_v21.4.0.zip
-r-xr-xr-x 1 root root 416913901 Dec 20 19:28 ahf_setup
-rw-r--r-- 1 root root       384 Dec 20 19:30 ahf_setup.dat
-rw-r--r-- 1 root root       625 Dec 20 19:31 oracle-tfa.pub
-rw-r--r-- 1 root root      1525 Dec 20 19:31 README.txt

[root@ODA01 patch]# rm -rf TFA

[root@ODA01 patch]# ls
[root@ODA01 patch]#


The article “Upgrade AHF and TFA on an ODA” first appeared on Blog dbi services.

Control-M /EM – How to manage Control-M utility restriction for Remote hosts

Fri, 2021-12-17 10:28

Hi team,
Today we will see how to restrict Control-M utilities when using your remote hosts.
We will perform a quick test to show how it works before the modification, then after.
First of all, we will create two jobs as an example, each using a Control-M utility command:
one using the ctmpsm command, which allows you to list, order and perform many actions on your jobs, and the other using the ctmvar command, which helps you to list and define your variables.

  • Example of the ctmvar command output:


The jobs are running fine, so now we will update the utility authorizations. To do so, we will update a specific parameter named REMOTE_UTILS_LIST.

Update file on Linux
  • Edit the CONFIG.dat file by updating the REMOTE_UTILS_LIST line with your favorite text editor:

Note: by default, the agent is authorized to run all utilities; in our example we will edit the line and remove the utility we want to restrict for the agent.

Important note:
Don’t forget to check that the line UPLOAD_REMOTE_UTILS is set to yes (“Y”).
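
On Linux, both settings can be checked quickly with grep; the path below assumes the usual agent layout (<agent install dir>/ctm/data/), adapt it to your installation:

$ grep -E "REMOTE_UTILS_LIST|UPLOAD_REMOTE_UTILS" <agent install dir>/ctm/data/CONFIG.dat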

Update file on Windows
  • Type regedit in the Windows search bar

  • Browse your agent configuration to the REMOTE_UTILS_LIST entry


Once you have edited the line (removing the utility you want to restrict), you just have to start your jobs and check whether you are still able to call the utility.
No agent restart is needed.

Expected result

For example, after removing the ctmpsm utility, the job will fail with the error:

ctmpsm: command not found

At this point, the utility will no longer be authorized for the agent, as it has been disabled in the config file updated earlier. If you want to authorize it again, add the command back on the appropriate line (make sure to have a backup of CONFIG.dat in any case).

Conclusion

Now you know how to manage your utilities when using your remote hosts on Linux and Windows. Feel free to check the dbi bloggers, and also the excellent BMC site, for tips and tricks to ease your life with Control-M!

The article “Control-M /EM – How to manage Control-M utility restriction for Remote hosts” first appeared on Blog dbi services.

log4shell: Determine affection, mitigate, example: Oracle Unified Directory

Fri, 2021-12-17 05:04

This post is about two security vulnerabilities reported for the log4j 2.x framework: CVE-2021-44228, with a base score of 10 (of 10 max.) in CVSS version 3.1, and CVE-2021-45046, with a base score of 9. I will list some ways to determine whether you are affected and how to mitigate these vulnerabilities, in general and for Oracle Unified Directory in particular. Finally, there is a link with the updated status of every Oracle software product, and a note on why to migrate log4j 1.x to a version >= 2.16.x.

Determine affection
  1. Use a vulnerability scanner
    It’s best to scan hosts company-wide on a regular basis. I do not have experience here; use Wikipedia as a (neutral) starting point. There is a useful list of scanners provided by OWASP (Open Web Application Security Project).
    To date, almost 166k CVEs have been reported. The next vulnerability in software you are using will be reported sooner or later. It’s recommended to use a capable vulnerability scanner to get an overview of what is affected.
  2. Use log4j2-scan
    “log4j2-scan is a single binary command-line tool for CVE-2021-44228 vulnerability scanning and mitigation patch. It also supports nested JAR file scanning and patch. It also detects CVE-2021-45046 (log4j 2.15.0) and CVE-2021-4104 (log4j 1.x) vulnerabilities.”
    Download precompiled jar files or build your own.
    How it works
    It runs in 5 steps:

      1. Find all .jar, .war, .ear, .aar files recursively.
      2. Find META-INF/maven/org.apache.logging.log4j/log4j-core/pom.properties entry from JAR file.
      3. Read groupId, artifactId, and version.
      4. Compare log4j2 version and print vulnerable version.
      5. If the --fix option is used, back up the vulnerable file and patch it.
        For example, original vulnerable.jar is copied to vulnerable.jar.bak

    Example: Analysing applications folder on macOS

    % java -jar logpresso-log4j2-scan-2.2.2.jar /Applications
    Logpresso CVE-2021-44228 Vulnerability Scanner 2.2.2 (2021-12-18)
    Scanning directory: /Applications
    [*] Found CVE-2021-44228 (log4j 2.x) vulnerability in /Applications/cnlabSpeedTest.app/Contents/Java/log4j-core-2.8.1.jar, log4j 2.8.1
    [*] Found CVE-2021-45046 (log4j 2.x) vulnerability in /Applications/SQLDeveloper.app/Contents/Resources/sqldeveloper/sqldeveloper/lib/log4j-core.jar, log4j 2.15.0
    Running scan (10s): scanned 14161 directories, 135423 files, last visit: /Applications/Microsoft Excel.app/Contents/Frameworks/mso99.framework/Versions/A/Resources/mso_intl.bundle/Contents/Resources/id.lproj
    Running scan (20s): scanned 25840 directories, 263162 files, last visit: /Applications/Microsoft PowerPoint.app/Contents/Frameworks/mso99.framework/Versions/A/Resources/mso_intl.bundle/Contents/Resources/el.lproj
    
    Scanned 26789 directories and 276904 files
    Found 2 vulnerable files
    Found 0 potentially vulnerable files
    Found 0 mitigated files
    Completed in 20.85 seconds
  3. Find log4j jar files
    This is a fast but not reliable method, as log4j could simply be in a renamed or nested jar file. A jar file is a zip archive.
    1. As root/administrator, search for log4j jar files.
      Linux, Solaris, AIX, Mac OS, …:
      find / -name "*log4j*"
      Windows:
      dir /s /b <folder><pattern>
    2. Check the version used. If it is below 2.16, you are affected:
      unzip -p <jarfile> META-INF/MANIFEST.MF
How to mitigate?

Disable log4j JNDI lookups using the following parameter when starting a Java application:
-Dlog4j2.formatMsgNoLookups=true
Usually, look for a java.properties file to add the above parameter. Here is an example of starting a jar file with the parameter (note that -D options must be placed before -jar, otherwise they are passed to the application instead of the JVM):
<JAVA JRE HOME>/bin/java -Dlog4j2.formatMsgNoLookups=true -jar <jar-file> <additional parameter>
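
Another widely used mitigation, the one the Oracle note mentioned below also describes, is to remove the JndiLookup class from the affected jar itself. Back up the jar first; the jar file name here is just an example:

$ cp log4j-core-2.14.1.jar log4j-core-2.14.1.jar.bak
$ zip -q -d log4j-core-2.14.1.jar org/apache/logging/log4j/core/lookup/JndiLookup.class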

What is a properties file?
Most Java applications need to use properties at some point, generally to store simple parameters as key-value pairs outside of compiled code. The language therefore has first-class support for properties – java.util.Properties – a utility class designed for handling this type of configuration file. Key-value pairs can be loaded from properties files. Popular file endings are .properties and .xml. As long as a file’s content meets the properties file format requirements, it can be parsed correctly by the Properties class.
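
A minimal, purely illustrative example of such a file (all keys and values are made up):

# myapp.properties
db.url=jdbc:oracle:thin:@dbhost:1521/pdb1
db.user=app
log.level=INFO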

Example: Oracle Unified Directory

It seems Oracle is still analysing the situation. I could not do the mitigation according to the official vendor information (Doc ID 2827793.1), that is, by deleting the JndiLookup.class file, as it was not present. Personally, I think deleting files is suboptimal, as it could lead to errors in logfiles when accessing them, and errors may lead to false-positive incidents in monitoring.

As an alternative, I modified Java startup parameter to mitigate:

  1. Edit the file:
    <OUD instance home>/config/java.properties
  2. Add additional parameter:
    start-ds.java-args=-Dlog4j2.formatMsgNoLookups=true
  3. Activate changed properties:
    <OUD instance home>/bin/dsjavaproperties
  4. Restart OUD
    <OUD instance home>/bin/stop-ds --restart
  5. Check if OUD started using parameter
    ps -ef | grep Dlog4j2.formatMsgNoLookups

I’m looking forward to an official patch, which I will apply on a test environment first.

log4shell & Oracle software

Please check this webpage on a regular basis.

Log4j 1.x

Log4j 1.x is not affected by the above CVEs, but by others. And it is end of life; consider migrating to a version greater than or equal to 2.16.x.

The article “log4shell: Determine affection, mitigate, example: Oracle Unified Directory” first appeared on Blog dbi services.

Oracle Database Appliance 19.13: what’s new and how to patch?

Sat, 2021-12-11 04:55
Introduction

Patch 19.13 is now available on ODA. It’s time to test it.

What’s new?

This version brings October’s PSU to database and grid homes with their bug fixes, as usual. It also brings 21.4 for DB Systems (21c being an innovation release, understand: just for testing). Something new that some of us have been waiting for a while is a multi-user mode for odacli tasks: you don’t need to be root anymore to manage the appliance. This new feature can only be enabled at provisioning time, so don’t expect to catch it after patching. A parameter deployment feature is also available now, helping to manage parameters across multiple instances (bare metal only).

This 19.13 will be the last one for those still using a virtualized ODA: virtualization is now integrated in all bare metal ODAs with KVM (instead of OVM); check this blog post if you’re not aware of that. Plan to redeploy your virtualized ODA to bare metal if your system is still supported for a couple of years.

Which ODA is compatible with 19.13?

For sure, the ODA X8 in its 3 flavors (S, M, HA) is compatible, as this is the current one (but it should be replaced quite soon). X7 and X6 are also compatible without any problem. The X5-2HA is also still supported; this could be one of the last patches for this nearly 7-year-old ODA.

Is this patch a cumulative one?

Most of the patches are cumulative, meaning that you can apply them on top of a version older than the previous one. This 19.13 can be applied on top of 19.9 or later, 19.9 being 1 year old. This is something I would recommend to each of my customers: apply patches at least once a year to avoid longer patching, longer meaning multiple patches to apply to reach the target version.

If you are still using an old version, I would say 18.x or older, consider reimaging instead of patching. Reimaging is much more comfortable and cleaner compared to applying 3 or more patches in a row.

Is there also a patch for my databases?

If you’re using 12.1, 12.2 or 19c, these databases can be patched. 11gR2 and 18c are no longer supported on ODA, meaning that your databases will continue working but you should definitely consider an upgrade quite soon. Remember that if you choose to reimage to this latest version, you will not find any 18c or 11gR2 DB homes to deploy.

Download the patch and clone files

For months now, the patch has been lighter because it no longer includes the DB and GI patches. Patching DB or GI means deploying a new home and migrating from the older one. As this new way of patching is based on DB and GI clones, download the corresponding clones to be able to apply the complete patch.

33518928 => the patch itself
30403673 => the GI clone needed for deploying new version (mandatory)
30403662 => the DB clone for deploying new version of 19c (if you use 19c databases)
27119402 => the DB clone for deploying new version of 12.2 (if you use 12.2 databases)
23494992 => the DB clone for deploying new version of 12.1 (if you use 12.1 databases)

Be sure to choose the very latest 19.13 when downloading, as most of the time My Oracle Support will propose an older version for GI clones and DB clones (patch number is the same for older versions).

Prepare the patching

A pre-patch report is now provided and required before applying an ODA patch, but prior to running it, please check these prerequisites:

  • filesystems have 20% available free space (does not concern acfs volumes) – see the check below this list
  • additional rpms manually installed should be removed
  • revert profile scripts to the default ones (concerns grid and oracle users)
  • make sure you can afford a longer than planned downtime, 4 hours being the bare minimum for patching and… troubleshooting. 1 day is never too much.
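
For the first prerequisite, the free space can be checked quickly; adapt the mount points to your setup, anything above 80% usage needs cleanup first:

df -h / /u01 /opt
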
Version precheck

Start by checking the current version of all components:

odacli describe-component | grep -v ^$
System Version
---------------
19.12.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.12.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.12.0.0.0           up-to-date
GI                                        19.12.0.0.210720      up-to-date
DB                                        19.12.0.0.210720      up-to-date
DCSCONTROLLER                             19.12.0.0.0           up-to-date
DCSCLI                                    19.12.0.0.0           up-to-date
DCSAGENT                                  19.12.0.0.0           up-to-date
DCSADMIN                                  19.12.0.0.0           up-to-date
OS                                        7.9                   up-to-date
ILOM                                      5.0.2.24.r141466      up-to-date
BIOS                                      52050300              up-to-date
FIRMWARECONTROLLER                        VDV1RL04              up-to-date
FIRMWAREDISK                              1132                  up-to-date
HMP                                       2.4.8.0.600           up-to-date

Once the patch is registered in the ODA repository, the “Available Version” column will be fed with the versions provided within the patch.

On my ODA X8-2M, I only need 19c databases. As I’m already running the previous 19.12 version, patching shouldn’t be too risky.

Prepare the files and update the DCS tools

Copy the patch files to your ODA in a temp directory. Then unzip the files:

cd /opt/dbi/
for f in p*1913000*.zip; do unzip -n $f; done
Archive:  p30403662_1913000_Linux-x86-64.zip
 extracting: odacli-dcs-19.13.0.0.0-211109-DB-19.13.0.0.zip
Archive:  p30403673_1913000_Linux-x86-64.zip
 extracting: odacli-dcs-19.13.0.0.0-211109-GI-19.13.0.0.zip
Archive:  p33518928_1913000_Linux-x86-64.zip
 extracting: oda-sm-19.13.0.0.0-211127-server.zip

rm -rf p*1913000*.zip

Register the patch in the repository:

odacli update-repository -f /opt/dbi/oda-sm-19.13.0.0.0-211127-server.zip

odacli describe-component | grep -v ^$
System Version
---------------
19.12.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.12.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.12.0.0.0           19.13.0.0.0
GI                                        19.12.0.0.210720      19.13.0.0.211019
DB                                        19.12.0.0.210720      19.13.0.0.211019
DCSCONTROLLER                             19.12.0.0.0           19.13.0.0.0
DCSCLI                                    19.12.0.0.0           19.13.0.0.0
DCSAGENT                                  19.12.0.0.0           19.13.0.0.0
DCSADMIN                                  19.12.0.0.0           19.13.0.0.0
OS                                        7.9                   up-to-date
ILOM                                      5.0.2.24.r141466      up-to-date
BIOS                                      52050300              up-to-date
FIRMWARECONTROLLER                        VDV1RL04              up-to-date
FIRMWAREDISK                              1132                  up-to-date
HMP                                       2.4.8.0.600           up-to-date

Don’t update the repository with the GI and DB clones yet, because you would receive this kind of error:

DCS-10001:Internal error encountered: Cannot find the corresponding image for /opt/dbi/odacli-dcs-19.13.0.0.0-211109-GI-19.13.0.0.zip in img_metadata.

Update the DCS tooling of your ODA:

odacli update-dcsadmin -v 19.13.0.0.0
odacli update-dcscomponents -v 19.13.0.0.0
odacli update-dcsagent -v 19.13.0.0.0

Note that updating the DCS components is not done through a job:

odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 4
ID                                       Description                      Created                             Status
---------------------------------------- -------------------------------- ----------------------------------- ---------
9eccda54-418f-4ac8-8196-197a17fe1f48     Repository Update                                                           December 9, 2021 11:01:49 AM CET    Success
d1daaedd-f522-448e-8f67-08db0e80b6db     DcsAdmin patching                                                           December 9, 2021 3:23:45 PM CET     Success
227f0207-85d0-4d4c-bee8-61545241a588     DcsAgent patching                                                           December 9, 2021 3:26:15 PM CET     Success

Now you can register GI and DB clones:

odacli update-repository -f /opt/dbi/odacli-dcs-19.13.0.0.0-211109-GI-19.13.0.0.zip
odacli update-repository -f /opt/dbi/odacli-dcs-19.13.0.0.0-211109-DB-19.13.0.0.zip

odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 3
ID                                       Description                                                                 Created                             Status
---------------------------------------- --------------------------------------------------------------------------- ----------------------------------- ----------
53e78d25-f594-4784-a320-aa3df5e996a6     Repository Update                                                           December 9, 2021 3:48:14 PM CET     Success
38c4202e-b1e3-4ba7-8431-78c7b33ab776     Repository Update                                                           December 9, 2021 3:54:59 PM CET     Success

odacli describe-component | grep -v ^$
System Version
---------------
19.13.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.13.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.12.0.0.0           19.13.0.0.0
GI                                        19.12.0.0.210720      19.13.0.0.211019
DB                                        19.12.0.0.210720      19.13.0.0.211019
DCSCONTROLLER                             19.13.0.0.0           up-to-date
DCSCLI                                    19.13.0.0.0           up-to-date
DCSAGENT                                  19.13.0.0.0           up-to-date
DCSADMIN                                  19.13.0.0.0           up-to-date
OS                                        7.9                   up-to-date
ILOM                                      5.0.2.24.r141466      up-to-date
BIOS                                      52050300              up-to-date
SHARED CONTROLLER FIRMWARE                VDV1RL04              up-to-date
LOCAL DISK FIRMWARE                       1132                  up-to-date
SHARED DISK FIRMWARE                      1132                  up-to-date
HMP                                       2.4.8.0.600           up-to-date

This update will be limited to Oracle software, so it shouldn’t last hours.

Pre-patching report

Let’s check if patching has the green light:

odacli create-prepatchreport -s -v 19.13.0.0.0

Job details
----------------------------------------------------------------
                     ID:  3e9c9d23-948e-4dbb-8b23-8ab2f6e9fb14
            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]
                 Status:  Created
                Created:  December 9, 2021 4:26:59 PM CET
                Message:  Use 'odacli describe-prepatchreport -i 3e9c9d23-948e-4dbb-8b23-8ab2f6e9fb14' to check details of results

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------


odacli describe-prepatchreport -i 3e9c9d23-948e-4dbb-8b23-8ab2f6e9fb14

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  3e9c9d23-948e-4dbb-8b23-8ab2f6e9fb14
            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]
                 Status:  SUCCESS
                Created:  December 9, 2021 4:26:59 PM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbi-oda-x8

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.13.0.0.0.
Is patch location available     Success   Patch location is available.
Verify OS patch                 Success   Verified OS patch
Validate command execution      Success   Validated command execution

__ILOM__
Validate supported versions     Success   Validated minimum supported versions.
Validate patching tag           Success   Validated patching tag: 19.13.0.0.0.
Is patch location available     Success   Patch location is available.
Checking Ilom patch Version     Success   Patch already applied
Patch location validation       Success   Successfully validated location
Validate command execution      Success   Validated command execution

__GI__
Validate GI metadata            Success   Successfully validated GI metadata
Validate supported GI versions  Success   Validated minimum supported versions.
Validate available space        Success   Validated free space under /u01
Is clusterware running          Success   Clusterware is running
Validate patching tag           Success   Validated patching tag: 19.13.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate ASM in online          Success   ASM is online
Validate kernel log level       Success   Successfully validated the OS log
                                          level
Validate minimum agent version  Success   GI patching enabled in current
                                          DCSAGENT version
Validate Central Inventory      Success   oraInventory validation passed
Validate patching locks         Success   Validated patching locks
Validate clones location exist  Success   Validated clones location
Validate DB start dependencies  Success   DBs START dependency check passed
Validate DB stop dependencies   Success   DBs STOP dependency check passed
Evaluate GI patching            Success   Successfully validated GI patching
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution

The pre-patch check lasted about 10 minutes on my ODA, and everything looks OK.

Patching infrastructure and GI

Let’s start the update-server:

odacli update-server -v 19.13.0.0.0
odacli describe-job -i f0915a98-5acf-4697-94e2-8c0192bcfce2

Job details
----------------------------------------------------------------
                     ID:  f0915a98-5acf-4697-94e2-8c0192bcfce2
            Description:  Server Patching
                 Status:  Success
                Created:  December 10, 2021 2:07:20 PM CET
                Message:  Successfully patched GI with RHP

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validating GI user metadata              December 10, 2021 2:07:33 PM CET    December 10, 2021 2:07:33 PM CET    Success
Creating repositories using yum          December 10, 2021 2:07:34 PM CET    December 10, 2021 2:07:38 PM CET    Success
Updating YumPluginVersionLock rpm        December 10, 2021 2:07:38 PM CET    December 10, 2021 2:07:38 PM CET    Success
Applying OS Patches                      December 10, 2021 2:07:38 PM CET    December 10, 2021 2:16:03 PM CET    Success
Creating repositories using yum          December 10, 2021 2:16:03 PM CET    December 10, 2021 2:16:04 PM CET    Success
Applying HMP Patches                     December 10, 2021 2:16:04 PM CET    December 10, 2021 2:16:22 PM CET    Success
Patch location validation                December 10, 2021 2:16:22 PM CET    December 10, 2021 2:16:22 PM CET    Success
oda-hw-mgmt upgrade                      December 10, 2021 2:16:22 PM CET    December 10, 2021 2:16:55 PM CET    Success
OSS Patching                             December 10, 2021 2:16:55 PM CET    December 10, 2021 2:16:55 PM CET    Success
Checking Ilom patch Version              December 10, 2021 2:16:55 PM CET    December 10, 2021 2:16:56 PM CET    Success
Patch location validation                December 10, 2021 2:16:56 PM CET    December 10, 2021 2:16:56 PM CET    Success
Save password in Wallet                  December 10, 2021 2:16:56 PM CET    December 10, 2021 2:16:56 PM CET    Success
Disabling IPMI v2                        December 10, 2021 2:16:56 PM CET    December 10, 2021 2:16:57 PM CET    Success
Apply Ilom patch                         December 10, 2021 2:16:57 PM CET    December 10, 2021 2:16:57 PM CET    Success
Copying Flash Bios to Temp location      December 10, 2021 2:16:57 PM CET    December 10, 2021 2:16:57 PM CET    Success
Starting the clusterware                 December 10, 2021 2:16:57 PM CET    December 10, 2021 2:20:08 PM CET    Success
registering image                        December 10, 2021 2:20:08 PM CET    December 10, 2021 2:20:08 PM CET    Success
registering working copy                 December 10, 2021 2:20:08 PM CET    December 10, 2021 2:20:08 PM CET    Success
registering image                        December 10, 2021 2:20:08 PM CET    December 10, 2021 2:20:08 PM CET    Success
Creating GI home directories             December 10, 2021 2:20:08 PM CET    December 10, 2021 2:20:08 PM CET    Success
Extract GI clone                         December 10, 2021 2:20:08 PM CET    December 10, 2021 2:20:08 PM CET    Success
Provisioning Software Only GI with RHP   December 10, 2021 2:20:08 PM CET    December 10, 2021 2:20:08 PM CET    Success
Patch GI with RHP                        December 10, 2021 2:20:08 PM CET    December 10, 2021 2:29:39 PM CET    Success
Updating GIHome version                  December 10, 2021 2:29:39 PM CET    December 10, 2021 2:29:42 PM CET    Success
Validate GI availability                 December 10, 2021 2:29:49 PM CET    December 10, 2021 2:29:49 PM CET    Success
Patch KVM CRS type                       December 10, 2021 2:29:49 PM CET    December 10, 2021 2:31:53 PM CET    Success
Update System version                    December 10, 2021 2:31:53 PM CET    December 10, 2021 2:31:53 PM CET    Success
Cleanup JRE Home                         December 10, 2021 2:31:53 PM CET    December 10, 2021 2:31:53 PM CET    Success
Add SYSNAME in Env                       December 10, 2021 2:31:53 PM CET    December 10, 2021 2:31:53 PM CET    Success
Setting ACL for disk groups              December 10, 2021 2:31:53 PM CET    December 10, 2021 2:31:57 PM CET    Success
Update lvm.conf file                     December 10, 2021 2:33:30 PM CET    December 10, 2021 2:33:30 PM CET    Success
Update previous workarounds              December 10, 2021 2:33:30 PM CET    December 10, 2021 2:33:30 PM CET    Success
preRebootNode Actions                    December 10, 2021 2:33:30 PM CET    December 10, 2021 2:36:52 PM CET    Success
Reboot Ilom                              December 10, 2021 2:36:52 PM CET    December 10, 2021 2:36:52 PM CET    Success

The server reboots 5 minutes after the patch ends. On my X8-2M, server patching took 30 minutes.

Let’s check the components’ versions now:

odacli describe-component | grep -v ^$ 
System Version
---------------
19.13.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.13.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.13.0.0.0           up-to-date
GI                                        19.13.0.0.211019      up-to-date
DB                                        19.12.0.0.210720      19.13.0.0.211019
DCSCONTROLLER                             19.13.0.0.0           up-to-date
DCSCLI                                    19.13.0.0.0           up-to-date
DCSAGENT                                  19.13.0.0.0           up-to-date
DCSADMIN                                  19.13.0.0.0           up-to-date
OS                                        7.9                   up-to-date
ILOM                                      5.0.2.24.r141466      up-to-date
BIOS                                      52050300              up-to-date
SHARED CONTROLLER FIRMWARE                VDV1RL04              up-to-date
LOCAL DISK FIRMWARE                       1132                  up-to-date
SHARED DISK FIRMWARE                      1132                  up-to-date
HMP                                       2.4.8.0.600           up-to-date

This looks fine, but be sure to check the ACFS volumes after the reboot!
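
A quick way to verify them is a couple of commands right after the reboot. This is a minimal check, assuming the standard ODA mount points and the 19.13 GI home path used here:

# as root: all ACFS volumes should appear mounted on their /dev/asm devices
df -h | grep '/dev/asm'
# list the ACFS file systems registered in the cluster
/sbin/acfsutil registry
# status of the ACFS file system resources in clusterware
/u01/app/19.13.0.0/grid/bin/srvctl status filesystem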

In my case, the ACFS volumes would not mount after the reboot, even when mounting them manually:

mount.acfs -o all
mount.acfs: CLSU-00107: operating system function: open64; failed with error data: 1; at location: OOF_1
mount.acfs: CLSU-00101: operating system error message: Operation not permitted
mount.acfs: CLSU-00104: additional error information: open64 (/dev/asm/commonstore-175)
mount.acfs: ACFS-02017: Failed to open volume /dev/asm/commonstore-175. Verify the volume exists.
mount.acfs: ACFS-02014: Mount of /opt/oracle/dcs/commonstore failed.  Error -1 was returned.
mount.acfs: CLSU-00107: operating system function: open64; failed with error data: 1; at location: OOF_1
mount.acfs: CLSU-00101: operating system error message: Operation not permitted
mount.acfs: CLSU-00104: additional error information: open64 (/dev/asm/acfsclone-175)
mount.acfs: ACFS-02017: Failed to open volume /dev/asm/acfsclone-175. Verify the volume exists.
mount.acfs: ACFS-02014: Mount of /opt/oracle/oak/pkgrepos/orapkgs/clones failed.  Error -1 was returned.
...

I had to reinstall the drivers from the new GI home, and reboot again:

/u01/app/19.13.0.0/grid/bin/crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbi-oda-x8'
CRS-2673: Attempting to stop 'ora.crsd' on 'dbi-oda-x8'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'dbi-oda-x8'
CRS-2673: Attempting to stop 'ora.cvu' on 'dbi-oda-x8'
CRS-2673: Attempting to stop 'ora.DATA.ACFSCLONE.advm' on 'dbi-oda-x8'
CRS-2673: Attempting to stop 'ora.DATA.CESAREVMS.advm' on 'dbi-oda-x8'
...
CRS-4133: Oracle High Availability Services has been stopped.

/u01/app/19.13.0.0/grid/bin/acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.


/u01/app/19.13.0.0/grid/bin/crsctl start crs

/u01/app/19.13.0.0/grid/bin/acfsroot enable

reboot

Once done, everything was back to normal.

Patching the storage

Patching the storage is only needed on older ODAs or patch levels. If you do need it, it’s easy: run the pre-patch check, then the patch itself:

odacli create-prepatchreport -st -v 19.13.0.0.0
odacli update-storage -v 19.13.0.0.0

For HA ODAs using RAC, patching can be done in a rolling fashion:

odacli update-storage -v 19.13.0.0.0 --rolling

I have never encountered any trouble during storage patching, so it should go smoothly.

Patching the DB homes

Since 19.11, DB homes are created on an ACFS filesystem. If you come from 19.9 or 19.10, you will need to configure this filesystem first:

odacli configure-dbhome-storage -dg DATA

The time needed to patch the DB homes depends on the number of homes and databases. In this example, 2 DB homes are deployed:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
721c1fc8-b394-4682-b230-21a197e997b3     OraDB19000_home10    19.12.0.0.210720                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_10 CONFIGURED
2e702142-048e-42a2-a570-82318458f72d     OraDB19000_home11    19.12.0.0.210720                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_11 CONFIGURED

A pre-patch report is also needed here, for example to patch the second home:

odacli create-prepatchreport -d -i 2e702142-048e-42a2-a570-82318458f72d -v 19.13.0.0.0
odacli describe-prepatchreport -i 2f5547b1-e95e-489f-bd68-aba0c4260765

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  2f5547b1-e95e-489f-bd68-aba0c4260765
            Description:  Patch pre-checks for [DB, ORACHKDB]: DbHome is OraDB19000_home11
                 Status:  SUCCESS
                Created:  December 10, 2021 6:12:44 PM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbi-oda-x8

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID             Success   Validated DB Home ID:
                                          2e702142-048e-42a2-a570-82318458f72d
Validate patching tag           Success   Validated patching tag: 19.13.0.0.0.
Is system provisioned           Success   Verified system is provisioned
Validate minimum agent version  Success   Validated minimum agent version
Is GI upgraded                  Success   Validated GI is upgraded
Validate available space for    Success   Validated free space required under
db                                        /u01/app/odaorahome
Validate dbHomesOnACFS          Success   User has configured diskgroup for
configured                                Database homes on ACFS
Validate Oracle base            Success   Successfully validated Oracle Base
Is DB clone available           Success   Successfully validated clone file
                                          exists
Evaluate DBHome patching with   Success   Successfully validated updating
RHP                                       dbhome with RHP
Validate command execution      Success   Validated command execution

__ORACHK__
Running orachk                  Success   Successfully ran Orachk
Validate command execution      Success   Validated command execution

Now I can apply the patch on this DB home:

odacli update-dbhome -i 2e702142-048e-42a2-a570-82318458f72d -v 19.13.0.0.0 
odacli describe-job -i ca7673d2-3dd7-45b8-b3e4-9e818b20a4cb

Job details
----------------------------------------------------------------
                     ID:  ca7673d2-3dd7-45b8-b3e4-9e818b20a4cb
            Description:  DB Home Patching: Home Id is 2e702142-048e-42a2-a570-82318458f72d
                 Status:  Success
                Created:  December 10, 2021 6:27:49 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Adding USER SSH_EQUIVALENCE              December 10, 2021 6:27:54 PM CET    December 10, 2021 6:27:54 PM CET    Success
Adding USER SSH_EQUIVALENCE              December 10, 2021 6:27:54 PM CET    December 10, 2021 6:27:54 PM CET    Success
Adding USER SSH_EQUIVALENCE              December 10, 2021 6:27:54 PM CET    December 10, 2021 6:27:55 PM CET    Success
Creating wallet for DB Client            December 10, 2021 6:28:32 PM CET    December 10, 2021 6:28:32 PM CET    Success
Patch databases by RHP                   December 10, 2021 6:28:32 PM CET    December 10, 2021 6:33:15 PM CET    Success
updating database metadata               December 10, 2021 6:33:15 PM CET    December 10, 2021 6:33:15 PM CET    Success
Set log_archive_dest for Database        December 10, 2021 6:33:15 PM CET    December 10, 2021 6:33:18 PM CET    Success
Update System version                    December 10, 2021 6:33:18 PM CET    December 10, 2021 6:33:18 PM CET    Success
TDE parameter update                     December 10, 2021 6:33:45 PM CET    December 10, 2021 6:33:45 PM CET    Success

A new DB home has been created, and my database is now linked to it:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
721c1fc8-b394-4682-b230-21a197e997b3     OraDB19000_home10    19.12.0.0.210720                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_10 CONFIGURED
2e702142-048e-42a2-a570-82318458f72d     OraDB19000_home11    19.12.0.0.210720                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_11 CONFIGURED
abe18169-18e3-4a3f-a7c8-8c1443863910     OraDB19000_home15    19.13.0.0.211019                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_15 CONFIGURED

odacli list-databases | grep abe18169

617de33b-d9b4-4142-89a5-6bce21e8c00b     DBITSTEE   SI       19.13.0.0.211019     false      OLTP     odb2     ASM        CONFIGURED   abe18169-18e3-4a3f-a7c8-8c1443863910

The old DB home can be removed safely:

odacli delete-dbhome -i 2e702142-048e-42a2-a570-82318458f72d

For each database running in that DB home, a parameter needs to be changed:

su - oracle
. oraenv <<< DBITSTEE
sqlplus / as sysdba
alter system set "_enable_numa_support"=true scope=spfile sid='*';
exit
srvctl stop database -d DBITSTEE_SITE1
srvctl start database -d DBITSTEE_SITE1

This only concerns multi-processor ODAs (not the S models): it forces each instance to use local memory modules, i.e. those attached to the processor the instance runs on, which should improve overall performance.
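
If several databases run in the patched home, a small loop avoids repeating these steps by hand. This is a minimal sketch to run as the oracle user; the database list and the _SITE1 unique-name suffix come from this Data Guard setup and must be adapted to yours:

for db in DBITSTEE; do   # list here the DB names attached to the new home
. oraenv <<< $db
sqlplus -s / as sysdba <<'EOF'
alter system set "_enable_numa_support"=true scope=spfile sid='*';
exit
EOF
# restart the instance so the spfile change takes effect
srvctl stop database -d ${db}_SITE1
srvctl start database -d ${db}_SITE1
done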

Patching the other DB homes is done the same way.
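
For instance, the remaining home from this example would be patched with the same pre-patch/patch sequence, reusing its ID from the list-dbhomes output above:

odacli create-prepatchreport -d -i 721c1fc8-b394-4682-b230-21a197e997b3 -v 19.13.0.0.0
# once the report shows SUCCESS:
odacli update-dbhome -i 721c1fc8-b394-4682-b230-21a197e997b3 -v 19.13.0.0.0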

Remember that patching standby databases will raise an error, as datapatch cannot be applied to a mounted or read-only database. The patch should be applied on the primary; the changes then propagate automatically to the standby, as they only affect database objects.
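
Before launching update-dbhome on a home hosting standby databases, it’s worth checking each database’s role first. A minimal check, using DBITSTEE as in this example:

su - oracle
. oraenv <<< DBITSTEE
echo 'select database_role, open_mode from v$database;' | sqlplus -s / as sysdba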

Please check my previous blog post about patching ODAs in a Data Guard environment.

I would recommend checking the patch level on each primary after patching each DB home:

su - oracle
. oraenv <<< DBITSTEE
sqlplus / as sysdba
set serverout on
exec dbms_qopatch.get_sqlpatch_status;
Patch Id : 32876380
	Action : APPLY
	Action Time : 03-AUG-2021 16:15:57
	Description : OJVM RELEASE UPDATE: 19.12.0.0.210720 (32876380)
	Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/32876380/24269510/32876380_apply_G17741_CDB
ROOT_2021Aug03_15_59_00.log
	Status : SUCCESS

Patch Id : 32904851
	Action : APPLY
	Action Time : 03-AUG-2021 16:15:57
	Description : Database Release Update : 19.12.0.0.210720 (32904851)
	Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/32904851/24343243/32904851_apply_G17741_CDB
ROOT_2021Aug03_15_59_01.log
	Status : SUCCESS

Patch Id : 32876380
	Action : ROLLBACK
	Action Time : 10-DEC-2021 18:33:12
	Description : OJVM RELEASE UPDATE: 19.12.0.0.210720 (32876380)
	Logfile :
/u01/app/odaorabase/oracle/cfgtoollogs/sqlpatch/32876380/24269510/32876380_rollb
ack_DBITSTEE_2021Dec10_18_31_06.log
	Status : SUCCESS

Patch Id : 33192694
	Action : APPLY
	Action Time : 10-DEC-2021 18:33:15
	Description : OJVM RELEASE UPDATE: 19.13.0.0.211019 (33192694)
	Logfile :
/u01/app/odaorabase/oracle/cfgtoollogs/sqlpatch/33192694/24421575/33192694_apply
_DBITSTEE_2021Dec10_18_31_33.log
	Status : SUCCESS

Patch Id : 33192793
	Action : APPLY
	Action Time : 10-DEC-2021 18:33:15
	Description : Database Release Update : 19.13.0.0.211019 (33192793)
	Logfile :
/u01/app/odaorabase/oracle/cfgtoollogs/sqlpatch/33192793/24462514/33192793_apply
_DBITSTEE_2021Dec10_18_31_33.log
	Status : SUCCESS

PL/SQL procedure successfully completed.
exit

Final checks

Let’s get the final versions:

odacli describe-component | grep -v ^$
System Version
---------------
19.13.0.0.0
System node Name
---------------
dbi-oda-x8
Local System Version
---------------
19.13.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.13.0.0.0           up-to-date
GI                                        19.13.0.0.211019      up-to-date
DB {
[ OraDB19000_home10 ]                     19.12.0.0.210720      19.13.0.0.211019
[ OraDB19000_home15 ]                     19.13.0.0.211019      up-to-date
}
DCSCONTROLLER                             19.13.0.0.0           up-to-date
DCSCLI                                    19.13.0.0.0           up-to-date
DCSAGENT                                  19.13.0.0.0           up-to-date
DCSADMIN                                  19.13.0.0.0           up-to-date
OS                                        7.9                   up-to-date
ILOM                                      5.0.2.24.r141466      up-to-date
BIOS                                      52050300              up-to-date
SHARED CONTROLLER FIRMWARE                VDV1RL04              up-to-date
LOCAL DISK FIRMWARE                       1132                  up-to-date
SHARED DISK FIRMWARE                      1132                  up-to-date
HMP                                       2.4.8.0.600           up-to-date

One of my DB homes is still running 19.12 because I didn’t patch it.

Clean up the old patches

The old patches will never be used again. For this ODA, the patching history was:

Deploy = 19.10.0.0.0 => Patch 19.12.0.0.0 => Patch 19.13.0.0.0

If you don’t remember the history of your ODA, have a look in this folder:

ls -lrt /opt/oracle/oak/pkgrepos/System/
total 44
-rwxr-xr-x. 1 root root 26255 Mar 18  2021 system_repos_metadata.xml
drwxr-xr-x  2 root root  4096 Aug 27 15:18 19.12.0.0.0
drwxr-xr-x. 5 root root  4096 Nov  3 09:56 19.10.0.0.0
drwxrwxr-x  2 root root  4096 Nov 27 17:16 19.13.0.0.0
drwxr-xr-x. 2 root root  4096 Dec  9 15:54 latest

From the timestamps, you can infer that this ODA was deployed with 19.10 and first patched with 19.12.

Let’s remove the previous patch:

odacli cleanup-patchrepo -cl -comp db,gi -v 19.12.0.0.0

Put back your own settings

Once everything is OK, don’t forget to put back your settings:

  • reinstall your additional RPMs if needed
  • restore your profile scripts for the grid and oracle users (see the sketch below)
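
Here is a minimal sketch of these two steps, assuming you saved a package list and the profile scripts before patching (all file names below are hypothetical):

# compare installed rpms with the list saved before patching
rpm -qa | sort > /tmp/rpms.after
diff /root/rpms.before /tmp/rpms.after | grep '^<'   # packages removed by the patch
# restore the saved profile scripts
cp /root/backup/grid.bash_profile /home/grid/.bash_profile
cp /root/backup/oracle.bash_profile /home/oracle/.bash_profile
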
And what about DB Systems updates?

If you use DB Systems on your ODA, meaning that some of your databases run in dedicated VMs, the server patch is not applied inside the DB Systems themselves.

Let’s run describe-component inside the VM itself:

odacli describe-component | grep -v ^$
System Version
---------------
19.12.0.0.0
System node Name
---------------
srvdb02
Local System Version
---------------
19.12.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.12.0.0.0           19.13.0.0.0
GI                                        19.12.0.0.210720      19.13.0.0.211019
DB                                        19.12.0.0.210720      19.13.0.0.211019
DCSCONTROLLER                             19.12.0.0.0           19.13.0.0.0
DCSCLI                                    19.12.0.0.0           19.13.0.0.0
DCSAGENT                                  19.12.0.0.0           19.13.0.0.0
DCSADMIN                                  19.12.0.0.0           19.13.0.0.0
OS                                        7.9                   up-to-date

Patching a DB System is done while connected to it, and the commands are similar to those used on bare metal:

odacli update-dcsadmin -v 19.13.0.0.0
odacli update-dcscomponents -v 19.13.0.0.0
odacli update-dcsagent -v 19.13.0.0.0
odacli create-prepatchreport -s -v 19.13.0.0.0
odacli describe-prepatchreport -i 584efe26-ff23-4024-a1e9-8182b2b38a89
odacli update-server -v 19.13.0.0.0
odacli create-prepatchreport -d -i b1a0bfd9-5db5-4bce-8e4d-49f07480cc4e -v 19.13.0.0.0
odacli describe-prepatchreport -i 03b90bd8-a0b5-4998-83ba-c3943917d951
odacli update-dbhome -i b1a0bfd9-5db5-4bce-8e4d-49f07480cc4e -v 19.13.0.0.0
odacli delete-dbhome -i b1a0bfd9-5db5-4bce-8e4d-49f07480cc4e
odacli describe-component | grep -v ^$
System Version
---------------
19.13.0.0.0
System node Name
---------------
srvdb02
Local System Version
---------------
19.13.0.0.0
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       19.13.0.0.0           up-to-date
GI                                        19.13.0.0.211019      up-to-date
DB                                        19.13.0.0.211019      up-to-date
DCSCONTROLLER                             19.13.0.0.0           up-to-date
DCSCLI                                    19.13.0.0.0           up-to-date
DCSAGENT                                  19.13.0.0.0           up-to-date
DCSADMIN                                  19.13.0.0.0           up-to-date
OS                                        7.9                   up-to-date

It may seem obvious, but you cannot patch a DB System if its host is not up to date:

Validate BM versions     Failed    Operation: Update Server failed, node dbi-oda-x8 is not running with the supported version for OAK,GI.

Using DB Systems is a nice option if you want to keep things clean and isolated, but it means more work when patching, as patching a DB System is not much faster than patching bare metal.

Conclusion

My ODA is now on the latest 19c version. Coming from a recent release, this was not too difficult; it could be more challenging starting from an older one.

The article Oracle Database Appliance 19.13: what’s new and how to patch? appeared first on the dbi services blog.
