Yann Neuhaus

dbi services technical blog

Patching ODA lite to 12.2.1.4.0

Mon, 2018-07-30 10:30

Here is how to apply the latest patch to your ODA. As usual, the Oracle documentation is not 100% accurate. I applied this patch on two ODA X7-2M systems previously deployed in 12.2.1.2.0; no intermediate patch was needed.

1) Download the patch

The patch number is 28216794. This patch will update the following components: dcs (odacli), operating system, BIOS/firmware, ILOM, GI, dbhomes and databases.
Download and copy the patch to a temporary folder on the server, for example /opt/patch. You’ll have to be root to apply the patch.

2) Check the actual versions and free space on disk

First check the current version:

odacli describe-component

System Version
---------------
12.2.1.2.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.2.0            up-to-date
GI                                        12.2.0.1.171017       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.171017       up-to-date
}
DCSAGENT                                  12.2.1.2.0            up-to-date
ILOM                                      4.0.0.28.r121827      4.0.0.22.r120818
BIOS                                      41017600              41017100
OS                                        6.8                   up-to-date

The “Available Version” column shows what is available in the ODA repository. As this ODA has never been patched, some of the components are already at newer versions than those provided in the initial deployment package. Registering a new patch will refresh this repository.

Check that the /, /u01 and /opt filesystems have enough free space to proceed (>= 10 GB each).
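A quick way to check the free space (thresholds as above):

df -h / /u01 /opt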

3) Prepare the patch files

The previous patch was slightly different from the others because Oracle simply forgot to double-zip files 2 and 3 of the patch. This patch is back to a more classic process: unzip the three downloaded files and register the resulting files (they are zip files, too).

cd /opt/patch
unzip p28216794_122140_Linux-x86-64_1of3.zip
unzip p28216794_122140_Linux-x86-64_2of3.zip
unzip p28216794_122140_Linux-x86-64_3of3.zip

odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server1of3.zip
odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server2of3.zip
odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server3of3.zip

Updating the repository, like other tasks run through odacli, generates a job id you can check to confirm that everything ran fine:

odacli describe-job -i "4087f0f4-2582-4621-b75c-59c9915a0cb5"

Job details
----------------------------------------------------------------
                     ID:  4087f0f4-2582-4621-b75c-59c9915a0cb5
            Description:  Repository Update
                 Status:  Success
                Created:  July 19, 2018 10:32:53 AM CEST
                Message:  oda-sm-12.2.1.4.0-180703-server1of3.zip

Note: you can update the repository with a single update-repository command that includes the 3 files, but it’s better to register them separately to make a corrupted file easier to identify.
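For reference, the single-command variant takes a comma-separated list of files (same file names as unzipped above):

odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server1of3.zip,/opt/patch/oda-sm-12.2.1.4.0-180708-server2of3.zip,/opt/patch/oda-sm-12.2.1.4.0-180708-server3of3.zip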

New with this patch: you can check whether your system is ready for patching with these commands:

odacli create-prepatchreport -s -v 12.2.1.4.0
odacli describe-prepatchreport -i f45f9750-ec9b-411f-ba53-43205cb17f87

4) Update the dcs-agent

First, you’ll have to update the dcs-agent:

/opt/oracle/dcs/bin/odacli update-dcsagent -v 12.2.1.4.0

odacli describe-job -i "5cc9174e-bd7a-435d-aaff-0113e9ab01bc"

Job details
----------------------------------------------------------------
                     ID:  5cc9174e-bd7a-435d-aaff-0113e9ab01bc
            Description:  DcsAgent patching
                 Status:  Success
                Created:  July 19, 2018 10:36:18 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                July 19, 2018 10:36:18 AM CEST      July 19, 2018 10:36:19 AM CEST      Success
dcs-agent upgrade                        July 19, 2018 10:36:19 AM CEST      July 19, 2018 10:36:19 AM CEST      Success

This update takes only a few seconds. Check the versions again and you will see that the dcs-agent is now an 18c component:

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.2.0            12.2.1.4.0
GI                                        12.2.0.1.171017       12.2.0.1.180417
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       12.2.0.1.180417
[ OraDB11204_home1 ]                      11.2.0.4.171017       11.2.0.4.180417
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.0.28.r121827      4.0.2.20.b.r123704
BIOS                                      41017600              41021300
OS                                        6.8                   6.9
FIRMWARECONTROLLER                        QDV1RE0F              qdv1re14

5) Update the server

Updating the server will update the OS, ILOM, BIOS, firmware and GI. The update process assumes that no additional packages have been installed. If you installed additional packages, remove them, as they can prevent the patch from applying correctly.

This part of the update is the longest one (about 1 hour), and will end with a reboot of the server.

odacli update-server -v 12.2.1.4.0

odacli describe-job -i "a2e296c2-1b3e-4ed1-a5cc-0cb7c4d23120"

Job details
----------------------------------------------------------------
                     ID:  a2e296c2-1b3e-4ed1-a5cc-0cb7c4d23120
            Description:  Server Patching
                 Status:  Success
                Created:  July 19, 2018 12:12:53 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                July 19, 2018 12:12:53 PM CEST      July 19, 2018 12:12:53 PM CEST      Success
dcs-controller upgrade                   July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
Patch location validation                July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
dcs-cli upgrade                          July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
Creating repositories using yum          July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:55 PM CEST      Success
Applying HMP Patches                     July 19, 2018 12:12:55 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
Patch location validation                July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
oda-hw-mgmt upgrade                      July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
Creating repositories using yum          July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:35 PM CEST      Success
Applying OS Patches                      July 19, 2018 12:14:35 PM CEST      July 19, 2018 12:16:18 PM CEST      Success
OSS Patching                             July 19, 2018 12:16:18 PM CEST      July 19, 2018 12:16:18 PM CEST      Success
Applying Firmware Disk Patches           July 19, 2018 12:16:18 PM CEST      July 19, 2018 12:16:23 PM CEST      Success
Applying Firmware Expander Patches       July 19, 2018 12:16:23 PM CEST      July 19, 2018 12:16:26 PM CEST      Success
Applying Firmware Controller Patches     July 19, 2018 12:16:26 PM CEST      July 19, 2018 12:16:30 PM CEST      Success
Checking Ilom patch Version              July 19, 2018 12:16:31 PM CEST      July 19, 2018 12:16:33 PM CEST      Success
Patch location validation                July 19, 2018 12:16:33 PM CEST      July 19, 2018 12:16:34 PM CEST      Success
Apply Ilom patch                         July 19, 2018 12:16:35 PM CEST      July 19, 2018 12:16:36 PM CEST      Success
Copying Flash Bios to Temp location      July 19, 2018 12:16:37 PM CEST      July 19, 2018 12:16:37 PM CEST      Success
Starting the clusterware                 July 19, 2018 12:16:57 PM CEST      July 19, 2018 12:16:58 PM CEST      Success
clusterware patch verification           July 19, 2018 12:16:58 PM CEST      July 19, 2018 12:17:00 PM CEST      Success
Patch location validation                July 19, 2018 12:17:00 PM CEST      July 19, 2018 12:17:03 PM CEST      Success
Opatch updation                          July 19, 2018 12:17:33 PM CEST      July 19, 2018 12:17:35 PM CEST      Success
Patch conflict check                     July 19, 2018 12:17:35 PM CEST      July 19, 2018 12:18:16 PM CEST      Success
clusterware upgrade                      July 19, 2018 12:18:16 PM CEST      July 19, 2018 12:33:07 PM CEST      Success
Updating GiHome version                  July 19, 2018 12:33:07 PM CEST      July 19, 2018 12:33:11 PM CEST      Success
preRebootNode Actions                    July 19, 2018 12:33:30 PM CEST      July 19, 2018 12:34:11 PM CEST      Success
Reboot Ilom                              July 19, 2018 12:34:11 PM CEST      July 19, 2018 12:34:11 PM CEST      Success

If there is a problem during the patching process (you forgot to remove an additional rpm or one of the patch files is missing, for example), you can relaunch the patching and it will skip the already patched components.

Once this part of the patch is successfully applied, check the components again:

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.4.0            up-to-date
GI                                        12.2.0.1.180417       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       12.2.0.1.180417
[ OraDB11204_home1 ]                      11.2.0.4.171017       11.2.0.4.180417
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.2.20.b.r123704    up-to-date
BIOS                                      41017600              41021300
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER                        QDV1RE14              up-to-date

Note: at this point, the BIOS is not yet reported as updated. The reason is unclear, but check again later and it will show the new version.

6) Patch the dbhomes

You now need to patch the dbhomes separately. First of all, list them:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
90556d26-6756-4fed-9546-d44d55b6fc04     OraDB11204_home1     11.2.0.4.171017 (26609929, 26392168)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
18a9c067-3629-409d-9bae-60d27516c914     OraDB12201_home1     12.2.0.1.171017 (27020386, 26710464)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

Patch the first one. The target version is actually the version of the ODA patch. Remember that patching the ODA will never upgrade your databases to a newer release: your 11gR2 databases will stay on 11.2.0.4 and your 12cR1 databases will stay on 12.1.0.2 (if you’re using one or both of these versions). Only the PSU level changes (the fifth number, which is actually a date).

odacli update-dbhome -i 90556d26-6756-4fed-9546-d44d55b6fc04 -v 12.2.1.4.0

odacli describe-job -i "c1abf083-d597-4673-b07b-d7cb79ec434a"

Job details
----------------------------------------------------------------
                     ID:  c1abf083-d597-4673-b07b-d7cb79ec434a
            Description:  DB Home Patching: Home Id is 90556d26-6756-4fed-9546-d44d55b6fc04
                 Status:  Success
                Created:  July 19, 2018 12:59:11 PM CEST
                Message:  WARNING::Failed to run the datapatch as db DB03_S2 is not registered with clusterware##WARNING::Failed to run the datapatch as db DB04_S2 is not registered with clusterware##WARNING::Failed to run the datapatch as db DB05_S2 is not r

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           July 19, 2018 12:59:12 PM CEST      July 19, 2018 12:59:15 PM CEST      Success
Patch location validation                July 19, 2018 12:59:15 PM CEST      July 19, 2018 12:59:19 PM CEST      Success
Opatch updation                          July 19, 2018 12:59:39 PM CEST      July 19, 2018 12:59:40 PM CEST      Success
Patch conflict check                     July 19, 2018 12:59:40 PM CEST      July 19, 2018 12:59:49 PM CEST      Success
db upgrade                               July 19, 2018 12:59:49 PM CEST      July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:25 PM CEST       Success

Despite the overall Success status, some databases can stay at the previous version for various reasons:
– the database is a standby database: no update is possible on its dictionary (the dictionary will be updated when the primary is patched)
– the database is not correctly registered in the Oracle Cluster Registry (for example if you created it with odacli in instance-only mode: create-database -io – probably a bug)

After applying the patch, check the dictionary version of each database with this statement:

select ACTION_TIME, ACTION, VERSION, ID, BUNDLE_SERIES, COMMENTS from  DBA_REGISTRY_HISTORY;
ACTION_TIME                    ACTION       VERSION                  ID BUNDLE_SERIES   COMMENTS
------------------------------ ------------ ---------------- ---------- --------------- ------------------------------
23/07/18 20:25:26,765974       APPLY        11.2.0.4             180417 PSU             PSU 11.2.0.4.180417

Then update the next dbhome:

odacli update-dbhome -i 18a9c067-3629-409d-9bae-60d27516c914 -v 12.2.1.4.0

odacli describe-job -i "ef53a8a6-bd74-40f6-a338-343489d41a1c"

Job details
----------------------------------------------------------------
                     ID:  ef53a8a6-bd74-40f6-a338-343489d41a1c
            Description:  DB Home Patching: Home Id is 18a9c067-3629-409d-9bae-60d27516c914
                 Status:  Success
                Created:  July 19, 2018 1:20:20 PM CEST
                Message:  WARNING::Failed to run datapatch on db DB12TST Failed to run Utlrp script##WARNING::Failed to run the datapatch as db DB12DEV: is not registered with clusterware

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           July 19, 2018 1:20:41 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Patch location validation                July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Opatch updation                          July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Patch conflict check                     July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
db upgrade                               July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:47 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:47 PM CEST       July 19, 2018 1:20:47 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:47 PM CEST       July 19, 2018 1:21:03 PM CEST       Success

Like the previously patched dbhome, this one also reports warnings. Check that each database is OK.

Check the /opt/oracle/dcs/log/dcs-agent.log for extended warning messages if needed, and then check the DB Version after the update of all dbhomes:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
90556d26-6756-4fed-9546-d44d55b6fc04     OraDB11204_home1     11.2.0.4.180417 (27441052, 27338049)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
18a9c067-3629-409d-9bae-60d27516c914     OraDB12201_home1     12.2.0.1.180417 (27464465, 27674384)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured
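If you need to extract just the warning lines from the dcs-agent log mentioned above, a simple grep does the job (the pattern is only an example):

grep -i warning /opt/oracle/dcs/log/dcs-agent.log | tail -50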

Jump to step 8 if your databases are OK.

7) Update the databases where datapatch failed

If for some reason datapatch was not applied on some of your databases and it should have been, you’ll have to do it manually:

For 12c:

su - oracle
. oraenv <<< DB12TST
cd $ORACLE_HOME/OPatch
./datapatch -verbose

For 11gR2:

su - oracle
. oraenv <<< DB11TST
sqlplus / as sysdba
@?/rdbms/admin/catbundle.sql psu apply

8) Patch the storage

No update-storage is needed for this patch on this ODA.

9) Optional: deploy the latest db clone files

If you will never deploy a new dbhome, this step is not necessary. If you will, or simply if you don’t know whether you’ll have to do it later, download and register the new db clone files in the repository to be able to create a new dbhome at the same patch level as the one already deployed, for example:

cd /opt/patch

unzip p27119402_122140_Linux-x86-64.zip
Archive:  p27119402_122140_Linux-x86-64.zip
 extracting: odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip
  inflating: README.txt

odacli update-image --image-files odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip
Unpacking will take some time,  Please wait...
Unzipping odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip

10) Control the final version of the components

The patching is now done. About 2 hours were needed when everything goes fine.

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.4.0            up-to-date
GI                                        12.2.0.1.180417       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180417       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.180417       up-to-date
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.2.20.b.r123704    up-to-date
BIOS                                      41021300              up-to-date
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER                        QDV1RE14              up-to-date

11) Optional: patch the SFP firmware

If you’re using SFP network interfaces on your ODA X7 (fiber-connected network), you may encounter network troubles (lost pings, failover not working correctly, and so on). The SFPs are not part of the ODA patching process, but there is a firmware update available for this kind of SFP on the Oracle X7 server family (including the ODA). Please review MOS note 2373070.1 for more information:

Using the onboard SFP28 ports on an ODA X7-2 server node (Doc ID 2373070.1)

 


Logical standby: No view for data guard status on session level

Fri, 2018-07-27 09:26

On a logical standby you have to disable the database guard, at least at session level, to modify objects which are maintained by Data Guard.

This is done with the command:


alter session disable guard;

The guard can be re-enabled for the session by issuing:


alter session enable guard;

Executing these commands requires at least the “alter database” privilege, which IT security staff often do not like to grant.

The command “alter session disable guard” is also used in logon triggers on logical standby databases for power users who have to modify objects maintained by Data Guard. The problem is that no view can be queried to get the guard status of a session.

A workaround is to write the result of the logon trigger into a log table which can be queried by the user.
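As a minimal sketch (the POWER_USER account, the APP_ADMIN schema and the log table are made-up names for illustration), such a logon trigger could disable the guard and record that fact in a table the user is allowed to query:

create table app_admin.guard_log (
  username   varchar2(128),
  logon_time date,
  action     varchar2(100)
);

create or replace trigger app_admin.trg_disable_guard
after logon on database
declare
  pragma autonomous_transaction;  -- allows the commit of the log entry inside the trigger
begin
  if sys_context('USERENV', 'SESSION_USER') = 'POWER_USER' then
    -- disable the guard for this session only
    execute immediate 'alter session disable guard';
    -- keep a trace the user can query, since no view exposes the session guard status
    insert into app_admin.guard_log (username, logon_time, action)
    values (sys_context('USERENV', 'SESSION_USER'), sysdate, 'alter session disable guard executed');
    commit;
  end if;
end;
/

grant select on app_admin.guard_log to power_user;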

 


ODA, network interface and configure-firstnet

Fri, 2018-07-27 09:13

Deploying new ODA X7-2S or X7-2M systems, I was curious how configure-firstnet would interact with the fiber and copper ethernet network interfaces. Reading the documentation available on the web, I could not clearly understand whether it is mandatory to have the ODA connected to the LAN when reimaging the ODA and/or running configure-firstnet, in addition to having an ILOM connection. After digging a little deeper and a few tests, I wanted to share my experience in this blog.

configure-firstnet

Running configure-firstnet needs a little attention as it can be run only once. In the example below, I’m running the firstnet configuration with a VLAN.

[Screenshot: configure-firstnet run with a VLAN]

network-script files

The network script files are stored in /etc/sysconfig/network-scripts.

Bonding is configured on the btbond1 interface as follows:

  1. em2 as the primary interface.
  2. active-backup mode: em3 is used as backup only and will take over if em2 fails
    BONDING_OPTS="mode=active-backup miimon=100 primary=em2"

The only officially supported bonding mode on the ODA is "active-backup". Manually updating the ifcfg-btbond1 file with BONDING_OPTS="mode=4 miimon=100 lacp_rate=1" in order to implement LACP (Link Aggregation Control Protocol) would work if your switch is configured accordingly, but it is not supported. The recommendation is to stick to "active-backup" mode on the ODA.

This can be seen in the ifcfg-btbond1 configuration file.

[Screenshot: ifcfg-btbond1 configuration file]
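For reference, the relevant part of a typical ifcfg-btbond1 on the ODA looks like the sketch below (exact entries may differ between ODA versions; only the bonding options are taken from my notes above):

DEVICE=btbond1
TYPE=Bond
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100 primary=em2"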

After running configure-firstnet with a VLAN, a new file ifcfg-btbond1.<vlan_id> is created. This file contains the whole IP configuration (IP address, netmask, gateway). If no VLAN is used, the IP configuration is added to the ifcfg-btbond1 configuration file instead.

[Screenshot: ifcfg-btbond1.<vlan_id> configuration file]

If we look more closely at the network configuration files, we see em2 and em3 configured with btbond1 as their master.

[Screenshot: ifcfg-em2 and ifcfg-em3 configuration files]

As we can see, so far there is no reference in the network configuration files to whether you are using the fiber or the copper ethernet interfaces.

em2 and em3 interfaces: fiber or copper?

The em2 and em3 interfaces are automatically bound either to fiber or to copper ethernet according to the physical connection: they will use either the 2 fiber ports (SFP28) or the 2 ethernet ports (NET1 and NET2).
In fact, as soon as a GBIC converter is plugged into the fiber ports and a reboot is performed, em2 and em3 are automatically linked to the fiber adapter, and no fiber cabling is even needed.

No GBIC converter installed on the ODA

[Photo: ODA without a GBIC converter installed]

em2 and em3 interfaces would be seen as Twisted Pair.

[Screenshot: em2 and em3 reported as Twisted Pair]

GBIC converter

[Photo: GBIC converter]

GBIC converter installed on the ODA

[Photo: GBIC converter installed on the ODA]

After a server reboot, em2 and em3 interfaces would be seen as Fiber.

[Screenshot: em2 and em3 reported as Fiber after plugging in the GBIC and rebooting]

Conclusion

Based on this short experience, we can see that fiber is detected as soon as a GBIC adapter is plugged into the SFP28 interfaces. The ifcfg network scripts are totally independent of this choice. Therefore, there is no need to have the ODA em2 and em3 network connections cabled to reimage the ODA and run configure-firstnet. These connections become mandatory for the next step, when the appliance is created.

 


How to shrink tables with on commit materialized views

Fri, 2018-07-27 08:44

Usually it is not possible to shrink tables which are used by on commit materialized views.

The result is an ORA-10652 “Object has on-commit materialized views” error, whose action section suggests nothing.

There is a workaround for this error: convert all materialized views which rely on the table to be shrunk from on-commit to on-demand refresh. The application must tolerate that the affected materialized views are not updated while the table space is being shrunk.
The affected materialized views can be found by querying dba_mviews and checking whether the SQL query of each view uses the table to be shrunk.
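Instead of reading each materialized view query manually, a query like the sketch below (using dba_dependencies joined to dba_mviews; schema and table names are placeholders) can list the on-commit views that depend on the table:

select d.owner, d.name as mview_name
from   dba_dependencies d
join   dba_mviews m
  on   m.owner = d.owner and m.mview_name = d.name
where  d.type = 'MATERIALIZED VIEW'
and    d.referenced_owner = 'MY_SCHEMA'
and    d.referenced_name  = 'MY_TABLE'
and    m.refresh_mode = 'COMMIT';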

This gives the following procedure:
alter table table_name enable row movement;
alter materialized view materialized_view_name refresh on demand;
alter table table_name shrink space;
exec dbms_mview.refresh('materialized_view_name');
alter materialized view materialized_view_name refresh on commit;
alter table table_name disable row movement;

Note: for "alter table enable row movement" and "alter table shrink space", the table must be accessible, otherwise locks on the table may cause delays or errors.
The statements "alter materialized view ... refresh on demand", "exec dbms_mview.refresh" and "alter materialized view ... refresh on commit" must be executed for every materialized view which uses the table to be shrunk.

 


Running SQL Server containers on K8s Docker for Windows CE stable channel

Fri, 2018-07-27 07:33

The release of Docker for Windows stable version 18.06.0-ce-win72 comes with some great new features I had been waiting for, including K8s support! That’s pretty good news, because this support has existed in the Edge channel since the beginning of this year, but installing a beta version on my laptop was not an option for me.

So now we get a good opportunity to test our SQL Server image locally on a single-node K8s architecture.

 

[Screenshot: Docker for Windows 18.06.0-ce-win72 with Kubernetes enabled]

 

One interesting point here is that I can switch between K8s contexts as shown below:

[Screenshot: switching the Kubernetes context]

 

The first one (dbi8scluster) corresponds to my K8s cluster on Azure, which I wrote about some time ago, and the second one is my single K8s node on my Windows 10 laptop. So switching between my different environments is very easy, as shown below:

[dab@DBI:$]> kubectl get nodes
NAME                       STATUS     ROLES     AGE       VERSION
aks-nodepool1-78763348-0   NotReady   agent     57d       v1.9.6
aks-nodepool1-78763348-1   NotReady   agent     57d       v1.9.6
C:\Users\dab\Desktop
[dab@DBI:$]> kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    1d        v1.10.3
C:\Users\dab\Desktop
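The switch itself is done with kubectl config (the context names are the ones configured on my laptop; yours will differ):

kubectl config get-contexts
kubectl config use-context docker-for-desktop
kubectl config use-context dbi8scluster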

 

It is also interesting to get a picture of the containers that run the K8s infrastructure on Docker. Assuming you already enabled the "Show system containers" option in advanced mode, you can display all the K8s-related containers as follows:

[dab@DBI:$]> docker ps --format "table {{.ID}}\t {{.Names}}"
CONTAINER ID         NAMES
829a5941592e         k8s_compose_compose-7447646cf5-l5shp_docker_7e3ff6d9-908e-11e8-91f0-00155d0013a6_4
53666469f25d         k8s_compose_compose-api-6fbc44c575-7jrj8_docker_7e3ff0d0-908e-11e8-91f0-00155d0013a6_4
cd3772216e72         k8s_sidecar_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
8ae73505dfb0         k8s_dnsmasq_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
8066fbefc371         k8s_kubedns_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
2591d102e6fb         k8s_kube-proxy_kube-proxy-p9jv9_kube-system_7e43eaab-908e-11e8-91f0-00155d0013a6_4
80f6d997a225         k8s_POD_compose-7447646cf5-l5shp_docker_7e3ff6d9-908e-11e8-91f0-00155d0013a6_4
23751c4fd2fc         k8s_POD_kube-proxy-p9jv9_kube-system_7e43eaab-908e-11e8-91f0-00155d0013a6_4
f96406dedefb         k8s_POD_compose-api-6fbc44c575-7jrj8_docker_7e3ff0d0-908e-11e8-91f0-00155d0013a6_4
9149e9b91fd3         k8s_POD_kube-dns-86f4d74b45-v7mr9_kube-system_7e3cb79b-908e-11e8-91f0-00155d0013a6_4
2316ed63e2ee         k8s_kube-controller-manager_kube-controller-manager-docker-for-desktop_kube-system_120c685a17dc3d67e505450a6ea9243c_4
52defb42bbaf         k8s_kube-apiserver_kube-apiserver-docker-for-desktop_kube-system_814863b48e4b523c13081a7bb4c85f0d_4
3366ebf8f058         k8s_kube-scheduler_kube-scheduler-docker-for-desktop_kube-system_ea66a171667ec4aaf1b274428a42a7cf_4
bd903b9dce3f         k8s_etcd_etcd-docker-for-desktop_kube-system_d82203d846b07255217d0e72211752f0_4
7e650673b6d2         k8s_POD_kube-apiserver-docker-for-desktop_kube-system_814863b48e4b523c13081a7bb4c85f0d_4
24e4bfb59184         k8s_POD_kube-scheduler-docker-for-desktop_kube-system_ea66a171667ec4aaf1b274428a42a7cf_4
3422edb44165         k8s_POD_etcd-docker-for-desktop_kube-system_d82203d846b07255217d0e72211752f0_4
aeca6879906b         k8s_POD_kube-controller-manager-docker-for-desktop_kube-system_120c685a17dc3d67e505450a6ea9243c_4

 

We retrieve all the K8s components, including the API server, the controller manager, the scheduler, kube-proxy and the etcd cluster database. It is not my intention to go deeper into this topic here; I will probably get the opportunity to dig further in future blog posts.

My first test consisted in deploying our dbi services custom development image of SQL Server 2017 on Linux on my K8s node. I had already done it with our dbi services production image on my previous K8s infrastructure on Azure, and it was interesting to check whether we have to operate in the same way. This is at least what I expected, and I was right. Just note that I haven’t performed the same test with Windows containers yet, but hopefully it will come soon.

Obviously, I had to change my storage class to storageClassName: hostpath to point to my host storage, as follows:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-data-sql
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /T/Docker/DMK/BACKUP
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim-data-sql
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

 

I just want to draw your attention to the hostPath, because the initial path has to be slightly rewritten to be understood by K8s. On the Windows side my path is T:/Docker/DMK/BACKUP, and it contains a backup of my custom AdventureWorks database for our tests.

Here is the command to deploy my persistent volume and the corresponding persistent volume claim that will be used by my SQL Server pod:

[dab@DBI:$]> kubectl create -f .\docker_k8s_storage.yaml
persistentvolume "pv-data-sql" created
persistentvolumeclaim "pv-claim-data-sql" created

 

Then my deployment file didn’t change a lot, as expected:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: pv-claim-data-sql
      containers:
      - name: mssql
        image: dbi/dbi_linux_sql2017:2017-CU4
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          value: "Password1"
          # valueFrom:
          #   secretKeyRef:
          #     name: mssql
          #     key: SA_PASSWORD 
        - name: DMK
          value: "Y"
        volumeMounts:
        - name: mssqldb
          mountPath: "/backup"
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer

 

The command to deploy my SQL Server pod and the corresponding service is:

[dab@DBI:$]> kubectl create -f .\docker_k8s_sql.yaml
deployment.apps "mssql-deployment" created
service "mssql-deployment" created

 

So, let’s get a picture of my newly deployed environment:

[dab@DBI:$]> kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mssql-deployment   1         1         1            1           2m

[dab@DBI:$]> kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP          1d
mssql-deployment   LoadBalancer   10.109.238.88   localhost     1433:30200/TCP   2m

[dab@DBI:$]> kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
mssql-deployment-6c69bb6f7c-pqb2d   1/1       Running   0          7m        10.1.0.39   docker-for-desktop

 

Everything seems to be deployed successfully. I use a LoadBalancer service here, but bear in mind I only have one node. Here is a description of my newly deployed pod:

[dab@DBI:$]> kubectl describe pod mssql-deployment-6c69bb6f7c-pqb2d
Name:           mssql-deployment-6c69bb6f7c-pqb2d
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Fri, 27 Jul 2018 12:45:50 +0200
Labels:         app=mssql
                pod-template-hash=2725662937
Annotations:    <none>
Status:         Running
IP:             10.1.0.39
Controlled By:  ReplicaSet/mssql-deployment-6c69bb6f7c
Containers:
  mssql:
    Container ID:   docker://a17039dcdcb22c1b4b80c73fb17e73df90efda19ed77e215ef92bf86c9bfc538
    Image:          dbi/dbi_linux_sql2017:2017-CU4
    Image ID:       docker://sha256:2a693c121c33c390f944df7093b8c902f91940fa966ae8e5190a7a4a5b0681d2
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 27 Jul 2018 12:45:51 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:        Y
      MSSQL_SA_PASSWORD:  Password1
      DMK:                Y
    Mounts:
      /backup from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qts2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv-claim-data-sql
    ReadOnly:   false
  default-token-5qts2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qts2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  Scheduled              3m    default-scheduler            Successfully assigned mssql-deployment-6c69bb6f7c-pqb2d to docker-for-desktop
  Normal  SuccessfulMountVolume  3m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pv-data-sql"
  Normal  SuccessfulMountVolume  3m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-5qts2"
  Normal  Pulled                 3m    kubelet, docker-for-desktop  Container image "dbi/dbi_linux_sql2017:2017-CU4" already present on machine
  Normal  Created                3m    kubelet, docker-for-desktop  Created container
  Normal  Started                3m    kubelet, docker-for-desktop  Started container

 

And finally, let’s have a look at a sample of my SQL Server log:

========================== 2018-07-27 10:46:43 Restoring AdventureWorks database OK ==========================
========================== 2018-07-27 10:46:43 Installing TSQLt ==========================
2018-07-27 10:46:44.01 spid51      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'clr enabled' changed from 0 to 1. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'clr strict security' changed from 1 to 0. Run the RECONFIGURE statement to install.
2018-07-27 10:46:44.02 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 3072. Run the RECONFIGURE statement to install.
Configuration option 'clr enabled' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'clr strict security' changed from 1 to 0. Run the RECONFIGURE statement to install.
Configuration option 'max server memory (MB)' changed from 2147483647 to 3072. Run the RECONFIGURE statement to install.
2018-07-27 10:46:47.39 spid51      Starting up database 'dbi_tools'.
2018-07-27 10:46:47.72 spid51      Parallel redo is started for database 'dbi_tools' with worker pool size [2].
2018-07-27 10:46:47.74 spid51      Parallel redo is shutdown for database 'dbi_tools' with worker pool size [2].
Installed at 2018-07-27 10:46:47.863
2018-07-27 10:46:48.54 spid51      AppDomain 3 (dbi_tools.dbo[runtime].2) created.

(1 rows affected)

+-----------------------------------------+
|                                         |
| Thank you for using tSQLt.              |
|                                         |
| tSQLt Version: 1.0.5873.27393           |
|                                         |
+-----------------------------------------+
0
========================== 2018-07-27 10:46:49 Installing TSQLt OK ==========================
======= 2018-07-27 10:46:49 MSSQL CONFIG COMPLETED =======
2018-07-27 10:51:03.35 spid55      Using 'dbghelp.dll' version '4.0.5'
2018-07-27 10:51:10.32 spid55      Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
2018-07-27 10:51:10.40 spid55      Using 'xplog70.dll' version '2017.140.3022' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.

 

My SQL Server pod has the backup available under the /backup path inside the container, and my custom AdventureWorks database was restored from it. We also added the tSQLt framework to the image for our unit tests.

Let’s connect to my SQL Server pod by using the mssql-cli tool:

C:\WINDOWS\system32>mssql-cli -S localhost -U sa -P Password1
Version: 0.15.0
Mail: sqlcli@microsoft.com
Home: http://github.com/dbcli/mssql-cli
master>

Time: 0.000s
master> select name from sys.databases;
+--------------------+
| name               |
|--------------------|
| master             |
| tempdb             |
| model              |
| msdb               |
| AdventureWorks_dbi |
| dbi_tools          |
+--------------------+
(6 rows affected)
Time: 0.406s

 

My first deployment is successful. There are plenty of topics to cover about K8s and SQL Server containers, and other write-ups will come soon for sure. Stay tuned!

 

 

 

 


How to install MariaDB 10.2.9 from binary tarball on CentOS 7 ?

Thu, 2018-07-26 03:22

 

In this blog, I will show you how to install MariaDB from the binary tarball. The installation is done on a VirtualBox machine (2 GB RAM, 20 GB of disk).

There are several ways to install MariaDB on your Linux:

  • rpm
  • binary tarball
  • building it from source

Prerequisites:

First, SSH to your Linux server.

Update it:

 [root@deploy mariadb]$ yum update -y

Install wget:

[root@deploy mariadb]$ yum install wget
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.spreitzer.ch

Remove all existing mariadb packages:

[root@deploy ~]$ rpm -qa | grep mariadb
mariadb-libs-5.5.56-2.el7.x86_64
[root@deploy ~]$ yum remove mariadb-libs-5.5.56-2.el7.x86_64
Loaded plugins: fastestmirror
Resolving Dependencies

Create the directory for the binaries

[root@deploy ~]$ mkdir /mariadbBinaries

Create the directory for the data:

[root@deploy ~]$ mkdir /mariadbData/mysqld1 -p

Create a directory where we are going to put the my.cnf configuration file for MariaDB:

[root@deploy ~]$ mkdir /mariadbConf
[root@deploy ~]$ touch /mariadbConf/my.cnf

Create the mysql group and user:

[root@deploy ~]$ groupadd mysql
[root@deploy ~]$ useradd -g mysql mysql
[root@deploy ~]$ passwd mysql
[root@deploy ~]$ echo "mysql ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

Change the ownership of the directories to mysql:

[root@deploy ~]$ chown -R mysql:mysql /mariadbBinaries/ /mariadbConf/ /mariadbData/

Switch to the mysql user and download the binary tarball:

[root@deploy ~]$ su - mysql
[mysql@deploy ~]$ cd /mariadbBinaries/
[mysql@deploy mariadbBinaries]$ wget https://downloads.mariadb.org/interstitial/mariadb-10.2.9/bintar-linux-x86_64/mariadb-10.2.9-linux-x86_64.tar.gz

Check that the tarball is here:

[mysql@deploy mariadbBinaries]$ ls -l
total 442528
-rw-rw-r--. 1 mysql mysql 453146319 Sep 25 22:39 mariadb-10.2.9-linux-x86_64.tar.gz

Let’s extract the tarball:

[mysql@deploy mariadbBinaries]$ tar zxvf mariadb-10.2.9-linux-x86_64.tar.gz
(omitted output)

Create a symbolic link between /mariadbConf/my.cnf and /etc/my.cnf

[mysql@deploy ~]$ sudo ln -s /mariadbConf/my.cnf /etc/my.cnf
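With the binaries extracted and the symbolic link in place, a minimal /mariadbConf/my.cnf for this layout could look like the sketch below (the socket path and port are just examples, adapt them to your standards):

[mysqld]
basedir = /mariadbBinaries/mariadb-10.2.9-linux-x86_64
datadir = /mariadbData/mysqld1
socket  = /mariadbData/mysqld1/mysqld1.sock
port    = 3306
user    = mysql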



 

 


How to deploy an AWS infrastructure using CloudFormation?

Thu, 2018-07-26 03:13

What is CloudFormation?

As you know, one of the big challenges of cloud computing is to deploy infrastructure as fast as possible and in the easiest way.
Of course, this challenge is more than met, but when it comes to deploying a large infrastructure we need another kind of tool, called an orchestration tool; most cloud providers offer a native one. To give an overview: AWS has CloudFormation, Microsoft Azure has Azure Resource Manager templates, Google Cloud has Cloud Deployment Manager and OpenStack has Heat.
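Since this post is about CloudFormation, here is a minimal sketch of what a template looks like: it declares a single EC2 instance (the AMI id and key pair name are placeholders to adapt to your region and account):

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example - a single EC2 instance
Resources:
  DemoInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI id
      KeyName: my-keypair              # placeholder key pair
      Tags:
        - Key: Name
          Value: cloudformation-demo

Such a template is then deployed with the AWS CLI, for example: aws cloudformation create-stack --stack-name demo --template-body file://demo.yaml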

 

 

 


SQL Server on Linux – I/O internal thoughts

Tue, 2018-07-24 07:55

Let’s start the story from the beginning with some fun testing I performed with SQL Server on Linux a couple of months ago. At that time, I wanted to get a picture of the syscalls issued by SQL Server, as I had already done in the past on the Windows side with Sysinternals tools such as procmon and stack traces. On Linux, strace is probably one of the best counterparts.


First, please note this blog post is based on my own research only; it is not official Microsoft documentation in any way and likely contains some inaccuracies. This kind of write-up is definitely not easy when you’re not a developer of the product, and things change quickly nowadays, which can make a blog post biased over time :) Anyway, I was just curious to figure out how SQL Server deals with I/O on the Linux side, and the main point here is certainly to show how you can investigate it on Linux. So let’s start from the beginning with what we already know on the Windows operating system: the SQL Server engine goes through the Win32 API and functions like CreateFile(), ReadFile() and WriteFile() to deal with I/O, but let’s focus specifically on the CreateFile() function here. CreateFile() is used to create or open a file or an I/O device with some specific flags. Some of them, such as FILE_FLAG_WRITE_THROUGH, are used to meet the Write-Ahead Logging (WAL) protocol by bypassing all system / disk caches (cf. the Microsoft article).

On April 10, 2018 I did my first tests on the Linux side with SQL Server 2017 CU5, and here was my first strace output after creating a test database:

[Screenshot: strace output showing the database files opened with the O_DIRECT flag]

It was an expected output for me, because on Linux SQL Server uses the low-level open() system call (the counterpart of CreateFile() on Windows), but the surprising thing was to see the O_DIRECT flag only. I’m not a system developer, but I can understand the benefits of using O_DIRECT with database systems: it goes along with AIO (asynchronous I/O) and it completely bypasses the kernel-space buffers (by default on Linux). I take the opportunity to thank @dbaffaleuf for our interesting discussions on this topic.

But referring to the open() documentation, we can also read the following about O_DIRECT:

File I/O is done directly to/from user-space buffers. The O_DIRECT flag on its own makes an effort to transfer data synchronously, but does not give the guarantees of the O_SYNC flag that data and necessary metadata are transferred

From my understanding, using O_DIRECT alone implies that durable writes to the block device are not guaranteed, and from my trace I noticed the transaction log seemed to be opened with O_DIRECT only in this case…

2837  20:44:32 open("/u02/sqlserverlog/mssqlserver/dbi_test_log.ldf", O_RDWR|O_CREAT|O_EXCL|O_DIRECT, 0660) = 187
2837  10:13:09 fstat(187, {st_mode=S_IFREG|0660, st_size=0, ...}) = 0

 

From a WAL protocol perspective, using such a flag alone might not meet the requirements, because we might experience data loss in case of a system / storage outage, unless either another low-level mechanism like fsync() is used for the transaction log and checkpoints, or the storage guarantees that writes are O_DIRECT safe. At this stage I expected to find more fsync()-related entries in my trace, but no luck, as shown below (I only show a sample, but in fact there were no other relevant syscalls in the entire trace that might indicate forced synchronization):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 72.59   12.993418        2089      6220      2503 futex
 10.78    1.929811        7509       257           epoll_wait
  9.08    1.625657       47813        34        28 restart_syscall
  3.28    0.587733        8643        68           io_getevents
  1.97    0.351806       70361         5           nanosleep
  1.92    0.344254       34425        10        10 rt_sigtimedwait
  0.32    0.056943           3     22116           clock_gettime
  0.02    0.003530         101        35           munmap
  0.01    0.002149          32        67           io_submit
  0.01    0.001706           6       273           epoll_ctl
  0.01    0.000897           6       154           read
  0.00    0.000765          11        68           writev
  0.00    0.000605           4       136        68 readv
  0.00    0.000591           4       137           write
  0.00    0.000381          10        40        23 open
  …
------ ----------- ----------- --------- --------- ----------------
100.00   17.900633                 29827      2638 total

 

A couple of weeks ago, when I updated my test environment to SQL Server 2017 CU8 (on Linux), I noticed the following messages (in the red rectangle):

[Screenshot: SQL Server errorlog showing the forced flush messages]

 

Hmm… that was new and interesting, and in fact the aforementioned messages are related to this Microsoft article (thanks to @Bobwardms for pointing me to it). This new behavior is available since SQL Server 2017 CU6, and the article describes how Microsoft introduced a new “forced flush” mechanism for SQL Server on Linux systems.

In a nutshell, in all scenarios a new flush mechanism guarantees data is safely written to stable media for transaction logs and during checkpoints. Let’s dig further into both methods.

First, let’s say I applied the same pattern for all the tests that follow. The test protocol included one dbo.t1 table with only one column (id int). For each test I inserted a bunch of rows (67 rows to be precise, corresponding to 67 distinct implicit transactions) without any other concurrent user activity. There remains some SQL Server internal activity, but I guess we can consider it negligible compared to my tests.

insert dbo.t1 values (1)
go 67

 

  • Default forced flush mechanism behavior

 

In this scenario, referring to my strace output, database files are still opened with O_DIRECT only, as shown below (my database’s name is toto this time):

4745  10:54:34 open("/u01/sqlserverdata/mssqlserver/toto.mdf", O_RDWR|O_CREAT|O_EXCL|O_DIRECT, 0660) = 188

 

I also used the following command to get a summary of the number of calls per syscall:

$ sudo strace -f -c -o sql_strace.txt $(pidof sqlservr |sed 's/\([0-9]*\)/\-p \1/g')

 

Here is a sample of the output I got:

$ cat sql_strace.txt
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 71.36   24.078231        2685      8969      3502 futex
 12.13    4.093680      120402        34        30 restart_syscall
 10.03    3.384817       12583       269           epoll_wait
  2.67    0.901541       13258        68           io_getevents
  1.77    0.598830       46064        13        13 rt_sigtimedwait
  1.66    0.560893       93482         6           nanosleep
  0.23    0.077873           2     31422           clock_gettime
  0.09    0.030924         462        67           fsync
  0.01    0.003212         321        10           madvise
  0.01    0.003026          22       136           write
  …
------ ----------- ----------- --------- --------- ----------------
100.00   33.742080                 42195      3651 total

 

This time I noticed additional fsync() calls to guarantee writes to the blocks: 67 calls that seem to be related to my 67 transactions, right? I double-checked in my strace output and it seems to be the case, but I may be wrong on this point, so please feel free to comment. Another interesting point is that I still noticed asynchronous I/O through the io_getevents function in my trace. That makes sense to me: data writes are asynchronous while transaction log writes are synchronous by design. In this mode, fsync() is triggered on transaction commits and during checkpoints.

 

  • Enabling trace flag 3979

Enabling trace flag 3979 disables the forced flush behavior, which is replaced by the writethrough and alternatewritethrough options. Referring to the Microsoft article, the former translates the well-known FILE_FLAG_WRITE_THROUGH flag requests into O_DSYNC opens, while the latter adds some performance optimization by using fdatasync() rather than fsync(). Indeed, fdatasync() is supposed to generate less I/O activity because it does not require synchronizing the file metadata (only the data portion of the file is concerned).
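For reference, here is a sketch of how these switches are typically set with mssql-conf; the 1/0 values below are only illustrative, so check the Microsoft article for the values that match your storage guarantees before changing anything:

sudo /opt/mssql/bin/mssql-conf traceflag 3979 on
sudo /opt/mssql/bin/mssql-conf set control.writethrough 1
sudo /opt/mssql/bin/mssql-conf set control.alternatewritethrough 0
sudo systemctl restart mssql-server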

Anyway, my strace output below confirms that both the data file and the transaction log were opened with both O_DIRECT and O_DSYNC, meaning we bypass the kernel buffer space and we also force synchronous I/O for both files. Does it matter? Well, from my understanding, writing dirty data pages is still an asynchronous process when checkpoints occur. How much better does it perform in this case? I don’t know, and it will likely require another bunch of serious tests on different use cases.

5279  11:07:36 open("/u01/sqlserverdata/mssqlserver/dbi_test.mdf", O_RDWR|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0660) = 185
…
5279  11:07:38 open("/u02/sqlserverlog/mssqlserver/dbi_test_log.ldf", O_RDWR|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0660) = 185

 

As previously, I noticed 67 calls (of fdatasync() this time, not fsync()) likely related to my 67 implicit transactions, and I still noticed asynchronous I/O driven by the io_getevents() and io_submit() syscalls.

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 73.90   12.678046        2360      5371      1927 futex
 10.80    1.853571        7788       238           epoll_wait
  9.05    1.551962       45646        34        30 restart_syscall
  1.93    0.331275       47325         7         7 rt_sigtimedwait
  1.84    0.316524      105508         3           nanosleep
  1.61    0.276806        4131        67           io_getevents
  0.71    0.121815           7     18259           clock_gettime
  0.06    0.010184         152        67           fdatasync
  0.02    0.003851          26       146           read
  0.02    0.003217          12       272           epoll_ctl
  0.01    0.002139          32        67           io_submit
  0.01    0.002070          15       136           write
  …
------ ----------- ----------- --------- --------- ----------------
100.00   17.156630                 25170      2054 total

 

 

Finally, at the time of this write-up, Microsoft recommends enabling trace flag 3979 as well as changing the default values of the writethrough and alternatewritethrough options to 0 to revert to the behavior that existed before CU6 (illustrated in the first section of this blog post), but only if your storage guarantees that your writes will be “O_DIRECT” safe. I think you can understand why now, if my understanding of Linux I/O is correct :)

See you!

 

 


Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries

Tue, 2018-07-24 05:57

Good news: the latest patchset for Oracle 12cR2 (which is not called a patchset anymore but is actually release 18c, numbered 18.0.0.0.0) is available for download on OTN. It is great because the OTN download does not require access to Support and Software Updates. It is available to anybody under the Free Developer License Terms (basically development, testing, prototyping and demonstrating for an application that is not in production, and for non-commercial use). We all complained about the ‘Cloud First’ strategy because we were eager to download the latest version. But the positive aspect of it is that we now have on OTN a release that has been stabilized after a few release updates. In the past, only the first version of the latest release was available there. Now we have one with many bugs fixed.

Of course, I didn’t wait and I have tested 18c as soon as it was available on the Oracle Cloud thanks to the ACE Director program that provided me with some Cloud Credits. In this post, I’ll update my Cloud database to run it with the on-premises binary. Because that’s the big strength of Oracle: we can run the same software, 100% compatible, on the Cloud and on our own servers. There are some limitations in the features available, but technically it is the same software.

Oracle Cloud First

Here is my Cloud version of Oracle 18c, installed in February (18.1) and updated in April (18.2) and July (18.3):

SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 
SQL> select banner from v$version;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> select banner_full from v$version;
 
BANNER_FULL
----------------------------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> select banner_legacy from v$version;
 
BANNER_LEGACY
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

LINUX.X64_180000_db_home.zip

I have installed the on-premises 18c available on OTN. The good things with the new releases are:

  • No need to extract installer files: just unzip the Oracle Home image and run the installer from it (see the sketch after this list)
  • This Oracle Home image already includes the latest Release Updates
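
Concretely, the image-based installation boils down to something like this (a minimal sketch; the target path is an assumption for my environment, and runInstaller then walks through the usual interactive screens):

mkdir -p /u01/app/oracle/product/18.0.0/dbhome_1
cd /u01/app/oracle/product/18.0.0/dbhome_1
unzip -q /tmp/LINUX.X64_180000_db_home.zip
./runInstaller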


SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 

We have 4 updates from July here for the following components:

  • The Database (28090523)
  • The Java in the Oracle Home, aka JDK (27908644)
  • The Java in the database, aka OJVM (27923415)
  • The clusterware component for the database to match the CRS, aka OCW (28090553)

So, now we have an image of the Oracle Home which already contains all the latest updates… except one:

$ cat $ORACLE_HOME/sqldeveloper/sqldeveloper/bin/version.properties
 
COMPANY=Oracle
PRODUCT=SQL Developer
VERSION=17.30003410937f
VER=17.3.2
VER_FULL=17.3.2.341.0937
BUILD_LABEL=341.0937
BUILD_NUM=341.0937
EDITION=

Unfortunately, that’s an old version of SQL Developer here, and there is no SQLcl. So just download these separately and unzip them into the Oracle Home.

DataPatch

So, what happens when I open the database that I have created on 18.1 and patched with 18.2 and 18.3 RUs on the Oracle Cloud? There are two updates for the database (DBRU and OJVM). The DBRU is already there then DataPatch has only to apply the OJVM:

[oracle@VM183x dbhome_1]$ $ORACLE_HOME/OPatch/datapatch
SQL Patching tool version 18.0.0.0.0 Production on Tue Jul 24 10:57:55 2018
Copyright (c) 2012, 2018, Oracle. All rights reserved.
 
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_11104_2018_07_24_10_57_55/sqlpatch_invocation.log
 
Connecting to database...OK
Gathering database info...done
 
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
 
Bootstrapping registry and package to current versions...done
Determining current state...done
 
Current state of interim SQL patches:
Interim patch 27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)):
Binary registry: Installed
PDB CDB$ROOT: Not installed
PDB PDB$SEED: Not installed
PDB PDB1: Not installed
 
Current state of release update SQL patches:
Binary registry:
18.3.0.0.0 Release_Update 1806280943: Installed
PDB CDB$ROOT:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.20.321353 AM
PDB PDB$SEED:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.21.802495 AM
PDB PDB1:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.23.230513 AM
 
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
No interim patches need to be rolled back
No release update patches need to be installed
The following interim patches will be applied:
27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415))
 
Installing patches...
Patch installation complete. Total patches installed: 3
 
Validating logfiles...done
Patch 27923415 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log (no errors)
Patch 27923415 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDBSEED_2018Jul24_10_58_56.log (no errors)
Patch 27923415 apply (pdb PDB1): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDB1_2018Jul24_10_58_56.log (no errors)
SQL Patching tool complete on Tue Jul 24 10:59:21 2018

Now here is the history of patches:

SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
Patch Id : 27923415
Action : APPLY
Action Time : 24-JUL-2018 10:59:19
Description : OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

This is all good. Despite the different release schedules, the level of software is exactly the same. And we can start on-premises with a release that carries a low regression risk (18c is like a patchset) but many fixes (several release updates). For the moment only the Linux port is there. The other platforms should come this summer.

 

Cet article Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries est apparu en premier sur Blog dbi services.

Syncing Active Directory users and groups to PostgreSQL

Mon, 2018-07-23 10:55

A lot of companies use Active Directory to manage their users and groups. What most of these companies also want to do is manage their database users and groups in Active Directory. PostgreSQL comes with LDAP/Kerberos authentication by default but does not provide anything that helps with managing users and groups in an external directory. And even for the authentication, the user already needs to exist in PostgreSQL. One tool you might want to have a look at, and that helps with this requirement, is pg-ldap-sync.

As usual, I am using CentOS 7 for the scope of this post. To get pg-ldap-sync onto the system, PostgreSQL needs to be installed, as pg_config is expected to be there. Once you have that, several packages need to be installed (openldap-clients is not required, but it is handy to have in case you want to test some ldapsearch commands against Active Directory):

[root@pgadsync ~]$ yum install -y ruby rubygem-rake rubygems ruby-devel openldap-clients git

pg-ldap-sync can either be installed directly with ruby commands or you can install it from Git:

[postgres@pgadsync ~]$ git clone https://github.com/larskanis/pg-ldap-sync.git
[postgres@pgadsync ~]$ cd pg-ldap-sync
[postgres@pgadsync pg-ldap-sync]$ gem install bundler
[postgres@pgadsync pg-ldap-sync]$ bundle install
[postgres@pgadsync pg-ldap-sync]$ bundle exec rake install
[postgres@pgadsync pg-ldap-sync]$ which pg_ldap_sync 
~/bin/pg_ldap_sync
[postgres@pgadsync pg-ldap-sync]$ cd ..
[postgres@pgadsync ~]$ bin/pg_ldap_sync --help
Usage: bin/pg_ldap_sync [options]
    -v, --[no-]verbose               Increase verbose level
    -c, --config FILE                Config file [/etc/pg_ldap_sync.yaml]
    -t, --[no-]test                  Don't do any change in the database

And then, of course, you need something in the Active Directory for synchronization. In my test Active Directory I create a new “Organizational Unit” called “PostgreSQL”:

Selection_001

Inside this “Organizational Unit” there is a user which is used for authenticating against Active Directory:
Selection_002

Then we have two other “Organizational Units”, one for the PostgreSQL DBAs and one for the groups we’d like to sync:
Selection_003

There are three people in the pgadmins unit:
Selection_004

There is one group in the groups unit:
Selection_005

… and the group has two members:
Selection_006

This is what we want to synchronize to PostgreSQL. The final requirement is that two roles need to exist in PostgreSQL (you’ll see later why that is important):

postgres@pgbox:/home/postgres/ [PG10] psql -X postgres
psql (10.3)
Type "help" for help.

postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=# create role ldap_users;
CREATE ROLE
postgres=# create role ldap_groups;
CREATE ROLE
postgres=# \du
                                    List of roles
  Role name  |                         Attributes                         | Member of 
-------------+------------------------------------------------------------+-----------
 ldap_groups | Cannot login                                               | {}
 ldap_users  | Cannot login                                               | {}
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=# 

With pg-ldap-sync each instance you want to have synchronized needs a separate yaml file like this one:

# With this sample config the distinction between LDAP-synchronized
# groups/users and locally created ones is done by membership in
# ldap_users and ldap_groups. These two roles have to be defined
# manually before pg_ldap_sync can run.

# Connection parameters to LDAP server
# see also: http://net-ldap.rubyforge.org/Net/LDAP.html#method-c-new
ldap_connection:
  host: 172.22.30.1
  port: 389
  auth:
    method: :simple
    username: CN=pgadsync,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
    password: xxxxx
#  encryption:
#    method: :simple_tls

# Search parameters for LDAP users which should be synchronized
ldap_users:
  base: OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
  # LDAP filter (according to RFC 2254)
  # defines to users in LDAP to be synchronized
#  filter: (&(objectClass=person)(objectClass=organizationalPerson)(givenName=*)(sn=*)(sAMAccountName=*))
  filter: (sAMAccountName=*)
  # this attribute is used as PG role name
  name_attribute: sAMAccountName
  # lowercase name for use as PG role name
  lowercase_name: true

# Search parameters for LDAP groups which should be synchronized
ldap_groups:
  base: OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
  filter: (cn=dbas)
  # this attribute is used as PG role name
  name_attribute: cn
  # lowercase name for use as PG role name
  lowercase_name: false
  # this attribute must reference to all member DN's of the given group
  member_attribute: member

# Connection parameters to PostgreSQL server
# see also: http://rubydoc.info/gems/pg/PG/Connection#initialize-instance_method
pg_connection:
  host: 192.168.22.99
  dbname: postgres
  user: postgres
  password: postgres

pg_users:
  # Filter for identifying LDAP generated users in the database.
  # It's the WHERE-condition to "SELECT rolname, oid FROM pg_roles"
  filter: oid IN (SELECT pam.member FROM pg_auth_members pam JOIN pg_roles pr ON pr.oid=pam.roleid WHERE pr.rolname='ldap_users')
  # Options for CREATE ROLE statements
  create_options: LOGIN IN ROLE ldap_users

pg_groups:
  # Filter for identifying LDAP generated groups in the database.
  # It's the WHERE-condition to "SELECT rolname, oid FROM pg_roles"
  filter: oid IN (SELECT pam.member FROM pg_auth_members pam JOIN pg_roles pr ON pr.oid=pam.roleid WHERE pr.rolname='ldap_groups')
  # Options for CREATE ROLE statements
  create_options: NOLOGIN IN ROLE ldap_groups
#grant_options:

When you have a look at the “pg_users” and “pg_groups” you will notice why the two PostgreSQL roles created above are required. They are used to distinguish the users and groups coming from the directory and those created locally.

Ready to sync:

[postgres@pgadsync ~]$ bin/pg_ldap_sync -c etc/pg_ldap_sync.yaml -vv 
I, [2018-07-23T14:23:46.350588 #29270]  INFO -- : found user-dn: CN=dba1,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.360073 #29270]  INFO -- : found user-dn: CN=dba2,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.363133 #29270]  INFO -- : found user-dn: CN=dba3,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.474105 #29270]  INFO -- : found group-dn: CN=dbas,OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:23:46.517468 #29270]  INFO -- : user stat: create: 3 drop: 0 keep: 0
I, [2018-07-23T14:23:46.517798 #29270]  INFO -- : group stat: create: 1 drop: 0 keep: 0
I, [2018-07-23T14:23:46.518047 #29270]  INFO -- : membership stat: grant: 2 revoke: 0 keep: 0
I, [2018-07-23T14:23:46.518201 #29270]  INFO -- : SQL: CREATE ROLE "dba1" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.522229 #29270]  INFO -- : SQL: CREATE ROLE "dba2" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.525156 #29270]  INFO -- : SQL: CREATE ROLE "dba3" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:23:46.528058 #29270]  INFO -- : SQL: CREATE ROLE "dbas" NOLOGIN IN ROLE ldap_groups
I, [2018-07-23T14:23:46.531065 #29270]  INFO -- : SQL: GRANT "dbas" TO "dba3","dba1" 

… and that’s it. Users and groups are now available in PostgreSQL:

postgres=# \du
                                        List of roles
  Role name  |                         Attributes                         |     Member of     
-------------+------------------------------------------------------------+-------------------
 dba1        |                                                            | {ldap_users,dbas}
 dba2        |                                                            | {ldap_users}
 dba3        |                                                            | {ldap_users,dbas}
 dbas        | Cannot login                                               | {ldap_groups}
 ldap_groups | Cannot login                                               | {}
 ldap_users  | Cannot login                                               | {}
 postgres    | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

When you add another user to the directory:

Selection_007

… and run the sync again, the existing users will of course not be touched, but the new one gets created (notice that I copied dba4 in the directory, which is why the user is a member of the dbas group):

[postgres@pgadsync ~]$ bin/pg_ldap_sync -c etc/pg_ldap_sync.yaml -vv 
I, [2018-07-23T14:27:26.314729 #29273]  INFO -- : found user-dn: CN=dba1,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.323719 #29273]  INFO -- : found user-dn: CN=dba2,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.326764 #29273]  INFO -- : found user-dn: CN=dba3,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.328800 #29273]  INFO -- : found user-dn: CN=dba4,OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.394066 #29273]  INFO -- : found group-dn: CN=dbas,OU=pggroups,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com
I, [2018-07-23T14:27:26.434236 #29273]  INFO -- : found pg-user: "dba1"
I, [2018-07-23T14:27:26.434443 #29273]  INFO -- : found pg-user: "dba2"
I, [2018-07-23T14:27:26.434531 #29273]  INFO -- : found pg-user: "dba3"
I, [2018-07-23T14:27:26.439065 #29273]  INFO -- : found pg-group: "dbas" with members: ["dba3", "dba1"]
I, [2018-07-23T14:27:26.439357 #29273]  INFO -- : user stat: create: 1 drop: 0 keep: 3
I, [2018-07-23T14:27:26.439468 #29273]  INFO -- : group stat: create: 0 drop: 0 keep: 1
I, [2018-07-23T14:27:26.439656 #29273]  INFO -- : membership stat: grant: 1 revoke: 0 keep: 2
I, [2018-07-23T14:27:26.439759 #29273]  INFO -- : SQL: CREATE ROLE "dba4" LOGIN IN ROLE ldap_users
I, [2018-07-23T14:27:26.441692 #29273]  INFO -- : SQL: GRANT "dbas" TO "dba4" 

Two more tips: when you want the complete LDAP path of a user, you can get it like this:
Selection_008

It is also advisable to test the filters you have in the yaml file, for example:

[postgres@pgadsync ~]$ ldapsearch -x -h 172.22.30.1 -D "pgadsync@test.dbiservices.com" -W "(sAMAccountName=*)" -b "OU=pgadmins,OU=PostgreSQL,DC=test,DC=dbiservices,DC=com"  | grep sAMAccountName
Enter LDAP Password: 
# filter: (sAMAccountName=*)
sAMAccountName: dba1
sAMAccountName: dba2
sAMAccountName: dba3
sAMAccountName: dba4

You might wonder how you can assign the permissions then. Just pre-create the role and give the permissions you want:

postgres=# drop role dbas;
DROP ROLE
postgres=# create role dbas in role ldap_groups;
CREATE ROLE
postgres=# grant CONNECT ON DATABASE postgres to dbas;
GRANT

The assignments to that group will come from the directory once you run the next synchronization.
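
To keep the roles in sync automatically, the tool can simply be scheduled. A hypothetical cron entry (paths are assumptions) could look like this:

# run the synchronization every 15 minutes and append the output to a log file
*/15 * * * * /home/postgres/bin/pg_ldap_sync -c /home/postgres/etc/pg_ldap_sync.yaml >> /home/postgres/pg_ldap_sync.log 2>&1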

Hope that helps …

 

Cet article Syncing Active Directory users and groups to PostgreSQL est apparu en premier sur Blog dbi services.

New features and known issues with RMAN tool on Oracle database 12.1.0.2

Fri, 2018-07-20 04:50

Oracle Database 12c has new enhancements and additions in Recovery Manager (RMAN).
The RMAN tool continues to enhance and extend the reliability, efficiency, and availability of Oracle Database Backup and Recovery.
Below, I will mention a couple of new features of the RMAN duplicate command, but also how to avoid issues that can happen with the creation of the temporary files.

FEATURES:

<INFO>Using BACKUPSET clause :

In previous releases, active duplicates were performed using implicit image copy backups, transferred directly to the destination server. From 12.1 it is also possible to perform active duplicates using backup sets by including the USING BACKUPSET clause.
Compared to the other method (image copy backups), the unused block compression associated with a backup set reduces the amount of data pulled across the network.

<INFO>Using SECTION SIZE clause:

The SECTION SIZE clause enables multisection restore: large datafiles are split into sections of the given size so that several channels can restore the same datafile in parallel, depending on the configured parallel degree and the size of the datafiles.
In my case I have configured the parallel degree to 6:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

Starting restore at 19-JUL-2018 14:11:06
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4
using channel ORA_AUX_DISK_5
using channel ORA_AUX_DISK_6
channel ORA_AUX_DISK_3: using network backup set from service PROD2_SITE1
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00005 to /u02/oradata/PROD/data.dbf
channel ORA_AUX_DISK_3: restoring section 2 of 7

------
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using network backup set from service PROD2_SITE1
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00005 to /u02/oradata/PROD/data.dbf
channel ORA_AUX_DISK_2: restoring section 7 of 7

ISSUES :
<WARN>When duplicating on 12cR1, the creation of the temp files is not handled correctly.
Duplicating from active database or from backup with Oracle 12cR1, you can run into some issues with the temporary files.

oracle@dbisrv02:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/ [PROD] rman target sys/pwd00@<TNS_NAME_TARGET> auxiliary sys/pwd00@<TNS_NAME_AUXILIARY> 
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 19 13:31:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: <TNS_NAME_TARGET> (DBID=xxxxxxxxxx)
connected to auxiliary database: <TNS_NAME_AUXILIARY> (not mounted)

duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset section size 500m;

----------------------------------------
contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 19-JUL-2018 14:26:09

<INFO>Querying v$tempfile will not reveal any error:

SQL> select file#,name,status from v$tempfile;

     FILE# NAME                           STATUS
---------- ------------------------------ -------
         1 /u02/oradata/<AUXILIARY>/temp01.dbf   ONLINE

<INFO>But when querying dba_temp_files, or running some transactions against your database that need the temporary tablespace, you will get:

SQL> select * from dba_temp_files;
select * from dba_temp_files
              *
ERROR at line 1:
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 201: '/u02/oradata/<AUXILIARY>/temp01.dbf'

Solution1 : Drop and recreate your temporary tablespace(s) manually (see the sketch below). This could be tedious if you have several of them, OR
Solution2 : Drop the temp files of your <to_be_cloned_DB> on the OS side before launching the duplicate. For more details you can consult this note from MOS :  2250889.1
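
For Solution 1, a minimal sketch could look like this (the tablespace name, tempfile path and size are assumptions, to be adapted to your environment):

CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/u02/oradata/<AUXILIARY>/temp02.dbf' SIZE 2G AUTOEXTEND ON;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;
DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;

For Solution 2, first identify the temp file(s) to remove on the OS side: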

SQL> col TABLESPACE_NAME format a50;
SQL> col file_name format a50;
SQL> select file_name,TABLESPACE_NAME from dba_temp_files;

FILE_NAME                                          TABLESPACE_NAME
-------------------------------------------------- --------------------------------------------------
/u02/oradata/<AUXILIARY>/temp01.dbf                       TEMP

SQL>startup nomount;

rm -rf /u02/oradata/<AUXILIARY>/temp01.dbf

 

oracle@dbisrv02:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/ [PROD] rman target sys/pwd00@<TNS_NAME_TARGET> auxiliary sys/pwd00@<TNS_NAME_AUXILIARY> 
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 19 13:31:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: <TNS_NAME_TARGET> (DBID=xxxxxxxxxx)
connected to auxiliary database: <TNS_NAME_AUXILIARY> (not mounted)

duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset section size 500m;

At the end of the duplicate, you should be able to use the database without any manual action on the temp files:

SQL> select file#,name,status from v$tempfile;

     FILE# NAME                           STATUS
---------- ------------------------------ -------
         1 /u02/oradata/<AUXILIARY>/temp01.dbf   ONLINE

Additionally, if your auxiliary DB is registered in Oracle Grid Infrastructure, you need to remove it from the Grid configuration during these actions and add it back once you have finished (see the sketch at the end of this section).

SQL> alter system set db_unique_name='PROD_SITE2' scope=spfile;
alter system set db_unique_name='PROD_SITE2' scope=spfile
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

--remove from GRID
[grid@dbisrv02 ~]$ srvctl stop database -d PROD
[grid@dbisrv02 ~]$ srvctl remove database -d PROD
Remove the database PROD? (y/[n]) Y

SQL> startup
ORACLE instance started.

Total System Global Area  788529152 bytes
Fixed Size                  2929352 bytes
Variable Size             314576184 bytes
Database Buffers          465567744 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

SQL> alter system set db_unique_name='PROD_SITE2' scope=spfile;

System altered.
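
Once everything is finished, the database can be registered again in the Grid Infrastructure, along these lines (a sketch; the unique name and the Oracle Home path are assumptions):

# run as the database software owner
srvctl add database -db PROD_SITE2 -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1
srvctl start database -db PROD_SITE2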
 

Cet article New features and known issues with RMAN tool on Oracle database 12.1.0.2 est apparu en premier sur Blog dbi services.

Installing ZFS on OEL7 UEK4 for Docker storage

Thu, 2018-07-19 23:08

The Oracle Database is fully supported on Docker provided that Linux is Red Hat Enterprise Linux 7 or Oracle Enterprise Linux 7 with the Unbreakable Enterprise Kernel 4. This is documented in MOS Note 2216342.1. Given the size of the Oracle database in gigabytes even when empty, the way it is installed at build time with many file updates, and the per-block modifications of the datafiles, a block-level copy-on-write filesystem is a must, and deduplication and compression are appreciated. This makes ZFS a good option for the Docker storage driver, but also for the external volumes. By the way, the Docker documentation about storage drivers mentions that zfs is a good choice for high-density workloads such as PaaS, and this of course includes Database as a Service.

I’ve run this example on OEL 7.2 created in the Oracle Cloud:
CaptureCreateInstance

kernel-uek-devel

We need to install the kernel headers. Of course, it is probably better to run a ‘yum update’ and reboot in order to run the latest kernel.
Here, I’m just installing the headers for the current kernel:

[root@localhost opc]# yum -y install kernel-uek-devel-$(uname -r)
...
Installed:
kernel-uek-devel.x86_64 0:4.1.12-112.14.13.el7uek
Dependency Installed:
cpp.x86_64 0:4.8.5-28.0.1.el7_5.1 gcc.x86_64 0:4.8.5-28.0.1.el7_5.1 glibc-devel.x86_64 0:2.17-222.el7
glibc-headers.x86_64 0:2.17-222.el7 kernel-headers.x86_64 0:3.10.0-862.9.1.el7 libdtrace-ctf.x86_64 0:0.8.0-1.el7
libmpc.x86_64 0:1.0.1-3.el7 mpfr.x86_64 0:3.1.1-4.el7
Dependency Updated:
glibc.x86_64 0:2.17-222.el7 glibc-common.x86_64 0:2.17-222.el7 libgcc.x86_64 0:4.8.5-28.0.1.el7_5.1
libgomp.x86_64 0:4.8.5-28.0.1.el7_5.1

DKMS

We need Dynamic Kernel Module Support to load ZFS modules. I had problems in the past with this so I install it step by step to verify that everything is ok. First, enable the EPEL repository:

[root@localhost opc]# yum install -y yum-utils
[root@localhost opc]# yum-config-manager --enable ol7_developer_EPEL

Then install DKMS:

[root@localhost opc]# yum -y install -y dkms
...
Installed:
dkms.noarch 0:2.4.0-1.20170926git959bd74.el7
Dependency Installed:
elfutils-default-yama-scope.noarch 0:0.170-4.el7 elfutils-libelf-devel.x86_64 0:0.170-4.el7
kernel-debug-devel.x86_64 0:3.10.0-862.9.1.el7 zlib-devel.x86_64 0:1.2.7-17.el7
Dependency Updated:
elfutils-libelf.x86_64 0:0.170-4.el7 elfutils-libs.x86_64 0:0.170-4.el7 zlib.x86_64 0:1.2.7-17.el7

Install ZFS repository

There is a zfs-release package that installs the /etc/yum.repos.d/zfs.repo:

[root@localhost opc]# sudo rpm -Uvh http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
Retrieving http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
warning: /var/tmp/rpm-tmp.yvRURo: Header V4 RSA/SHA256 Signature, key ID f14ab620: NOKEY
Preparing... ################################# [100%] Updating / installing...
1:zfs-release-1-5.el7_4 ################################# [100%]

Basically, all it contains is the following enabled section:

[zfs]
name=ZFS on Linux for EL7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7.4/$basearch/
enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

Install ZFS

This is the important part, installing ZFS:

[root@localhost opc]# sudo yum install -y zfs
...
======================================================================================================================
Package Arch Version Repository Size
======================================================================================================================
Installing:
zfs x86_64 0.7.9-1.el7_4 zfs 413 k
Installing for dependencies:
kernel-devel x86_64 3.10.0-862.9.1.el7 ol7_latest 16 M
libnvpair1 x86_64 0.7.9-1.el7_4 zfs 30 k
libuutil1 x86_64 0.7.9-1.el7_4 zfs 35 k
libzfs2 x86_64 0.7.9-1.el7_4 zfs 130 k
libzpool2 x86_64 0.7.9-1.el7_4 zfs 591 k
lm_sensors-libs x86_64 3.4.0-4.20160601gitf9185e5.el7 ol7_latest 41 k
spl x86_64 0.7.9-1.el7_4 zfs 29 k
spl-dkms noarch 0.7.9-1.el7_4 zfs 456 k
sysstat x86_64 10.1.5-13.el7 ol7_latest 310 k
zfs-dkms noarch 0.7.9-1.el7_4 zfs 4.9 M

The most important thing is to check that the zfs module was built and installed correctly:

zfs.ko:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.1.12-112.14.13.el7uek.x86_64/extra/

I’ve seen cases where it was not and then the module cannot load. You can also check:

[root@localhost opc]# dkms status
spl, 0.7.9, 4.1.12-112.14.13.el7uek.x86_64, x86_64: installed
zfs, 0.7.9, 4.1.12-112.14.13.el7uek.x86_64, x86_64: installed

If you have a problem (such as “modprobe: FATAL: Module zfs not found” when loading the module), check the status and maybe re-install it with:

dkms remove zfs/0.7.9 --all
dkms --force install zfs/0.7.9

If everything is ok, you can load the module:

[root@localhost opc]# /sbin/modprobe zfs
[root@localhost opc]#

Create a ZFS filesystem

If the ZFS module was not loaded you have this error:

[root@localhost opc]# zpool list
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

If it has been loaded correctly, you have no ZFS Storage Pool yet:

[root@localhost opc]# zpool list
no pools available

First I need to add a disk to my machine. Here I have only one disk created when I created the Compute Service:

[root@localhost opc]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb 202:16 0 128G 0 disk
├─xvdb1 202:17 0 500M 0 part /boot
└─xvdb2 202:18 0 127.5G 0 part
├─vg_main-lv_root 249:0 0 123.5G 0 lvm /
└─vg_main-lv_swap 249:1 0 4G 0 lvm [SWAP]

I add a new disk in the Storage tab:
CaptureCreateStorage
And attach it to my Cloud Instance:
CaptureAttachStorage

Here is the new disk visible from the system:

[root@localhost opc]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb 202:16 0 128G 0 disk
├─xvdb1 202:17 0 500M 0 part /boot
└─xvdb2 202:18 0 127.5G 0 part
├─vg_main-lv_root 249:0 0 123.5G 0 lvm /
└─vg_main-lv_swap 249:1 0 4G 0 lvm [SWAP]
xvdc 202:32 0 120G 0 disk

[root@localhost opc]# ls -l /dev/xvdc /dev/block/202:32
lrwxrwxrwx 1 root root 7 Jul 19 15:05 /dev/block/202:32 -> ../xvdc
brw-rw---- 1 root disk 202, 32 Jul 19 15:05 /dev/xvdc

Here is where I add a ZFS Storage Pool for Docker:

[root@localhost opc]# zpool create -f zpool-docker -m /var/lib/docker /dev/xvdc
 
[root@localhost opc]# zpool status
pool: zpool-docker
state: ONLINE
scan: none requested
config:
 
NAME STATE READ WRITE CKSUM
zpool-docker ONLINE 0 0 0
xvdc ONLINE 0 0 0
 
[root@localhost opc]# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zpool-docker 119G 118K 119G - 0% 0% 1.00x ONLINE -

And while I’m there I set some attributes to enable compression and deduplication. And as Docker writes to layers with 32k I/O I set the recordsize accordingly:

zfs set compression=on zpool-docker
zfs set dedup=on zpool-docker
zfs set recordsize=32k zpool-docker

Just to test that everything is ok, I install Docker as I did in a previous post:

[root@localhost opc]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@localhost opc]# yum-config-manager --enable ol7_addons
[root@localhost opc]# yum -y install docker-ce
[root@localhost opc]# systemctl start docker
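
Docker should select the zfs storage driver automatically because /var/lib/docker sits on ZFS, but you can pin it explicitly in the daemon configuration. A sketch (assuming no /etc/docker/daemon.json exists yet):

cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "zfs"
}
EOF
systemctl restart docker
docker info | grep -i "storage driver"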

Docker layers

I pull a small image and start a container on it:

[root@localhost opc]# docker run oraclelinux:7-slim

Here is the image and the ZFS dataset for its layer, mounted under /var/lib/docker/zfs:

[root@localhost opc]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
oraclelinux 7-slim b1af4ba0cf19 12 days ago 117MB
 
[root@localhost opc]# docker inspect oraclelinux:7-slim | jq -r .[0].GraphDriver
 
{
"Data": {
"Dataset": "zpool-docker/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc",
"Mountpoint": "/var/lib/docker/zfs/graph/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc"
},
"Name": "zfs"
}

And here is the container layer:

[root@localhost opc]# docker container ls -a
 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9eb7610c1fc5 oraclelinux:7-slim "/bin/bash" 6 minutes ago Exited (0) 6 minutes ago inspiring_shannon
 
[root@localhost opc]# docker inspect inspiring_shannon | jq -r .[0].GraphDriver
 
{
"Data": {
"Dataset": "zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982",
"Mountpoint": "/var/lib/docker/zfs/graph/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982"
},
"Name": "zfs"
}

If you don’t have jq just ‘yum install jq’. It is very convenient to filter and display the ‘inspect’ output.

We can see those datasets from ZFS list:

[root@localhost opc]# zfs list -o creation,space,snapshot_count,written -r | sort
 
CREATION NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD SSCOUNT WRITTEN
Thu Jul 19 15:13 2018 zpool-docker 115G 126M 0B 964K 0B 125M none 964K
Thu Jul 19 15:38 2018 zpool-docker/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc 115G 125M 0B 125M 0B 0B none 0
Thu Jul 19 15:39 2018 zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982 115G 87K 0B 87K 0B 0B none 87K
Thu Jul 19 15:39 2018 zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982-init 115G 46K 0B 46K 0B 0B none 0

Here, sorted by creation time, we see the datasets used by each layer. The initial files, before having any image, are less than 1MB. The image uses 125MB. The container layer has written 87KB, plus 46KB for its -init dataset.

 

Cet article Installing ZFS on OEL7 UEK4 for Docker storage est apparu en premier sur Blog dbi services.

How to install Docker Enterprise Edition on CentOS 7 ?

Thu, 2018-07-19 07:54

In this blog we are going to see how to install the Docker EE trial edition on CentOS 7 hosts. As you may or may not know, Docker has two editions: Docker Community Edition (CE) and Docker Enterprise Edition (EE). To make it simple, let’s say that Docker EE is designed for production environments. More info here.

 

This will be our architecture:

  • 1 manager node
    • hostname: docker-ee-manager1
  • 1 worker node + Docker Trusted Registry (DTR) node
    • hostname: docker-ee-worker1

Both nodes should be in the same network range.

We will assume that CentOS 7 is already installed on all hosts:

[root@docker-ee-manager1 ~] cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

[root@docker-ee-worker1 ~]$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

 Create docker user and group
[root@docker-ee-manager1 ~]$ groupadd docker
[root@docker-ee-manager1 ~]$ useradd -g docker docker
[root@docker-ee-manager1 ~]$ echo "docker ALL=(ALL) ALL: NOPASSWD" >> /etc/sudoers
[root@docker-ee-manager1 ~]$ su - docker

Do the same on worker

[root@docker-ee-worker1 ~]$ groupadd docker
[root@docker-ee-worker1 ~]$ useradd -g docker docker
[root@docker-ee-worker1 ~]$ echo "docker ALL=(ALL) ALL: NOPASSWD" >> /etc/sudoers
[root@docker-ee-worker1 ~]$ su - docker

 

 

Get the Docker URL for installing Docker EE

Then you need to go to this link. Make sure that you already have a Docker account; it’s free and you can create one very quickly.

 

 

Fill in the form and you will get access to this:

 

dockerstore

storebits

 

Copy the URL and save the license key in a safe location; you will need them later.

 

1. Configure Docker URL
[docker@docker-ee-manager1 ~]$ export DOCKERURL="<YOUR_LINK>"
[docker@docker-ee-manager1 ~]$ sudo -E sh -c 'echo "$DOCKERURL/centos" > /etc/yum/vars/dockerurl'

 

2. We install required packages
[docker@docker-ee-manager1 ~]$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

 

3. Add the Docker-EE repository
[docker@docker-ee-manager1 ~]$ sudo -E yum-config-manager \
--add-repo \
"$DOCKERURL/centos/docker-ee.repo"

 

4. Install docker-ee package
[docker@docker-ee-manager1 ~]$ sudo yum -y install docker-ee
[docker@docker-ee-manager1 ~]$ sudo systemctl enable docker.service
[docker@docker-ee-manager1 ~]$ sudo systemctl start docker.service

 

Repeat step 1 to 4 for worker1 node

 

Install UCP on manager

A simple command: just run this on your manager node.

 

[docker@docker-ee-manager1 ~]$ docker container run --rm -it --name ucp   -v /var/run/docker.sock:/var/run/docker.sock   docker/ucp:3.0.2 install   --host-address <YOUR_IP>   --interactive
INFO[0000] Your engine version 17.06.2-ee-15, build 64ddfa6 (3.10.0-514.el7.x86_64) is compatible with UCP 3.0.2 (736cf3c)
Admin Username: admin
Admin Password:
Confirm Admin Password:
WARN[0014] None of the hostnames we'll be using in the UCP certificates [docker-ee-manager1 127.0.0.1 172.17.0.1 <YOUR_IP>] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0000] Found existing UCP config com.docker.ucp.config-2
Do you want to proceed with the install with config com.docker.ucp.config-2? (y/n): y
y
INFO[0032] Installing UCP with host address 10.29.14.101 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0032] Deploying UCP Service... (waiting for all 2 nodes to complete)
INFO[0083] Installation completed on docker-ee-manager1 (node uvzvuefehznf22k4wa5zg9cy1)
INFO[0083] Installation completed on docker-ee-worker1 (node z7gq7z3336jnwcyojyqq1h3wa)
INFO[0083] UCP Instance ID: x0fg0phnkgzm5730thoncucn2
INFO[0083] UCP Server SSL: SHA-256 Fingerprint=E6:2F:38:69:5D:26:A8:06:D3:8B:11:69:D9:DC:3A:77:CE:16:EA:23:9C:D0:D8:8F:34:D6:97:9D:4B:D2:E2:D2
INFO[0083] Login to UCP at https://<YOUR_IP>1:443
INFO[0083] Username: admin
INFO[0083] Password: (your admin password)

Ignore the insecure-connection message if there is one and accept the exception. We can now see the UCP admin interface: enter your credentials and upload your license key.
ucplogin
ucpinterface

Adding a worker node

clickonnodes
And click on Add Node:
Addnode
Then tell UCP that you want to deploy a new worker node and copy the command displayed.
chooseWorker
Connect to the worker and run this command to join it to the cluster:

[docker@docker-ee-worker1 ~]$ docker swarm join --token SWMTKN-1-4kt4gyk00n69tiywlzhst8dwsgo4oblylnsl1aww2048isi44u-7j9hmcrsn3lr048yu30xlnsv7 <IP_OF_MANAGER>:2377
This node joined a swarm as a worker.

 

Now, we have two nodes: one manager and one worker
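
This can also be checked from the command line on the manager; docker node ls should list both nodes with a Ready status:

[docker@docker-ee-manager1 ~]$ docker node ls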

 

Now2nodes

 

Install Docker Trusted Registry

 

Docker EE includes DTR, a secure registry where you can store your Docker images. The DTR will be installed on the worker node; it is not recommended to install it on a manager node.

To install it, you just need to run this command:

[docker@docker-ee-worker1 ~]$ docker run -it --rm docker/dtr install --ucp-node docker-ee-worker1 --ucp-url https://<IP_OF_MANAGER> --ucp-username admin --ucp-password <YOUR_PASSWORD> --ucp-ca "-----BEGIN CERTIFICATE-----
MIIBggIUJ+Y+MFXH1XcyJnCU4ACq26v5ZJswCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMB4XDTE4MDcxOTA4MjEwMFoX
DTIzMDcxODA4MjEwMFowHTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMFkw
EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEDJxHOIhHoV4NBZGnEQClFShjQfpoL5mQ
LH7E6x6GL4AexYtdWgGIcOlV2NXQpdadBK9cZG2z6r7+zwCj7EP/iqNFMEMwDgYD
VR0P7ojp1CIMAoGCCqGSM49BAMCA0gAMEUCIQDqbBiCqXgFdtIb6uP9
EdDTI1YGWn97AFPU+YJ9s1/CSAIgBsqIn1v7BVNjJ3AeUQfo1d8Kfc//ZwHYr4XW
uWIHmkM=
-----END CERTIFICATE-----"

You can find the certificate here: https://<IP_OF_MANAGER>/manage/settings/certs

findCertificate

Then go to the DTR URL which is https://<IP_OF_WORKER>  and enter your credentials

 

DTRLogin

 

 

 

Here we are:

 

DTRUI

Congratulations, you have just installed Docker EE. Hope this helps :-)

 

 

Cet article How to install Docker Enterprise Edition on CentOS 7 ? est apparu en premier sur Blog dbi services.

Google Cloud Spanner – inserting data

Thu, 2018-07-19 04:17

In a previous post I created a Google Cloud Spanner database and inserted a few rows from the GUI. This is definitely not a solution for many rows, so here is a post about using the command line.

If I start the Google Cloud Shell from the icon on the Spanner page for my project, everything is set. But if I run it from elsewhere, using https://console.cloud.google.com/cloudshell as I did in A free persistent Google Cloud service with Oracle XE, I have to set the project:

franck_pachot@cloudshell:~$ gcloud config set project superb-avatar-210409
Updated property [core/project].
franck_pachot@superb-avatar-210409:~$

Instance

I create my Spanner instance with 3 nodes across the world:
franck_pachot@superb-avatar-210409:~$ time gcloud spanner instances create franck --config nam-eur-asia1 --nodes=3 --description Franck
Creating instance...done.
 
real 0m3.940s
user 0m0.344s
sys 0m0.092s

Database

and Spanner database – created in 6 seconds:

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases create test --instance=franck
Creating database...done.
&nbssp;
real 0m6.832s
user 0m0.320s
sys 0m0.128s

Table

The DDL for table creation can also be run from there:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO1 ( ID1 int64, TEXT string(max) ) primary key (ID1)'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty

I’m now ready to insert one million rows. Here is my table:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl describe test --instance=franck
--- |-
CREATE TABLE DEMO1 (
ID1 INT64,
TEXT STRING(MAX),
) PRIMARY KEY(ID1)

Insert

The gcloud command line has a limited insert possibility:

franck_pachot@superb-avatar-210409:~$ time for i in $(seq 1 1000000) ; do gcloud beta spanner rows insert --table=DEMO1 --database=test --instance=franck --data=ID1=${i},TEXT=XXX${i} ; done
commitTimestamp: '2018-07-18T11:09:45.065684Z'
commitTimestamp: '2018-07-18T11:09:50.433133Z'
commitTimestamp: '2018-07-18T11:09:55.752857Z'
commitTimestamp: '2018-07-18T11:10:01.044531Z'
commitTimestamp: '2018-07-18T11:10:06.285764Z'
commitTimestamp: '2018-07-18T11:10:11.106936Z'
^C

OK, let’s stop there. Calling the service once per row is not efficient, with a latency of about 5 seconds per call.

API

I’ll use the API from Python. Basically, a connection is a Spanner Client:

franck_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> spanner_client = spanner.Client()
>>> instance = spanner_client.instance('franck')
>>> database = instance.database('test')
>>>

Batch Insert

With this I can send a batch of rows to insert. Here is the full Python script I used to insert one million, by batch of 1000 rows:

from google.cloud import spanner
spanner_client = spanner.Client()
instance = spanner_client.instance('franck')
database = instance.database('test')
for j in range(1000):
    records = []
    for i in range(1000):
        records.append((1 + j*1000 + i, u'XXX' + str(i)))
    with database.batch() as batch:
        batch.insert(table='DEMO1', columns=('ID1', 'TEXT',), values=records)

This takes 2 minutes:

franck_pachot@superb-avatar-210409:~$ time python3 test.py
 
real 2m52.707s
user 0m21.776s
sys 0m0.668s
franck_pachot@superb-avatar-210409:~$

If you remember my series of blogs on Variations on 1M rows insert, that’s not so fast. But remember that rows are distributed across 3 nodes in 3 continents, and here, inserting constantly increasing values sends all batched rows to the same node. The PRIMARY KEY in Google Spanner is not only there to declare a constraint but also determines the organization of data.

Query

The select can also be run from there from a read-only transaction called ‘Snapshot’ because it is doing MVCC consistent reads:

frank_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> with spanner.Client().instance('franck').database('test').snapshot() as snapshot:
... results = snapshot.execute_sql('SELECT COUNT(*) FROM DEMO1')
... for row in results:
... print(row)
...
[1000000]

The advantage of the read-only transaction is that it can do consistent reads without locking. The queries executed in a read-write transaction have to acquire some locks in order to guarantee consistency when reading across multiple nodes.
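
As a side note, execute_sql() also accepts bind parameters, which is the way to go for selective reads. A minimal sketch on the DEMO1 table created above (the ID value is arbitrary):

from google.cloud import spanner

database = spanner.Client().instance('franck').database('test')
# read-only snapshot with a parameterized query (@id is bound via params/param_types)
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(
        'SELECT ID1, TEXT FROM DEMO1 WHERE ID1 = @id',
        params={'id': 42},
        param_types={'id': spanner.param_types.INT64},
    )
    for row in results:
        print(row)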

Interleave

So, you can look at the PRIMARY KEY as partitioning by range, and we also have reference partitioning with INTERLEAVE IN PARENT. This reminds me of the Oracle CLUSTER segment, which is so rarely used because storing the tables separately is usually the better compromise between performance and flexibility for a multi-purpose database.

Here is my creation of DEMO2 where ID1 is a foreign key referencing DEMO1

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO2 ( ID1 int64, ID2 int64, TEXT string(max) ) primary key (ID1,ID2), interleave in parent DEMO1 on delete cascade'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty
 
real 0m24.418s
user 0m0.356s
sys 0m0.088s

I’m now inserting 5 detail rows per each parent row:

from google.cloud import spanner
database = spanner.Client().instance('franck').database('test')
for j in range(1000):
    records = []
    for i in range(1000):
        for k in range(5):
            records.append((1 + j*1000 + i, k, u'XXX' + str(i) + ' ' + str(k)))
    with database.batch() as batch:
        batch.insert(table='DEMO2', columns=('ID1', 'ID2', 'TEXT'), values=records)

This ran in 6 minutes.

Join (Cross Apply)

Here is the execution plan for

SELECT * FROM DEMO1 join DEMO2 using(ID1) where DEMO2.TEXT=DEMO1.TEXT

where I join the two tables and apply a filter on the join:
CaptureSpannerCrossApply

Thanks to the INTERLEAVE, the join runs locally. Each row from DEMO1 (the Input of the Cross Apply) is joined with DEMO2 (the Map of the Cross Apply) locally. Only the result is serialized. On this small number of rows we do not see the benefit of having the rows on multiple nodes. There are only 2 nodes with rows here (2 local executions) and probably one node contains most of the rows. The average time per node is 10.72 seconds and the elapsed time is 20.9 seconds, so I guess that one node ran in 20.9 seconds and the other in only 1.35.

The same without interleaved tables (here as DEMO3) is faster to insert, but the join is more complex because DEMO1 must be distributed to all nodes.
CaptureSpannerDistributedCrossApply
Without interleave, the input table of the local Cross Apply is a Batch Scan, which is actually like a temporary table distributed to all nodes (it seems to have 51 chunks here), created by the ‘Create Batch’. This is called a Distributed Cross Apply.

So what?

Google Spanner has only some aspects of SQL and relational databases. But it is still, like the NoSQL databases, a database where the data model is focused on one use case only, because the data model and the data organization have to be designed for a specific data access pattern.

 

Cet article Google Cloud Spanner – inserting data est apparu en premier sur Blog dbi services.

Control File issues on duplicating with non patched Oracle version.

Wed, 2018-07-18 02:34

Introduction :

RMAN has the ability to duplicate, or clone, a database from a backup or from an active database.
It is possible to create a duplicate database on a remote server with the same file structure,
or on a remote server with a different file structure or on the local server with a different file structure.
For some old and non-patched Oracle versions, such as those earlier than 11.2.0.4, the duplicate (from active or from backup) can be a real
challenge even for DBAs with years of experience, due to the different bugs encountered.

The scenario below will focus on control file issues revealed by a duplication from active database of an Oracle 11.2.0.2 EE version.

<INFO>Make sure to use the nohup command-line utility, which allows a command, process, or shell script to keep running after you disconnect.

Demonstration :

Step1: Prepare your script:

vi script_duplicate.ksh

#!/bin/ksh
export ORACLE_HOME=$ORACLE_HOME
export PATH=$PATH1:$ORACLE_HOME/bin
rman target sys/pwd@TNS_NAME_TARGET auxiliary sys/pwd@TNS_NAME_AUXILIARY log=duplicate.log cmdfile=/home/oracle/rman_bkup.cmd

vi rman_bkup.cmd
run
{
allocate channel ch1 device type disk;
allocate channel ch2 device type disk;
allocate channel ch3 device type disk;
allocate auxiliary channel dh1 device type disk;
allocate auxiliary channel dh2 device type disk;
allocate auxiliary channel dh3 device type disk;
duplicate target database to <AUXILIARY_NAME> from active database nofilenamecheck;
release channel ch3;
release channel ch2;
release channel ch1;
}

and launch like that : nohup ./script_duplicate.ksh &

Step2: Check instance parameters.
Depending on the PSU level of your instance, the duplicate can fail with this error even before really starting:

RMAN-00571: ===================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS
RMAN-00571: ===================================================
RMAN-03002: failure of Duplicate Db command at 11/02/2011 06:05:48
RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance

According to Oracle Support note 1064264.1:

1. Edit the pfile, add parameter:
_compression_compatibility= "11.2.0"

2. Restart the instance using the pfile
SQL> startup pfile='<fullpath name of pfile>'

3. Create the SPFILE again
SQL> create spfile from pfile;

4. Restart the instance with the SPFILE
SQLl> shutdown immediate;
SQL> startup

and relaunch the previous command (Step 1).

Step3 : Control file issue when trying to open the database.
After transferring the datafiles, the duplicate will crash with these errors while trying to open the database.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 15/07/2018 17:39:30
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script



SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-19838: Cannot use this control file to open database

Basically this is because of a known bug (Bug 11063122 in 11gR2):
the control file created during the duplicate in 11gR2 stores the redo log file locations of the primary.
We need to recreate the control file, changing the locations of the redo log files and datafiles, and open the database with RESETLOGS.
In the control file recreation script, the database name is the source <db_name> and the directory names for the redo logs still point to the source database.

The workaround is :

1. Back up your control file to trace (on the cloned DB)

sql> alter database backup controlfile to trace ;

2. Open the trace file, extract the RESETLOGS section and modify it like this:

CREATE CONTROLFILE REUSE DATABASE "<src_db_name>" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 11680
LOGFILE
  GROUP 9  '<path_of_the_cloned_DB>/redo09.log'  SIZE 150M BLOCKSIZE 512,
  GROUP 10 '<path_of_the_cloned_DB>/redo10.log'  SIZE 150M BLOCKSIZE 512,
  GROUP 11 '<path_of_the_cloned_DB>/redo11.log'  SIZE 150M BLOCKSIZE 512

DATAFILE
  '<path_of_the_cloned_DB>/system01.dbf',
  '<path_of_the_cloned_DB>/undotbs01.dbf',
  '<path_of_the_cloned_DB>/sysaux01.dbf',
  '<path_of_the_cloned_DB>/users01.dbf',
-------------more datafiles
CHARACTER SET EE8ISO8859P2;

Save as trace_control.ctl

3. SQL> alter system set db_name=<new db_name> scope=spfile;
4. SQL> startup nomount
5. SQL>@trace_control.ctl
      --control file created and multiplexed in all the destinations mentioned on your spfile 
6. SQL> alter database open resetlogs

<INFO>If your source DB had activity during the duplicate process, you may have to apply some required archive logs manually.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 15/07/2018 19:21:30
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/u01/oradata/DBName/system01.dbf'

Search on the source database for the archive logs with sequence# greater than or equal to 399747 and apply them manually on the target DB.
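
A quick way to list them on the source could be (a sketch; the thread and sequence numbers come from the error messages above):

SQL> select sequence#, name
     from   v$archived_log
     where  thread# = 1 and sequence# >= 399747
     order  by sequence#;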

If somehow those are not available you need to take an incremental backup to roll forward your cloned database.

7. SQL> recover database using backup controlfile;

ORA-00279: change 47260162325 generated at  15/07/2018 19:27:40 needed for thread 1
ORA-00289: suggestion : <path>o1_mf_1_399747_%u_.arc
ORA-00280: change 47260162325 for thread 1 is in sequence #399747

Once the required archive log files have been applied, try again to open your database:

RMAN> alter database open resetlogs;

database opened

RMAN> exit

Conclusion :
If you’re the kind of Oracle administrator who has the power to approve or deny, you must know how dangerous it is to run your applications on non-patched Oracle databases.
Your data within your organization is better protected if you are taking advantage of the patches issued by Oracle and running your production data against supported Oracle versions only.

 

Cet article Control File issues on duplicating with non patched Oracle version. est apparu en premier sur Blog dbi services.

Restarting a failed transportable tablespace metadata import

Tue, 2018-07-17 09:39

I’m currently working on a project to migrate a data warehouse database from Solaris to Linux Intel (endian change). We use the cross-platform incremental backup method described in My Oracle Support Note 1389592.1 for that.
I.e. incremental backups are applied to the database files, and during the migration the recovered datafiles are attached to the target database via the transportable tablespace method. When testing the transportable tablespace metadata import, I got an error for one table:


Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39083: Object type TABLE:"DWHT"."TDWHTAB" failed to create with error:
ORA-01843: not a valid month
Failing sql is:
CREATE TABLE "DWHT"."TDWHTAB" (...,"PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE,...

I.e. the metadata of all tables in the tablespace was successfully imported except for one table. The reason for the error was a default definition of the column "PROCESSING_TM" without a TO_DATE casting, i.e. it relied on the NLS_DATE_FORMAT setting. E.g.


SQL> create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE);
create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE)
  *
ERROR at line 1:
ORA-01843: not a valid month
 
SQL> alter session set nls_date_format='dd-mm-yyyy';
 
Session altered.
 
SQL> create table t1 ("PROCESSING_TM" DATE DEFAULT '01-01-0001' NOT NULL ENABLE);
 
Table created.

There are different possible reasons why a metadata import fails for specific tables (see e.g. My Oracle Support Note 1082116.1 for another one). The question was: how to restart the import without going through the whole process of copying the datafiles from the source database again? As we imported the metadata through a network link, the tablespace on the source database and the tablespace on the target database were still set to "READ ONLY".

So first of all I fixed the error in the table definition on the source database:


alter table "DWHT"."TDWHTAB" modify ("PROCESSING_TM" DATE DEFAULT TO_DATE('01-01-0001', 'DD-MM-YYYY'));

That was possible even with the associated tablespace of the table in READ ONLY mode, as it is just a dictionary metadata update and does not touch the tablespace itself.
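If you want to double check that the tablespace really stayed in READ ONLY mode while fixing the column default, a quick query on the source is enough; a minimal sketch with the DWH_DATA tablespace used later in this post, with an illustrative (not copied) output:

SQL> select tablespace_name, status from dba_tablespaces where tablespace_name = 'DWH_DATA';

TABLESPACE_NAME                STATUS
------------------------------ ---------
DWH_DATA                       READ ONLY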

On the target database I cleaned up the tablespace of the failed table metadata import:


drop tablespace DWH_DATA including contents;

It’s of course important to NOT specify “AND DATAFILES” when dropping the tablespace to avoid having to copy all files again.

In my case I actually transported 146 tablespaces and modified the impdp parameter file xttplugin.par to contain only the single tablespace containing the failing table. Of course you have to drop and re-include all tablespaces that contain tables with dependencies on each other. In my case that was not an issue, as my tables had no dependencies on tables in other tablespaces; if you are unsure, you can verify this on the source, as sketched below.
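This is a sketch of the standard transportable tablespace self-containment check, run on the source database (adapt the tablespace list); an empty TRANSPORT_SET_VIOLATIONS means the set is self-contained. The output shown is illustrative, not taken from this migration.

SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('DWH_DATA', TRUE);

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected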

Afterwards I could just restart my metadata-import with the single tablespace:


$ cat xttplugin2.par
directory=DATA_PUMP_DIR
logfile=tts_imp2.log
network_link=ttslink
transport_full_check=no
transport_tablespaces=DWH_DATA
transport_datafiles='/u01/DWH/oradata/DWH_DATA_001.dbf','/u01/DWH/oradata/DWH_DATA_002.dbf'
$ impdp parfile=./xttplugin2.par
Password: / as sysdba

So to restart a failed transportable tablespace metadata import, just fix the root cause, drop the associated tablespace on the target database without dropping the datafiles, and restart the import. That comes in handy especially in cross-platform incremental backup migration scenarios.

 

This article, Restarting a failed transportable tablespace metadata import, appeared first on Blog dbi services.

Google Cloud Spanner – no decimal numeric data types

Mon, 2018-07-16 10:51

Google Cloud Spanner is a distributed relational database focused on scalability without compromising consistency and integrity. It is available only as a managed service in Google Cloud. Basically, the idea is to keep the scalability advantages of NoSQL databases (like Bigtable) while adding transactions, relational tables, SQL, structured data,… as in the relational databases we have loved for decades.
The commercial pitch includes all the NoSQL buzzwords, with the addition of the legacy properties of SQL databases:
Cloud Spanner is a fully managed, mission-critical, relational database service that offers transactional consistency at global scale, schemas, SQL (ANSI 2011 with extensions), and automatic, synchronous replication for high availability.
Here I’m testing something that is not mentioned, but is taken for granted with all SQL databases: the ability to add numbers without erroneous arithmetic results.

It is easy to test on the Google Cloud (which offers 1 year trials) by creating an instance:
[Screenshot: creating a Spanner instance]

Then create a Spanner database:
[Screenshot: creating a Spanner database]

And create a table:
[Screenshot: creating a table]

The table creation can also use the SQL create table statement. Here I’m testing one of the most important features of SQL databases: the numeric datatypes. This is where humans and computers do not speak the same language: humans have hands with 10 fingers, whereas computers deal only with binary digits. Human numbers are decimal. Computer numbers are binary.

It seems that Google Spanner is binary only. According to the documentation, the only numeric types are:

  • INT64 for signed integers up to 9E18
  • FLOAT64 for floating point IEEE-754

So there is no decimal datatype: decimal values will be approximated by binary values. This is fine for storing computer numbers, but not for human numbers such as prices or salaries.

In order to show the problem I’ve created a table with FLOAT64:

CREATE TABLE NUMBERS (
ID INT64 NOT NULL,
AMOUNT FLOAT64,
) PRIMARY KEY (ID)

The SQL Query interface does not allow DML other than SELECT:
[Screenshot: "DML not supported" error message]

So we can use the API or this simple form from the ‘data’ tab:
[Screenshot: inserting rows from the ‘data’ tab]

I’ve added 10 rows with ‘0.1’ which is easy to represent in decimal arithmetic, but not in binary arithmetic. Look at the sum:
[Screenshot: result of summing the AMOUNT column]
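For reference, a minimal sketch of the aggregate behind that screenshot:

SELECT SUM(AMOUNT) FROM NUMBERS

With ten rows of 0.1 stored as FLOAT64, the result comes back just below 1 (something like 0.9999999999999999) rather than exactly 1; the digits here are an illustration of the rounding, not a copy of the console output.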

This is binary arithmetic applied to decimal numbers: approximation. You can select each row and see 0.1, but when you sum all 10 rows together you get less than 1. That’s probably close enough for some ‘Big Data’ usage, but accountants will not like it.
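A common workaround when there is no DECIMAL type, sketched here with hypothetical names (NUMBERS_CENTS, AMOUNT_CENTS), is to store the values as scaled integers and only convert to a decimal representation in the application:

CREATE TABLE NUMBERS_CENTS (
ID INT64 NOT NULL,
AMOUNT_CENTS INT64,
) PRIMARY KEY (ID)

SELECT SUM(AMOUNT_CENTS) FROM NUMBERS_CENTS

Ten rows of 10 cents then sum to exactly 100, because INT64 arithmetic is exact; the division by 100 for display happens outside the storage layer.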

If you wonder why the original sum takes 100 milliseconds for this 10-row table, remember that this is a database distributed across 3 continents. Here is the execution plan:
[Screenshot: execution plan of the SUM query]

So what?

Do not forget that all the new trends for databases, in the ‘micro-services’ era, are focused on specific use cases. They do not compete with the ‘old’ relational databases, which are general purpose and have integrated, version after version, all the different ways to store and process data shared by multiple applications. Those NoSQL and NewSQL databases can be considered as an alternative only within the scope of what they are designed for. Spanner was designed for Google internal use in Google AdWords and then provided as a service for similar use. It was developed to solve a specific problem: the lack of transactions in Bigtable.

Note that the Open Source alternative that is close to Google Spanner is CockroachDB which has a DECIMAL datatype to store fixed-point decimal numbers.

 

This article, Google Cloud Spanner – no decimal numeric data types, appeared first on Blog dbi services.

Drilling down the pgSentinel Active Session History

Sun, 2018-07-15 13:35

In pgSentinel: the sampling approach for PostgreSQL I mentioned that one of the advantages of the ASH approach is the ability to drill down from an overview of the database activity, down to the details where we can do some tuning. The idea is to always focus on the components which are relevant to our tuning goal:

  • Filter/Group by the relevant dimension to focus on what you want to tune (a program, session, query, time window,…)
  • Sort by the most active samples, to spend time only where you know you can improve significantly

The idea is to start at a high level. Here is a GROUP BY BACKEND_TYPE to show the activity of the ‘client backend’ and the ‘autovacuum worker':

select count(*), backend_type
from pg_active_session_history
where ash_time>=current_timestamp - interval '5 minutes'
group by backend_type
order by 1 desc
;
count | backend_type
-------+-------------------
1183 | client backend
89 | autovacuum worker

I selected only the last 5 minutes (the total retention is defined by pgsentinel_ash.max_entries and the sampling frequency by pgsentinel_ash.pull_frequency).
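If you want to check what those two parameters are set to on your instance, here is a minimal sketch (assuming the extension is loaded so that its settings are registered):

select name, setting
from pg_settings
where name in ('pgsentinel_ash.max_entries','pgsentinel_ash.pull_frequency');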

I ordered by the number of samples for each one, which gives a good idea of the proportions: most of the activity here comes from the ‘client backend’. It may be more interesting to show a percentage, such as 93% of the activity from the client and 7% from the vacuum. However, this would hide an interesting measure of the overall activity. The fact that we have 1183 samples within 5 minutes is an indication of the total load. In 5 minutes we have 300 seconds, which means that each session can contribute at most 300 samples when it is 100% active in the database during that time. 1183 samples during 5 minutes therefore means that we had on average 1183/300 ≈ 4 sessions active. This measure, calculated as the number of samples divided by the number of seconds, and known as Average Active Sessions (AAS), gives two different pieces of information:

  • The overall activity in the database, similar to the load average at OS level
  • The relative activity of an aggregate (per session, program, event, time…)
AAS (Average Active Sessions)

In the previous post I counted the number of samples with count(distinct ash_time) because I knew that I had several sessions active during the whole time. But if there are periods of inactivity during those 5 minutes, there are no samples at all. And when drilling down to more detail, there will be some samples with no activity for a specific group. Here I calculate the number of seconds covered by the samples, using a window function:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type
from ash
group by samples,
backend_type
order by 1 desc fetch first 20 rows only
;
AAS | backend_type
-------+-------------------
3.95 | client backend
0.29 | autovacuum worker
(2 rows)

From this output, I know that I have about 4 client sessions running. This is what I want to tune.

Drill down on wait events

Adding the WAIT_EVENT_TYPE to the GROUP BY, I can have more detail about the resources used by those sessions:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type,wait_event_type
from ash
group by samples,
backend_type,wait_event_type
order by 1 desc fetch first 20 rows only
;
AAS | backend_type | wait_event_type
-------+-------------------+-----------------
2.57 | client backend | IO
0.94 | client backend | CPU
0.45 | client backend | LWLock
0.16 | autovacuum worker | CPU
0.12 | autovacuum worker | IO
0.00 | autovacuum worker | LWLock
(6 rows)

This gives a better idea about which system component may be tuned to reduce the response time or the throughput. IO is the major component here with 2.57 AAS being on an I/O call. Let’s get more information about which kind of I/O.

Drilling down to the wait event:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type,wait_event_type,wait_event
from ash
group by samples,
backend_type,wait_event_type,wait_event
order by 1 desc fetch first 20 rows only
;
AAS | backend_type | wait_event_type | wait_event
-------+-------------------+-----------------+------------------
1.52 | client backend | IO | DataFileWrite
0.94 | client backend | CPU | CPU
0.46 | client backend | IO | DataFileExtend
0.41 | client backend | IO | DataFileRead
0.33 | client backend | LWLock | WALWriteLock
0.15 | autovacuum worker | CPU | CPU
0.12 | client backend | LWLock | buffer_mapping
0.10 | autovacuum worker | IO | DataFileRead
0.08 | client backend | IO | WALInitWrite
0.08 | client backend | IO | BufFileWrite
0.02 | client backend | IO | WALWrite
0.01 | autovacuum worker | IO | DataFileWrite
0.01 | client backend | IO | DataFilePrefetch
0.00 | client backend | LWLock | buffer_content
0.00 | autovacuum worker | LWLock | buffer_mapping
(15 rows)

This gives more information. Of the average 2.57 sessions active on IO, 1.52 are actually writing, 0.46 are waiting for a datafile to be extended, and 0.41 are reading. That helps to focus on the areas where we might improve the performance, without wasting time on the events which are only a small part of the session activity.

Drill-down on queries

This was a drill-down on the system axis (wait events are system call instrumentation). This is useful when we think something is wrong on the system or the storage. But performance tuning must also drive the investigation on the application axis. The higher level is the user call, the TOP_LEVEL_QUERY:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type,top_level_query
from ash
group by samples,
backend_type,top_level_query
order by 1 desc fetch first 20 rows only
;
AAS | backend_type | top_level_query
-------+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0.95 | client backend | SELECT * FROM mypgio('pgio3', 50, 3000, 131072, 255, 8);
0.95 | client backend | SELECT * FROM mypgio('pgio2', 50, 3000, 131072, 255, 8);
0.95 | client backend | SELECT * FROM mypgio('pgio4', 50, 3000, 131072, 255, 8);
0.95 | client backend | SELECT * FROM mypgio('pgio1', 50, 3000, 131072, 255, 8);
0.25 | autovacuum worker | autovacuum: VACUUM ANALYZE public.pgio2
0.02 | client backend | commit;
0.01 | client backend | select * from pg_active_session_history where pid=21837 order by ash_time desc fetch first 1 rows only;
0.01 | client backend | with ash as ( +
| | select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples +
| | from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes' +
...

Here I see 4 user calls responsible for most of the 4 active sessions related to the ‘client backend’, each one with AAS=0.95, and this is actually what is running: the PGIO benchmark (see https://kevinclosson.net/) with 4 sessions calling the mypgio function.

The function we see in TOP_LEVEL_QUERY is itself running some queries, and the big advantage of the pgSentinel extension, over pg_stat_activity, is the capture of the actual statement running, with the actual values of the parameters:

with ash as (
select *,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",
backend_type,substr(query,1,100)
from ash
group by samples,
backend_type,substr(query,1,100)
order by 1 desc fetch first 20 rows only
;
AAS | backend_type | substr
-------+-------------------+----------------------------------------------------------------------------------------
0.26 | autovacuum worker |
0.02 | client backend | commit
0.02 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 3567 AND 3822
0.01 | client backend | SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN 5729 AND 5984
0.01 | client backend | SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN 5245 AND 5500
0.01 | client backend | truncate table l_ash.ps
0.01 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 3249 AND 3504
0.01 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 57 AND 312
0.01 | client backend | UPDATE pgio4 SET scratch = scratch + 1 WHERE mykey BETWEEN 3712 AND 3720
0.01 | client backend | SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN 1267 AND 1522
0.01 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 703 AND 958
0.01 | client backend | SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN 2025 AND 2280
0.01 | client backend | insert into l_ash.ps_diff +
| | select ps1.pid,ps1.uname,ps1.pr,ps1.ni,ps1.virt,ps1.res,ps1.shr,ps1.s,ps1.
0.01 | client backend | UPDATE pgio4 SET scratch = scratch + 1 WHERE mykey BETWEEN 2690 AND 2698
0.01 | client backend | SELECT sum(scratch) FROM pgio3 WHERE mykey BETWEEN 5463 AND 5718
0.01 | client backend | SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN 1467 AND 1722
0.01 | client backend | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN 4653 AND 4908
(20 rows)

Here, no single statement stands out at the top. We have only a few samples for each execution. This is because each execution is different (different values for the parameters) and they have balanced execution times. If one query were longer with one specific set of parameter values, it would show up at the top here.

Finally, we can also aggregate at a higher level than QUERY, with QUERYID, which is per prepared statement and does not change when executing with different parameter values. If we want the text, we can join with PG_STAT_STATEMENTS:

with ash as (
select *,datid dbid,ceil(extract(epoch from max(ash_time)over()-min(ash_time)over()))::numeric samples
from pg_active_session_history where ash_time>=current_timestamp - interval '5 minutes'
) select round(count(*)::numeric/samples,2) as "AAS",dbid,
backend_type,queryid,pg_stat_statements.query
from ash left outer join pg_stat_statements using(dbid,queryid)
group by samples,dbid,
backend_type,queryid,pg_stat_statements.query
order by 1 desc fetch first 15 rows only
;
AAS | dbid | backend_type | queryid | query
-------+-------+----------------+------------+------------------------------------------------------------------------------------------------------
0.89 | 17487 | client backend | 837728477 | SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN 100926 AND 101181
0.70 | 17487 | client backend | 3411884874 | SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN $1 AND $2
0.68 | 17487 | client backend | 1046864277 | SELECT sum(scratch) FROM pgio3 WHERE mykey BETWEEN 1591 AND 1846
0.67 | 17487 | client backend | 2994234299 | SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN $1 AND $2
0.33 | 17487 | client backend | 1648177216 | UPDATE pgio1 SET scratch = scratch + 1 WHERE mykey BETWEEN 2582 AND 2590
0.32 | 17487 | client backend | 3381000939 | UPDATE pgio3 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3
0.30 | 17487 | client backend | 1109524376 | UPDATE pgio4 SET scratch = scratch + 1 WHERE mykey BETWEEN 5462 AND 5470
0.11 | 17487 | client backend | 3355133240 | UPDATE pgio2 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3
0.05 | 17547 | client backend | 2771355107 | update l_ash.parameters set value=now(),timestamp=now() where name=$1
0.05 | 17547 | client backend | 1235869898 | update l_ash.parameters set value=$1,timestamp=now() where name=$2
0.02 | 13806 | client backend | 935474258 | select * from pg_active_session_history where pid=$1 order by ash_time desc fetch first $2 rows only
0.01 | 13806 | client backend | 164740364 | with ash as ( +

This shows the main queries running: SELECT and UPDATE on PGIO1, PGIO2, PGIO3 and PGIO4. They run with different parameter values but have the same QUERYID. It seems that PG_STAT_STATEMENTS is not very consistent when capturing the query text: some entries show the parameters, some others show the values. But you must know that those are the prepared statements. We do not have 0.89 average sessions running 'SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN 100926 AND 101181'. This is 'SELECT sum(scratch) FROM pgio2' running with different parameter values and, for whatever reason, the PG_STAT_STATEMENTS extension displays one set of values rather than 'BETWEEN $1 AND $2'.

Time dimension

Of course we can also query all samples and drill-down with a graphical tool. For the time axis, this is a better visualization. Here is a quick Excel PivotChart from those 5 minutes samples:
[Chart: Excel PivotChart on pg_active_session_history, active sessions per second by wait event]
I always have 4 sessions running, as we have seen in the average, but the wait event detail is not uniform over the timeline. This is where you will drill down on the time axis (a query like the one sketched below can feed such a chart). This can be helpful to investigate a short-duration issue, or to try to understand non-uniform response time. For example, coming from Oracle, I’m not used to this pattern where, from one second to the other, the wait profile is completely different. This is probably because of all the background activity such as vacuum, WAL, syncing buffers to disk, garbage collection,… The workload here, PGIO, the SLOB method for PostgreSQL, consists of short uniform queries. It would be interesting to have some statistics about the response time variation.
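A minimal sketch of such a time-axis query, in the same style as the ones above, simply adding the sample time to the GROUP BY:

select date_trunc('second', ash_time) as sample_time,
       wait_event_type,
       count(*) as active_sessions
from pg_active_session_history
where ash_time >= current_timestamp - interval '5 minutes'
group by 1,2
order by 1,2
;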

Note that in this database cluster, in addition to the PGIO workload, I have a small application running and committing very small changes occasionally, and this is why you see the peaks with 1 session on WALWrite and 4 sessions waiting on WALWriteLock. This adds to the chaos of waits.

This extension providing active session sampling is only the first component of pgSentinel, so do not spend too much time building queries, reports and graphs on it; let’s see what will come next with pgSentinel:

pgSentinel is in progress….@postgresql @amplifypostgres @PostgreSQLFR @BertrandDrouvot @ckikof pic.twitter.com/Pwq8vB69MI

— pgSentinel (@Pg_Sentinel) July 11, 2018

 

This article, Drilling down the pgSentinel Active Session History, appeared first on Blog dbi services.

PGDay Amsterdam – follow up 2 – Where do null values go to in a hash partitioned table?

Fri, 2018-07-13 23:17

This is the second follow-up, which covers this question: when you hash partition a table in PostgreSQL 11, where do null values for the partitioning column go? Let’s go…

In the demo I used this little table:

postgres=# select version();
                                                            version                                                          
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11beta1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bi
(1 row)
postgres=# create table part2 ( a int, list varchar(10) ) partition by hash (a);
CREATE TABLE
postgres=# create table part2_1 partition of part2 FOR VALUES WITH (MODULUS 3, REMAINDER 0);
CREATE TABLE
postgres=# create table part2_2 partition of part2 FOR VALUES WITH (MODULUS 3, REMAINDER 1);
CREATE TABLE
postgres=# create table part2_3 partition of part2 FOR VALUES WITH (MODULUS 3, REMAINDER 2);
CREATE TABLE
postgres=# \d+ part2
                                          Table "public.part2"
 Column |         Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+-----------------------+-----------+----------+---------+----------+--------------+-------------
 a      | integer               |           |          |         | plain    |              | 
 list   | character varying(10) |           |          |         | extended |              | 
Partition key: HASH (a)
Partitions: part2_1 FOR VALUES WITH (modulus 3, remainder 0),
            part2_2 FOR VALUES WITH (modulus 3, remainder 1),
            part2_3 FOR VALUES WITH (modulus 3, remainder 2)

The data we played with was this:

postgres=# insert into part2 (a,list) values (1,'beer');
INSERT 0 1
postgres=# insert into part2 (a,list) values (2,'whine');
INSERT 0 1
postgres=# insert into part2 (a,list) values (3,'schnaps');
INSERT 0 1
postgres=# select * from only part2_1;
 a | list  
---+-------
 2 | whine
(1 row)

postgres=# select * from only part2_2;
 a |  list   
---+---------
 3 | schnaps
(1 row)

postgres=# select * from only part2_3;
 a | list 
---+------
 1 | beer
(1 row)

We have the data evenly distributed over the three partitions. When we insert a row which contains a NULL value for the column we partitioned on:

postgres=# insert into part2 (a,list) values (null,'cocktail');
INSERT 0 1

… where does that row go?

postgres=# select * from only part2_1;
 a |   list   
---+----------
 2 | whine
   | cocktail
(2 rows)

postgres=# select * from only part2_2;
 a |  list   
---+---------
 3 | schnaps
(1 row)

postgres=# select * from only part2_3;
 a | list 
---+------
 1 | beer
(1 row)

It goes to the first partition and every following NULL key row gets there as well:

postgres=# insert into part2 (a,list) values (null,'rum');
INSERT 0 1
postgres=# select * from only part2_1;
 a |   list   
---+----------
 2 | whine
   | cocktail
   | rum
(3 rows)

I couldn’t find anything in the documentation about that, so I sent a mail to the general mailing list and here is the answer: “The calculated hash value for the null value will be zero, therefore, it will fall to the partition having remainder zero.”
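To see where each row landed without querying the partitions one by one, you can also select the tableoid; here is a small sketch on the same demo table (the output is reconstructed from the rows shown above, so the exact formatting may differ):

postgres=# select tableoid::regclass as partition, a, list from part2 order by a nulls first;
 partition | a |   list
-----------+---+----------
 part2_1   |   | cocktail
 part2_1   |   | rum
 part2_3   | 1 | beer
 part2_1   | 2 | whine
 part2_2   | 3 | schnaps
(5 rows)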

 

This article, PGDay Amsterdam – follow up 2 – Where do null values go to in a hash partitioned table?, appeared first on Blog dbi services.

Database Vault : Rules, Rule Sets and Command Rules

Fri, 2018-07-13 14:31

In a previous blog I talked about protecting data using Realms. With Database Vault we can also protect our database against the use of certain SQL statements. These statements can include SELECT, ALTER SYSTEM, data definition language (DDL), and data manipulation language (DML) statements.
We can do this with Command Rules. In this blog I demonstrate how we can use a Command Rule to prevent SYS from creating a new pluggable database in a multitenant environment.

Before starting the demonstration, we can see that there are some predefined Command Rules which apply to all users.

SQL> show user
USER is "C##DBV_OWNER_ROOT"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> SELECT COMMAND, RULE_SET_NAME FROM DVSYS.DBA_DV_COMMAND_RULE;

COMMAND              RULE_SET_NAME
-------------------- --------------------------------------------------
ALTER PROFILE        Can Maintain Accounts/Profiles
ALTER SYSTEM         Allow Fine Grained Control of System Parameters
ALTER USER           Can Maintain Own Account
CHANGE PASSWORD      Can Maintain Own Account
CREATE PROFILE       Can Maintain Accounts/Profiles
CREATE USER          Can Maintain Accounts/Profiles
DROP PROFILE         Can Maintain Accounts/Profiles
DROP USER            Can Maintain Accounts/Profiles

8 rows selected.
SQL>

Because of these default Command Rules, for example, user sys cannot create a user once Database Vault is enabled.

SQL> conn sys/root@pdb1 as sysdba
Connected.
SQL> create user myuser identified by test;
create user myuser identified by test
                                 *
ERROR at line 1:
ORA-01031: insufficient privileges

To grant a user the ability to use these commands, you can grant the user the role that the rule set checks.

SQL> SELECT PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'DV_ACCTMGR';

PRIVILEGE
----------------------------------------
DROP PROFILE
ALTER PROFILE
ALTER USER
CREATE PROFILE
CREATE USER
CREATE SESSION
DROP USER

7 rows selected.

SQL>

To allow sys to create a user we can grant the DV_ACCTMGR role to SYS

SQL> show user
USER is "C##DBV_ACCTMGR_ROOT"

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>

SQL> grant  DV_ACCTMGR to sys;

Grant succeeded.

And now SYS can create a user

SQL> conn sys/root@pdb1 as sysdba
Connected.
SQL> create user myuser identified by test;

User created.

SQL>

Before starting the demonstration let’s verify that user SYS, by default, can create a pluggable database

SQL> conn sys as sysdba
Enter password:
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> create pluggable database PDB2 ADMIN USER pdb2adm IDENTIFIED BY root create_file_dest='/u01/app/oracle/oradata/DBSEC/PDB2';

Pluggable database created.

SQL>

To prevent sys from creating a pluggable database, we are first going to create a RULE. This rule will determine when the command rule will be fired.

SQL> exec DVSYS.DBMS_MACADM.CREATE_RULE(rule_name => 'MY_PDB_RULE', 
                                        rule_expr => 'SYS_CONTEXT(''USERENV'', ''SESSION_USER'') != ''SYS''');

PL/SQL procedure successfully completed.

SQL>

Then we have to create a RULE SET, which is a collection of one or more rules. We can associate a rule set with a realm authorization, a factor assignment, a command rule, or a secure application role.

SQL> exec DVSYS.DBMS_MACADM.CREATE_RULE_SET(rule_set_name => 'MY_PDB_RULESET', 
                                            description => ' About managing Pdbs', 
                                            enabled => DBMS_MACUTL.G_YES, eval_options => DBMS_MACUTL.G_RULESET_EVAL_ANY,
                                            audit_options => DBMS_MACUTL.G_RULESET_AUDIT_FAIL + DBMS_MACUTL.G_RULESET_AUDIT_SUCCESS, 
                                            fail_options => DBMS_MACUTL.G_RULESET_FAIL_SILENT, fail_message => '', 
                                            fail_code => '', 
                                            handler_options => DBMS_MACUTL.G_RULESET_HANDLER_OFF, 
                                            handler => '',
                                            is_static => FALSE);

PL/SQL procedure successfully completed.
SQL>

We then add the RULE to the RULE SET

BEGIN
DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET(
                                       rule_set_name => 'MY_PDB_RULESET',
                                       rule_name => 'MY_PDB_RULE');
END;
/

PL/SQL procedure successfully completed.

And finally we create a COMMAND RULE which will prevent SYS from executing a CREATE PLUGGABLE DATABASE statement

SQL> exec DVSYS.DBMS_MACADM.CREATE_COMMAND_RULE(command=> 'CREATE PLUGGABLE DATABASE', 
                                                rule_set_name => 'MY_PDB_RULESET', 
                                                object_owner => DBMS_ASSERT.ENQUOTE_NAME('%',FALSE), 
                                                object_name => '%',
                                                enabled => 'Y');

PL/SQL procedure successfully completed.

SQL>

And now if we try to create a PDB with SYS

SQL> show user
USER is "SYS"
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>  CREATE PLUGGABLE DATABASE PDB3 ADMIN USER pdb3adm IDENTIFIED BY root create_file_dest='/u01/app/oracle/oradata/DBSEC/PDB3';
 CREATE PLUGGABLE DATABASE PDB3 ADMIN USER pdb3adm IDENTIFIED BY root create_file_dest='/u01/app/oracle/oradata/DBSEC/PDB3'
*
ERROR at line 1:
ORA-47400: Command Rule violation for CREATE PLUGGABLE DATABASE on PDB3

SQL>
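To double check that the new command rule is registered, or to remove it once the test is over, we can use the same DVSYS views and the DBMS_MACADM package. The DELETE_COMMAND_RULE call below is only a sketch and the output is illustrative; verify the exact parameter list for your Database Vault release.

SQL> SELECT COMMAND, RULE_SET_NAME FROM DVSYS.DBA_DV_COMMAND_RULE WHERE COMMAND = 'CREATE PLUGGABLE DATABASE';

COMMAND                      RULE_SET_NAME
---------------------------- --------------------
CREATE PLUGGABLE DATABASE    MY_PDB_RULESET

SQL> exec DVSYS.DBMS_MACADM.DELETE_COMMAND_RULE(command => 'CREATE PLUGGABLE DATABASE', 
                                                object_owner => DBMS_ASSERT.ENQUOTE_NAME('%',FALSE), 
                                                object_name => '%');

PL/SQL procedure successfully completed.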
 

This article, Database Vault : Rules, Rule Sets and Command Rules, appeared first on Blog dbi services.
