Feed aggregator

New options for pg_basebackup in PostgreSQL 15

Yann Neuhaus - 8 hours 7 min ago

If you are doing base backups of your PostgreSQL instances (and you should), you’ve probably used pg_basebackup. Another use case for pg_basebackup is to create a starting point for replicas when you are setting up streaming replication. Up to now, pg_basebackup has always sent the result to the node/machine where you started it. That means: if you start pg_basebackup on your client or backup machine, the resulting base backup will go there. No other options are available in recent versions of PostgreSQL. This will change with PostgreSQL 15.

The commit which implemented this is the following (don’t be confused by the commit date, it really happened yesterday, not in 2021):

commit 3500ccc39b0dadd1068a03938e4b8ff562587ccc (HEAD -> master, origin/master, origin/HEAD)
Author: Robert Haas 
Date:   Tue Nov 16 15:20:50 2021 -0500

    Support base backup targets.
    
    pg_basebackup now has a --target=TARGET[:DETAIL] option. If specfied,
    it is sent to the server as the value of the TARGET option to the
    BASE_BACKUP command. If DETAIL is included, it is sent as the value of
    the new TARGET_DETAIL option to the BASE_BACKUP command.  If the
    target is anything other than 'client', pg_basebackup assumes that it
    will now be the server's job to write the backup in a location somehow
    defined by the target, and that it therefore needs to write nothing
    locally. However, the server will still send messages to the client
    for progress reporting purposes.
    
    On the server side, we now support two additional types of backup
    targets.  There is a 'blackhole' target, which just throws away the
    backup data without doing anything at all with it. Naturally, this
    should only be used for testing and debugging purposes, since you will
    not actually have a backup when it finishes running. More usefully,
    there is also a 'server' target, so you can now use something like
    'pg_basebackup -Xnone -t server:/SOME/PATH' to write a backup to some
    location on the server. We can extend this to more types of targets
    in the future, and might even want to create an extensibility
    mechanism for adding new target types.
    
    Since WAL fetching is handled with separate client-side logic, it's
    not part of this mechanism; thus, backups with non-default targets
    must use -Xnone or -Xfetch.
    
    Patch by me, with a bug fix by Jeevan Ladhe.  The patch set of which
    this is a part has also had review and/or testing from Tushar Ahuja,
    Suraj Kharage, Dipesh Pandit, and Mark Dilger.
    
    Discussion: http://postgr.es/m/CA+TgmoaYZbz0=Yk797aOJwkGJC-LK3iXn+wzzMx7KdwNpZhS5g@mail.gmail.com

To demonstrate the feature I’ve set up two nodes. This is my client:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:ca:ce:73 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.240/24 brd 192.168.100.255 scope global dynamic enp1s0
       valid_lft 3155sec preferred_lft 3155sec
    inet6 fe80::5054:ff:feca:ce73/64 scope link 
       valid_lft forever preferred_lft forever

… and this is my server:

postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/ [pgdev] ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c7:93:6c brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.241/24 brd 192.168.100.255 scope global dynamic enp1s0
       valid_lft 2278sec preferred_lft 2278sec
    inet6 fe80::5054:ff:fec7:936c/64 scope link 
       valid_lft forever preferred_lft forever

What happens if I start a base backup on the client is that the data is sent from the server to my client, and the base backup is stored locally on the client:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ mkdir /var/tmp/backup
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ./pg_basebackup -h 192.168.100.241 -F t -D /var/tmp/backup/
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ls -l /var/tmp/backup/
total 39624
-rw------- 1 postgres postgres   138469 Jan 21 07:41 backup_manifest
-rw------- 1 postgres postgres 23652864 Jan 21 07:41 base.tar
-rw------- 1 postgres postgres 16778752 Jan 21 07:41 pg_wal.tar

What the commit above introduces is the possibility to tell the server to store the backup itself. The option for this is:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ./pg_basebackup --help | grep -A 1 TARGET
  -t, --target=TARGET[:DETAIL]
                         backup target (if other than client)

So now, again executing pg_basebackup from the client, we can do this:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ssh 192.168.100.241 'mkdir /var/tmp/bb'
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ./pg_basebackup -h 192.168.100.241 -X none -t server:/var/tmp/bb/
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ssh 192.168.100.241 'ls -l /var/tmp/bb/'
total 23236
-rw------- 1 postgres postgres   138469 Jan 21 07:57 backup_manifest
-rw------- 1 postgres postgres 23654400 Jan 21 07:57 base.tar
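
Note that there is no pg_wal.tar this time. As the commit message states, WAL fetching is separate client-side logic, so non-default targets must be combined with "-X none" or "-X fetch". Just as a sketch (not verbatim output), leaving the default WAL mode in place would be rejected:

./pg_basebackup -h 192.168.100.241 -t server:/var/tmp/bb/            # fails: the default -X stream is not allowed with a backup target
./pg_basebackup -h 192.168.100.241 -X fetch -t server:/var/tmp/bb/   # works: WAL is fetched and included in the backup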

Now the backup is generated on the server without being sent to the client. For testing purposes there is also the “blackhole” target, which just throws away the backup:

postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ ./pg_basebackup -h 192.168.100.241 -X none -t blackhole
postgres@debian11latest:/u01/app/postgres/product/DEV/db_0/bin$ 

Why is that feature cool? Because now there is the infrastructure to implement other targets, maybe S3, or whatever.

Another feature that just got committed is this one: “Extend the options of pg_basebackup to control compression”.

This gives you more options for compression:

postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ./pg_basebackup --help | grep -A 1 LEVEL
  -Z, --compress={gzip,none}[:LEVEL] or [LEVEL]
                         compress tar output with given compression method or level

For the moment only “none” and “gzip” are available:

postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] mkdir /var/tmp/aa
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ./pg_basebackup -D /var/tmp/aa/ -Z none
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ls /var/tmp/aa/
backup_label     base              global        pg_dynshmem  pg_ident.conf  pg_logical    pg_notify    pg_serial     pg_stat      pg_subtrans  pg_twophase  pg_wal   postgresql.auto.conf
backup_manifest  current_logfiles  pg_commit_ts  pg_hba.conf  pg_log         pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc    PG_VERSION   pg_xact  postgresql.conf
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] rm -rf /var/tmp/aa/*
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ./pg_basebackup -D /var/tmp/aa/ -F t -Z gzip:4
postgres@debian11pg:/u01/app/postgres/product/DEV/db_0/bin/ [pgdev] ls -l /var/tmp/aa/
total 3316
-rw------- 1 postgres postgres  138314 Jan 21 08:39 backup_manifest
-rw------- 1 postgres postgres 3235329 Jan 21 08:39 base.tar.gz
-rw------- 1 postgres postgres   17075 Jan 21 08:39 pg_wal.tar.gz
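
Restoring from such a backup is the usual tar-based procedure: extract base.tar.gz into an empty data directory and pg_wal.tar.gz into its pg_wal subdirectory. A minimal sketch, with example paths:

mkdir /var/tmp/restore
tar -xzf /var/tmp/aa/base.tar.gz -C /var/tmp/restore
tar -xzf /var/tmp/aa/pg_wal.tar.gz -C /var/tmp/restore/pg_wal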

The same applies here: The infrastructure is now in place, and other options can be added. Nice features, thanks to all involved.

The article New options for pg_basebackup in PostgreSQL 15 appeared first on Blog dbi services.

Upgrading Plex Media Server on a Raspberry Pi

The Anti-Kyte - Thu, 2022-01-20 14:37

I’m running Plex on my RaspberryPi so that I can stream my movies to multiple devices on my home network.
The periodic upgrades to the Plex software can get a bit fiddly.
Therefore, to save me looking up how to do the same things every time I need to upgrade, I thought I’d put them in one place…

Finding information about my Pi

I’m running Raspberry Pi OS (formerly Raspbian) on my Pi.

Connecting to the Pi via ssh, I can run the following commands to discover information about my system.

To find out what OS version I’m running :

cat /etc/os-release | grep 'PRETTY_NAME'

the model of the Pi :

cat /proc/cpuinfo |grep Model

the specific ARM architecture of the Pi :

cat /proc/cpuinfo | grep 'model name'

the name of the Pi on the local (home) network :

hostname -f

the IP address of the Pi on the local network :

hostname -I

Upgrading Raspbian

This is not usually a requirement for upgrading Plex, but I thought I’d mention it here just in case.

There is a big warning about in-place upgrades in the documentation.

In my case, I only really use my Pi for the Plex server, so the in-place upgrade does not present a huge risk.
If I happen to lose everything, re-installation and configuration would not be too onerous a task.
Therefore, I followed the link provided in the documentation and ended up with these instructions.

Upgrading Plex Server

I run my Pi headless (i.e. no monitor attached), so I’m usually accessing the Plex management web page on another machine when I see it’s upgrade time :

I can click on the link and download the update file here.
Alternatively, you can get it directly from the Plex Downloads site.

On the download site, I need to select “Linux” when asked to choose a platform :

Then click on the “Choose Distribution” button :

Now I want to select one of the ARM distros ( in my case v7) :

I can now copy the file from my machine to the pi via SFTP.
There is a good tutorial here about using the freeware sftp client Filezilla to transfer files to and from a pi via sftp.
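
If you’d rather avoid the GUI, scp does the same job from the command line. A minimal sketch (user name, address and file name here are examples, adjust to your setup) :

scp plexmediaserver_1.25.2.5319-c43dc0277_armhf.deb pi@192.168.1.100:/home/pi/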

Installing the upgrade

For this bit, I need to have a terminal session on the pi ( I usually use SSH to remote in).

We don’t want to go to all the trouble of running the upgrade if we’re already running the latest available version. One way to confirm the current version, is :

apt-cache policy plexmediaserver
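
If you only need the installed version (without the candidate information), dpkg can tell you as well :

dpkg -s plexmediaserver | grep Version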

First, if we’ve not done so already, make sure we have the most up-to-date packages on the Pi :

sudo apt update -y && sudo apt upgrade -y

Now re-start the Pi to make sure any changes take effect :

sudo reboot

Obviously, you’ll lose your connection to the Pi whilst it’s rebooting and will need to reconnect once it’s back up.

Next we need to stop the plexmediaserver service (assuming it starts on boot, which mine does) :

sudo service plexmediaserver stop

Now we need to manually install the .deb file we’ve downloaded :

sudo apt install ./plexmediaserver_1.25.2.5319-c43dc0277_armhf.deb
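
As an aside, plain dpkg would also work here, though the apt form has the advantage of resolving any missing dependencies automatically :

sudo dpkg -i ./plexmediaserver_1.25.2.5319-c43dc0277_armhf.deb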

The plexmediaserver should now be up and running. You can check by running :

sudo service plexmediaserver status

…and it should be running the version we’ve just installed :

Plex Clients

Once you’ve completed the Server install, you may need to “remind” the plex clients running on your TVs of the IP adress of your server.

Remember, you can get the IP address of your pi by running :

hostname -I

You’re looking for the first bit of information returned by this command.
On a home network, the address will usually be in the format :

192.168.1.nnn

…where nnn is a number between 1 and 254.

Setting up IBM Db2 on Linux – root installation

Yann Neuhaus - Thu, 2022-01-20 12:11

In the first post about Db2 I went for the non-root installation method. While this is great if you want to get started fast and do some tests, it also comes with limitations. Most installations, I guess, are root installations, especially if you want to run more than one instance on a host and want to have them started automatically when the system comes up. Let’s see how that works.

As last time, I’ll start with a SLES 15 SP3 minimal installation:

sles15-db2-root:~ $ uname -a
Linux sles15-db2-root 5.3.18-59.37-default #1 SMP Mon Nov 22 12:29:04 UTC 2021 (d10168e) x86_64 x86_64 x86_64 GNU/Linux
sles15-db2-root:~ $ cat /etc/os-release
NAME="SLES"
VERSION="15-SP3"
VERSION_ID="15.3"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP3"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15:sp3"
DOCUMENTATION_URL="https://documentation.suse.com/"

Instead of going through trial and error as in the last post, let’s directly install the packages that turned out to be required:

sles15-db2-root:~ $ zypper in -y xorg-x11-xauth sudo vi binutils libnuma1 libstdc++6-32bit pam-32bit

I’ll use exactly the same Db2 distribution as in the previous post:

sles15-db2-root:~ $ ls -l
total 2034632
-rw------- 1 root root        268 Jan 20 11:23 .bash_history
drwxr-xr-x 1 root root          0 Mar  3  2021 bin
drwx------ 1 root root          0 Mar  3  2021 .gnupg
drwxr-xr-x 1 root root         36 Jul  9  2021 inst-sys
-rw-r--r-- 1 root root 2083458415 Jan 20 11:38 v11.5.6_linuxx64_server_dec.tar.gz
sles15-db2-root:~ $ tar axf v11.5.6_linuxx64_server_dec.tar.gz 
sles15-db2-root:~ $ rm v11.5.6_linuxx64_server_dec.tar.gz 
sles15-db2-root:~ $ ls -l
total 4
-rw------- 1 root root 268 Jan 20 11:23 .bash_history
drwxr-xr-x 1 root root   0 Mar  3  2021 bin
drwx------ 1 root root   0 Mar  3  2021 .gnupg
drwxr-xr-x 1 root root  36 Jul  9  2021 inst-sys
drwxr-xr-x 1 root root 240 Jun 11  2021 server_dec
sles15-db2-root:~ $ cd server_dec
sles15-db2-root:~/server_dec $ ls -l
total 84
drwxr-xr-x 1 bin bin   112 Jun 11  2021 db2
-r--r--r-- 1 bin bin  4987 Jun 11  2021 db2checkCOL_readme.txt
-r--r--r-- 1 bin bin 16808 Jun 11  2021 db2checkCOL.tar.gz
-r-xr-xr-x 1 bin bin  5254 Jun 11  2021 db2ckupgrade
-r-xr-xr-x 1 bin bin  5207 Jun 11  2021 db2_deinstall
-r-xr-xr-x 1 bin bin  5077 Jun 11  2021 db2_install
-r-xr-xr-x 1 bin bin  5041 Jun 11  2021 db2ls
-r-xr-xr-x 1 bin bin  5059 Jun 11  2021 db2prereqcheck
-r-xr-xr-x 1 bin bin  5059 Jun 11  2021 db2setup
-r-xr-xr-x 1 bin bin  5095 Jun 11  2021 installFixPack

This time we’ll use “db2prereqcheck” to check if all requirements are met for the installation:

sles15-db2-root:~/server_dec $ ./db2prereqcheck

==========================================================================

Thu Jan 20 11:50:39 2022
Checking prerequisites for DB2 installation. Version "11.5.6.0". Operating system "Linux" 
   
Validating "Linux distribution " ... 

Requirement not matched for DB2 database "Server" with pureScale feature . Version: "10.5.0.0". 
Summary of prerequisites that are not met on the current system: 
DBT3613E  The db2prereqcheck utility failed to verify the prerequisites for TSA. Ensure your machine meets all the TSA installation prerequisites. 
...

DBT3507E  The db2prereqcheck utility failed to find the following package or file: "kernel-source". 
DBT3507E  The db2prereqcheck utility failed to find the following package or file: "". 
DBT3507E  The db2prereqcheck utility failed to find the following package or file: "gcc-c++". 
DBT3507E  The db2prereqcheck utility failed to find the following package or file: "cpp". 
DBT3507E  The db2prereqcheck utility failed to find the following package or file: "gcc". 

It seems we are missing a few more packages:

sles15-db2-root:~/server_dec $ zypper in kernel-source gcc-c++ cpp gcc

Next try (“-i” tells db2prereqcheck to only check for the current version of Db2, not all versions as above):

sles15-db2-root:~/server_dec $ ./db2prereqcheck -i
...
Validating "/lib/libpam.so*" ... 
   Requirement matched. 
DBT3533I  The db2prereqcheck utility has confirmed that all installation prerequisites were met. 

All fine, so let’s go:

sles15-db2-root:~/server_dec $ ./db2_install 
Read the license agreement file in the db2/license directory.

***********************************************************
To accept those terms, enter "yes". Otherwise, enter "no" to cancel the install process. [yes/no]
yes
 
 
Default directory for installation of products - /opt/ibm/db2/V11.5

***********************************************************
Install into default directory (/opt/ibm/db2/V11.5) ? [yes/no] 
yes
 
 
Specify one of the following keywords to install DB2 products.

  SERVER 
  CONSV 
  CLIENT 
  RTCL 
 
Enter "help" to redisplay product names.

Enter "quit" to exit.

***********************************************************
SERVER
***********************************************************
Do you want to install the DB2 pureScale Feature? [yes/no] 
no
DB2 installation is being initialized.

 Total number of tasks to be performed: 59 
Total estimated time for all tasks to be performed: 2775 second(s) 

Task #1 start
Description: Checking license agreement acceptance 
Estimated time 1 second(s) 
Task #1 end 

Task #2 start
Description: Base Client Support for installation with root privileges 
Estimated time 3 second(s) 
Task #2 end 

Task #3 start
Description: Product Messages - English 
Estimated time 14 second(s) 
Task #3 end 

Task #4 start
Description: Base client support 
Estimated time 355 second(s) 
Task #4 end 

Task #5 start
Description: Java Runtime Support 
Estimated time 217 second(s) 
Task #5 end 

Task #6 start
Description: Java Help (HTML) - English 
Estimated time 7 second(s) 
Task #6 end 

Task #7 start
Description: Base server support for installation with root privileges 
Estimated time 6 second(s) 
Task #7 end 

Task #8 start
Description: Global Secure ToolKit 
Estimated time 76 second(s) 
Task #8 end 

Task #9 start
Description: Java support 
Estimated time 11 second(s) 
Task #9 end 

Task #10 start
Description: SQL procedures 
Estimated time 3 second(s) 
Task #10 end 

Task #11 start
Description: ICU Utilities 
Estimated time 59 second(s) 
Task #11 end 

Task #12 start
Description: Java Common files 
Estimated time 18 second(s) 
Task #12 end 

Task #13 start
Description: Base server support 
Estimated time 582 second(s) 
Task #13 end 

Task #14 start
Description: Control Center Help (HTML) - English 
Estimated time 13 second(s) 
Task #14 end 

Task #15 start
Description: Relational wrappers common 
Estimated time 3 second(s) 
Task #15 end 

Task #16 start
Description: DB2 data source support 
Estimated time 6 second(s) 
Task #16 end 

Task #17 start
Description: ODBC data source support 
Estimated time 260 second(s) 
Task #17 end 

Task #18 start
Description: Teradata data source support 
Estimated time 3 second(s) 
Task #18 end 

Task #19 start
Description: Spatial Extender server support 
Estimated time 21 second(s) 
Task #19 end 

Task #20 start
Description: Scientific Data Sources 
Estimated time 5 second(s) 
Task #20 end 

Task #21 start
Description: JDBC data source support 
Estimated time 200 second(s) 
Task #21 end 

Task #22 start
Description: IBM Software Development Kit (SDK) for Java(TM) 
Estimated time 49 second(s) 
Task #22 end 

Task #23 start
Description: DB2 LDAP support 
Estimated time 4 second(s) 
Task #23 end 

Task #24 start
Description: DB2 Instance Setup wizard 
Estimated time 23 second(s) 
Task #24 end 

Task #25 start
Description: Structured file data sources 
Estimated time 5 second(s) 
Task #25 end 

Task #26 start
Description: Integrated Flash Copy Support 
Estimated time 3 second(s) 
Task #26 end 

Task #27 start
Description: Oracle data source support 
Estimated time 4 second(s) 
Task #27 end 

Task #28 start
Description: Connect support 
Estimated time 3 second(s) 
Task #28 end 

Task #29 start
Description: Application data sources 
Estimated time 4 second(s) 
Task #29 end 

Task #30 start
Description: Spatial Extender client 
Estimated time 3 second(s) 
Task #30 end 

Task #31 start
Description: SQL Server data source support 
Estimated time 4 second(s) 
Task #31 end 

Task #32 start
Description: Communication support - TCP/IP 
Estimated time 3 second(s) 
Task #32 end 

Task #33 start
Description: Tivoli SA MP 
Estimated time 300 second(s) 
Task #33 end 

Task #34 start
Description: Base application development tools 
Estimated time 36 second(s) 
Task #34 end 

Task #35 start
Description: Parallel Extension 
Estimated time 3 second(s) 
Task #35 end 

Task #36 start
Description: EnterpriseDB code 
Estimated time 3 second(s) 
Task #36 end 

Task #37 start
Description: Replication tools 
Estimated time 60 second(s) 
Task #37 end 

Task #38 start
Description: Sample database source 
Estimated time 4 second(s) 
Task #38 end 

Task #39 start
Description: itlm 
Estimated time 3 second(s) 
Task #39 end 

Task #40 start
Description: DB2 Text Search 
Estimated time 107 second(s) 
Task #40 end 

Task #41 start
Description: Command Line Processor Plus 
Estimated time 8 second(s) 
Task #41 end 

Task #42 start
Description: Sybase data source support 
Estimated time 3 second(s) 
Task #42 end 

Task #43 start
Description: Informix data source support 
Estimated time 4 second(s) 
Task #43 end 

Task #44 start
Description: Federated Data Access Support 
Estimated time 3 second(s) 
Task #44 end 

Task #45 start
Description: First Steps 
Estimated time 3 second(s) 
Task #45 end 

Task #46 start
Description: Pacemaker 
Estimated time 100 second(s) 
Task #46 end 

Task #47 start
Description: Product Signature for DB2 Server Edition 
Estimated time 6 second(s) 
Task #47 end 

Task #48 start
Description: Guardium Installation Manager Client 
Estimated time 36 second(s) 
Task #48 end 

Task #49 start
Description: Setting DB2 library path 
Estimated time 180 second(s) 
Task #49 end 

Task #50 start
Description: Installing or updating DB2 HA scripts for IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) 
Estimated time 40 second(s) 
Task #50 end 

Task #51 start
Description: Installing or updating Db2 resource agent scripts for Pacemaker 
Estimated time 20 second(s) 
Task #51 end 

Task #52 start
Description: Executing control tasks 
Estimated time 20 second(s) 
Task #52 end 

Task #53 start
Description: Updating global registry 
Estimated time 20 second(s) 
Task #53 end 

Task #54 start
Description: Starting DB2 Fault Monitor 
Estimated time 10 second(s) 
Task #54 end 

Task #55 start
Description: Updating the db2ls and db2greg link 
Estimated time 1 second(s) 
Task #55 end 

Task #56 start
Description: Registering DB2 licenses 
Estimated time 5 second(s) 
Task #56 end 

Task #57 start
Description: Setting default global profile registry variables 
Estimated time 1 second(s) 
Task #57 end 

Task #58 start
Description: Initializing instance list 
Estimated time 5 second(s) 
Task #58 end 

Task #59 start
Description: Updating global profile registry 
Estimated time 3 second(s) 
Task #59 end 

The execution completed with warnings.

For more information see the DB2 installation log at
"/tmp/db2_install.log.7770".

This is quite some output. There is one warning in the log file in /tmp:

WARNING: DBI20105E  An error occurred while installing the following file set:
"TSAMP". Because these files were not successfully installed, functionality
that depends on these files might not work as expected.

As this seems to be related to Tivoli System Automation, I don’t care about it here. As an additional step you can validate the installation:

sles15-db2-root:~/server_dec $ /opt/ibm/db2/V11.5/bin/db2val
DBI1379I  The db2val command is running. This can take several minutes.

DBI1335I  Installation file validation for the DB2 copy installed at
      /opt/ibm/db2/V11.5 was successful.

DBI1343I  The db2val command completed successfully. For details, see
      the log file /tmp/db2val-220120_120906.log.

A root installation does not create an instance automatically; this is a manual post-installation task. Before doing that we need the required groups and users:

sles15-db2-root:~ $ groupadd -g 999 db2iadm1
sles15-db2-root:~ $ groupadd -g 998 db2fsdm1
sles15-db2-root:~ $ groupadd -g 997 dasadm1
sles15-db2-root:~ $ useradd -u 1004 -g db2iadm1 -m -d /home/db2inst1 db2inst1 
sles15-db2-root:~ $ useradd -u 1003 -g db2fsdm1 -m -d /home/db2fenc1 db2fenc1 
sles15-db2-root:~ $ useradd -u 1002 -g dasadm1 -m -d /home/dasusr1 dasusr1
sles15-db2-root:~ $ passwd db2inst1
sles15-db2-root:~ $ passwd db2fenc1
sles15-db2-root:~ $ passwd dasusr1

Next step: instance creation:

sles15-db2-root:~ $ /opt/ibm/db2/V11.5/instance/db2icrt -a server -u db2fenc1 db2inst1
DBI1446I  The db2icrt command is running.

DB2 installation is being initialized.

 Total number of tasks to be performed: 4 
Total estimated time for all tasks to be performed: 309 second(s) 

Task #1 start
Description: Setting default global profile registry variables 
Estimated time 1 second(s) 
Task #1 end 

Task #2 start
Description: Initializing instance list 
Estimated time 5 second(s) 
Task #2 end 

Task #3 start
Description: Configuring DB2 instances 
Estimated time 300 second(s) 
Task #3 end 

Task #4 start
Description: Updating global profile registry 
Estimated time 3 second(s) 
Task #4 end 

The execution completed successfully.

For more information see the DB2 installation log at "/tmp/db2icrt.log.32397".
DBI1070I  Program db2icrt completed successfully.

Have a look at the generated log file, because it contains the port number for the instance:

sles15-db2-root:~ $ cat /tmp/db2icrt.log.32397
...
Required steps:
You can connect to the DB2 instance "db2inst1" using the port number "25010". Record it for future reference.
...
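
db2icrt usually registers this port in /etc/services as well, so you can also double-check it there (I’d expect a service name following the db2c_<instance> convention, but verify on your system):

grep db2inst1 /etc/services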

All the files of the instance went into the home directory of the “db2inst1” user:

sles15-db2-root:~ $ ls -al /home/db2inst1/sqllib/
total 88
drwxrwsr-t 20 db2inst1 db2iadm1  4096 Jan 20 12:14 .
drwxr-xr-x  9 db2inst1 db2iadm1   182 Jan 20 12:14 ..
lrwxrwxrwx  1 root     db2iadm1     5 Jan 20 12:14 .32 -> lib32
lrwxrwxrwx  1 root     db2iadm1     5 Jan 20 12:14 .64 -> lib64
drwxr-sr-x  2 db2inst1 db2iadm1    23 Jun 11  2021 acs
drwxr-sr-x  2 db2inst1 db2iadm1  4096 Jan 20 12:14 adm
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 adsm -> /opt/ibm/db2/V11.5/adsm
drwxr-s---  2 db2inst1 db2iadm1     6 Jan 20 12:14 backup
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 bin -> /opt/ibm/db2/V11.5/bin
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 bnd -> /opt/ibm/db2/V11.5/bnd
drwxrwsr-t  2 db2inst1 db2iadm1  4096 Jan 20 12:14 cfg
drwxrwsr-t  2 db2inst1 db2iadm1    24 Jan 20 12:14 cfgcache
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 conv -> /opt/ibm/db2/V11.5/conv
drwxrwsr-t  2 db2inst1 db2iadm1    23 Jan 20 12:14 ctrl
drwxrwsr-t  2 db2inst1 db2iadm1     6 Jan 20 12:14 ctrlha
drwxrwsr-t  2 db2inst1 db2iadm1   148 Jan 20 12:14 dasfcn
-rwxr-xr-x  1 db2inst1 db2iadm1 13116 Jan 20 12:14 db2cshrc
drwxrwsrwt  3 db2inst1 db2iadm1    22 Jan 20 12:14 db2dump
-rw-rw-r--  1 db2inst1 db2iadm1    20 Jan 20 12:14 db2nodes.cfg
-rwxr-xr-x  1 db2inst1 db2iadm1  9088 Jan 20 12:14 db2profile
-rw-rw-r--  1 db2inst1 db2iadm1 28672 Jan 20 12:14 db2systm
drwxr-sr-x  5 db2inst1 db2iadm1    95 Jan 20 12:14 db2tss
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 doc -> /opt/ibm/db2/V11.5/doc
lrwxrwxrwx  1 root     db2iadm1    29 Jan 20 12:14 federation -> /opt/ibm/db2/V11.5/federation
-rw-r--r--  1 db2inst1 db2iadm1     4 Jan 20 12:14 .ftok
drwxrwsr-t  4 db2inst1 db2iadm1   127 Jan 20 12:14 function
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 gse -> /opt/ibm/db2/V11.5/gse
lrwxrwxrwx  1 root     db2iadm1    24 Jan 20 12:14 gskit -> /opt/ibm/db2/V11.5/gskit
lrwxrwxrwx  1 root     db2iadm1    26 Jan 20 12:14 include -> /opt/ibm/db2/V11.5/include
lrwxrwxrwx  1 root     db2iadm1    26 Jan 20 12:14 infopop -> /opt/ibm/db2/V11.5/infopop
-rw-rw-r--  1 db2inst1 db2iadm1     2 Jan 20 12:14 .insttype
-rw-r--r--  1 db2inst1 db2iadm1   120 Jan 20 12:14 .instuse
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 java -> /opt/ibm/db2/V11.5/java
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 json -> /opt/ibm/db2/V11.5/json
lrwxrwxrwx  1 root     db2iadm1     5 Jan 20 12:14 lib -> lib64
lrwxrwxrwx  1 root     db2iadm1    24 Jan 20 12:14 lib32 -> /opt/ibm/db2/V11.5/lib32
lrwxrwxrwx  1 root     db2iadm1    24 Jan 20 12:14 lib64 -> /opt/ibm/db2/V11.5/lib64
drwxrwsr-t  2 db2inst1 db2iadm1    26 Jan 20 12:14 log
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 map -> /opt/ibm/db2/V11.5/map
lrwxrwxrwx  1 root     db2iadm1    23 Jan 20 12:14 misc -> /opt/ibm/db2/V11.5/misc
lrwxrwxrwx  1 root     db2iadm1    22 Jan 20 12:14 msg -> /opt/ibm/db2/V11.5/msg
drwxrwsr-x  2 db2inst1 db2iadm1     6 Jan 20 12:14 .netls
lrwxrwxrwx  1 root     db2iadm1    21 Jan 20 12:14 pd -> /opt/ibm/db2/V11.5/pd
-rw-rw-r--  1 db2inst1 db2iadm1    16 Jan 20 12:14 profile.env
lrwxrwxrwx  1 root     db2iadm1    25 Jan 20 12:14 python -> /opt/ibm/db2/V11.5/python
lrwxrwxrwx  1 root     db2iadm1    20 Jan 20 12:14 R -> /opt/ibm/db2/V11.5/R
lrwxrwxrwx  1 root     db2iadm1    25 Jan 20 12:14 Readme -> /opt/ibm/db2/V11.5/Readme
lrwxrwxrwx  1 root     db2iadm1    26 Jan 20 12:14 samples -> /opt/ibm/db2/V11.5/samples
drwxr-sr-x  3 db2inst1 db2iadm1   119 Jan 20 12:14 security
drwxr-sr-x  3 db2inst1 db2iadm1    20 Jan 20 12:14 security32
drwxr-sr-x  3 db2inst1 db2iadm1    20 Jan 20 12:14 security64
drwxrwsrwx  2 db2inst1 db2iadm1     6 Jan 20 12:14 tmp
lrwxrwxrwx  1 root     db2iadm1    24 Jan 20 12:14 tools -> /opt/ibm/db2/V11.5/tools
drwxrwsrwx  2 db2inst1 db2iadm1     6 Jan 20 12:14 uif
-rwxr-xr-x  1 db2inst1 db2iadm1     0 Jan 20 12:14 usercshrc
-rwxr-xr-x  1 db2inst1 db2iadm1     0 Jan 20 12:14 userprofile

Let’s bring the instance up and create a database, exactly as in the last post:

sles15-db2-root:~ $ su - db2inst1
db2inst1@sles15-db2-root:~> db2ls

Install Path                       Level   Fix Pack   Special Install Number   Install Date                  Installer UID 
---------------------------------------------------------------------------------------------------------------------
/opt/ibm/db2/V11.5               11.5.6.0        0                            Thu Jan 20 12:05:28 2022 CET             0 

      
      

db2inst1@sles15-db2-root:~> . sqllib/userprofile 
db2inst1@sles15-db2-root:~> db2ilist 
db2inst1
db2inst1@sles15-db2-root:~> db2start
01/20/2022 12:20:41     0   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.


db2inst1@sles15-db2-root:~> db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 11.5.6.0

You can issue database manager commands and SQL statements from the command 
prompt. For example:
    db2 => connect to sample
    db2 => bind sample.bnd

For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE for help on the CATALOG DATABASE command
 ? CATALOG          for help on all of the CATALOG commands.

To exit db2 interactive mode, type QUIT at the command prompt. Outside 
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.

For more detailed help, refer to the Online Reference Manual.

db2 => create database db1
DB20000I  The CREATE DATABASE command completed successfully.
db2 => connect to db1

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.6.0
 SQL authorization ID   = DB2INST1
 Local database alias   = DB1

db2 => 
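
To quickly verify that the new database is really usable, any query against the catalog will do. A small example (SYSCAT.TABLES is the standard Db2 catalog view):

db2 => select tabname from syscat.tables fetch first 3 rows only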

Finally we want this instance to start automatically when the system comes up:

db2inst1@sles15-db2-root:~> db2greg -getinstrec instancename='db2inst1'
Retrieved record:
   Service      = |DB2|
   Version      = |11.5.6.0|
   InstanceName = |db2inst1|
   InstancePath = |/home/db2inst1/sqllib|
   Usage        = |N/A|
   StartAtBoot  = 1
   Maintenance  = 0
   InstallPath  = |/opt/ibm/db2/V11.5|
   RemoteProf   = |N/A|
   Comment      = |N/A|
db2inst1@sles15-db2-root:~> db2iauto -on db2inst1 
db2inst1@sles15-db2-root:~> exit
sles15-db2-root:~ $ systemctl reboot

After rebooting we can confirm that the instance is started:

sles15-db2-root:~ # su - db2inst1
db2inst1@sles15-db2-root:~> ps -ef | grep db2
root       543     1  0 13:13 ?        00:00:00 /opt/ibm/db2/V11.5/bin/db2fmcd
root      1354     1  8 13:14 ?        00:00:00 db2wdog 0 [db2inst1]
db2inst1  1356  1354 10 13:14 ?        00:00:00 db2sysc 0
root      1425  1354  0 13:14 ?        00:00:00 db2ckpwd 0
root      1426  1354  0 13:14 ?        00:00:00 db2ckpwd 0
root      1427  1354  0 13:14 ?        00:00:00 db2ckpwd 0
db2inst1  1429  1354  1 13:14 ?        00:00:00 db2vend (PD Vendor Process - 1) 0
db2inst1  1438  1354  1 13:14 ?        00:00:00 db2acd 0 ,0,0,0,1,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000000,00000000,00000000,00000000,00000000,0000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,000000002d85b000,0000000000000000,0000000000000000,1,0,0,,,,,a89e68,14,1e014,2,0,1,0000000000041fc0,0x240000000,0x240000000,1600000,2,2,10
root      1458  1374  0 13:14 pts/0    00:00:00 su - db2inst1
db2inst1  1459  1458  0 13:14 pts/0    00:00:00 -bash
db2inst1  1867  1459 99 13:14 pts/0    00:00:00 ps -ef
db2inst1  1868  1459  0 13:14 pts/0    00:00:00 grep --color=auto db2
db2inst1@sles15-db2-root:~> db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 11.5.6.0

You can issue database manager commands and SQL statements from the command 
prompt. For example:
    db2 => connect to sample
    db2 => bind sample.bnd

For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE for help on the CATALOG DATABASE command
 ? CATALOG          for help on all of the CATALOG commands.

To exit db2 interactive mode, type QUIT at the command prompt. Outside 
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.

For more detailed help, refer to the Online Reference Manual.

db2 => connect to db1

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.6.0
 SQL authorization ID   = DB2INST1
 Local database alias   = DB1

db2 => 

As this is on Linux there should be a systemd service definition somewhere:

db2inst1@sles15-db2-root:~> systemctl list-unit-files | grep db2
db2fmcd.service                                                           enabled         disabled     
db2inst1@sles15-db2-root:~> systemctl status db2fmcd.service
● db2fmcd.service - DB2 v11.5.6.0
     Loaded: loaded (/etc/systemd/system/db2fmcd.service; enabled; vendor preset: disabled)
     Active: active (running) since Thu 2022-01-20 13:13:16 CET; 1min 54s ago
   Main PID: 543 (db2fmcd)
      Tasks: 44
     CGroup: /system.slice/db2fmcd.service
             ├─ 543 /opt/ibm/db2/V11.5/bin/db2fmcd
             ├─1354 db2wdog 0 [db2inst1]
             ├─1356 db2sysc 0
             ├─1425 db2ckpwd 0
             ├─1426 db2ckpwd 0
             ├─1427 db2ckpwd 0
             ├─1429 db2vend (PD Vendor Process - 1) 0
             └─1438 db2acd 0 ,0,0,0,1,0,0,00000000,0,0,0000000000000000,0000000000000000,00000000,00000000,00000000,00000000,00000000,00000000,0000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,000000002d85b000,0000000000000000,0000000000000000,1,0,0,,,,,a89e68,14,1e014,2,0,1,0000000000041fc0,0x240000000,0x240000000,1600000,2,2,10

That’s it for the root installation. For me, personally, there is not much benefit compared to the non-root installation. For real systems this is the way to go for, especially when you check the limitations of the non-root installation.

The article Setting up IBM Db2 on Linux – root installation appeared first on Blog dbi services.

ODA KVM Machines : Adding a Network Interface on a Virtual Machine

Yann Neuhaus - Thu, 2022-01-20 10:55

In a previous blog, I talked about virtual machines on ODA servers. Sometimes we want to add another network interface to a VM for a different purpose.
In this blog I describe the main tasks to add a second network interface card to a KVM virtual machine. Let’s describe the context:
I have an ODA X8-2M with appliance 19.12. This ODA hosts virtual machines for the applications. The virtual machine has only one NIC, which is used for the public connection. The idea is to add a second network interface that will be used for the backups.
-The ODA server : oda-dev-01
-The virtual machine : vmserver-appli
Let’s show the current network configuration of the virtual machine vmserver-appli. Only eth0 is present in the VM.

[root@vmserver-appli ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e9:26:18 brd ff:ff:ff:ff:ff:ff
    inet 10.18.35.139/24 brd 10.18.35.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f7a:d225:d568:5afa/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@vmserver-appli ~]#

As we can see below, eth0 is using the pubnet vNetwork of the physical ODA server. The pubnet is configured on btbond1.

[root@oda-dev-01 ~]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
pubnet                Bridged          btbond1          pubnet                NO        2021-09-27 10:55:41      2021-09-27 10:55:41
                                                                                        CEST                     CEST

[root@oda-dev-01 ~]#

On the ODA there is a second bonding interface configured, btbond2.

[root@oda-dev-01 network-scripts]# cat ifcfg-btbond2
# This file is automatically created by the ODA software.
DEVICE=btbond2
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
BONDING_OPTS="mode=active-backup miimon=100 primary=p7p3"
[root@oda-dev-01 network-scripts]#

As we can see, btbond2 is not used yet, in contrast to btbond1 which carries the pubnet vNetwork:

[root@oda-dev-01 network-scripts]# cat ifcfg-btbond1
# This file is automatically created by the ODA software.
DEVICE=btbond1
ONBOOT=yes
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
BONDING_OPTS="mode=active-backup miimon=100 primary=p7p1"
BRIDGE=pubnet
[root@oda-dev-01 network-scripts]#

So the idea is to create a backupnet vNetwork based on btbond2. Later the backupnet vNetwork will be attached to the virtual machine. The command odacli create-vnetwork is used to create backupnet:

[root@oda-dev-01 network-scripts]# odacli create-vnetwork --name backupnet --bridge backupnet --type bridged --interface btbond2 --ip 10.18.45.55 --gateway 10.18.45.1 --netmask 255.255.255.128

Job details
----------------------------------------------------------------
                     ID:  f3490869-1d38-4562-928f-644e971b24cd
            Description:  vNetwork backupnet creation
                 Status:  Created
                Created:  January 20, 2022 10:39:12 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@oda-dev-01 network-scripts]#

We verify that the job completed successfully:

[root@oda-dev-01 network-scripts]# odacli describe-job -i f3490869-1d38-4562-928f-644e971b24cd

Job details
----------------------------------------------------------------
                     ID:  f3490869-1d38-4562-928f-644e971b24cd
            Description:  vNetwork backupnet creation
                 Status:  Success
                Created:  January 20, 2022 10:39:12 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate Virtual Network doesn't exist   January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:12 AM CET    Success
Validate interface to use exists         January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:12 AM CET    Success
Validate interfaces to create not exist  January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:12 AM CET    Success
Validate interface is not default        January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:12 AM CET    Success
Create bridge from interface             January 20, 2022 10:39:12 AM CET    January 20, 2022 10:39:18 AM CET    Success
Create metadata                          January 20, 2022 10:39:18 AM CET    January 20, 2022 10:39:18 AM CET    Success
Persist metadata                         January 20, 2022 10:39:18 AM CET    January 20, 2022 10:39:18 AM CET    Success

[root@oda-dev-01 network-scripts]#

And we can then list the vNetworks on the ODA:

[root@oda-dev-01 network-scripts]# odacli list-vnetworks
Name                  Type             Interface        Bridge                Uniform   Created                  Updated
--------------------  ---------------  ---------------  --------------------  --------  -----------------------  -----------------------
pubnet                Bridged          btbond1          pubnet                NO        2021-09-27 10:55:41      2021-09-27 10:55:41
                                                                                        CEST                     CEST
backupnet             Bridged          btbond2          backupnet             NO        2022-01-20 10:39:18 CET  2022-01-20 10:39:18 CET

[root@oda-dev-01 network-scripts]#

Let’s describe the VM that I want to add a second network interface. We can see that only one vNetwork (pubnet) is attached to the VM.

[root@oda-dev-01 network-scripts]# odacli describe-vm -n vmserver-appli
VM details
--------------------------------------------------------------------------------
                       ID:  14f10064-a048-419d-a850-18b1d9b2b69e
                     Name:  vmserver-appli
                  Created:  2021-10-21 14:51:50 CEST
                  Updated:  2021-10-21 14:51:50 CEST
               VM Storage:  storevms
              Description:  Web site standby
            VM image path:  /u05/app/sharedrepo/storevms/.ACFS/snaps/vm_vmserver-appli/vmserver-appli
                  VM size:  20.00 GB
                   Source:  V1009690-01.iso
                  OS Type:  NONE
               OS Variant:  NONE
        Graphics settings:  vnc,listen=127.0.0.1
             Display Port:  127.0.0.1:1

 Status
--------------------------
             Current node:  oda-dev-01
            Current state:  ONLINE
             Target state:  ONLINE

 Parameters
--------------------------
           Preferred node:  NONE
              Boot option:  NONE
               Auto start:  YES
                Fail over:  NO
             NUMA enabled:  NO

                            Config                     Live
                            -------------------------  -------------------------
                   Memory:  4.00 GB                    4.00 GB
               Max Memory:  8.00 GB                    8.00 GB
               vCPU count:  3                          3
           Max vCPU count:  6                          6
                 CPU Pool:  cpuvms                     cpuvms
        Effective CPU set:  1-9,17-25,33-41,49-57      1-9,17-25,33-41,49-57
                    vCPUs:  0:1-9,17-25,33-41,49-57    0:1-9,17-25,33-41,49-57
                            1:1-9,17-25,33-41,49-57    1:1-9,17-25,33-41,49-57
                            2:1-9,17-25,33-41,49-57    2:1-9,17-25,33-41,49-57
                   vDisks:  vdisk2webs21:vdc           vdisk2webs21:vdc
                            vdisk1webs21:vdb           vdisk1webs21:vdb
                vNetworks:  pubnet:52:54:00:e9:26:18   pubnet:52:54:00:e9:26:18

[root@oda-dev-01 network-scripts]#

Again, on the virtual machine only the single NIC eth0 is configured:

[root@vmserver-appli ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e9:26:18 brd ff:ff:ff:ff:ff:ff
    inet 10.18.35.139/24 brd 10.18.35.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f7a:d225:d568:5afa/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@vmserver-appli ~]#

To add the second network interface, let’s attach the backupnet vNetwork to the VM. Just note the use of both --live and --config:
--live: the change is applied to the running VM instance, but does not persist after the VM stops.
--config: the change persists in the configuration of the VM instance and takes effect on subsequent reboots.

[root@oda-dev-01 network-scripts]# odacli modify-vm -n vmserver-appli -avn backupnet --live --config

Job details
----------------------------------------------------------------
                     ID:  8da2a4cc-528c-47d9-bcb0-4481010c73f3
            Description:  VM vmserver-appli modification
                 Status:  Created
                Created:  January 20, 2022 10:48:25 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@oda-dev-01 network-scripts]#

Let’s verify that the job returns SUCCESS

[root@oda-dev-01 network-scripts]# odacli describe-job -i 8da2a4cc-528c-47d9-bcb0-4481010c73f3

Job details
----------------------------------------------------------------
                     ID:  8da2a4cc-528c-47d9-bcb0-4481010c73f3
            Description:  VM vmserver-appli modification
                 Status:  Success
                Created:  January 20, 2022 10:48:25 AM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate VM exists                       January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:25 AM CET    Success
Validate network attachment pre-reqs     January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:25 AM CET    Success
Attach vNetworks                         January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:25 AM CET    Success
Save configuration in ACFS               January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:25 AM CET    Success
Save live VM configuration in ACFS       January 20, 2022 10:48:25 AM CET    January 20, 2022 10:48:26 AM CET    Success
Modify VM metadata                       January 20, 2022 10:48:26 AM CET    January 20, 2022 10:48:26 AM CET    Success
Persist metadata                         January 20, 2022 10:48:26 AM CET    January 20, 2022 10:48:26 AM CET    Success

[root@oda-dev-01 network-scripts]#

Describing the VM again, we can see that the backupnet vNetwork is now attached and persistent:

[root@oda-dev-01 network-scripts]# odacli describe-vm -n vmserver-appli
VM details
--------------------------------------------------------------------------------
                       ID:  14f10064-a048-419d-a850-18b1d9b2b69e
                     Name:  vmserver-appli
                  Created:  2021-10-21 14:51:50 CEST
                  Updated:  2022-01-20 10:48:26 CET
               VM Storage:  storevms
              Description:  Web site standby
            VM image path:  /u05/app/sharedrepo/storevms/.ACFS/snaps/vm_vmserver-appli/vmserver-appli
                  VM size:  20.00 GB
                   Source:  V1009690-01.iso
                  OS Type:  NONE
               OS Variant:  NONE
        Graphics settings:  vnc,listen=127.0.0.1
             Display Port:  127.0.0.1:1

 Status
--------------------------
             Current node:  oda-dev-01
            Current state:  ONLINE
             Target state:  ONLINE

 Parameters
--------------------------
           Preferred node:  NONE
              Boot option:  NONE
               Auto start:  YES
                Fail over:  NO
             NUMA enabled:  NO

                            Config                     Live
                            -------------------------  -------------------------
                   Memory:  4.00 GB                    4.00 GB
               Max Memory:  8.00 GB                    8.00 GB
               vCPU count:  3                          3
           Max vCPU count:  6                          6
                 CPU Pool:  cpuvms                     cpuvms
        Effective CPU set:  1-9,17-25,33-41,49-57      1-9,17-25,33-41,49-57
                    vCPUs:  0:1-9,17-25,33-41,49-57    0:1-9,17-25,33-41,49-57
                            1:1-9,17-25,33-41,49-57    1:1-9,17-25,33-41,49-57
                            2:1-9,17-25,33-41,49-57    2:1-9,17-25,33-41,49-57
                   vDisks:  vdisk2webs21:vdc           vdisk2webs21:vdc
                            vdisk1webs21:vdb           vdisk1webs21:vdb
                vNetworks:  backupnet:52:54:00:b2:ae:94  backupnet:52:54:00:b2:ae:94
                            pubnet:52:54:00:e9:26:18   pubnet:52:54:00:e9:26:18

[root@oda-dev-01 network-scripts]#

Connecting to the VM, we can see that there is now a second network interface card, eth1:

[root@vmserver-appli ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:e9:26:18 brd ff:ff:ff:ff:ff:ff
    inet 10.18.35.139/24 brd 10.18.35.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f7a:d225:d568:5afa/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:b2:ae:94 brd ff:ff:ff:ff:ff:ff
[root@vmserver-appli ~]#

Now you can configure this second network card according to your needs, for example along the following lines.
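
For a RHEL-style guest like this one, a minimal sketch could look like this (the IP address is just an example from the backupnet subnet; depending on the guest you may prefer nmcli or nmtui instead):

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.18.45.56
NETMASK=255.255.255.128

# bring the interface up
ifup eth1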

Conclusion

Hope this blog will help. One recommendation: always restart your VMs after such modifications (with odacli stop-vm and odacli start-vm, as shown below); this validates whether your changes are really persistent.
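
A minimal restart sequence, assuming the VM name used throughout this post:

odacli stop-vm -n vmserver-appli
odacli start-vm -n vmserver-appli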

The article ODA KVM Machines : Adding a Network Interface on a Virtual Machine appeared first on Blog dbi services.

The Contents of the Database Controlfile -- 3 : Recreating the Controlfile

Hemant K Chitale - Thu, 2022-01-20 03:57

In my previous posts, I have shown the Logical Structure and the Physical Contents of the Controlfile. It is clear that some of the sections (e.g. for "ARCHIVED LOG" entries) have many slots (or "records") not all of which are currently in use.

What happens if you recreate the controlfile? Since my first post, I have generated some more Backup Pieces and ArchiveLogs (and purged ArchiveLogs), so the RECORDS_USED and INDEX values of "BACKUP DATAFILE", "BACKUP PIECE" and "ARCHIVED LOG" in this listing are different from that first listing.



SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 383 383 198 197 1444 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 210 1 210 210 0
BACKUP PIECE 780 1006 160 1 160 160 0
BACKUP REDOLOG 76 215 215 51 50 265 0
BACKUP SET 96 1022 149 1 149 149 0
BACKUP SPFILE 124 131 33 1 33 33 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 4 1 4 4 0
DATAFILE 520 1024 34 0 0 1532 0
DATAFILE COPY 736 1000 3 1 3 3 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 818 14 13 831 0
FILENAME 524 4146 29 0 0 0 0
FLASHBACK LOG 84 2048 2 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 1 0 0 2 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 292 54 53 637 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 696 1 696 696 0
PDB RECORD 780 10 5 0 0 26 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 4 0 0 15 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 6 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 7 0
RMAN STATUS 116 141 141 98 97 379 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 75 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 7 0 0 36 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.
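
As an aside, if you want to see at a glance how full each section is, you can derive a simple usage percentage from the same view (plain SQL, nothing version specific):

select type, records_total, records_used,
       round(records_used*100/records_total,1) as pct_used
from v$controlfile_record_section
order by pct_used desc
/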

SQL>
SQL> alter database backup controlfile to trace as '/home/oracle/controlfile_creation.txt';

Database altered.

SQL>

SQL> !cat /home/oracle/controlfile_creation.txt
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
--
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
--
-- LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
--
-- DB_UNIQUE_NAME="ORCLCDB"
--
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE, NODG_CONFIG'
-- LOG_ARCHIVE_MAX_PROCESSES=4
-- STANDBY_FILE_MANAGEMENT=MANUAL
-- FAL_CLIENT=''
-- FAL_SERVER=''
--
-- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
-- LOG_ARCHIVE_DEST_1='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='REGISTER'
-- LOG_ARCHIVE_DEST_1='NOALTERNATE'
-- LOG_ARCHIVE_DEST_1='NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE

--
-- Below are two sets of SQL statements, each of which creates a new
-- control file and uses it to open the database. The first set opens
-- the database with the NORESETLOGS option and should be used only if
-- the current versions of all online logs are available. The second
-- set opens the database with the RESETLOGS option and should be used
-- if online logs are unavailable.
-- The appropriate set of statements can be copied from the trace into
-- a script file, edited as necessary, and executed when there is a
-- need to re-create the control file.
--
-- Set #1. NORESETLOGS case
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- Use this only if the current versions of all online logs are
-- available.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
-- STANDBY LOGFILE
-- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
DATAFILE
'/opt/oracle/oradata/ORCLCDB/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
CHARACTER SET AL32UTF8
;

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE

-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;

-- Database can now be opened normally.
ALTER DATABASE OPEN;

-- Open all the PDBs.
ALTER PLUGGABLE DATABASE ALL OPEN;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "PDB$SEED";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "ORCLPDB1";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
SIZE 52428800 REUSE AUTOEXTEND OFF;
ALTER SESSION SET CONTAINER = "RMANCAT";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "CDB$ROOT";
-- End of tempfile additions.
--
--
--
----------------------------------------------------------
-- The following script can be used on the standby database
-- to re-populate entries for a standby controlfile created
-- on the primary and copied to the standby site.
----------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log'
SIZE 200M BLOCKSIZE 512 REUSE;
-- Set #2. RESETLOGS case
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" RESETLOGS FORCE LOGGING ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
-- STANDBY LOGFILE
-- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
DATAFILE
'/opt/oracle/oradata/ORCLCDB/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
'/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
'/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
'/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
CHARACTER SET AL32UTF8
;

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- ALTER DATABASE REGISTER LOGFILE '/opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_1_%u_.arc';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE

-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;

-- Open all the PDBs.
ALTER PLUGGABLE DATABASE ALL OPEN;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "PDB$SEED";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "ORCLPDB1";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
SIZE 52428800 REUSE AUTOEXTEND OFF;
ALTER SESSION SET CONTAINER = "RMANCAT";
ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
ALTER SESSION SET CONTAINER = "CDB$ROOT";
-- End of tempfile additions.
--
--
--
----------------------------------------------------------
-- The following script can be used on the standby database
-- to re-populate entries for a standby controlfile created
-- on the primary and copied to the standby site.
----------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log'
SIZE 200M BLOCKSIZE 512 REUSE;

SQL>


The trace generated by the "alter database backup controlfile to trace ..." command contains the SQL statements that can be used to recreate the controlfile, along with a number of useful comments.
Essentially, there are two sets of SQL commands.

The first set, titled "Set #1", is for the NORESETLOGS scenario where all the Online Redo Log files are available.
The second set, titled "Set #2", is for a RESETLOGS scenario where Incomplete Recovery is required because not all the Online Redo Log files are available.

In my case, I have all my Online Redo Log files, so I do not lose any data and can use the commands from Set #1.
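
Before shutting the database down to run Set #1, it is worth verifying that every online log member really is present and healthy. A minimal sketch against v$log and v$logfile (run while the database is still open):

select l.group#, l.status log_status, f.member, f.status file_status
from   v$log l
join   v$logfile f on f.group# = l.group#
order by l.group#;

Any member reported INVALID, or missing from disk, would push you towards Set #2 instead.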

Recreating the Controlfile:


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

SQL> @Recreate_Controlfile.SQL
SQL> spool Recreate_Controlfile
SQL>
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area 1207955544 bytes
Fixed Size 9134168 bytes
Variable Size 671088640 bytes
Database Buffers 520093696 bytes
Redo Buffers 7639040 bytes
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
2 MAXLOGFILES 16
3 MAXLOGMEMBERS 3
4 MAXDATAFILES 1024
5 MAXINSTANCES 8
6 MAXLOGHISTORY 292
7 LOGFILE
8 GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
9 GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
10 GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
11 -- STANDBY LOGFILE
12 -- GROUP 4 '/opt/oracle/oradata/ORCLCDB/stdbredo01.log' SIZE 200M BLOCKSIZE 512
13 DATAFILE
14 '/opt/oracle/oradata/ORCLCDB/system01.dbf',
15 '/opt/oracle/oradata/ORCLCDB/sysaux01.dbf',
16 '/opt/oracle/oradata/ORCLCDB/undotbs01.dbf',
17 '/opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf',
18 '/opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf',
19 '/opt/oracle/oradata/ORCLCDB/users01.dbf',
20 '/opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf',
21 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf',
22 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf',
23 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf',
24 '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf',
25 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf',
26 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_system_j80dl8qy_.dbf',
27 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_sysaux_j80dl94j_.dbf',
28 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_undotbs1_j80dl96d_.dbf',
29 '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_rman_tbs_j80f07n8_.dbf',
30 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf',
31 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf',
32 '/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf'
33 CHARACTER SET AL32UTF8
34 ;

Control file created.

SQL>
SQL> -- Configure RMAN configuration record 1
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO BACKED UP 1 TIMES TO DISK');

PL/SQL procedure successfully completed.

SQL> -- Configure RMAN configuration record 2
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET');

PL/SQL procedure successfully completed.

SQL> -- Configure RMAN configuration record 3
SQL> VARIABLE RECNO NUMBER;
SQL> EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO REDUNDANCY 2');

PL/SQL procedure successfully completed.

SQL> -- Recovery is required if any of the datafiles are restored backups,
SQL> -- or if the last shutdown was not normal or immediate.
SQL> -- I will be doing a SHUTDOWN IMMEDIATE so I do not need a RECOVER DATABASE
SQL> -- RECOVER DATABASE
SQL>
SQL> -- All logs need archiving and a log switch is needed.
SQL> ALTER SYSTEM ARCHIVE LOG ALL;

System altered.

SQL>
SQL> -- Database can now be opened normally.
SQL> ALTER DATABASE OPEN;

Database altered.

SQL>
SQL> -- Open all the PDBs.
SQL> ALTER PLUGGABLE DATABASE ALL OPEN;

Pluggable database altered.

SQL>
SQL> -- Commands to add tempfiles to temporary tablespaces.
SQL> -- Online tempfiles have complete space information.
SQL> -- Other tempfiles may require adjustment.
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/temp01.dbf'
2 SIZE 138412032 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "PDB$SEED";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/pdbseed/temp012019-05-04_23-32-15-038-PM.dbf'
2 SIZE 76546048 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "ORCLPDB1";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp01.dbf'
2 SIZE 135266304 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/temp02.dbf'
2 SIZE 52428800 REUSE AUTOEXTEND OFF;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "RMANCAT";

Session altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/opt/oracle/oradata/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/datafile/o1_mf_temp_j80dl97t_.dbf'
2 SIZE 77594624 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Tablespace altered.

SQL> ALTER SESSION SET CONTAINER = "CDB$ROOT";

Session altered.

SQL> -- End of tempfile additions.
SQL>
SQL> spool off
SQL>
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 71
Next log sequence to archive 73
Current log sequence 73
SQL>


The ARCHIVE LOG LIST output shows that I did NOT have to do a RESETLOGS. The CREATE CONTROLFILE and ALTER DATABASE OPEN commands succeeded.
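
To double-check that the database is still in its original incarnation, v$database_incarnation can be queried as well (a quick sketch):

select incarnation#, resetlogs_change#, resetlogs_time, status
from   v$database_incarnation
order by incarnation#;

The row marked CURRENT should still show the original RESETLOGS_TIME.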

What do I now see in the Logical Entries in the Controlfile?


SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 308 3 1 3 3 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 0 0 0 0 0
BACKUP PIECE 780 1006 0 0 0 0 0
BACKUP REDOLOG 76 215 0 0 0 0 0
BACKUP SET 96 1022 0 0 0 0 0
BACKUP SPFILE 124 131 0 0 0 0 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 1024 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 4146 27 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 0 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 0 0 0 0 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>


Recreating the Controlfile resulted in the loss of information about all ArchivedLogs and RMAN Backups!
Compare the RECORDS_USED and INDEX values for the "ARCHIVED LOG" and "BACKUP %" entries against the listing obtained before recreating the Controlfile. (A few ArchivedLogs were generated by the "ALTER SYSTEM ARCHIVE LOG ALL" command in the script itself.)
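
To keep the before/after comparison short, a narrower query against the same view (a sketch restricted to the sections of interest) saves scanning all 42 rows by eye:

select type, records_used, last_recid
from   v$controlfile_record_section
where  type in ('ARCHIVED LOG','BACKUP SET','BACKUP PIECE','BACKUP DATAFILE','RMAN STATUS')
order by type;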

Can I add that information back?


RMAN> catalog start with '/opt/oracle/FRA/ORCLCDB';

using target database control file instead of recovery catalog
searching for all files that match the pattern /opt/oracle/FRA/ORCLCDB

List of Files Unknown to the Database
=====================================
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143716_jy7pz5dd_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094145522_jy7rqmov_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469527_jyko4r0n_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469675_jyko9dj3_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469696_jykob1pp_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094477941_jykxcrnx_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094481585_jyl0xoq0_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko09yl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko0b3k_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko6m5w_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko7djl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko98yc_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_ncnnf_TAG20220120T112134_jyko9zkl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxch2m_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxchwb_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko1qll_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko20ty_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3xgo_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3y0y_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33fq_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33wn_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2021_09_26/o1_mf_s_1084311179_jo0xnh8o_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143716_jy7pz5dd_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094143804_jy7q1xp1_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_16/o1_mf_s_1094145522_jy7rqmov_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469527_jyko4r0n_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469675_jyko9dj3_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094469696_jykob1pp_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094477941_jykxcrnx_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2022_01_20/o1_mf_s_1094481585_jyl0xoq0_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko09yl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko0b3k_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko6m5w_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko7djl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T111944_jyko98yc_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_ncnnf_TAG20220120T112134_jyko9zkl_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxch2m_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2022_01_20/o1_mf_annnn_TAG20220120T133854_jykxchwb_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q04z2_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_16/o1_mf_nnndf_TAG20220116T164908_jy7q08kj_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko1qll_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko20ty_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3xgo_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/88129263B99F4BBDE0530100007F7BDF/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko3y0y_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33fq_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/C07D1F2BA98C23D0E0530100007F7D34/backupset/2022_01_20/o1_mf_nnndf_TAG20220120T111624_jyko33wn_.bkp
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc
File Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

RMAN>
RMAN> list backup summary;


List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1 B F A DISK 26-SEP-21 1 1 NO TAG20210926T213259
2 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164836
3 B F A DISK 16-JAN-22 1 1 NO TAG20220116T165004
4 B F A DISK 16-JAN-22 1 1 NO TAG20220116T171842
5 B F A DISK 20-JAN-22 1 1 NO TAG20220120T111847
6 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112115
7 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112136
8 B F A DISK 20-JAN-22 1 1 NO TAG20220120T133901
9 B F A DISK 20-JAN-22 1 1 NO TAG20220120T143945
10 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
11 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
12 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
13 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
14 B A A DISK 20-JAN-22 1 1 YES TAG20220120T111944
15 B F A DISK 20-JAN-22 1 1 NO TAG20220120T112134
16 B A A DISK 20-JAN-22 1 1 YES TAG20220120T133854
17 B A A DISK 20-JAN-22 1 1 YES TAG20220120T133854
18 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164908
19 B F A DISK 16-JAN-22 1 1 NO TAG20220116T164908
20 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
21 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
22 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
23 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
24 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624
25 B F A DISK 20-JAN-22 1 1 YES TAG20220120T111624

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
4 1 60 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_60_jyl0zmoz_.arc

5 1 61 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_61_jyl10qd1_.arc

6 1 62 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_62_jyl11rbq_.arc

7 1 63 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_63_jyl12p73_.arc

8 1 64 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_64_jyl13l2w_.arc

9 1 65 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_65_jyl14s7l_.arc

10 1 66 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_66_jyl1qrjm_.arc

11 1 67 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_67_jyl81z9z_.arc

12 1 68 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_68_jyl861ll_.arc

13 1 69 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_69_jyl8l6mo_.arc

14 1 70 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jyl8p3d5_.arc

1 1 70 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_70_jylbnvgo_.arc

15 1 71 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jyl8szdf_.arc

2 1 71 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_71_jylbo521_.arc

3 1 72 A 20-JAN-22
Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/archivelog/2022_01_20/o1_mf_1_72_jylbofq3_.arc


RMAN>


Yes! RMAN can "add the information" back into the Controlfile using the CATALOG command (also see this earlier post on the CATALOG command), as long as you know where the Backups and ArchiveLogs ARE! (Another good reason to use a single FRA location!)
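
If you are scripting this rather than answering the prompt interactively, CATALOG accepts a NOPROMPT keyword, and a CROSSCHECK afterwards confirms that every cataloged piece really is still on disk. A minimal sketch:

RMAN> catalog start with '/opt/oracle/FRA/ORCLCDB' noprompt;
RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;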


SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 308 15 1 15 15 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 37 1 37 37 0
BACKUP PIECE 780 1006 26 1 26 26 0
BACKUP REDOLOG 76 215 112 1 112 112 0
BACKUP SET 96 1022 26 1 26 26 0
BACKUP SPFILE 124 131 10 1 10 10 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 1024 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 4146 28 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 292 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 16 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 8 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 3 1 3 3 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 1024 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 1024 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>
SQL> select dest_id, standby_dest, deleted, status, count(*)
2 from v$archived_log
3 group by dest_id, standby_dest, deleted, status
4 order by 1,2,3,4
5 /

DEST_ID STA DEL S COUNT(*)
---------- --- --- - ----------
1 NO NO A 15

SQL>


The RECORDS_USED for "ARCHIVED LOG", "BACKUP DATAFILE" and "BACKUP PIECE" are now properly populated again.
I would NOT advise that you periodically recreate your Controlfile.

But this is an available method to "reset" the controlfile. I could also take this opportunity to increase controlfile limits like MAXLOGFILES, MAXLOGMEMBERS and MAXDATAFILES (or even MAXINSTANCES!).


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> quit

SQL> @Recreate_Controlfile.SQL
SQL> spool Recreate_Controlfile
SQL>
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area 1207955544 bytes
Fixed Size 9134168 bytes
Variable Size 671088640 bytes
Database Buffers 520093696 bytes
Redo Buffers 7639040 bytes
SQL> CREATE CONTROLFILE REUSE DATABASE "ORCLCDB" NORESETLOGS FORCE LOGGING ARCHIVELOG
2 MAXLOGFILES 24
3 MAXLOGMEMBERS 4
4 MAXDATAFILES 2048
5 MAXINSTANCES 8
6 MAXLOGHISTORY 512
7 LOGFILE
8 GROUP 1 '/opt/oracle/oradata/ORCLCDB/redo01.log' SIZE 200M BLOCKSIZE 512,
9 GROUP 2 '/opt/oracle/oradata/ORCLCDB/redo02.log' SIZE 200M BLOCKSIZE 512,
10 GROUP 3 '/opt/oracle/oradata/ORCLCDB/redo03.log' SIZE 200M BLOCKSIZE 512
....
....
....


RMAN> catalog start with '/opt/oracle/FRA/ORCLCDB';
....
....
....



SQL> select *
2 from v$controlfile_record_section
3 order by type
4 /

TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX LAST_RECID CON_ID
---------------------------- ----------- ------------- ------------ ----------- ---------- ---------- ----------
ACM OPERATION 104 64 11 0 0 0 0
ARCHIVED LOG 584 532 18 1 18 18 0
AUXILIARY DATAFILE COPY 584 128 0 0 0 0 0
BACKUP CORRUPTION 44 1115 0 0 0 0 0
BACKUP DATAFILE 200 1063 37 1 37 37 0
BACKUP PIECE 780 1006 26 1 26 26 0
BACKUP REDOLOG 76 215 112 1 112 112 0
BACKUP SET 96 1022 26 1 26 26 0
BACKUP SPFILE 124 131 10 1 10 10 0
CKPT PROGRESS 8180 11 0 0 0 0 0
COPY CORRUPTION 40 1227 0 0 0 0 0
DATABASE 316 1 1 0 0 0 0
DATABASE BLOCK CORRUPTION 80 8384 0 0 0 0 0
DATABASE INCARNATION 56 292 2 1 2 2 0
DATAFILE 520 2048 34 0 0 39 0
DATAFILE COPY 736 1000 0 0 0 0 0
DATAFILE HISTORY 568 57 0 0 0 0 0
DELETED OBJECT 20 818 0 0 0 0 0
FILENAME 524 6242 27 0 0 0 0
FLASHBACK LOG 84 2048 0 0 0 0 0
FOREIGN ARCHIVED LOG 604 1002 0 0 0 0 0
GUARANTEED RESTORE POINT 256 2048 0 0 0 0 0
INSTANCE SPACE RESERVATION 28 1055 1 0 0 0 0
LOG HISTORY 56 584 1 1 1 1 0
MTTR 100 8 1 0 0 0 0
MULTI INSTANCE REDO APPLY 556 1 0 0 0 0 0
OFFLINE RANGE 200 1063 12 1 12 12 0
PDB RECORD 780 10 4 0 0 6 0
PDBINC RECORD 144 113 0 0 0 0 0
PROXY COPY 928 1004 0 0 0 0 0
RECOVERY DESTINATION 180 1 1 0 0 0 0
REDO LOG 72 24 3 0 0 0 0
REDO THREAD 256 8 1 0 0 0 0
REMOVABLE RECOVERY FILES 32 1000 9 0 0 0 0
RESTORE POINT 256 2108 0 0 0 0 0
RMAN CONFIGURATION 1108 50 3 0 0 3 0
RMAN STATUS 116 141 1 1 1 1 0
STANDBY DATABASE MATRIX 400 128 128 0 0 0 0
TABLESPACE 180 2048 22 0 0 7 0
TABLESPACE KEY HISTORY 108 151 0 0 0 0 0
TEMPORARY FILENAME 56 2048 5 0 0 8 0
THREAD INSTANCE NAME MAPPING 80 8 8 0 0 0 0

42 rows selected.

SQL>


Note how I have increased the RECORDS_TOTAL for "DATAFILE" from 1024 to 2048, for "REDO LOG" from 16 to 24, and for "LOG HISTORY" from 292 to 584.
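
A quick way to verify just those limits after the rebuild (a sketch against the same view):

select type, records_total
from   v$controlfile_record_section
where  type in ('DATAFILE','REDO LOG','LOG HISTORY');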


Categories: DBA Blogs

Getting explain plans out of Db2

Yann Neuhaus - Thu, 2022-01-20 03:03

The last post about Db2 was about getting it up and running. As the issue we had to solve was about performance, getting an explain plan for the problematic statement(s) was the obvious next step. In PostgreSQL you can just use EXPLAIN for that, and you're done. In Db2 the procedure is a bit different, but once you know the tools, it is quite easy as well.
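
For comparison, the PostgreSQL one-liner referred to above would look something like this (a sketch, assuming a table t similar to the one created below):

EXPLAIN SELECT count(*) FROM t WHERE tabname = 't';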

Let’s start by creating a new database:

db2@sles15db2:~> . sqllib/db2profile 
db2@sles15db2:~> db2
(c) Copyright IBM Corporation 1993,2007
Command Line Processor for DB2 Client 11.5.6.0

You can issue database manager commands and SQL statements from the command 
prompt. For example:
    db2 => connect to sample
    db2 => bind sample.bnd

For general help, type: ?.
For command help, type: ? command, where command can be
the first few keywords of a database manager command. For example:
 ? CATALOG DATABASE for help on the CATALOG DATABASE command
 ? CATALOG          for help on all of the CATALOG commands.

To exit db2 interactive mode, type QUIT at the command prompt. Outside 
interactive mode, all commands must be prefixed with 'db2'.
To list the current command option settings, type LIST COMMAND OPTIONS.

For more detailed help, refer to the Online Reference Manual.

db2 => create database db1
DB20000I  The CREATE DATABASE command completed successfully.

As we need something to explain, we’ll copy the “tables” catalog table without any data:

db2 => connect to db1

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.5.6.0
 SQL authorization ID   = DB2
 Local database alias   = DB1
db2 => create table t like syscat.tables
DB20000I  The SQL command completed successfully.
db2 => select count(*) from t;

1          
-----------
          0

  1 record(s) selected.

Generate some data:

db2 => insert into t select * from syscat.tables
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from syscat.tables
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => insert into t select * from t
DB20000I  The SQL command completed successfully.
db2 => select count(*) from t

1          
-----------
      14016

  1 record(s) selected.

The structure of the table is like this:

db2 => describe table t

                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
TABSCHEMA                       SYSIBM    VARCHAR                    128     0 No    
TABNAME                         SYSIBM    VARCHAR                    128     0 No    
OWNER                           SYSIBM    VARCHAR                    128     0 No    
OWNERTYPE                       SYSIBM    CHARACTER                    1     0 No    
TYPE                            SYSIBM    CHARACTER                    1     0 No    
STATUS                          SYSIBM    CHARACTER                    1     0 No    
BASE_TABSCHEMA                  SYSIBM    VARCHAR                    128     0 Yes   
BASE_TABNAME                    SYSIBM    VARCHAR                    128     0 Yes   
ROWTYPESCHEMA                   SYSIBM    VARCHAR                    128     0 Yes   
ROWTYPENAME                     SYSIBM    VARCHAR                    128     0 Yes   
CREATE_TIME                     SYSIBM    TIMESTAMP                   10     6 No    
ALTER_TIME                      SYSIBM    TIMESTAMP                   10     6 No    
INVALIDATE_TIME                 SYSIBM    TIMESTAMP                   10     6 No    
STATS_TIME                      SYSIBM    TIMESTAMP                   10     6 Yes   
COLCOUNT                        SYSIBM    SMALLINT                     2     0 No    
TABLEID                         SYSIBM    SMALLINT                     2     0 No    
TBSPACEID                       SYSIBM    SMALLINT                     2     0 No    
CARD                            SYSIBM    BIGINT                       8     0 No    
NPAGES                          SYSIBM    BIGINT                       8     0 No    
MPAGES                          SYSIBM    BIGINT                       8     0 No    
FPAGES                          SYSIBM    BIGINT                       8     0 No    
NPARTITIONS                     SYSIBM    BIGINT                       8     0 No    
NFILES                          SYSIBM    BIGINT                       8     0 No    
TABLESIZE                       SYSIBM    BIGINT                       8     0 No    
OVERFLOW                        SYSIBM    BIGINT                       8     0 No    
TBSPACE                         SYSIBM    VARCHAR                    128     0 Yes   
INDEX_TBSPACE                   SYSIBM    VARCHAR                    128     0 Yes   
LONG_TBSPACE                    SYSIBM    VARCHAR                    128     0 Yes   
PARENTS                         SYSIBM    SMALLINT                     2     0 Yes   
CHILDREN                        SYSIBM    SMALLINT                     2     0 Yes   
SELFREFS                        SYSIBM    SMALLINT                     2     0 Yes   
KEYCOLUMNS                      SYSIBM    SMALLINT                     2     0 Yes   
KEYINDEXID                      SYSIBM    SMALLINT                     2     0 Yes   
KEYUNIQUE                       SYSIBM    SMALLINT                     2     0 No    
CHECKCOUNT                      SYSIBM    SMALLINT                     2     0 No    
DATACAPTURE                     SYSIBM    CHARACTER                    1     0 No    
CONST_CHECKED                   SYSIBM    CHARACTER                   32     0 No    
PMAP_ID                         SYSIBM    SMALLINT                     2     0 Yes   
PARTITION_MODE                  SYSIBM    CHARACTER                    1     0 No    
LOG_ATTRIBUTE                   SYSIBM    CHARACTER                    1     0 No    
PCTFREE                         SYSIBM    SMALLINT                     2     0 No    
APPEND_MODE                     SYSIBM    CHARACTER                    1     0 No    
REFRESH                         SYSIBM    CHARACTER                    1     0 No    
REFRESH_TIME                    SYSIBM    TIMESTAMP                   10     6 Yes   
LOCKSIZE                        SYSIBM    CHARACTER                    1     0 No    
VOLATILE                        SYSIBM    CHARACTER                    1     0 No    
ROW_FORMAT                      SYSIBM    CHARACTER                    1     0 No    
PROPERTY                        SYSIBM    VARCHAR                     32     0 No    
STATISTICS_PROFILE              SYSIBM    CLOB                  10485760     0 Yes   
COMPRESSION                     SYSIBM    CHARACTER                    1     0 No    
ROWCOMPMODE                     SYSIBM    CHARACTER                    1     0 No    
ACCESS_MODE                     SYSIBM    CHARACTER                    1     0 No    
CLUSTERED                       SYSIBM    CHARACTER                    1     0 Yes   
ACTIVE_BLOCKS                   SYSIBM    BIGINT                       8     0 No    
DROPRULE                        SYSIBM    CHARACTER                    1     0 No    
MAXFREESPACESEARCH              SYSIBM    SMALLINT                     2     0 No    
AVGCOMPRESSEDROWSIZE            SYSIBM    SMALLINT                     2     0 No    
AVGROWCOMPRESSIONRATIO          SYSIBM    REAL                         4     0 No    
AVGROWSIZE                      SYSIBM    SMALLINT                     2     0 No    
PCTROWSCOMPRESSED               SYSIBM    REAL                         4     0 No    
LOGINDEXBUILD                   SYSIBM    VARCHAR                      3     0 Yes   
CODEPAGE                        SYSIBM    SMALLINT                     2     0 No    
COLLATIONSCHEMA                 SYSIBM    VARCHAR                    128     0 No    
COLLATIONNAME                   SYSIBM    VARCHAR                    128     0 Yes   
COLLATIONSCHEMA_ORDERBY         SYSIBM    VARCHAR                    128     0 No    
COLLATIONNAME_ORDERBY           SYSIBM    VARCHAR                    128     0 Yes   
ENCODING_SCHEME                 SYSIBM    CHARACTER                    1     0 No    
PCTPAGESSAVED                   SYSIBM    SMALLINT                     2     0 No    
LAST_REGEN_TIME                 SYSIBM    TIMESTAMP                   10     6 Yes   
SECPOLICYID                     SYSIBM    INTEGER                      4     0 No    
PROTECTIONGRANULARITY           SYSIBM    CHARACTER                    1     0 No    
AUDITPOLICYID                   SYSIBM    INTEGER                      4     0 Yes   
AUDITPOLICYNAME                 SYSIBM    VARCHAR                    128     0 Yes   
AUDITEXCEPTIONENABLED           SYSIBM    CHARACTER                    1     0 No    
DEFINER                         SYSIBM    VARCHAR                    128     0 No    
ONCOMMIT                        SYSIBM    CHARACTER                    1     0 No    
LOGGED                          SYSIBM    CHARACTER                    1     0 No    
ONROLLBACK                      SYSIBM    CHARACTER                    1     0 No    
LASTUSED                        SYSIBM    DATE                         4     0 No    
CONTROL                         SYSIBM    CHARACTER                    1     0 No    
TEMPORALTYPE                    SYSIBM    CHARACTER                    1     0 No    
TABLEORG                        SYSIBM    CHARACTER                    1     0 No    
EXTENDED_ROW_SIZE               SYSIBM    CHARACTER                    1     0 No    
PCTEXTENDEDROWS                 SYSIBM    REAL                         4     0 No    
REMARKS                         SYSIBM    VARCHAR                    254     0 Yes   

  85 record(s) selected.

If we create an index on the “tabname” column and later filter on that column we should get an index access:

db2 => create index i on t(tabname)
DB20000I  The SQL command completed successfully.
db2 => quit
DB20000I  The QUIT command completed successfully.

There are several options for creating explain plans in Db2; one of them is the CURRENT EXPLAIN MODE special register, so let's try that:

db2@sles15db2:~> echo "select count(*) from t where tabname='t';" > 1.sql
db2@sles15db2:~> db2 set current explain mode explain
DB20000I  The SQL command completed successfully.
db2@sles15db2:~> db2 -tvf  1.sql     
select count(*) from t where tabname='t'
SQL0219N  The required Explain table "DB2.EXPLAIN_INSTANCE" does not exist.  
SQLSTATE=42704

The error message is pretty clear: we first need to create the explain tables. There are two options for this, and we'll use the EXPLAIN.DDL script which comes with the Db2 installation:
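
For the record, the other option is the SYSPROC.SYSINSTALLOBJECTS procedure, which creates the explain tables without running the DDL script yourself. A sketch (the two NULLs take the default table space and schema):

db2 "CALL SYSPROC.SYSINSTALLOBJECTS('EXPLAIN', 'C', CAST(NULL AS VARCHAR(128)), CAST(NULL AS VARCHAR(128)))"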

db2@sles15db2:~> ls -l ./sqllib/misc/EXPLAIN.DDL
-r--r--r-- 1 db2 db2 48371 Jun 11  2021 ./sqllib/misc/EXPLAIN.DDL
db2@sles15db2:~> grep -i "create table" ./sqllib/misc/EXPLAIN.DDL
CREATE TABLE EXPLAIN_INSTANCE ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_STATEMENT ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_ARGUMENT ( EXPLAIN_REQUESTER   VARCHAR(128 OCTETS)  NOT NULL,
CREATE TABLE EXPLAIN_OBJECT ( EXPLAIN_REQUESTER    VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_OPERATOR ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_PREDICATE ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_STREAM ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_DIAGNOSTIC ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE EXPLAIN_DIAGNOSTIC_DATA ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,
CREATE TABLE OBJECT_METRICS ( EXECUTABLE_ID     VARCHAR(32 OCTETS) FOR BIT DATA NOT NULL,
CREATE TABLE ADVISE_INSTANCE (
CREATE TABLE ADVISE_INDEX(
CREATE TABLE ADVISE_WORKLOAD (
CREATE TABLE ADVISE_MQT (
CREATE TABLE ADVISE_PARTITION (
CREATE TABLE ADVISE_TABLE (
CREATE TABLE EXPLAIN_ACTUALS ( EXPLAIN_REQUESTER VARCHAR(128 OCTETS) NOT NULL,

Quite a few tables get created by this script. We'll not go into the details of which table contains what (that is out of scope for this post), but just execute it:

db2@sles15db2:~> db2 -tvf ./sqllib/misc/EXPLAIN.DDL

******* IMPORTANT ********** 

USAGE: db2 -tf EXPLAIN.DDL   

******* IMPORTANT ********** 


UPDATE COMMAND OPTIONS USING C OFF
DB20000I  The UPDATE COMMAND OPTIONS command completed successfully.
...
COMMIT WORK
DB20000I  The SQL command completed successfully.

All done, let's try again:

db2@sles15db2:~> db2 set current explain mode explain
DB20000I  The SQL command completed successfully.
db2@sles15db2:~> db2 -tvf  1.sql
select count(*) from t where tabname='t'
SQL0217W  The statement was not executed as only Explain information requests 
are being processed.  SQLSTATE=01604

Looks better, but how do we get the explain plan? One option is to use db2exfmt:

db2@sles15db2:~> db2exfmt -d DB1 -# 0 -w -1 -g TIC -n % -s % -o explain.txt
DB2 Universal Database Version 11.5, 5622-044 (c) Copyright IBM Corp. 1991, 2017
Licensed Material - Program Property of IBM
IBM DATABASE 2 Explain Table Format Tool

Connecting to the Database.
Connect to Database Successful.
Binding package - Bind was Successful
Output is in explain.txt.
Executing Connect Reset -- Connect Reset was Successful.
db2@sles15db2:~> db2 set current explain mode no
DB20000I  The SQL command completed successfully.

This generates the "explain.txt" file with lots of information. What we are looking for is the explain plan, and there is a graphical representation of it in the file, which is exactly what we need:

Access Plan:
-----------
        Total Cost:             6.77696
        Query Degree:           1


      Rows 
     RETURN
     (   1)
      Cost 
       I/O 
       |
        1 
     GRPBY 
     (   2)
     6.77689 
        1 
       |
     35.8378 
     IXSCAN
     (   3)
     6.77614 
        1 
       |
      14016 
 INDEX: DB2     
        I
       Q1

Another option is to use db2expln:

db2@sles15db2:~> db2expln -database db1 -statement "select count(*) from t where tabname = 't'" -terminal

DB2 Universal Database Version 11.5, 5622-044 (c) Copyright IBM Corp. 1991, 2017
Licensed Material - Program Property of IBM
IBM DB2 Universal Database SQL and XQUERY Explain Tool

DB2 Universal Database Version 11.5, 5622-044 (c) Copyright IBM Corp. 1991, 2017
Licensed Material - Program Property of IBM
IBM DB2 Universal Database SQL and XQUERY Explain Tool

******************** DYNAMIC ***************************************

==================== STATEMENT ==========================================

        Isolation Level          = Cursor Stability
        Blocking                 = Block Unambiguous Cursors
        Query Optimization Class = 5

        Partition Parallel       = No
        Intra-Partition Parallel = No

        SQL Path                 = "SYSIBM", "SYSFUN", "SYSPROC", "SYSIBMADM", 
                                   "DB2"


Statement:
  
  select count(*)
  from t 
  where tabname ='t' 


Section Code Page = 1208

Estimated Cost = 6.775021
Estimated Cardinality = 1.000000

Access Table Name = DB2.T  ID = 2,4
|  Index Scan:  Name = DB2.I  ID = 1
|  |  Regular Index (Not Clustered)
|  |  Index Columns:
|  |  |  1: TABNAME (Ascending)
|  #Columns = 0
|  Skip Inserted Rows
|  Avoid Locking Committed Data
|  Currently Committed for Cursor Stability
|  #Key Columns = 1
|  |  Start Key: Inclusive Value
|  |  |  1: 't' 
|  |  Stop Key: Inclusive Value
|  |  |  1: 't' 
|  Index-Only Access
|  Index Prefetch: Sequential(1), Readahead
|  Lock Intents
|  |  Table: Intent Share
|  |  Row  : Next Key Share
|  Sargable Index Predicate(s)
|  |  Predicate Aggregation
|  |  |  Column Function(s)
Aggregation Completion
|  Column Function(s)
Return Data to Application
|  #Columns = 1

End of section

Compared to db2exfmt this gives a much more compact output.
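
A third option, once the explain tables exist, is the EXPLAIN statement itself, which populates the tables directly without the special register; the result can then be formatted with db2exfmt as shown above. A sketch:

db2 "explain plan for select count(*) from t where tabname = 't'"
db2exfmt -d db1 -1 -o explain2.txt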

The post Getting explain plans out of Db2 appeared first on Blog dbi services.

Hierarchical Aggregation of Columns

Tom Kyte - Wed, 2022-01-19 21:26
Hello all, Greetings of the new year!

Goal - generate an output which can be directly used for pasting hierarchical data into a flat file.

Link to liveSQL with sample data and scripts - https://livesql.oracle.com/apex/livesql/s/mw9olr17o7dir21jysn5jni1u

Expected output:

-#HDR#- 1 - HEADER1 - HEADER2 - -#Line#- 1 - 1 - DETAIL11 - DETAIL12 ; -#Line#- 1 - 2 - DETAIL21 - DETAIL22 ;

Hierarchy example:

Purchase Order HEADER - Only 1
Purchase Order LINE - 1-many under 1 HEADER
Purchase Order Schedule - 1-many under 1 LINE
Purchase Order Distribution - 1-many under 1 SCHEDULE

SQL included in LiveSQL currently produces the following:

-#HDR#- 1 - HEADER1 - HEADER2 - -#Line#- 1 - 1 - DETAIL11 - DETAIL12 ; -#HDR#- 1 - HEADER1 - HEADER2 - -#Line#- 1 - 2 - DETAIL21 - DETAIL22 ;

Idea is to have a single column which shows the header and detail level information. Assumption - single level detail. However, data exists with 3 levels of detail. Request your inputs on generating expected output so that it can be extended to support multiple detail levels.

Thank you and stay safe! BR, Prashant ATMAN
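
One classic way to produce that kind of single-column output is a UNION ALL of the header and detail levels with the sort keys carried alongside. A hypothetical sketch (hdr and dtl stand in for the poster's header and line tables, not the actual LiveSQL schema):

select txt
from (
      select h_id, 0 l_id,
             '-#HDR#- ' || h_id || ' - ' || h1 || ' - ' || h2 || ' -' txt
      from   hdr
      union all
      select h_id, l_id,
             '-#Line#- ' || h_id || ' - ' || l_id || ' - ' || d1 || ' - ' || d2 || ' ;' txt
      from   dtl
     )
order by h_id, l_id;

Each additional detail level adds another UNION ALL branch and another sort-key column.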
Categories: DBA Blogs

VirtualBox 6.1.32

Tim Hall - Wed, 2022-01-19 07:38

VirtualBox 6.1.32 has been released. The downloads and changelog are in the usual places. I’ve installed it on Windows 11 and macOS Big Sur hosts with no dramas. Some time in the next 24 hours I’ll upload updated versions of my Oracle Linux 7 and Oracle Linux 8 vagrant boxes. https://app.vagrantup.com/oraclebase Cheers Tim…

The post VirtualBox 6.1.32 first appeared on The ORACLE-BASE Blog.

Hash Aggregation – 2

Jonathan Lewis - Wed, 2022-01-19 06:03

In the note I wrote a couple of days ago about the way the optimizer switches from hash group by to sort group by if you add an order by X,Y clause to a query that has a group by X,Y, I mentioned that this had been something I'd noticed about 15 years ago (shortly after Oracle introduced hash aggregation, in fact), but it was only the second of two details I'd noted when experimenting with this new operation. The first thing I'd noticed came from an even simpler example, and here's a (cryptic) clue to what it was:


column operation_type format a24

select
        operation_type, count(*) 
from
        V$sql_workarea 
group by 
        operation_type
order by 
        operation_type
;

OPERATION_TYPE             COUNT(*)
------------------------ ----------
APPROXIMATE NDV                   1
BUFFER                          130
CONNECT-BY (SORT)                10
GROUP BY (HASH)                  12
GROUP BY (SORT)                 240
HASH-JOIN                       466
IDX MAINTENANCE (SORT)           39
LOAD WRITE BUFFERS               10
RANGE TQ COMPUTATION             13
RESULT CACHE                      4
SORT (v1)                        10
SORT (v2)                       147
WINDOW (SORT)                    35


The clue isn’t in the query, it’s in what’s missing from the result set, so here’s some code to create and query some data to make it easier to spot the anomaly:

rem
rem     Script:         hash_agg.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sept 2007
rem

create table t1
as
with generator as (
        select  rownum id
        from    dual
        connect by
                level <= 1e4 -- > comment to avoid wordpress format issue
)
select
        lpad(mod(rownum-1,1000),6)      small_vc_K,
        lpad(rownum-1,6)                small_vc_M
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid wordpress format issue
;
    
set serveroutput off

prompt  ===========================
prompt  1000 distinct values (hash)
prompt  ===========================

select
        /*+ gather_plan_statistics 1000 */
        count(*)
from
        (
        select  /*+ no_merge */
                distinct small_vc_K
        from
                t1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

prompt  ===========================
prompt  1000 distinct values (sort)
prompt  ===========================

select
        /*+ gather_plan_statistics 1000 */
        count(*)
from
        (
        select  /*+ no_merge no_use_hash_aggregation */
                distinct small_vc_K
        from
                t1
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));


I’ve added the /*+ gather_plan_statistics */ hint to the query so that I can check on the rowsource execution stats and (particularly) the memory and/or temporary space used; and I’ve blocked hash aggregation in one of the two queries, so I expect to see a “hash unique” operation in the first query and a “sort unique” operation in the second. Here’s what I get from 19.11.0.0:


===========================
1000 distinct values (hash)
===========================

  COUNT(*)
----------
      1000

SQL_ID  1baadqgv02r6b, child number 0
-------------------------------------
select  /*+ gather_plan_statistics 1000 */  count(*) from  (  select
/*+ no_merge */   distinct small_vc_K  from   t1  )

Plan hash value: 171694178

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |   765 (100)|      1 |00:00:00.07 |    2637 |   2632 |       |       |          |
|   1 |  SORT AGGREGATE      |      |      1 |      1 |            |      1 |00:00:00.07 |    2637 |   2632 |       |       |          |
|   2 |   VIEW               |      |      1 |   1000 |   765  (56)|   1000 |00:00:00.07 |    2637 |   2632 |       |       |          |
|   3 |    HASH UNIQUE       |      |      1 |   1000 |   765  (56)|   1000 |00:00:00.07 |    2637 |   2632 |  1889K|  1889K| 1417K (0)|
|   4 |     TABLE ACCESS FULL| T1   |      1 |   1000K|   405  (17)|   1000K|00:00:00.04 |    2637 |   2632 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------


17 rows selected.

===========================
1000 distinct values (sort)
===========================

  COUNT(*)
----------
      1000

SQL_ID  a66rqhgw7a6dk, child number 0
-------------------------------------
select  /*+ gather_plan_statistics 1000 */  count(*) from  (  select
/*+ no_merge no_use_hash_aggregation */   distinct small_vc_K  from
t1  )

Plan hash value: 1750119335

-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |   765 (100)|      1 |00:00:00.22 |    2637 |       |       |          |
|   1 |  SORT AGGREGATE      |      |      1 |      1 |            |      1 |00:00:00.22 |    2637 |       |       |          |
|   2 |   VIEW               |      |      1 |   1000 |   765  (56)|   1000 |00:00:00.22 |    2637 |       |       |          |
|   3 |    SORT UNIQUE       |      |      1 |   1000 |   765  (56)|   1000 |00:00:00.22 |    2637 | 48128 | 48128 |43008  (0)|
|   4 |     TABLE ACCESS FULL| T1   |      1 |   1000K|   405  (17)|   1000K|00:00:00.02 |    2637 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------


Look back at the summary of v$sql_workarea. Can you now spot something that might be missing? Since we’ve now got two execution plans with their sql_ids, let’s run a much more precise query against the view.

select
        sql_id, child_number, operation_id, operation_type , 
        total_executions, last_memory_used/1024 last_mem_kb
from
        v$sql_workarea
where
        sql_id in (
                '1baadqgv02r6b',
                'a66rqhgw7a6dk'
)
order by
        sql_id, child_number, operation_id
/

SQL_ID        CHILD_NUMBER OPERATION_ID OPERATION_TYPE       TOTAL_EXECUTIONS LAST_MEM_KB
------------- ------------ ------------ -------------------- ---------------- -----------
1baadqgv02r6b            0            3 GROUP BY (HASH)                     1       1,417
a66rqhgw7a6dk            0            3 GROUP BY (SORT)                     1          42

The first thing to note, of course, is that the sort aggregate at operation 1 doesn’t report a workarea at all; but this shouldn’t be a surprise: the operation is simply counting rows as they arrive, so there’s no real sorting going on.

Operation 3 in both cases is the more interesting one. In the plan it’s reported as a “hash/sort unique” but the workarea operation has changed this to a “group by (hash/sort)”.  It’s a little detail that probably won’t matter to most people most of the time – but it’s the sort of thing that can waste a couple of hours of time when you’re trying to analyze a performance oddity.

And another thing …

You’ll have noticed, by the way, that the hash unique operation actually demanded far more memory than the sort unique operation; 1.4MB compared to 42KB. In a large production system this may be fairly irrelevant, especially since the discrepancy tends to disappear for higher volumes, and hardly matters if the operation spills to disc. However, if you have a large number of processes doing a lot of small queries using hash aggregation you may find that cumulatively they use up an unexpected fraction of your pga_aggregate_target.
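If you want to get a feel for how much memory hash aggregation is consuming on your system right now, one simple option is to sum the active workareas by type. This is just a sketch against v$sql_workarea_active (so it reports only the workareas that happen to be active at the moment you run it, and it assumes you can query the v$ views):

column operation_type format a24

select
        operation_type,
        count(*)                                workareas,
        round(sum(actual_mem_used)/1024)        total_mem_kb
from
        v$sql_workarea_active
group by
        operation_type
order by
        total_mem_kb desc
;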

There is some scope (with the usual “confirm with Oracle support” caveat) for modifying this behaviour with a fix_control:

16792882    QKSFM_COMPILATION_16792882  Cardinality threshold for hash aggregation(raised to power of 10)

alter session set "_fix_control"='16792882:3';

The default value is 0 and the maximum value seems to be 3, and the description about raising to the power of 10 seems to be the wrong way round, but I haven’t done any careful testing. When I set the value to 3 the hash unique with an estimated output of 1,000 switched to a sort unique (hence my suggestion about 10^N rather than N^10).
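If you want to check the value currently in effect for your session before (or after) experimenting, a query like the following sketch against v$session_fix_control should do it (sys_context('userenv','sid') picks out your own session):

select
        bugno, value, is_default, description
from
        v$session_fix_control
where
        bugno = 16792882
and     session_id = sys_context('userenv','sid')
;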

Footnote

In the previous note I pointed out that Oracle would use a single sort group by operation rather than a consecutive hash group by / sort order by pair of operations. This simpler example helps to explain why. If you look at the CBO trace files for the two queries you can compare the costs of the inline aggregate view.

From a 19.11.0.0 trace file here are two costing fragments for the hash group by strategy; the first bit is the cost of the tablescan that acquires the rows (operation 4), the second is the final cost of the inline view / query block:


  Best:: AccessPath: TableScan
         Cost: 404.639881  Degree: 1  Resp: 404.639881  Card: 1000000.000000  Bytes: 0.000000


Final cost for query block SEL$2 (#2) - All Rows Plan:
  Best join order: 1
  Cost: 764.843155  Degree: 1  Card: 1000000.000000  Bytes: 7000000.000000
  Resc: 764.843155  Resc_io: 337.000000  Resc_cpu: 1069607888
  Resp: 764.843155  Resp_io: 337.000000  Resc_cpu: 1069607888


I won’t show you the equivalent extracts for the example where I blocked hash aggregation because the numbers are identical. So there’s no way that Oracle will want to do a hash group by followed by a sort order by when it can do just a single sort group by that costs exactly the same as the hash group by operation alone. This is a shame, and a little ironic, because when Oracle Corp. introduced hash aggregation they made a bit of a fuss about how much more efficient it was than sorting, but it looks like no-one told the optimizer about this.

Why the Oak Table Was So Great

Cary Millsap - Tue, 2022-01-18 17:57

This weekend, I watched a wonderful TEDx video by Barbara Sher, called “Isolation Is the Dream-Killer, Not Your Attitude.” Please watch this video. It’s 21 minutes, 18 seconds long.

It reminded me about what was so great about the Oak Table. That’s right: was. It’s not anymore.

Here’s what it was. People I admired, trusted, and liked would gather at a home in Denmark owned by a man named Mogens Nørgaard. Mogens is the kindest and most generous host I have ever encountered. He would give his whole home—every inch—to keep as many of us as he could, for a week, once or twice a year. Twenty, maybe thirty of us. We ate, drank, and slept, all for free, as much and for as long as we wanted. 

And the “us” in that sentence was no normal, regular, everyday “us.” It was Tom Kyte, Lex de Haan, Anjo Kolk, Jonathan Lewis, Graham Wood, Tanel Põder, Toon Koppelaars, Chris Antognini, Steve Adams, Stephan Haisley, James Morle, John Beresniewicz, Jože Senegačnik, Bryn Llewellyn, Tuomas Pystynen, Andy Zitelli, Johannes Djernæs, Michael Möller, Dan Norris, Carel Jan-Engel, Pete Sharman, Tim Gorman, Kellyn Pot'Vin, Alex Gorbachev, Frits Hoogland, Karen Morton, Robyn Sands, Greg Rahn, and—my goodness—I’m leaving out even more people than I’m listing.

We spent a huge amount of our time sitting together at Mogens’s big oak table, which was big enough for about eight people. Or, in actuality, about twice that. We’d just work. And talk. If there wasn’t a meal on the table, then it would be filled with laptops and power cords covering every square inch. Oops, I mean millimeter. That table had millimeters.

And here’s what was so great about the Oak Table: you could say what you wanted—whatever it was!—and you could have it. You could just say your dream and your obstacle, and someone around the table would know how to make your dream come true.

It’s tricky even trying to remember good examples of people’s dreams, because I’m so far removed from it now. Some of them were nerdy things like, “I wonder how long an Oracle PARSE call would take if we did a 256-table join?” You’d hear, “Hmm, interesting. I think I have a test for that,” and then the next thing you know, Jonathan Lewis would be working on your problem. Or, “Hey, does anyone know how to do such-and-such in vim?” And Johannes Djernæs or Michael Möller would show you how easy it was.

I got into a career-saving conversation late one night with Robyn Sands. She had asked, “Is anybody else having trouble finding good PL/SQL developers? I can’t figure out where they are, if there even are any. Are there?” We talked for a while about why they were so scarce, and then I connected the dots that, hey, I have two superb PL/SQL developers at home on the bench, and I had been desperately trying to find them good work. The story that Robyn and I started over beers at about 3:00am resulted in a superb consumer femtocell device for Robyn and a year’s worth of much-needed revenue for my tiny little team.

It was a world where you could have anything you want. Better yet, it was a world where you could dream properly. Today, in isolation, it’s hard to even dream right. After nearly two years of being locked away, I can barely conceive of a world that’s plentiful and joyous like those Oak Table years. I feel much smaller now. (Oh, and it wasn’t COVID-19 that killed that Oak Table experience. It died years before that—but, obviously, it’s a factor today.)

I want it back. I want my friends back. How are we going to do this?

What kind of countertop should you get for your kitchen? 

OraQA - Tue, 2022-01-18 09:22

Remodeling your dream home can be quite a task. You have so much to change in the home, so we thought we would extend a little helping hand. 

The kitchen is a space that sees a lot of foot traffic, heat, scratches, and water. You need some material that withstands all of this. You need a material that’s robust, durable, and good-looking. 

Most homeowners fall into the trap of buying a good-looking countertop and end up paying too much for it. Does the look guarantee durability? They don’t bother to check. 

What about you? Do you wish to make the same mistakes? If not, then keep reading this post. We will shed light on the kind of countertop you should choose for your kitchen. 

#Marble 

Marble is an exquisite material. It is pleasing to the eyes and has a luxurious feel as well. 

You will find that a lot of people save up to invest in marble countertops. They say that marble is naturally heat-resistant, but it is not exactly scratch-resistant. Don’t commit the mistake of chopping directly on marble. 

Marble is expensive; thus, you need to take good care of the material. If you are determined to take good care of it, feel free to buy it from one of the reputed stores. 

#Granite 

Granite is durable, good-looking, and popular. You will find many people investing in granite countertops. The interesting thing about this material is that it’s available in both light and dark colors. 

Moreover, you can get them in a variety of shapes too. Don’t miss the variations and flecks on this material. 

#Quartz 

Quartz happens to be a very durable and beautiful material. If you have lots of family members and enjoy cooking, consider this material as a kitchen countertop. It is very durable, so it can withstand heat, water, and stains. 

Stain removal is easy: all it takes is soap, water, and a quick wipe. You can’t say the same thing about marble. 

Quartz is also available in many variations. Because marble is expensive, manufacturers produce engineered stone alternatives, and quartz is one of them. 

Some of the types look like marble. 

Concluding Thoughts 

Do you want granite, marble, or a quartz countertop in your kitchen? If yes, check out the countertops from Granite au Sommet. They have a wonderful collection! 

You can compare the texture, colors, and durability on your own. Everyone has a different opinion and choice, so feel free to choose the material you like and can afford.

The post What kind of countertop should you get for your kitchen?  appeared first on ORA QA.

How can I register my SMTP service in Oracle Apex?

Tom Kyte - Tue, 2022-01-18 08:46
In the company where I work we have JD Edwards and Orchestrator Studio, but we are using APEX for fast applications. The problem is that we have not managed to use the JD Edwards SMTP in APEX, nor an external one, to test that the apex_mail.send() function works. So far we have not succeeded: we watched videos and followed the steps, and the documentation offered by Oracle is not entirely clear, even confusing, about what you have to do.

Another thing we tried was to create a service in Orchestrator Studio, and it works with JS; the drawback is that the code runs on the client side and not the server. The alternative was to use UTL_HTTP or apex_web_service.make_rest_request(), and neither worked: as I read, UTL_HTTP has some limits with APEX, and apex_web_service.make_rest_request() blocks the service because the URL is not secure.

I have read most of the forum and still cannot get APEX to send mails, and it really depends on a database developer to do that; I have been using this tool for a very short time. Could you help me with the steps I must follow to be able to send mails with APEX, both in the Cloud and on my local machine (localhost)?

Sources consulted:
https://docs.oracle.com/en/database/oracle/application-express/21.1/aeapi/Configuring-Oracle-Application-Express-to-Send-Email.html#GUID-596E11FE-9289-4238-A3CA-D33F508E40F7
https://stackoverflow.com/questions/65528885/invoke-a-rest-api-json-in-plsql
https://www.youtube.com/watch?v=sI37Be2EZhk
https://technology.amis.nl/database/invoke-a-rest-service-from-plsql-make-an-http-post-request-using-utl_http-in-oracle-database-11g-xe/
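For what it's worth, a minimal smoke test of apex_mail might look like the sketch below. It assumes the instance-level SMTP settings have already been configured by an administrator; the workspace name and addresses are placeholders; and when it is run from SQL*Plus (rather than from inside an APEX session) a workspace context has to be established first:

begin
        -- needed when calling from outside an APEX session (e.g. SQL*Plus);
        -- 'MY_WORKSPACE' is a placeholder
        apex_util.set_security_group_id(
                apex_util.find_security_group_id(p_workspace => 'MY_WORKSPACE'));

        apex_mail.send(
                p_to   => 'recipient@example.com',      -- placeholder address
                p_from => 'sender@example.com',         -- placeholder address
                p_subj => 'APEX mail test',
                p_body => 'If this arrives, the SMTP configuration works.');

        apex_mail.push_queue;   -- push the mail queue instead of waiting for the background job
end;
/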
Categories: DBA Blogs

Performance issues in production

Tom Kyte - Tue, 2022-01-18 08:46
Hi Tom,

We are supporting a Siebel application with Oracle 11g as the backend. Last month we upgraded the database to 19c and since then we have been facing performance issues on a daily basis. When we checked DB reports like AWR and ASH, I had the following observations:

a) In the AWR report random queries come out on top with no consistency, so I am assuming the issue is not related to a particular SQL.
b) In the DB reports the top event is "gc buffer busy acquire" most of the time, and this event is related to an insert statement.
c) All top SQLs visible in the AWR/ASH reports have multiple plans. Below is the snapshot for one of the SQLs:

   SNAP_ID   NODE BEGIN_INTERVAL_TIME            SQL_ID        PLAN_HASH_VALUE        EXECS    AVG_ETIME        AVG_LIO
---------- ------ ------------------------------ ------------- --------------- ------------ ------------ --------------
      2459      2 12-JAN-22 06.00.34.887         4hdhq06bahcnq       912142910          207       17.761             .0
      2460      1 12-JAN-22 06.30.10.663         4hdhq06bahcnq       728991164          293         .024        1,376.6
      2460      2 12-JAN-22 06.30.10.670         4hdhq06bahcnq       912142910          107       95.319    2,889,181.3
      2461      1 12-JAN-22 07.00.04.996         4hdhq06bahcnq       728991164          439         .035        1,251.3
      2461      2 12-JAN-22 07.00.04.990         4hdhq06bahcnq       912142910          149       66.469    2,106,097.7

My questions are:

1. To investigate the "gc buffer busy acquire" event, what should be the sequence of investigation?
2. Since most of the SQLs at the top of the DB reports are generating multiple plans, do we have any parameters in 19c which could be responsible for so many plans?

Thanks
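As a starting point for observation (c), a sketch like the one below (it reads the AWR history, so it assumes a Diagnostics Pack licence) lists the statements that have used more than one plan across the snapshot range retained in AWR:

select
        sql_id,
        count(distinct plan_hash_value)  plan_count
from
        dba_hist_sqlstat
group by
        sql_id
having
        count(distinct plan_hash_value) > 1
order by
        plan_count desc
;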
Categories: DBA Blogs

Library cache lock in wait class Concurrency

Tom Kyte - Tue, 2022-01-18 08:46
In the production application we randomly notice a hiccup (slowness) in the database. The AWR shows it as waiting for event "library cache lock" in wait class "Concurrency". The respective SQL statement is one of the most frequently run statements in the application (15,000 executions per minute). The SQL is a simple read on the primary key of a table which is cached in a logical memory; the table holds just 40,000 records and doesn't grow at runtime.

During the regular season the same SQL statement's elapsed time is in milliseconds, but during the hiccup (slowness) the same SQL statement takes minutes to return the results. The other important thing to note is that the issue most often starts at either the 0th, 30th or 45th minute of an hour and resolves on its own within a couple of minutes; however, sometimes the issue lasts more than an hour before it resolves. We would appreciate it if you could help on what could be the reason the SQL performs slower for a few minutes at random.
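One way to start narrowing this down is to ask the ASH data who was blocking the waiters during a hiccup. The following is only a sketch (it assumes a Diagnostics Pack licence, and looks back one hour):

select
        sql_id,
        blocking_session,
        blocking_session_serial#,
        count(*)        samples
from
        v$active_session_history
where
        event = 'library cache lock'
and     sample_time > sysdate - 1/24
group by
        sql_id, blocking_session, blocking_session_serial#
order by
        samples desc
;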
Categories: DBA Blogs

Core Primary key concepts not mentioned in my Oracle training classes

Tom Kyte - Tue, 2022-01-18 08:46
Good Morning,

Is there some underlying rule or DBA agreement that every table should have a primary key? To me it doesn't make logical sense that every table we define needs one. Just recently I came across a primary key made up of two columns, each having about 60% unique values, in a 1.5 million row table. The optimizer chooses to perform a full table scan when both those columns are used in the "where" clause, since the primary key isn't very selective. This table doesn't even have a child table. My assumption is that a primary key is only needed when the table has a child table.

On the other side, if a large, frequently accessed table doesn't have any child tables and doesn't have any unique column, should we create a column based off the value of a sequence and add a unique index (instead of a primary key) on it? I would assume that isn't needed in all cases. An example where it would be needed is the case where the application inserts a row and then plans to access it within the same session.

In summary:
- Is there some underlying rule or DBA agreement that every table should have a primary key?
- Is my assumption correct that a primary key is only needed when the table has a child table?
- If a large, frequently accessed table doesn't have any child tables and doesn't have a unique column, should we create a column based off the value of a sequence and add a unique index (instead of a primary key) on it?

Thanks,
John
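For the mechanics of the last option, on 12c and later an identity column avoids managing a sequence by hand. A sketch (the table name is hypothetical, and note that adding the column to a populated table updates every existing row, which can be slow and disruptive on a large table):

alter table big_table add (
        id      number generated always as identity
);

create unique index big_table_id_uk on big_table(id);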
Categories: DBA Blogs

The contents of the Database Controlfile -- 2 : Physical Structure

Hemant K Chitale - Tue, 2022-01-18 08:41

 In my previous post, I had demonstrated how you can view the "count of records" in different sections of the controlfile.  However, you cannot actually view the data in those records.

You can, however, do a Physical Dump of the controlfile:



SQL> select name, block_size*file_size_blks/1024
2 from v$controlfile;

NAME
------------------------------------------------------------------------------------------------------------------------------------
BLOCK_SIZE*FILE_SIZE_BLKS/1024
------------------------------
/opt/oracle/oradata/ORCLCDB/control01.ctl
18688

/opt/oracle/oradata/ORCLCDB/control02.ctl
18688


SQL> !ls -l /opt/oracle/oradata/ORCLCDB/control01.ctl
-rw-r-----. 1 oracle oinstall 19152896 Jan 18 22:15 /opt/oracle/oradata/ORCLCDB/control01.ctl

SQL> !bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
19152896/1024
18704
quit

SQL>
SQL> alter session set tracefile_identifier='my_cntrlfile_dump';

Session altered.

SQL>
SQL> select *
2 from v$diag_info
3 where name = 'Default Trace File'
4 /

INST_ID NAME
---------- ----------------------------------------------------------------
VALUE
------------------------------------------------------------------------------------------------------------------------------------
CON_ID
----------
1 Default Trace File
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_3207_my_cntrlfile_dump.trc
0


SQL>
SQL> alter session set events 'immediate trace name controlf level 10';

Session altered.

SQL> !ls -l /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_3207_my_cntrlfile_dump.trc
-rw-r-----. 1 oracle oinstall 1449760 Jan 18 22:17 /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_3207_my_cntrlfile_dump.trc

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.12.0.0.0
oracle19c>view /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_3207_my_cntrlfile_dump.trc


The "alter session set events 'immediate trace name controlf level 10'" command does a Physical Dump of the trace file.

I can't reproduce the whole file here, but am presenting the interesting bits.


DUMP OF CONTROL FILES, Seq # 1036136918 = 0x3dc231d6
V10 STYLE FILE HEADER:
Compatibility Vsn = 318767104=0x13000000
Db ID=2778483057=0xa59c4971, Db Name='ORCLCDB'
Activation ID=0=0x0
Control Seq=1036136918=0x3dc231d6, File size=1168=0x490
File Number=0, Blksiz=16384, File Type=1 CONTROL
Logical block number 1 (header block)
Dump of memory from 0x00007F02655C9000 to 0x00007F02655CD000
7F02655C9000 0000C215 00000001 00000000 04010000 [................]
7F02655C9010 0000791A 00000000 13000000 A59C4971 [.y..........qI..]
7F02655C9020 4C43524F 00424443 3DC231D6 00000490 [ORCLCDB..1.=....]
7F02655C9030 00004000 00010000 00000000 00000000 [.@..............]
7F02655C9040 32474154 30313230 54363239 33333132 [TAG20210926T2133]
7F02655C9050 00003230 00000000 00000000 00000000 [02..............]
7F02655C9060 AACA5B83 3C0C08F2 01025B25 00008000 [.[.....<%[......]
7F02655C9070 413A3EE4 00000000 00000000 00000000 [.>:A............]
7F02655C9080 00000000 00000000 00000000 00000000 [................]
Repeat 7 times
7F02655C9100 00000000 00000000 00000008 00000008 [................]
7F02655C9110 00000008 00000000 00000000 00000000 [................]
7F02655C9120 00000001 00000000 00000000 00000000 [................]
7F02655C9130 00000000 00000000 00000000 00000000 [................]
Repeat 1003 times
7F02655CCFF0 00000000 00000000 00000000 00001501 [................]
... nothing interesting
... except that, funnily, some RMAN Backup TAG is visible !


Logical block number 44
Dump of memory from 0x00007F02655AD000 to 0x00007F02655B1000
7F02655AD000 0000C215 00000059 3DC230EF 0401FFFF [....Y....0.=....]
7F02655AD010 0000461E 00030003 00000000 6F2F0000 [.F............/o]
7F02655AD020 6F2F7470 6C636172 726F2F65 74616461 [pt/oracle/oradat]
7F02655AD030 524F2F61 44434C43 65722F42 33306F64 [a/ORCLCDB/redo03]
7F02655AD040 676F6C2E 00000000 00000000 00000000 [.log............]
7F02655AD050 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AD220 00020003 00000000 6F2F0000 6F2F7470 [........../opt/o]
7F02655AD230 6C636172 726F2F65 74616461 524F2F61 [racle/oradata/OR]
7F02655AD240 44434C43 65722F42 32306F64 676F6C2E [CLCDB/redo02.log]
7F02655AD250 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AD420 00000000 00000000 00000000 00010003 [................]
7F02655AD430 00000000 6F2F0000 6F2F7470 6C636172 [....../opt/oracl]
7F02655AD440 726F2F65 74616461 524F2F61 44434C43 [e/oradata/ORCLCD]
7F02655AD450 65722F42 31306F64 676F6C2E 00000000 [B/redo01.log....]
7F02655AD460 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AD630 00000000 00000000 00070004 00000000 [................]
7F02655AD640 6F2F0000 6F2F7470 6C636172 726F2F65 [../opt/oracle/or]
7F02655AD650 74616461 524F2F61 44434C43 73752F42 [adata/ORCLCDB/us]
7F02655AD660 30737265 62642E31 00000066 00000000 [ers01.dbf.......]
7F02655AD670 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AD840 00000000 00040004 00000000 6F2F0000 [............../o]
7F02655AD850 6F2F7470 6C636172 726F2F65 74616461 [pt/oracle/oradat]
7F02655AD860 524F2F61 44434C43 6E752F42 62746F64 [a/ORCLCDB/undotb]
7F02655AD870 2E313073 00666264 00000000 00000000 [s01.dbf.........]
7F02655AD880 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655ADA50 00010004 00000000 6F2F0000 6F2F7470 [........../opt/o]
7F02655ADA60 6C636172 726F2F65 74616461 524F2F61 [racle/oradata/OR]
7F02655ADA70 44434C43 79732F42 6D657473 642E3130 [CLCDB/system01.d]
7F02655ADA80 00006662 00000000 00000000 00000000 [bf..............]
7F02655ADA90 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F02655ADC50 00000000 00000000 00000000 00030004 [................]
7F02655ADC60 00000000 6F2F0000 6F2F7470 6C636172 [....../opt/oracl]
7F02655ADC70 726F2F65 74616461 524F2F61 44434C43 [e/oradata/ORCLCD]
7F02655ADC80 79732F42 78756173 642E3130 00006662 [B/sysaux01.dbf..]
7F02655ADC90 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655ADE60 00000000 00000000 00010007 00000000 [................]
7F02655ADE70 6F2F0000 6F2F7470 6C636172 726F2F65 [../opt/oracle/or]
7F02655ADE80 74616461 524F2F61 44434C43 65742F42 [adata/ORCLCDB/te]
7F02655ADE90 3130706D 6662642E 00000000 00000000 [mp01.dbf........]
7F02655ADEA0 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AE070 00000000 00050004 00000000 6F2F0000 [............../o]
7F02655AE080 6F2F7470 6C636172 726F2F65 74616461 [pt/oracle/oradat]
7F02655AE090 524F2F61 44434C43 64702F42 65657362 [a/ORCLCDB/pdbsee]
7F02655AE0A0 79732F64 6D657473 642E3130 00006662 [d/system01.dbf..]
7F02655AE0B0 00000000 00000000 00000000 00000000 [................]
Repeat 28 times
7F02655AE280 00060004 00000000 6F2F0000 6F2F7470 [........../opt/o]
7F02655AE290 6C636172 726F2F65 74616461 524F2F61 [racle/oradata/OR]
7F02655AE2A0 44434C43 64702F42 65657362 79732F64 [CLCDB/pdbseed/sy]
7F02655AE2B0 78756173 642E3130 00006662 00000000 [saux01.dbf......]
7F02655AE2C0 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F02655AE480 00000000 00000000 00000000 00080004 [................]
7F02655AE490 00000000 6F2F0000 6F2F7470 6C636172 [....../opt/oracl]
7F02655AE4A0 726F2F65 74616461 524F2F61 44434C43 [e/oradata/ORCLCD]
7F02655AE4B0 64702F42 65657362 6E752F64 62746F64 [B/pdbseed/undotb]
7F02655AE4C0 2E313073 00666264 00000000 00000000 [s01.dbf.........]
7F02655AE4D0 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
.... and so on ..
... So, in the above section we see the physical datafiles of the database.




7F02655B0760 6F000000 6F2F7470 6C636172 72702F65 [...opt/oracle/pr]
7F02655B0770 6375646F 39312F74 62642F63 656D6F68 [oduct/19c/dbhome]
7F02655B0780 642F315F 732F7362 6370616E 524F5F66 [_1/dbs/snapcf_OR]
7F02655B0790 44434C43 00662E42 00000000 00000000 [CLCDB.f.........]
7F02655B07A0 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F02655B0960 00000000 00000000 00000000 6F000000 [...............o]
7F02655B0970 6F2F7470 6C636172 72702F65 6375646F [pt/oracle/produc]
7F02655B0980 39312F74 62642F63 656D6F68 642F315F [t/19c/dbhome_1/d]
7F02655B0990 732F7362 6370616E 524F5F66 44434C43 [bs/snapcf_ORCLCD]
7F02655B09A0 00662E42 00000000 00000000 00000000 [B.f.............]
7F02655B09B0 00000000 00000000 00000000 00000000 [................]
... here we see the Snapshot Controlfile



Logical block number 177
Dump of memory from 0x00007F02655C1000 to 0x00007F02655C5000
7F02655C1000 0000C215 00000163 3DC230C2 0401FFFF [....c....0.=....]
7F02655C1010 0000993A 00000001 59530006 58554153 [:.........SYSAUX]
7F02655C1020 00000000 00000000 00000000 00000000 [................]
Repeat 8 times
7F02655C10B0 00000000 00010003 00000000 00000000 [................]
7F02655C10C0 00000000 00000000 00000000 59530006 [..............SY]
7F02655C10D0 4D455453 00000000 00000000 00000000 [STEM............]
7F02655C10E0 00000000 00000000 00000000 00000000 [................]
Repeat 7 times
7F02655C1160 00000000 00000000 00010001 00000000 [................]
7F02655C1170 00000000 00000000 00000000 00000002 [................]
7F02655C1180 4E550008 42544F44 00003153 00000000 [..UNDOTBS1......]
7F02655C1190 00000000 00000000 00000000 00000000 [................]
Repeat 7 times
7F02655C1210 00000000 00000000 00000000 00010004 [................]
7F02655C1220 00000000 00000000 00000000 00000000 [................]
7F02655C1230 00000004 53550005 00535245 00000000 [......USERS.....]
7F02655C1240 00000000 00000000 00000000 00000000 [................]
Repeat 8 times
7F02655C12D0 00010007 00000000 00000000 00000000 [................]
7F02655C12E0 00000000 00000003 45540004 0000504D [..........TEMP..]
7F02655C12F0 00000000 00000000 00000000 00000000 [................]
Repeat 6 times
7F02655C1360 00000000 00000000 00010000 00000000 [................]
7F02655C1370 00000000 00000000 00000000 00000000 [................]
7F02655C1380 00000000 00010001 00000000 00000000 [................]
7F02655C1390 00000000 00000000 00000000 59530006 [..............SY]
7F02655C13A0 4D455453 00000000 00000000 00000000 [STEM............]
7F02655C13B0 00000000 00000000 00000000 00000000 [................]
Repeat 7 times
7F02655C1430 00000000 00000000 00020005 00000000 [................]
7F02655C1440 00000000 00000000 00000000 00000001 [................]
7F02655C1450 59530006 58554153 00000000 00000000 [..SYSAUX........]
7F02655C1460 00000000 00000000 00000000 00000000 [................]
.... and so on ...
... this section has Tablespace Names.




Logical block number 193
Dump of memory from 0x00007F02655C1000 to 0x00007F02655C5000
7F02655C1000 0000C215 00000183 3DC1CC8F 0401FFFF [...........=....]
7F02655C1010 00000887 48435241 4C455649 4420474F [....ARCHIVELOG D]
7F02655C1020 54454C45 204E4F49 494C4F50 00005943 [ELETION POLICY..]
7F02655C1030 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
7F02655C1050 00000000 204F5400 4C505041 20444549 [.....TO APPLIED ]
7F02655C1060 41204E4F 53204C4C 444E4154 42205942 [ON ALL STANDBY B]
7F02655C1070 454B4341 50552044 54203120 53454D49 [ACKED UP 1 TIMES]
7F02655C1080 204F5420 4B534944 00000000 00000000 [ TO DISK........]
7F02655C1090 00000000 00000000 00000000 00000000 [................]
Repeat 60 times
7F02655C1460 00000000 00000000 49564544 54204543 [........DEVICE T]
7F02655C1470 00455059 00000000 00000000 00000000 [YPE.............]
7F02655C1480 00000000 00000000 00000000 00000000 [................]
Repeat 1 times
7F02655C14A0 00000000 00000000 53494400 4150204B [.........DISK PA]
7F02655C14B0 4C4C4152 53494C45 2032204D 4B434142 [RALLELISM 2 BACK]
7F02655C14C0 54205055 20455059 42204F54 554B4341 [UP TYPE TO BACKU]
7F02655C14D0 54455350 00000000 00000000 00000000 [PSET............]
7F02655C14E0 00000000 00000000 00000000 00000000 [................]
Repeat 944 times
... this section has RMAN CONFIGURATION information



Logical block number 211
Dump of memory from 0x00007F0265589000 to 0x00007F026558D000
7F0265589000 0000C215 000001A8 3DC21EA3 0401FFFF [...........=....]
7F0265589010 000008C1 40845AAE 00010012 000001EE [.....Z.@........]
7F0265589020 00493300 00000000 3DC1C40E 00E6DAE8 [.3I........=....]
7F0265589030 00008000 40845629 00E6E7E1 00008000 [....)V.@........]
7F0265589040 40845A9A 0002631D 00000200 00000001 [.Z.@.c..........]
7F0265589050 A7521CCD 00010000 74706F2F 61726F2F [..R...../opt/ora]
7F0265589060 2F656C63 68637261 6C657669 4F2F676F [cle/archivelog/O]
7F0265589070 434C4352 312F4244 3439345F 3330315F [RCLCDB/1_494_103]
7F0265589080 38303136 2E343138 00666264 00000000 [6108814.dbf.....]
7F0265589090 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F0265589250 00000000 00000000 00000000 40A141E4 [.............A.@]
7F0265589260 00010012 000001EF 00493300 00000000 [.........3I.....]
7F0265589270 3DC1C40E 00E6E7E1 00008000 40845A9A [...=.........Z.@]
7F0265589280 00E6E7E4 00008000 40A141E3 00000001 [.........A.@....]
7F0265589290 00000200 00000001 A7521CCD 00010000 [..........R.....]
7F02655892A0 74706F2F 61726F2F 2F656C63 68637261 [/opt/oracle/arch]
7F02655892B0 6C657669 4F2F676F 434C4352 312F4244 [ivelog/ORCLCDB/1]
7F02655892C0 3539345F 3330315F 38303136 2E343138 [_495_1036108814.]
7F02655892D0 00666264 00000000 00000000 00000000 [dbf.............]
7F02655892E0 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
7F02655894A0 00000000 40A143D5 00010012 000001F0 [.....C.@........]
7F02655894B0 00493300 00000000 3DC1C40E 00E6E7E4 [.3I........=....]
7F02655894C0 00008000 40A141E3 00E86F7B 00008000 [.....A.@{o......]
7F02655894D0 40A143D5 000000A9 00000200 00000001 [.C.@............]
7F02655894E0 A7521CCD 00010000 74706F2F 61726F2F [..R...../opt/ora]
7F02655894F0 2F656C63 68637261 6C657669 4F2F676F [cle/archivelog/O]
7F0265589500 434C4352 312F4244 3639345F 3330315F [RCLCDB/1_496_103]
7F0265589510 38303136 2E343138 00666264 00000000 [6108814.dbf.....]
7F0265589520 00000000 00000000 00000000 00000000 [................]
Repeat 27 times
.... and so on ...
... this section has *some* ArchiveLog FileNames
... other ArchiveLog FileNames appear further down the file



Repeat 30 times
7F0265589DC0 00000000 40A143DD 0001061A 000001EA [.....C.@........]
7F0265589DD0 00493300 00000000 3DC1C40E 00E683F5 [.3I........=....]
7F0265589DE0 00008000 40844BA3 00E6841B 00008000 [.....K.@........]
7F0265589DF0 40844BA9 00000017 00000200 003C0001 [.K.@..........<.]
7F0265589E00 A7521CCD 00030000 42445453 00000032 [..R.....STDB2...]
7F0265589E10 00000000 00000000 00000000 00000000 [................]
.... here I see a reference to one of my Standby Databases STDB2




Logical block number 218
Dump of memory from 0x00007F02655A5000 to 0x00007F02655A9000
7F02655A5000 0000C215 000001B5 3DC1D6CE 0401FFFF [...........=....]
7F02655A5010 0000B131 3D94E011 3D94E010 00000001 [1......=...=....]
7F02655A5020 00018003 3D94E011 00000000 00008FA0 [.......=........]
7F02655A5030 4B534944 00000000 00000000 00000000 [DISK............]
7F02655A5040 706F0000 726F2F74 656C6361 6F72702F [..opt/oracle/pro]
7F02655A5050 74637564 6339312F 6862642F 5F656D6F [duct/19c/dbhome_]
7F02655A5060 62642F31 2D632F73 38373732 30333834 [1/dbs/c-27784830]
7F02655A5070 322D3735 30303230 2D333232 00003030 [57-20200223-00..]
7F02655A5080 00000000 00000000 00000000 00000000 [................]
Repeat 36 times
7F02655A52D0 00000000 32474154 30303230 54333232 [....TAG20200223T]
7F02655A52E0 37343232 00003434 00000000 00000000 [224744..........]
7F02655A52F0 00000000 00000000 B637B686 657ADF2F [..........7./.ze]
7F02655A5300 06F753E0 CA270AE8 3D94E011 00000000 [.S....'....=....]
7F02655A5310 00000000 00000000 00000000 00000000 [................]
7F02655A5320 3DC1C42D 3DC1C42C 00000024 0001C003 [-..=,..=$.......]
7F02655A5330 3DC1C42D 00000000 00008FA0 4B534944 [-..=........DISK]
7F02655A5340 00000000 00000000 00000000 706F0000 [..............op]
7F02655A5350 726F2F74 656C6361 4152462F 43524F2F [t/oracle/FRA/ORC]
7F02655A5360 4244434C 43524F2F 4244434C 7475612F [LCDB/ORCLCDB/aut]
7F02655A5370 6361626F 2F70756B 30323032 5F33305F [obackup/2020_03_]
7F02655A5380 6F2F3732 666D5F31 315F735F 31363330 [27/o1_mf_s_10361]
7F02655A5390 34383830 37685F34 66666E73 2E5F7862 [08844_h7snffbx_.]
7F02655A53A0 00706B62 00000000 00000000 00000000 [bkp.............]
7F02655A53B0 00000000 00000000 00000000 00000000 [................]
Repeat 34 times
7F02655A55E0 32474154 30303230 54373233 30303030 [TAG20200327T0000]
7F02655A55F0 00003434 00000000 00000000 00000000 [44..............]
7F02655A5600 00000000 B637B686 657ADF2F 06F753E0 [......7./.ze.S..]
7F02655A5610 CA270AE8 3DC1C42D 00000000 00000000 [..'.-..=........]
7F02655A5620 00000000 00000000 00000000 3E0E1E81 [...............>]
7F02655A5630 3E0E1E80 00000027 0001C003 3E0E1E97 [...>'..........>]
7F02655A5640 00000040 00093AA0 4B534944 00000000 [@....:..DISK....]
7F02655A5650 00000000 00000000 706F0000 726F2F74 [..........opt/or]
7F02655A5660 656C6361 4152462F 43524F2F 4244434C [acle/FRA/ORCLCDB]
7F02655A5670 43524F2F 4244434C 6361622F 7370756B [/ORCLCDB/backups]
7F02655A5680 322F7465 5F303230 325F3530 316F2F32 [et/2020_05_22/o1]
7F02655A5690 5F666D5F 646E6E6E 41545F66 32303247 [_mf_nnndf_TAG202]
7F02655A56A0 32353030 31325432 34323835 6864685F [00522T215824_hdh]
7F02655A56B0 31316E70 622E5F6C 0000706B 00000000 [pn11l_.bkp......]
7F02655A56C0 00000000 00000000 00000000 00000000 [................]
Repeat 31 times
7F02655A58C0 00000000 00000000 00000000 3E0E1E80 [...............>]
7F02655A58D0 00000000 0000002C 00000000 00000000 [....,...........]
7F02655A58E0 00000000 00000000 00000000 32474154 [............TAG2]
7F02655A58F0 30303230 54323235 38353132 00003432 [0200522T215824..]
7F02655A5900 00000000 00000000 00000000 00000000 [................]
7F02655A5910 B637B686 657ADF2F 06F753E0 CA270AE8 [..7./.ze.S....'.]
7F02655A5920 3E0E1E81 00000000 00000000 00000000 [...>............]
7F02655A5930 00000000 00000000 3E0E1E9A 3E0E1E9A [...........>...>]
7F02655A5940 00000028 0001C003 3E0E1EAC 00000040 [(..........>@...]
7F02655A5950 000409F0 4B534944 00000000 00000000 [....DISK........]
7F02655A5960 00000000 706F0000 726F2F74 656C6361 [......opt/oracle]
7F02655A5970 4152462F 43524F2F 4244434C 43524F2F [/FRA/ORCLCDB/ORC]
7F02655A5980 4244434C 3138382F 39324532 46343638 [LCDB/8812E29864F]
7F02655A5990 31363734 35304535 30313033 37303030 [47615E0530100007]
7F02655A59A0 32344146 61622F34 70756B63 2F746573 [FA424/backupset/]
7F02655A59B0 30323032 5F35305F 6F2F3232 666D5F31 [2020_05_22/o1_mf]
7F02655A59C0 6E6E6E5F 545F6664 30324741 35303032 [_nnndf_TAG202005]
7F02655A59D0 32543232 32383531 64685F34 746E7068 [22T215824_hdhpnt]
7F02655A59E0 2E5F7939 00706B62 00000000 00000000 [9y_.bkp.........]
.... and so on ...
... here I see some of the BackupPiece FileNames
... strangely enough BackupPieces from May 2020 are still in the controlfile !!
... more appear later down the file


7F02655BF7B0 00000000 00000000 00000000 41374F03 [.............O7A]
7F02655BF7C0 00000000 00000155 00000000 00000000 [....U...........]
7F02655BF7D0 00000000 00000000 00000000 32474154 [............TAG2]
7F02655BF7E0 30323230 54363131 39343631 00003830 [0220116T164908..]
7F02655BF7F0 00000000 00000000 00000000 00000000 [................]
7F02655BF800 98E21288 1576F464 000153E0 24A47F00 [....d.v..S.....$]
7F02655BF810 41374F08 00000000 00000000 00000000 [.O7A............]
7F02655BF820 00000000 00000000 41374F3D 41374F3C [........=O7A<O7A]
7F02655BF840 000092E0 4B534944 00000000 00000000 [....DISK........]
7F02655BF850 00000000 706F2F00 726F2F74 656C6361 [...../opt/oracle]
7F02655BF860 4152462F 43524F2F 4244434C 43524F2F [/FRA/ORCLCDB/ORC]
7F02655BF870 4244434C 7475612F 6361626F 2F70756B [LCDB/autobackup/]
7F02655BF880 32323032 5F31305F 6F2F3631 666D5F31 [2022_01_16/o1_mf]
7F02655BF890 315F735F 31343930 30383334 796A5F34 [_s_1094143804_jy]
7F02655BF8A0 78317137 2E5F3170 00706B62 00000000 [7q1xp1_.bkp.....]
7F02655BF8B0 00000000 00000000 00000000 00000000 [................]
Repeat 32 times
7F02655BFAC0 00000000 00000000 41374F3C 00000000 [........<O7A....]



So, it seems that not every entry in the Controlfile is ordered.
And also, that there are many entries referencing ArchiveLogs or BackupPieces that no longer exist but haven't been cleaned up yet !
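Those stale entries are a side effect of how the reusable sections of the controlfile work: a record is only overwritten once its section is full and the record is older than CONTROL_FILE_RECORD_KEEP_TIME (7 days by default), so references to long-gone ArchiveLogs and BackupPieces can linger indefinitely. As a sketch, you can check and, with care, raise the minimum reuse age (14 is just an example value; test before changing production):

show parameter control_file_record_keep_time

alter system set control_file_record_keep_time=14 scope=both;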




Categories: DBA Blogs

Migrating a CentOS 7 EC2 instance to Rocky Linux 8

Yann Neuhaus - Tue, 2022-01-18 06:50

Some time ago I wrote about migrating a CentOS 8 machine to Red Hat 8, because CentOS 8 reached end of life last December. We did that for all CentOS 8 installations at that customer, and all of them are running as virtual machines. The very same customer has many EC2 instances running on CentOS 7. Migrating away from CentOS 7 is one of the next tasks to complete, as CentOS 7 will reach end of life in 2024. For doing that you basically have two options (if you want to stay inside the Red Hat family): either you deploy your target distribution (Rocky Linux, Alma Linux, Oracle Linux or Red Hat Enterprise Linux) from an existing AMI and then re-install everything you need and restore the data, or you upgrade directly from CentOS 7 to a Red Hat based Linux 8. We'll go for the second option in this post.

Before we start: Of course you need a backup of your EC2 instance before doing this! Of course you really need to test your application after the upgrade because a lot of packages and the kernel will change.

You might wonder how this will work at all, because there is no supported migration path from CentOS 7 to CentOS 8. The people behind Alma Linux stepped in and created a project called Elevate to support exactly these migrations. You can use that tool to migrate from CentOS 7 to AlmaLinux 8, to Rocky Linux 8, or to Oracle Linux 8.

My starting point is a fresh EC2 instance based on the latest CentOS 7 AMI:

[centos@ip-10-0-1-100 ~]$ cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)
[centos@ip-10-0-1-100 ~]$ uname -a
Linux ip-10-0-1-100.eu-central-1.compute.internal 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

First thing to do: Update the system to the latest release and reboot:

[centos@ip-10-0-1-100 ~]$ sudo yum update -y
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: download.cf.centos.org
 * extras: download.cf.centos.org
 * updates: download.cf.centos.org
base                                                                                                                                                                                                                                                   | 3.6 kB  00:00:00     
extras                                                                                                                                                                                                                                                 | 2.9 kB  00:00:00     
updates                                                                                                                                                                                                                                                | 2.9 kB  00:00:00     
(1/4): base/7/x86_64/group_gz                                                                                                                                                                                                                          | 153 kB  00:00:00     
(2/4): extras/7/x86_64/primary_db                                                                                                                                                                                                                      | 243 kB  00:00:00     
(3/4): updates/7/x86_64/primary_db                                                                                                                                                                                                                     |  13 MB  00:00:00     
(4/4): base/7/x86_64/primary_db                                                                                                                                                                                                                        | 6.1 MB  00:00:00     
Resolving Dependencies
--> Running transaction check
---> Package acl.x86_64 0:2.2.51-14.el7 will be updated
---> Package acl.x86_64 0:2.2.51-15.el7 will be an update
...
  sg3_utils-libs.x86_64 1:1.37-19.el7                   shared-mime-info.x86_64 0:1.8-5.el7                     sudo.x86_64 0:1.8.23-10.el7_9.2                         systemd.x86_64 0:219-78.el7_9.5                      systemd-libs.x86_64 0:219-78.el7_9.5           
  systemd-sysv.x86_64 0:219-78.el7_9.5                  teamd.x86_64 0:1.29-3.el7                               tuned.noarch 0:2.11.0-11.el7_9                          tzdata.noarch 0:2021e-1.el7                          util-linux.x86_64 0:2.23.2-65.el7_9.1          
  vim-minimal.x86_64 2:7.4.629-8.el7_9                  virt-what.x86_64 0:1.18-4.el7_9.1                       wpa_supplicant.x86_64 1:2.6-12.el7_9.2                  xfsprogs.x86_64 0:4.5.0-22.el7                       yum.noarch 0:3.4.3-168.el7.centos              
  yum-plugin-fastestmirror.noarch 0:1.1.31-54.el7_8     yum-utils.noarch 0:1.1.31-54.el7_8                      zlib.x86_64 0:1.2.7-19.el7_9                           

Replaced:
  iwl7265-firmware.noarch 0:22.0.7.0-72.el7                                                                                                                                                                                                                                   

Complete!
[centos@ip-10-0-1-100 ~]$ sudo reboot
...
[centos@ip-10-0-1-100 ~]$ cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
[centos@ip-10-0-1-100 ~]$ uname -a
Linux ip-10-0-1-100.eu-central-1.compute.internal 3.10.0-1160.49.1.el7.x86_64 #1 SMP Tue Nov 30 15:51:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Install the elevate repository:

[centos@ip-10-0-1-100 ~]$ sudo yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm
Loaded plugins: fastestmirror
elevate-release-latest-el7.noarch.rpm                                                                                                                                                                                                                  | 6.9 kB  00:00:00     
Examining /var/tmp/yum-root-NQPt77/elevate-release-latest-el7.noarch.rpm: elevate-release-1.0-1.el7.noarch
Marking /var/tmp/yum-root-NQPt77/elevate-release-latest-el7.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package elevate-release.noarch 0:1.0-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================================================================================================================================
 Package                                                         Arch                                                   Version                                                      Repository                                                                          Size
==============================================================================================================================================================================================================================================================================
Installing:
 elevate-release                                                 noarch                                                 1.0-1.el7                                                    /elevate-release-latest-el7.noarch                                                 3.4 k

Transaction Summary
==============================================================================================================================================================================================================================================================================
Install  1 Package

Total size: 3.4 k
Installed size: 3.4 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : elevate-release-1.0-1.el7.noarch                                                                                                                                                                                                                           1/1 
  Verifying  : elevate-release-1.0-1.el7.noarch                                                                                                                                                                                                                           1/1 

Installed:
  elevate-release.noarch 0:1.0-1.el7                                                                                                                                                                                                                                          

Complete!

Elevate uses Leapp in the background so we need to install the corresponding packages:


[centos@ip-10-0-1-100 ~]$  sudo yum install -y leapp-upgrade leapp-data-rocky
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: download.cf.centos.org
 * extras: download.cf.centos.org
 * updates: download.cf.centos.org
elevate                                                                                                                                                                                                                                                | 3.0 kB  00:00:00     
elevate/x86_64/primary_db                                                                                                                                                                                                                              | 6.7 kB  00:00:00     
Resolving Dependencies
--> Running transaction check
---> Package leapp-data-rocky.noarch 0:0.1-2.el7 will be installed
---> Package leapp-upgrade-el7toel8.noarch 0:0.14.0-100.202109271224Z.b7ebfca.master.el7.elevate will be installed
--> Processing Dependency: leapp-repository-dependencies = 6 for package: leapp-upgrade-el7toel8-0.14.0-100.202109271224Z.b7ebfca.master.el7.elevate.noarch
--> Processing Dependency: leapp-framework < 3 for package: leapp-upgrade-el7toel8-0.14.0-100.202109271224Z.b7ebfca.master.el7.elevate.noarch
...
Dependency Installed:
  deltarpm.x86_64 0:3.6-3.el7                                                     dnf.noarch 0:4.0.9.2-2.el7_9                                                            dnf-data.noarch 0:4.0.9.2-2.el7_9                                                                 
  leapp.noarch 0:0.12.1-100.20210924142320684911.master.28.g1f03432.el7           leapp-deps.noarch 0:0.12.1-100.20210924142320684911.master.28.g1f03432.el7              leapp-upgrade-el7toel8-deps.noarch 0:0.14.0-100.202109271224Z.b7ebfca.master.el7.elevate          
  libcomps.x86_64 0:0.1.8-14.el7                                                  libdnf.x86_64 0:0.22.5-2.el7_9                                                          libmodulemd.x86_64 0:1.6.3-1.el7                                                                  
  librepo.x86_64 0:1.8.1-8.el7_9                                                  libreport-filesystem.x86_64 0:2.1.11-53.el7.centos                                      libsolv.x86_64 0:0.6.34-4.el7                                                                     
  pciutils.x86_64 0:3.5.1-3.el7                                                   python-enum34.noarch 0:1.0.4-1.el7                                                      python2-dnf.noarch 0:4.0.9.2-2.el7_9                                                              
  python2-hawkey.x86_64 0:0.22.5-2.el7_9                                          python2-leapp.noarch 0:0.12.1-100.20210924142320684911.master.28.g1f03432.el7           python2-libcomps.x86_64 0:0.1.8-14.el7                                                            
  python2-libdnf.x86_64 0:0.22.5-2.el7_9                                         

Complete!

If you want to migrate to one of the other distributions, replace the “leapp-data-rocky” package with one of these:

  • leapp-data-almalinux
  • leapp-data-oraclelinux
  • leapp-data-rocky

Time for the pre-upgrade check:

[centos@ip-10-0-1-100 ~]$ sudo leapp preupgrade
==> Processing phase `configuration_phase`
====> * ipu_workflow_config
        IPU workflow config actor
==> Processing phase `FactsCollection`
====> * firewalld_facts_actor
        Provide data about firewalld
====> * source_boot_loader_scanner
        Scans the boot loader configuration on the source system.
====> * repository_mapping
        Produces message containing repository mapping based on provided file.
====> * read_openssh_config
        Collect information about the OpenSSH configuration.
====> * scandasd
        In case of s390x architecture, check whether DASD is used.
====> * rpm_scanner
        Provides data about installed RPM Packages.
...
====> * target_userspace_creator
        Initializes a directory to be populated as a minimal environment to run binaries from the target system.
Rocky Linux 8 - PowerTools                      6.8 MB/s | 2.5 MB     00:00    
Rocky Linux 8 - Extras                           42 kB/s |  10 kB     00:00    
Rocky Linux 8 - AppStream                       7.4 MB/s | 8.7 MB     00:01    
Rocky Linux 8 - BaseOS                          4.2 MB/s | 4.6 MB     00:01    
Rocky Linux 8 - HighAvailability                743 kB/s | 545 kB     00:00    
Dependencies resolved.
================================================================================
 Package                     Arch   Version              Repository        Size
================================================================================
Installing:
 dnf                         noarch 4.7.0-4.el8          rocky8-baseos    543 k
 dnf-plugins-core            noarch 4.0.21-3.el8         rocky8-baseos     69 k
...
Check completed.
====> * tmp_actor_to_satisfy_sanity_checks
        The actor does NOTHING but satisfy static sanity checks
====> * check_initramfs_tasks
        Inhibit the upgrade if conflicting "initramfs" tasks are detected
==> Processing phase `Reports`
====> * verify_check_results
        Check all dialogs and notify that user needs to make some choices.
====> * verify_check_results
        Check all generated results messages and notify user about them.

============================================================
                     UPGRADE INHIBITED                      
============================================================

Upgrade has been inhibited due to the following problems:
    1. Inhibitor: Missing required answers in the answer file
Consult the pre-upgrade report for details and possible remediation.

============================================================
                     UPGRADE INHIBITED                      
============================================================


Debug output written to /var/log/leapp/leapp-preupgrade.log

============================================================
                           REPORT                           
============================================================

A report has been generated at /var/log/leapp/leapp-report.json
A report has been generated at /var/log/leapp/leapp-report.txt

============================================================
                       END OF REPORT                        
============================================================

Answerfile has been generated at /var/log/leapp/answerfile

Have a look at the generated answer file and the report. Any issues you need to fix are mentioned there. In my case, this is what I had to do:

[centos@ip-10-0-1-100 ~]$ echo PermitRootLogin yes | sudo tee -a /etc/ssh/sshd_config
PermitRootLogin yes
[centos@ip-10-0-1-100 ~]$ sudo leapp answer --section remove_pam_pkcs11_module_check.confirm=True

Once this is done, kick off the migration:

[centos@ip-10-0-1-100 ~]$ sudo leapp upgrade
==> Processing phase `configuration_phase`
====> * ipu_workflow_config
        IPU workflow config actor
==> Processing phase `FactsCollection`
====> * firewalld_facts_actor
        Provide data about firewalld
...
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
==> Processing phase `InterimPreparation`
====> * efi_interim_fix
        Adjust EFI boot entry for first reboot
====> * upgrade_initramfs_generator
        Creates the upgrade initramfs
====> * add_upgrade_boot_entry
        Add new boot entry for Leapp provided initramfs.
A reboot is required to continue. Please reboot your system.


Debug output written to /var/log/leapp/leapp-upgrade.log

============================================================
                           REPORT                           
============================================================

A report has been generated at /var/log/leapp/leapp-report.json
A report has been generated at /var/log/leapp/leapp-report.txt

============================================================
                       END OF REPORT                        
============================================================

Answerfile has been generated at /var/log/leapp/answerfile

Everything looks fine; the final step is a reboot:

[centos@ip-10-0-1-100 ~]$ sudo reboot

Don’t panic: this reboot takes time. If you are using AWS EC2, you can use the Serial Console to check what is going on.
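If you prefer a terminal over the web console, you can also poll the instance’s console output from your workstation with the AWS CLI (a sketch; the instance ID is a placeholder, and the --latest flag requires a Nitro-based instance):

aws ec2 get-console-output --instance-id i-0123456789abcdef0 --latest --output text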

After some minutes the EC2 instance should be back:

[centos@ip-10-0-1-100 ~]$ cat /etc/rocky-release
Rocky Linux release 8.5 (Green Obsidian)
[centos@ip-10-0-1-100 ~]$ uname -a
Linux ip-10-0-1-100.eu-central-1.compute.internal 4.18.0-348.7.1.el8_5.x86_64 #1 SMP Tue Dec 21 19:02:23 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Quite easy and straightforward, but this was an installation without any user data, modifications to the system, or third-party applications.

This article Migrating a CentOS 7 EC2 instance to Rocky Linux 8 appeared first on Blog dbi services.

How to properly assign ENABLE_PARALLEL_DML to a profile?

Tom Kyte - Mon, 2022-01-17 14:26
Hi, in a 19c database I've found a daily update in a 3rd-party application that lasts for more than 30 minutes. I would like to execute it in parallel. PDML is disabled, and when I try to create a profile for that update, I cannot get the desired plan. I made a small case to show my problem:

CREATE TABLE TESTCASE_PDML (keynum number, attribute1 number) PARALLEL 8;

--Load some data.
INSERT INTO TESTCASE_PDML SELECT ROWNUM, ROWNUM*1000 FROM DUAL CONNECT BY ROWNUM <= 100;
commit;

--Plans for the update:
EXPLAIN PLAN FOR UPDATE TESTCASE_PDML SET attribute1 = keynum * 1001;
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY());

Plan hash value: 3488641984

------------------------------------------------------------------------------------
| Id  | Operation          | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |               |   100 |  2600 |     2   (0)| 00:00:01 |
|   1 |  UPDATE            | TESTCASE_PDML |       |       |            |          |
|   2 |   TABLE ACCESS FULL| TESTCASE_PDML |   100 |  2600 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=3)
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

EXPLAIN PLAN FOR UPDATE /*+ parallel(8) */ TESTCASE_PDML SET attribute1 = keynum * 1001;
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY());

Plan hash value: 2008974791

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name          | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT       |               |   100 |  2600 |     4   (0)| 00:00:01 |        |      |            |
|   1 |  UPDATE                | TESTCASE_PDML |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR       |               |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000      |   100 |  2600 |     4   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR  |               |   100 |  2600 |     4   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   5 |      TABLE ACCESS FULL | TESTCASE_PDML |   100 |  2600 |     4   (0)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=3)
   - Degree of Parallelism is 8 because of hint
   - PDML is disabled in current session

EXPLAIN PLAN FOR UPDATE /*+ parallel(8) enable_parallel_dml */ TESTCASE_PDML SET attribute1 = keynum * 1001;
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY());

Plan hash value: 727441780

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name          | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT       |               |   100 |  2600 |     4   (0)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR        |               |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)  | :TQ10000      |   100 |  2600 |     4   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   3...
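The thread is truncated here, but for context, one commonly used approach to the question in the title (a sketch, not taken from the original thread: dbms_sqltune.import_sql_profile is an undocumented but widely used call, the statement text and profile name below are placeholders, and whether the enable_parallel_dml hint is honored from a profile should be tested on your version) is to attach both hints to the statement through a manually created SQL profile:

declare
  -- exact text of the 3rd-party statement (placeholder)
  v_sql clob := 'UPDATE TESTCASE_PDML SET attribute1 = keynum * 1001';
begin
  dbms_sqltune.import_sql_profile(
    sql_text    => v_sql,
    profile     => sqlprof_attr('PARALLEL(8)', 'ENABLE_PARALLEL_DML'),
    name        => 'PDML_TESTCASE_PROFILE',
    force_match => true);  -- match the statement regardless of literal values
end;
/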
Categories: DBA Blogs

Statspack with Oracle Database 19c and Multitenant

Yann Neuhaus - Mon, 2022-01-17 08:30
Introduction

When it comes to performance analysis on Oracle databases, everybody thinks of Diagnostic Pack. It is a very complete tool and it provides more metrics than a DBA can possibly analyze. But it has a big drawback: it’s not included in Standard Edition, and it’s an extra-cost option with Enterprise Edition. If you don’t have this tool, it’s still possible to do performance analysis using the older Statspack tool. Statspack is nothing less than the father of Diagnostic Pack, and it still comes with all database versions free of charge. You just need to set it up correctly, and it will help a lot when diagnosing performance troubles, even if you’re using a modern environment, meaning 19c and Multitenant. For sure, there are some limitations compared to Diagnostic Pack, like text-only reports, fewer metrics and truncated SQL statements, but it’s powerful enough for a good DBA willing to check and improve their databases’ performance.

Does Statspack work correctly with 19c and Multitenant?

Yes, it does. You just need to use a specific setup procedure. Statspack should be deployed at the PDB level if you’re using Multitenant, and the jobs must be configured using dbms_scheduler, as 19c no longer supports the old-fashioned dbms_job. Here is the procedure I use when I deploy Statspack on a modern environment.

Setup procedure

Before configuring Statspack, make sure you’re using Standard Edition, or make sure you’re not using Diagnostic Pack on your Enterprise Edition database. The two tools are not supposed to work together, and there is no point in having both. Diagnostic Pack, and optionally Tuning Pack, are enabled/disabled with an instance parameter.

-- Check if DIAGNOSTIC and TUNING pack are disabled at the CDB level
SQL> conn / as sysdba
SQL> sho parameter control_mana

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
control_management_pack_access	     string	 NONE

-- Remove DIAGNOSTIC and TUNING packs if needed
alter system set control_management_pack_access='NONE' scope=spfile;

From this point on, if you’re using Multitenant, commands are run on the PDB:

export ORACLE_PDB_SID=MYPDB;
sqlplus / as sysdba

Statspack should use its own tablespace; 2-3 GB should normally be OK.

-- Create a dedicated tablespace if needed
create tablespace PERFSTAT datafile size 300M autoextend on maxsize 3G;

In case Statspack was previously deployed, you can remove it to do a clean installation:

-- Remove previously installed statspack if needed
@?/rdbms/admin/spdrop

Statspack setup will ask for a default tablespace (the one you just created), a temporary tablespace, and a password for its PERFSTAT user (you will be prompted for it):

-- Create statspack's user and objects
define default_tablespace='perfstat'
define temporary_tablespace='temp'
@?/rdbms/admin/spcreate

You should configure an increased metrics level:

-- Increase level of metrics
exec STATSPACK.MODIFY_STATSPACK_PARAMETER (i_snap_level=>7,i_modify_parameter=>'true', i_instance_number=>null);
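
If you want to double-check that the new level was registered, Statspack stores it in its parameter table (a quick sanity check, assuming the default table layout created by spcreate):

-- Verify the configured snapshot level
select snap_level from perfstat.stats$statspack_parameter;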

Statspack needs two jobs: one for taking snapshots (a snapshot flushes the current performance metrics into dedicated tables) and one for purging old snapshots (there is no need to keep old performance metrics).

Let’s allow the user PERFSTAT to create jobs:

-- Allow PERFSTAT's user to create jobs
conn / as sysdba
grant create job to perfstat;
conn perfstat/perfstat
show user

Let’s stop and drop existing jobs for snapshots if needed:

-- Drop existing snapshot job if needed
exec dbms_scheduler.stop_job('SP_SNAP_JOB');
exec dbms_scheduler.drop_job('SP_SNAP_JOB');
exec dbms_scheduler.drop_schedule('SP_SNAP_SCHED');
exec dbms_scheduler.drop_program('SP_SNAP_PROG');

And now let’s create the snapshot scheduling components. In this example, my choice is one snapshot every 15 minutes. I would not recommend taking them more often; less often, like one snapshot every hour, is also OK.

-- Create snapshot job using scheduler - every fifteen minutes
exec dbms_scheduler.create_program(program_name => 'SP_SNAP_PROG', program_type => 'STORED_PROCEDURE', program_action => 'PERFSTAT.statspack.snap', number_of_arguments => 0, enabled => FALSE);
exec dbms_scheduler.enable(name => 'SP_SNAP_PROG');
exec dbms_scheduler.create_schedule (schedule_name => 'SP_SNAP_SCHED', repeat_interval => 'freq=hourly; byminute=0,15,30,45; bysecond=0',end_date => null, comments => 'Schedule for Statspack snaps');
exec dbms_scheduler.create_job (job_name => 'SP_SNAP_JOB', program_name => 'SP_SNAP_PROG', schedule_name => 'SP_SNAP_SCHED',  enabled => TRUE, auto_drop => FALSE, comments => 'Statspack Job for snaps');
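
Optionally, trigger the job once by hand to make sure it runs cleanly before waiting for the schedule (dbms_scheduler.run_job executes it in the current session by default):

-- Run the snapshot job immediately as a test
exec dbms_scheduler.run_job('SP_SNAP_JOB');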

Let’s check the scheduling of this job:

-- Check job and scheduling
set lines 140
col owner for a10
col job_name for a15
col program_name for a20
col first for a18
col next for a18
col last for a18
select OWNER, JOB_NAME, PROGRAM_NAME, to_char(START_DATE,'YYYY/MM/DD HH24:MI') "FIRST", to_char(NEXT_RUN_DATE,'YYYY/MM/DD HH24:MI') "NEXT", to_char(LAST_START_DATE,'YYYY/MM/DD HH24:MI') "LAST" from dba_scheduler_jobs where owner='PERFSTAT';


OWNER	   JOB_NAME	   PROGRAM_NAME 	FIRST		 NEXT		  LAST
---------- --------------- -------------------- ---------------- ---------------- ----------------
PERFSTAT   SP_SNAP_JOB	   SP_SNAP_PROG 	2021/12/24 14:45 2021/12/24 14:45

Let’s drop the purge job if needed:

-- Drop existing purge job if needed
exec dbms_scheduler.stop_job('SP_PURGE_JOB');
exec dbms_scheduler.drop_job('SP_PURGE_JOB');
exec dbms_scheduler.drop_schedule('SP_PURGE_SCHED');
exec dbms_scheduler.drop_program('SP_PURGE_PROG');

Create the purge job scheduling. The purge runs weekly, during the night between Saturday and Sunday. In this example, I’m using a 30-day retention for snapshots. I would recommend a retention between 10 days and 2 months. Even if you don’t need a long retention, it’s always nice to be able to compare performance metrics with the days/weeks before a performance issue.

-- Create a procedure for the purge
create or replace procedure extended_purge(
num_days IN number
)
is
BEGIN
  statspack.purge(i_num_days => num_days, i_extended_purge => TRUE);
END extended_purge;
/

-- Test this procedure if needed
-- exec extended_purge(30);


-- Create purge job using scheduler - every Sunday at 0:20AM - keep 30 days of snapshots
exec dbms_scheduler.create_program(program_name => 'SP_PURGE_PROG', program_type => 'STORED_PROCEDURE', program_action => 'PERFSTAT.extended_purge', number_of_arguments => 1, enabled => FALSE);
exec DBMS_SCHEDULER.define_program_argument (program_name => 'SP_PURGE_PROG', argument_name => 'i_num_days', argument_position => 1, argument_type => 'NUMBER', default_value => 30);
exec dbms_scheduler.enable(name => 'SP_PURGE_PROG');
exec dbms_scheduler.create_schedule (schedule_name => 'SP_PURGE_SCHED', repeat_interval =>  'freq=weekly; byday=SUN; byhour=0; byminute=20',end_date => null, comments => 'Schedule for Statspack purge');
exec dbms_scheduler.create_job (job_name => 'SP_PURGE_JOB', program_name => 'SP_PURGE_PROG', schedule_name => 'SP_PURGE_SCHED',  enabled => TRUE, auto_drop => FALSE, comments => 'Statspack Job for purge');

If you’re using an older version like 12cR1, some idle events may be missing and you should add them (this does not apply to 19c):

-- Insert missing idle events on 12cR1 only: have a look at this blog post
delete from STATS$IDLE_EVENT;
insert into STATS$IDLE_EVENT select name from V$EVENT_NAME where wait_class='Idle';
insert into STATS$IDLE_EVENT values('log file parallel write');
insert into STATS$IDLE_EVENT values('target log write size');
commit;

Now it’s time to lock the PERFSTAT user; the Statspack report feature is usually used while connected as SYS:

-- Lock the perfstat user
conn / as sysdba
alter user perfstat account lock;
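
Once everything is in place, generating a report works from that SYS connection. Here is a minimal sketch (statspack.snap takes a manual snapshot, and spreport prompts for the begin/end snapshot IDs and a report name):

-- Take a manual snapshot if needed, then build a report between two snapshots
exec perfstat.statspack.snap;
@?/rdbms/admin/spreport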

Now with SYS let’s check the scheduling of both jobs:

-- Check the jobs' scheduling
set lines 140
col owner for a10
col job_name for a15
col first for a18
col next for a18
col last for a18
col program_name for a20
select OWNER, JOB_NAME, PROGRAM_NAME, to_char(START_DATE,'YYYY/MM/DD HH24:MI') "FIRST", to_char(NEXT_RUN_DATE,'YYYY/MM/DD HH24:MI') "NEXT", to_char(LAST_START_DATE,'YYYY/MM/DD HH24:MI') "LAST"  from dba_scheduler_jobs where owner='PERFSTAT';

OWNER	   JOB_NAME	   PROGRAM_NAME 	FIRST		 NEXT		  LAST
---------- --------------- -------------------- ---------------- ---------------- ----------------
PERFSTAT   SP_SNAP_JOB	   SP_SNAP_PROG 	2021/12/24 14:45 2021/12/24 14:45
PERFSTAT   SP_PURGE_JOB    SP_PURGE_PROG	2021/12/25 00:20 2021/12/25 00:20

Let’s check again later, this time including the run and failure counts, the actual retention (how many days of snapshots are available) and the volume of data used by these performance metrics:

-- Check later after several occurences
set lines 140
col owner for a10
col job_name for a15
col first for a18
col next for a18
col last for a18
col program_name for a20
select OWNER, JOB_NAME, PROGRAM_NAME, to_char(START_DATE,'YYYY/MM/DD HH24:MI') "FIRST", to_char(NEXT_RUN_DATE,'YYYY/MM/DD HH24:MI') "NEXT", to_char(LAST_START_DATE,'YYYY/MM/DD HH24:MI') "LAST", run_count, failure_count  from dba_scheduler_jobs where owner='PERFSTAT';

OWNER	   JOB_NAME	   PROGRAM_NAME 	FIRST		   NEXT 	      LAST		  RUN_COUNT FAILURE_COUNT
---------- --------------- -------------------- ------------------ ------------------ ------------------ ---------- -------------
PERFSTAT   SP_SNAP_JOB	   SP_SNAP_PROG 	2021/12/24 14:45   2022/01/02 17:15   2022/01/02 17:00		874		0
PERFSTAT   SP_PURGE_JOB    SP_PURGE_PROG	2022/01/02 00:20   2022/01/09 00:20   2022/01/02 00:20		  1		0


-- Check actual retention
SQL> select round(sysdate-min(snap_time)) "RETENTION" from stats$snapshot;

 RETENTION
----------
	 7

-- Check data volume
SQL> select round(sum(bytes)/1024/1024,1) "PERFSTAT MB" from dba_segments where owner = 'PERFSTAT';

PERFSTAT MB
-----------
      742.8



...


OWNER	   JOB_NAME	   PROGRAM_NAME 	FIRST		   NEXT 	      LAST		  RUN_COUNT FAILURE_COUNT
---------- --------------- -------------------- ------------------ ------------------ ------------------ ---------- -------------
PERFSTAT   SP_SNAP_JOB	   SP_SNAP_PROG 	2021/12/24 14:45   2022/01/04 19:15   2022/01/04 19:00	       1074		0
PERFSTAT   SP_PURGE_JOB    SP_PURGE_PROG	2022/01/02 00:20   2022/01/09 00:20   2022/01/02 00:20		  1		0


 RETENTION
----------
	 9


PERFSTAT MB
-----------
      940.8

...

OWNER	   JOB_NAME        PROGRAM_NAME         FIRST              NEXT               LAST                RUN_COUNT FAILURE_COUNT
---------- --------------- -------------------- ------------------ ------------------ ------------------ ---------- -------------
PERFSTAT   SP_SNAP_JOB     SP_SNAP_PROG         2021/12/24 14:45   2022/01/17 14:30   2022/01/17 14:15         2303             0
PERFSTAT   SP_PURGE_JOB    SP_PURGE_PROG        2022/01/02 00:20   2022/01/23 00:20   2022/01/16 00:20            3             0


 RETENTION
----------
	12


PERFSTAT MB
-----------
     1751.8
Data volume expected

Statspack metrics can use several GB on each database depending on the snapshots’ frequency and retention. Here is what you can expect depending on your settings, each snapshot being about 1MB:

  • 1 month – 1 snap/h: ±750MB
  • 1 month – 2 snaps/h: ±1500MB
  • 1 month – 4 snaps/h: ±3000MB
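
To see what a snapshot actually costs on your own database, you can divide the PERFSTAT volume by the number of snapshots kept (a sketch, to be run as SYS on the PDB):

-- Average size of one snapshot in MB
select round(seg.mb / snap.cnt, 2) "MB PER SNAP"
from  (select sum(bytes)/1024/1024 mb from dba_segments where owner = 'PERFSTAT') seg,
      (select count(*) cnt from perfstat.stats$snapshot) snap;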
Conclusion

Statspack is still a very capable and usable tool with 19c. Yes, it is old-fashioned, but it’s free and it does the job for those DBAs who know how to read the reports.

This article Statspack with Oracle Database 19c and Multitenant appeared first on Blog dbi services.

Hash Aggregation – 1

Jonathan Lewis - Mon, 2022-01-17 07:15

Here’s an observation I made some time in 2007, but didn’t mention online until a (possibly) relevant question appeared on the Oracle database forum in 2017; and the topic reappeared in a nearly unrelated question a little while ago. The 2017 question was this:

I am running the following SQL, which is taking almost 4-5 seconds and returning only 18 records.

SELECT
   SUM(TOTAL_AMOUNT) C1,
   a.LEVEL2_ENAME AS c2,
   c.CURR_YEARMO AS c3
FROM TERRITORY_PRESET_MSNP a,
   CV_RESTATED_MSNP b
   LEFT OUTER JOIN
   MONTH_D c
   ON b.YEARMO = c.CURR_YEARMO,
   PRODUCT_OFFERING d
WHERE   b.PO_ID = d.ROW_ID
    AND b.DATASOURCE_ID = 10
    AND b.YEARMO = 201704
    AND b.OWNER_TERR_ID = a.TERR_ID
    AND c.CURR_YEARMO = 201704
    AND a.YEARMO = 201706
GROUP BY c.CURR_YEARMO, a.LEVEL2_ENAME
ORDER BY C3, C2;

If I remove the ORDER BY clause it returns results in 1 second.

Before saying anything else, I’ll just make a couple of points about the SQL:

  • It’s not a good idea to mix traditional Oracle syntax with “ANSI” syntax – it’s likely to make things harder for the next person to read the code and there’s just a slight possibility that the rewrite that Oracle applies to hide the ANSI syntax may block some of the possible execution paths.
  • The C3, C2 in the order by clause are the column aliases for the curr_yearmo, level2_ename columns used in the group by clause. Although Oracle allows you to use aliases in the order by (but not in the group by), doing so can only make the SQL a little harder to interpret, especially in a case like this when you have both clauses (see the short example after this list).
  • There’s a left outer join to month_d (aliased as c), but the where clause then includes the predicate c.CURR_YEARMO = 201704 which will eliminate any rows where curr_yearmo is null, thus converting (we hope – but the mix-and-match syntax might introduce a side-effect) the outer join to an inner join – so maybe that’s a design error in the SQL.
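
To illustrate the aliasing point, here’s a minimal sketch (t is a hypothetical table with an owner column): the alias is legal in the order by but would raise ORA-00904 if used in the group by:

select
        owner c2, count(*) c1
from
        t
group by
        owner           -- "group by c2" here would raise ORA-00904
order by
        c2              -- the alias is allowed here
;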

Addressing the question, though, the first thought (rapidly eliminated) is that perhaps this is the standard complaint of the GUI interface: “it’s fast until I add an order by clause”.

The commonest reason for this complaint is that the typical GUI interface shows you the first few rows and waits for you to page down, so your impression of the response time is “the time to see the first few rows” rather than “the time to get the complete result set” when it might take much more time to return the entire result set. When you add an order by clause it’s possible that Oracle will have to sort the entire result set before you see any of it. It’s often the difference between “first few rows” and “entire result set” that triggers the complaint.

In this case the “rapid elimination” of this thought is due to the OP saying the result set was only 18 rows, which is likely to produce the gut feeling that it shouldn’t take very long for Oracle to sort 18 rows if it had to find all of them before displaying them. On the other hand the thought might need a little follow-up, after all:

  • it’s possible that the GUI is only displaying 15 rows at a time and it takes a lot of time to find the extra 3 rows. Just think of a tablescan with a filter subquery when the rows you want are the first few in the table. Without an order by the rows can be displayed as they are found; with an order by Oracle will have to get to the end of the tablescan before the rows can be sorted and displayed.
  • the optimizer can produce terrible estimates and the order by clause might prompt it to say “if I start with a different table, driving through a specific index, and changing the join order then I won’t have to do any sorting for the order by clause”. The resulting path may be a very bad idea if the arithmetic produces the wrong results.

The OP hasn’t shown us the execution plan – and that’s what we really need to see; but there is an alternative guess that we could make about what the optimizer is doing that would affect the performance this much.

The query is an aggregate query – we have a group by. Since 10g Oracle has been able to use “hash aggregation” – which shows up as the HASH GROUP BY operation in an execution plan. Here’s a little demo script, with a couple of sample queries:

rem
rem     Script:         sort_hash.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2012
rem 

create table t1
as
with generator as (
        select
                rownum id 
        from dual 
        connect by 
                level <= 1e4  -- > comment to avoid wordpress format issue
)
select
        trunc(dbms_random.value(0,262144))      n_256K,
        trunc(dbms_random.value(0,131072))      n_128K,
        trunc(dbms_random.value(0,8192))        n_8k
from
        generator       v1,
        generator       v2
where
        rownum <= 8 * 1048576  -- > comment to avoid wordpress format issue
;

set arraysize 1000
set timing on
set autotrace traceonly 

prompt  ===========
prompt  No Order by
prompt  ===========

select
        n_8K, count(*) ct
from
        t1
group by
        n_8k
;


prompt  =============
prompt  With Order by
prompt  =============

select
        n_8K, count(*) ct
from
        t1
group by
        n_8k
order by
        1
;


My table has 8M rows, and my queries target the column with 8K distinct values. I’ve enabled autotrace from SQL*Plus, set a large arraysize (to reduce time lost to SQL*Net round-trips), and set timing on so we can get an elapsed time for the total execution. I’ve set autotrace to “traceonly” so that the SQL*Plus client will fetch the data but won’t waste resources formatting it; I’m not actually interested in the handful of execution statistics that will be reported.

Here are the two sets of results from a test run on 19.11.0.0. Note, particularly, the figure for Elapsed:


===========
No Order by
===========

8192 rows selected.

Elapsed: 00:00:00.58

Execution Plan
----------------------------------------------------------
Plan hash value: 136660032

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  8388K|    32M|  3460  (21)| 00:00:01 |
---------------------------------------------------------------------------

=============
With Order by
=============

8192 rows selected.

Elapsed: 00:00:03.03

Execution Plan
----------------------------------------------------------
Plan hash value: 3946799371

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   1 |  SORT GROUP BY     |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  8388K|    32M|  3460  (21)| 00:00:01 |
---------------------------------------------------------------------------

The time has jumped from slightly under 0.6 seconds to just over 3 seconds as the critical operation changes from a HASH GROUP BY to a SORT GROUP BY (even though the estimated cost, hence predicted run-time, of execution has not changed).

Your first thought at this point is probably along the lines of “surely it doesn’t take 2.4 seconds to sort 8,192 small rows, why doesn’t Oracle do a hash group by followed by a sort order by?” The answer seems to be “it just doesn’t”. So here’s one way to make it happen (with execution plan and elapsed time from 19.11.0.0 again):

select
        dist_8k, ct
from
        (
        select  /*+ no_merge */
                n_8K dist_8k, count(*) ct
        from
                t1
        group by
                n_8k
        )
order by 
        dist_8k
;

8192 rows selected.

Elapsed: 00:00:00.59

Execution Plan
----------------------------------------------------------
Plan hash value: 1705136228

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  8192 |   208K|  6938  (61)| 00:00:01 |
|   1 |  SORT ORDER BY       |      |  8192 |   208K|  6938  (61)| 00:00:01 |
|   2 |   VIEW               |      |  8192 |   208K|  6938  (61)| 00:00:01 |
|   3 |    HASH GROUP BY     |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| T1   |  8388K|    32M|  3460  (21)| 00:00:01 |
-----------------------------------------------------------------------------


Again the estimated cost of execution doesn’t (to the nearest whole number) change from the basic cost of the hash aggregation – but we have brought the time back down to just under 0.6 seconds.

It’s worth reminding you at this point that if you can re-engineer any SQL that’s performing badly and can see that the driving core of the query can be reduced to something much simpler and more efficient, then wrapping that core into an inline view with the /*+ no_merge */ hint (and possibly putting it up into a “with subquery” clause) might be the safest first step and most effective way of improving performance.
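
For reference, here’s a sketch of the “with subquery” variant of the same rewrite (with the caveat that recent versions may inline the factored subquery, so you might need a materialize hint, or a no_merge hint naming the query block, to preserve the effect; test it on your version):

with agg as (
        select  /*+ no_merge */
                n_8K dist_8k, count(*) ct
        from
                t1
        group by
                n_8k
)
select
        dist_8k, ct
from
        agg
order by
        dist_8k
;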

There is an option for avoiding the query rewrite here – hint the path you want to see:


select  /*+ use_hash_aggregation */
        n_8K, count(*) ct
from
        t1
group by
        n_8k
order by 
        1
;

8192 rows selected.

Elapsed: 00:00:00.59

Execution Plan
----------------------------------------------------------
Plan hash value: 2808104874

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   1 |  SORT ORDER BY      |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   2 |   HASH GROUP BY     |      |  8192 | 32768 |  6938  (61)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T1   |  8388K|    32M|  3460  (21)| 00:00:01 |
----------------------------------------------------------------------------

The nice thing about this, of course, is that you don’t actually have to edit the text; the hint could be attached to the query through an SQL Patch (or by abusing the SQL Profile or SQL Plan Baseline mechanisms).

The difficult part of hinting is finding the correct query block name for a more complex query. I simply added the hint /*+ use_hash_aggregation */ but the hint can be aimed at a query block so, in the absence of explicit query block names I could have used the hint /*+ use_hash_aggregation(@sel$1) */ using the default naming.

In a more complex case you can find the appropriate query block name by using the ‘alias’ format option when generating the execution plan. Consider the following query (where t2 and t3 are created from view all_objects), with its initial execution plan:


explain plan for
select 
        t2.owner, count(*)
from 
        t2 
where 
        t2.object_id in (
                select  t3.object_id 
                from    t3 
                where   t3.object_type = 'TABLE'
        )
group by 
        t2.owner
order by 
        t2.owner
/


select * from table(dbms_xplan.display(format=>'alias'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2646727453

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |    23 |   575 |   319   (8)| 00:00:01 |
|   1 |  SORT GROUP BY        |      |    23 |   575 |   319   (8)| 00:00:01 |
|*  2 |   HASH JOIN RIGHT SEMI|      |  2298 | 57450 |   317   (7)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL  | T3   |  2298 | 34470 |   158   (7)| 00:00:01 |
|   4 |    TABLE ACCESS FULL  | T2   | 57448 |   561K|   156   (6)| 00:00:01 |
------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$5DA710D3
   3 - SEL$5DA710D3 / T3@SEL$2
   4 - SEL$5DA710D3 / T2@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T2"."OBJECT_ID"="T3"."OBJECT_ID")
   3 - filter("T3"."OBJECT_TYPE"='TABLE')

The Query Block Name / Object Alias information tells us that the query block holding the sort group by at operation 1 is named SEL$5DA710D3 so we can use that as the target query block in the hint: /*+ use_hash_aggregation(@SEL$5DA710D3) */ and the plan changes to:


-------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |    23 |   575 |   319   (8)| 00:00:01 |
|   1 |  SORT ORDER BY         |      |    23 |   575 |   319   (8)| 00:00:01 |
|   2 |   HASH GROUP BY        |      |    23 |   575 |   319   (8)| 00:00:01 |
|*  3 |    HASH JOIN RIGHT SEMI|      |  2298 | 57450 |   317   (7)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | T3   |  2298 | 34470 |   158   (7)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | T2   | 57448 |   561K|   156   (6)| 00:00:01 |
-------------------------------------------------------------------------------

It’s possible that you’ll only notice a significant difference in performance (and, perhaps, PGA memory allocated) when you aggregate a large number of rows into a small result set. And it does seem that this is one case where the only way to get the plan you want without a significant code rewrite is through a hint.

tl;dr

If you have some code which does a “group by X, Y, Z order by X, Y, Z” (the order of the columns/expressions used doesn’t have to be the same for the two clauses) then Oracle will use a sort group by operation to handle the aggregation and ordering in a single step, even though we can find cases where hash aggregation followed by sort ordering is more efficient.

If you come across such a case then injecting the hint /*+ use_hash_aggregation(@query_block_name) */ may be the only way to change the execution plan if you’re not allowed to edit the SQL.

Footnote

In the second of the two links to the Oracle Developer Forum you’ll see that one poster pointed out that if the order by clause uses a suitable expression to substitute for one of the columns in the group by clause then you don’t need to hint the code, e.g.

group by
        numeric_column
order by
        numeric_column + 0

Very cute, but not a good idea.

Footnote 2

There is another part to the testing I started in 2007, and I’ll come back to that later in Hash Aggregation – 2.
