Feed aggregator


Jonathan Lewis - Wed, 2019-08-21 10:49

This note is a short follow-up to a note I wrote some time ago about validating foreign key constraints where I examined the type of SQL Oracle generates internally to do the validation between parent and child tables.  In that article I suggested (before testing) that you could create an SQL patch for the generated SQL to over-ride the plan taken by Oracle – a plan dictated to some extent by hints (including a “deprecated” ordered hint) embedded in the code. I did say that the strategy might not work for SQL optimised by SYS, but it turned out that it did.

Here’s a little script I ran to test a few variations on the theme:

declare
        v1      varchar2(128);
begin
        v1 :=   dbms_sqldiag.create_sql_patch(
                        sql_id    => 'g2z10tbxyz6b0',
                        name      => 'validate_fk',
                        hint_text => 'ignore_optim_embedded_hints'
--                      hint_text => 'parallel(a@sel$1 8)'      -- worked
--                      hint_text => 'parallel(8)'              -- worked
--                      hint_text => q'{opt_param('_fast_full_scan_enabled' 'false')}'  -- worked
                );
end;
/

I’ve tested this on a couple of recent releases; for earlier versions of Oracle, and depending on what patches you’ve applied, you may need to modify the code.

The SQL_ID represents the query for my specific tables, of course, so you will have to do a test run to find the query and SQL_ID for the validation you want to do. This is what the statement for my parent/child pair looked like (cosmetically adjusted):

select /*+ all_rows ordered dynamic_sampling(2) */ 
        A.rowid, :1, :2, :3
from
        "TEST_USER"."CHILD" A , 
        "TEST_USER"."PARENT" B 
where   ("A"."OBJECT_ID" is not null) 
and     ("B"."OBJECT_ID"(+) = "A"."OBJECT_ID")
and     ("B"."OBJECT_ID" is null)

The patch that the script creates simply tells Oracle to ignore the embedded hints (in particular I don’t want that ordered hint), but I’ve left a few other options in the text, commenting them out.

Without the patch I got the following plan:

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  NESTED LOOPS ANTI (cr=399 pr=279 pw=0 time=47801 us starts=1 cost=70 size=22000 card=1000)
    100000     100000     100000   INDEX FAST FULL SCAN CHI_FK_PAR (cr=250 pr=247 pw=0 time=19943 us starts=1 cost=32 size=1700000 card=100000)(object id 73191)
     10000      10000      10000   INDEX UNIQUE SCAN PAR_PK (cr=149 pr=32 pw=0 time=3968 us starts=10000 cost=0 size=49995 card=9999)(object id 73189)

Rerunning the validation test with the patch in place I got the following plan – clearly the patch had had an effect.

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  HASH JOIN RIGHT ANTI (cr=246 pr=242 pw=0 time=96212 us starts=1 cost=39 size=22000 card=1000)
     10000      10000      10000   INDEX FAST FULL SCAN PAR_PK (cr=24 pr=23 pw=0 time=1599 us starts=1 cost=4 size=50000 card=10000)(object id 73235)
    100000     100000     100000   INDEX FAST FULL SCAN CHI_FK_PAR (cr=222 pr=219 pw=0 time=27553 us starts=1 cost=32 size=1700000 card=100000)(object id 73237)

Don’t worry too much about the fact that in my tiny example, with a very new, nicely structured data set, the original plan was a little faster. In a production environment, creating a hash table from the parent keys and probing it with the child keys may reduce the CPU usage and random I/O quite dramatically.

Bear in mind that the best possible plan may depend on many factors, such as the number of child rows per parent and the degree to which the parent and child keys arrive in sorted (or random) order. Remember, too, that Oracle gets a little clever with the original anti-join (note that there are only 10,000 probes for 100,000 child rows – there’s an effect similar to scalar subquery caching going on there), so trying to patch the plan the same way for every parent/child pair may not be the best strategy.

If you want to drop the patch after playing around with this example, a call to dbms_sqldiag.drop_sql_patch(name=>'validate_fk') will suffice.


Useful Linux commands for an Oracle DBA

Yann Neuhaus - Wed, 2019-08-21 09:00

Oracle & Linux is a great duet: very powerful, very scriptable. Here are several commands that make my life easier. These tools seem to be widespread on most Linux distributions.

watch with diff

It has been one of my favorite tools for a long time. watch repeats a command indefinitely until you stop it with Ctrl+C. And it’s even more useful with the --diff parameter: all the differences since the last run are highlighted. For example, if you want to monitor a running backup, try this:

watch -n 60 --diff 'sqlplus -s /nolog @check_backup; echo ; du -hs /backup'

The check_backup.sql being:

conn / as sysdba
set feedback off
set lines 150
set pages 100
col status for a30
alter session set NLS_DATE_FORMAT="DD/MM-HH24:MI:SS";
select  start_time "Start",
        round(input_bytes/1024/1024,1)  "Source MB",
        round(output_bytes/1024/1024,1) "Backup MB",
        input_type "Type",
        status "Status",
        round(elapsed_seconds/60,1) "Min",
        round(compression_ratio,1)  "Ratio"
from    v$rman_backup_job_details
where   start_time >= SYSDATE-1
order by 1 desc;

Every minute (60 seconds), you will check, in the RMAN backup views, the amount of data already backed up, and the amount of data in your backup folder.

Very convenient to keep an eye on things without actually repeating the commands.

Truncate a logfile in one simple command

Oracle generates a lot of logfiles, and some of them can reach several GB and fill up your filesystem. How do you quickly empty a big logfile without removing it? Simply use the true command:

true > listener.log
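To see the effect locally, here is a quick throwaway sketch (demo.log is a made-up file name, not a real Oracle log): the redirection truncates the file to zero bytes while leaving the file itself, and its inode, in place.

```shell
# demo.log is a hypothetical throwaway file, not a real listener log
echo "some old log lines" > demo.log
true > demo.log            # the shell truncates demo.log before running 'true'
wc -c < demo.log           # prints 0 - the file still exists, but is empty
rm -f demo.log
```

In bash a bare `> demo.log` does the same thing; `true` just makes the intent explicit and keeps the line portable.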

Run a SQL script on all the running databases

Do you need to check something on every database running on your system? Or perhaps make the same change to all of them? A single line will do the job:

for a in `ps -ef | grep pmon | grep -v grep | awk '{print $8}' | cut -c 10- | sort`; do . oraenv <<< $a; sqlplus -s / as sysdba @my_script.sql >> output.log; done

Don’t forget to put an exit at the end of your SQL script my_script.sql. Running this script through Ansible will extend its reach even further and save hours of work.
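The name-extraction part of the one-liner can be checked in isolation by feeding it a fabricated ps line (the SID ORCL below is made up): awk picks field 8, which is the process name ora_pmon_<SID>, and cut -c 10- strips the 9-character ora_pmon_ prefix.

```shell
# Fake "ps -ef" output line; field 8 is ora_pmon_<SID>,
# and cut -c 10- removes the "ora_pmon_" prefix (9 characters)
echo "oracle 1234 1 0 10:00 ? 00:00:01 ora_pmon_ORCL" \
  | awk '{print $8}' | cut -c 10-        # prints ORCL
```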

Copy a folder to another server

scp is fine for copying a single file or multiple files inside a folder, but copying a folder recursively to a remote server with scp is more complicated: you basically need a tarfile for that purpose. A clever solution is to use tar without creating any archive on the source server, piping the output straight to the destination server. Very useful and efficient, in just one line:

tar cf - source_folder | ssh oracle@ "cd destination_folder_for_source_folder; tar xf -"

For sure, you will need +rwx on destination_folder_for_source_folder for the oracle user on the destination server.
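The pipe mechanism can be tried locally without any ssh by standing in a subshell for the remote side (the folder and file names below are invented for the demo):

```shell
# Local stand-in for the remote side: a subshell replaces "ssh host"
mkdir -p source_folder dest_folder
echo "hello" > source_folder/file.txt
tar cf - source_folder | (cd dest_folder && tar xf -)
cat dest_folder/source_folder/file.txt   # prints hello
rm -rf source_folder dest_folder
```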

Check the network speed – because you need to check

As an Oracle DBA you probably have to deal with performance: not a problem, it’s part of your job. But are you sure your database system is running at full network speed? You probably didn’t check, but low network speed can be the root cause of some performance issues. This mainly concerns copper-based networks.

Today’s servers handle 10Gb/s Ethernet but can also work at 1Gb/s, depending on the network behind them. Be aware that you can still find 100Mb/s links, for example if the network port of the switch attached to your server has been limited for some reason (perhaps needed for the server that was connected to this port before yours). While 1Gb/s is probably enough for most databases, 100Mb/s is clearly inadequate, and most recent servers will not even handle 100Mb/s correctly. Your Oracle environment may work, but don’t expect a high level of performance, as your databases will have to wait for the network to send packets. Don’t forget that 1Gb/s gives you about 100-120MB/s in real conditions, while 100Mb/s only allows 10-12MB/s, the “Fast Ethernet” of the 90’s…
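The conversion is simply the line rate divided by 8; real throughput is lower because of TCP/IP and Ethernet overhead, which is why 1Gb/s lands around 100-120MB/s rather than the theoretical figure below:

```shell
# Theoretical maximums, ignoring protocol overhead (bits/s -> MB/s)
echo "1Gb/s   = $(( 1000000000 / 8 / 1000000 )) MB/s max"   # 125
echo "100Mb/s = $((  100000000 / 8 / 1000000 )) MB/s max"   # 12 (12.5 truncated)
```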

Checking the network speed is easy, with ethtool.

[root@oda-x6-2 ~]# ethtool btbond1
Settings for btbond1:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 1000Mb/s <= Network speed is OK
Duplex: Full
Port: Other
Transceiver: internal
Auto-negotiation: off
Link detected: yes

In the case of a network bonding interface, please also check the real interfaces associated with the bond: all the network interfaces belonging to the bond need to have the same network speed:

[root@oda-x6-2 ~]# ethtool em1
Settings for em1:
Supported ports: [ TP ]
Supported link modes:   100baseT/Full   <= This network interface physically supports 100Mb/s
                        1000baseT/Full  <= also 1Gb/s
                        10000baseT/Full <= and 10Gb/s
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Advertised link modes: 100baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Speed: 1000Mb/s <= Network speed is 1Gb/s
Duplex: Full
Port: Twisted Pair
Transceiver: external
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes <= This interface is connected to a switch


Hope this helps!

The article Useful Linux commands for an Oracle DBA appeared first on Blog dbi services.

AUSOUG Connect 2019 Conference Series

Richard Foote - Wed, 2019-08-21 03:26
  AUSOUG will again be running their excellent CONNECT 2019 conference series this year at the following great venues: Monday 14th October – Rendezvous Hotel in Melbourne; Wednesday 16th October – Mercure Hotel in Perth. As usual, there’s a wonderful lineup of speakers from both Australia and overseas including: Connor McDonald, Scott Wesley, Guy Harrison […]
Categories: DBA Blogs

Join View

Jonathan Lewis - Tue, 2019-08-20 06:39

It’s strange how one thing leads to another when you’re trying to check some silly little detail. This morning I wanted to find a note I’d written about the merge command and “stable sets”, and got to a draft about updatable join views that I’d started in 2016 in response to a question on OTN (as it was at the time), which finally led me to a model I’d written in 2008 showing that the manuals were wrong.

Since the manual – even the 19c manual – is still wrong regarding the “Delete Rule” for updatable (modifiable) join views I thought I’d quickly finish off the draft and post the 2008 script. Here’s what the manual says about deleting from join views (my emphasis on “exactly”):

Rows from a join view can be deleted as long as there is exactly one key-preserved table in the join. The key preserved table can be repeated in the FROM clause. If the view is defined with the WITH CHECK OPTION clause and the key preserved table is repeated, then the rows cannot be deleted from the view.

But here’s a simple piece of code to model a delete from a join view that breaks the rule:

rem     Script:         delete_join.sql 
rem     Dated:          Dec 2008
rem     Author:         J P Lewis

create table source as
select  level n1
from    dual
connect by level <= 10
;

create table search as
select  level n1
from    dual
connect by level <= 10
;

alter table source modify n1 not null;
alter table search modify n1 not null;

create unique index search_idx on search(n1);
-- create unique index source_idx on source(n1);

I’ve set up a “source” and a “search” table with 10 rows each and the option for creating unique indexes on each table for a column that’s declared non-null. Initially, though, I’ve only created the index on search to see what happens when I run a couple of “join view” deletes using “ANSI” syntax.

prompt  ===============================
prompt  Source referenced first in ANSI
prompt  ===============================

delete from (select * from source s join search s1 on s.n1 = s1.n1);
select count(1) source_count from source;
select count(1) search_count from search;
prompt  ===============================
prompt  Search referenced first in ANSI
prompt  ===============================

delete from (select * from search s join source s1 on s.n1 = s1.n1);
select count(1) source_count from source;
select count(1) search_count from search;

With just one of the two unique indexes in place, the order of the tables in the inline view makes no difference to the outcome. Thanks to the unique index on search, any row in the inline view corresponds to exactly one row in the source table, while a single row in the search table could end up appearing in many rows of the view – so the delete implicitly has to operate as “delete from source”. So both deletes result in the source_count being zero, and the search_count remaining at 10.

If we now repeat the experiment but create BOTH unique indexes, both source and search will be key-preserved in the join. According to the manual the delete should produce some sort of error. In fact the delete works in both cases – but the order that the tables appear makes a difference. When source is the first table in the in-line view the source_count drops to zero and the search_count stays at 10; when search is the first table in the in-line view the search_count drops to zero and the source_count stays at 10.

I wouldn’t call this totally unreasonable – but it’s something you need to know if you’re going to use the method, and something you need to document very carefully in case someone editing your code at a later date (or deciding that they could add a unique index) doesn’t realise the significance of the table order.

This does lead on to another important test – is it the order in which the tables appear in the from clause that matters, or the order in which they appear in the join order that Oracle uses to optimise the query? (We hope – and expect – that it’s the join order as written, not the join order as optimised, otherwise the effect of the delete could change from day to day as the optimizer chose different execution plans!) To confirm my expectation I switched to traditional Oracle syntax with hints (still with unique indexes on both tables), writing a query with search as the first table in the from clause but hinting the inline view to vary the optimised join order.

prompt  ============================================
prompt  Source hinted as leading table in join order 
prompt  ============================================

delete from (
        select
                /*+ leading(s1, s) */
                *
        from
                search s,
                source s1 
        where
                s.n1 = s1.n1
);

select count(1) source_count from source; 
select count(1) search_count from search;

prompt  ============================================
prompt  Search hinted as leading table in join order 
prompt  ============================================

delete from (
        select
                /*+ leading(s, s1) */
                *
        from
                search s,
                source s1 
        where
                s.n1 = s1.n1
);

select count(1) source_count from source; 
select count(1) search_count from search;

In both cases the rows were deleted from search (the first table in the from clause). And, to answer the question you should be asking, I did check the execution plans to make sure that the hints had been effective:

Source hinted as leading table in join order

| Id  | Operation           | Name       | Rows  | Bytes | Cost  |
|   0 | DELETE STATEMENT    |            |    10 |    60 |     1 |
|   1 |  DELETE             | SEARCH     |       |       |       |
|   2 |   NESTED LOOPS      |            |    10 |    60 |     1 |
|   3 |    INDEX FULL SCAN  | SOURCE_IDX |    10 |    30 |     1 |
|*  4 |    INDEX UNIQUE SCAN| SEARCH_IDX |     1 |     3 |       |

Search hinted as leading table in join order

| Id  | Operation           | Name       | Rows  | Bytes | Cost  |
|   0 | DELETE STATEMENT    |            |    10 |    60 |     1 |
|   1 |  DELETE             | SEARCH     |       |       |       |
|   2 |   NESTED LOOPS      |            |    10 |    60 |     1 |
|   3 |    INDEX FULL SCAN  | SEARCH_IDX |    10 |    30 |     1 |
|*  4 |    INDEX UNIQUE SCAN| SOURCE_IDX |     1 |     3 |       |


Using updatable join views to handle deletes can be very efficient but the manual’s statement of the “Delete Rule” is incorrect. It is possible to have several key-preserved tables in the view that you’re using, and if that’s the case you need to play safe and ensure that the table you want to delete from is the first table in the from clause. This means taking steps to eliminate the risk of someone editing some code at a later date without realising the importance of the table order.


Upgrading from OpenLeap to SLES15

Yann Neuhaus - Mon, 2019-08-19 13:58

Sometimes business plans change, and maybe you need to move your openSUSE Leap 15 server to the supported SUSE Linux Enterprise Server 15. The upgrade has become really easy with version 15: it can be performed online, so your server does not need to be offline during the upgrade.

So let’s have a look at the upgrade.

First of all, you need a SUSE subscription. We will help you with this; just send us a message.
As soon as you have it, you can go on with the upgrade.

Let’s start by checking the current version running on the server.

openleap:~ $ cat /etc/os-release
NAME="openSUSE Leap"
ID_LIKE="suse opensuse"
PRETTY_NAME="openSUSE Leap 15.0"

Now we can install SUSEConnect, so we can register the system in the next step.

openleap:~ $ zypper in SUSEConnect
Retrieving repository 'openSUSE-Leap-15.0-Update' metadata ..............................................................................................................................................................[done]
Building repository 'openSUSE-Leap-15.0-Update' cache ...................................................................................................................................................................[done]
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 3 NEW packages are going to be installed:
  SUSEConnect rollback-helper zypper-migration-plugin

3 new packages to install.
Overall download size: 138.9 KiB. Already cached: 0 B. After the operation, additional 213.9 KiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package SUSEConnect-0.3.17-lp150.2.14.1.x86_64                                                                                                                                 (1/3), 100.9 KiB (176.3 KiB unpacked)
Retrieving: SUSEConnect-0.3.17-lp150.2.14.1.x86_64.rpm ..................................................................................................................................................................[done]
Retrieving package rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch                                                                                                             (2/3),  22.6 KiB ( 19.9 KiB unpacked)
Retrieving: rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch.rpm ..................................................................................................................................[done (7.9 KiB/s)]
Retrieving package zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch                                                                                                       (3/3),  15.5 KiB ( 17.6 KiB unpacked)
Retrieving: zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch.rpm ..............................................................................................................................[done (253 B/s)]
Checking for file conflicts: ............................................................................................................................................................................................[done]
(1/3) Installing: SUSEConnect-0.3.17-lp150.2.14.1.x86_64 ................................................................................................................................................................[done]
(2/3) Installing: rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch ............................................................................................................................................[done]
(3/3) Installing: zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch ......................................................................................................................................[done]
openleap:~ # 

Register your system at the SUSE Customer Center so you get full access to the repositories. This step is mandatory; otherwise it’s not possible to upgrade.

openleap:~ $ SUSEConnect -r REGISTRATION_CODE -p SLES/15/x86_64
Registering system to SUSE Customer Center

Announcing system to https://scc.suse.com ...

Activating SLES 15 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system

Now check for the available extensions and the command to activate them using SUSEConnect:

openleap:~ $ SUSEConnect --list-extensions

    Basesystem Module 15 x86_64
    Activate with: SUSEConnect -p sle-module-basesystem/15/x86_64

        Containers Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-containers/15/x86_64

        Desktop Applications Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-desktop-applications/15/x86_64

            Development Tools Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-development-tools/15/x86_64

            SUSE Linux Enterprise Workstation Extension 15 x86_64
            Activate with: SUSEConnect -p sle-we/15/x86_64 -r ADDITIONAL REGCODE

        SUSE Cloud Application Platform Tools Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-cap-tools/15/x86_64

        SUSE Linux Enterprise Live Patching 15 x86_64
        Activate with: SUSEConnect -p sle-module-live-patching/15/x86_64 -r ADDITIONAL REGCODE

        SUSE Package Hub 15 x86_64
        Activate with: SUSEConnect -p PackageHub/15/x86_64

        Server Applications Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-server-applications/15/x86_64

            Legacy Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-legacy/15/x86_64

            Public Cloud Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-public-cloud/15/x86_64

            SUSE Linux Enterprise High Availability Extension 15 x86_64
            Activate with: SUSEConnect -p sle-ha/15/x86_64 -r ADDITIONAL REGCODE

            Web and Scripting Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-web-scripting/15/x86_64


(Not available) The module/extension is not enabled on your RMT/SMT
(Activated)     The module/extension is activated on your system


You can find more information about the available modules in the SUSE documentation.

In case you need more modules, you can add any of them now. Please keep in mind that the extensions require a separate subscription. For my needs the base module is enough.

openleap:~ $ SUSEConnect -p sle-module-basesystem/15/x86_64
Registering system to SUSE Customer Center

Updating system details on https://scc.suse.com ...

Activating sle-module-basesystem 15 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system

The next step is the upgrade itself. As the output is quite huge, I put some [***] in as placeholders.

openleap:~ $ zypper dup --force-resolution
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Refreshing service 'Basesystem_Module_15_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_x86_64'.
Loading repository data...
Warning: Repository 'openSUSE-Leap-15.0-Update-Non-Oss' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Computing distribution upgrade...

The following 11 NEW packages are going to be installed:
  dejavu-fonts glibc-locale-base google-opensans-fonts issue-generator kernel-default-4.12.14-lp150.12.67.1 man-pages man-pages-posix release-notes-sles rpcgen yast2-vm zypper-search-packages-plugin

The following 286 packages are going to be upgraded:
  NetworkManager NetworkManager-lang PackageKit PackageKit-backend-zypp PackageKit-gstreamer-plugin PackageKit-gtk3-module PackageKit-lang aaa_base aaa_base-extras apparmor-abstractions 
  yast2-storage-ng yast2-users

The following 288 packages have no support information from their vendor:
  NetworkManager NetworkManager-lang PackageKit PackageKit-backend-zypp PackageKit-gstreamer-plugin PackageKit-gtk3-module PackageKit-lang aaa_base aaa_base-extras apparmor-abstractions apparmor-docs apparmor-parser

The following package is not supported by its vendor:

286 packages to upgrade, 11 new.
Overall download size: 322.4 MiB. Already cached: 0 B. After the operation, additional 343.9 MiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package issue-generator-1.6-1.1.noarch                                                                                                                                       (1/297),  28.0 KiB ( 25.6 KiB unpacked)
Retrieving: issue-generator-1.6-1.1.noarch.rpm ..........................................................................................................................................................................[done]
Retrieving package man-pages-4.16-3.3.1.noarch                                                                                                                                          (2/297),   

Executing %posttrans scripts ............................................................................................................................................................................................[done]
There are some running programs that might use files deleted by recent upgrade. You may wish to check and restart some of them. Run 'zypper ps -s' to list these programs.

Optionally, disable the openSUSE Leap repositories to prevent warnings when using zypper.

openleap:~ $ zypper lr -d
Repository priorities are without effect. All enabled repositories share the same priority.

#  | Alias                                                                       | Name                                      | Enabled | GPG Check | Refresh | Priority | Type   | URI                                                                                                                                                                                                                                                        | Service       
 1 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Debuginfo-Pool          | SLE-Module-Basesystem15-Debuginfo-Pool    | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product_debug?8YsR5pv4h6qQr15qW8KWqXRBK0MDd9EONPOcnYjrQyXxeU4PVhIX5FRdwf5ziU1Oa8rdtuE2W4NyVotHhKeQrdvQMM9OQ3sEllMJno1VxgQEPq-1QyaCv24cSZsg2H21-d3hQqkxXD3iUKRgNTqHGtkRHHCN71yMa28   | Basesystem_Module_15_x86_64
 2 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Debuginfo-Updates       | SLE-Module-Basesystem15-Debuginfo-Updates | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Updates/SLE-Module-Basesystem/15/x86_64/update_debug?jjKAgTm0AAAAAAAAq_jTGwRAkx4zc8EQV0ANMjmrFjIoJBofX8ETJPW9qS9ojjVsnoDNK1TRGjk5t31J0Y9Bv_KRzpdYdJVmoH_gO-WaIo-dsZHiDXUm9fjYvLJcjsm0TidUzPnNkAqDAQsPZGZUUCXrek3JjRZl        | Basesystem_Module_15_x86_64
 3 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Pool                    | SLE-Module-Basesystem15-Pool              | Yes     | (r ) Yes  | No      |   99     | rpm-md | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product?MbepfbRQy5WToAHi4xjhC2KOqjwW00ax8Xj23W9iMukhhWz78BKVY5sSDHiT4nurfz1JyHJrqcqpZiJU-PdajPthp3lQx4hyu-5FzifML0ALTTvKY6XEYA7qlwbn0E6fmA_iSbMl2JOWvZDpeQUZtMlCjQI                 | Basesystem_Module_15_x86_64
 4 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Source-Pool             | SLE-Module-Basesystem15-Source-Pool       | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product_source?86sSfrO8KT3dMsapcn4ihtYRbSwy2kunffEZ6oUiH-vBC-0IkEZQPniCPn63-DeOwlX9brw3vR-BqMNjC9KiOAq0JR0aHZUcyHP5sGhjitLFGTx9zUYo3F4u0KNC3rqIq2WGq-kZEhLm1s2U-vVJHpr6x5RWmMjuBDAe | Basesystem_Module_15_x86_64
 5 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Updates                 | SLE-Module-Basesystem15-Updates           | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | https://updates.suse.com/SUSE/Updates/SLE-Module-Basesystem/15/x86_64/update?WzCCey-NrSLfBHonPxWuaTXt1QuGMemPZsFEhtMfDC_jKtn5XUsqbdI8JZ9D6YNveeYrthpKY2uLTOIB_vtbMQsQUblAr2dU4D59yIBIjZv1l91CLeZD2z61oLPc7ad0UkZjl9R_e6bSNAGP8oz94Fp5                      | Basesystem_Module_15_x86_64
 6 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Debuginfo-Pool    | SLE-Product-SLES15-Debuginfo-Pool         | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product_debug?xtsT1GSwugZaHDGElBaTczgwJS79hgJDOy_tkzInodgbplBttQlatgP5rI0SnLQqLCw5WsfSqBIyN_tnMVZn4ZLJ3S3ENBDiZsYhg0vGZf7ILMix03bcXoHEKlzAYRntcEIx877RvS7DDHAAR4cj1V5gzcu6               | SUSE_Linux_Enterprise_Server_15_x86_64
 7 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Debuginfo-Updates | SLE-Product-SLES15-Debuginfo-Updates      | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Updates/SLE-Product-SLES/15/x86_64/update_debug?tkJ9rVV33hinQtEBnPYH_5D8OCs1ZtB4WEQFAShIaq1yN6Lwg2-W2Zu2AFALp5Jk3Oh1g1XVBqEOSPnSgACvcCIWuXr_cRfirUHEwbNqIcaSwcjxGjJYdhsb97t01_X-LPT0FDiGGezP64HheC_CzdV6xA                   | SUSE_Linux_Enterprise_Server_15_x86_64
 8 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Pool              | SLE-Product-SLES15-Pool                   | Yes     | (r ) Yes  | No      |   99     | rpm-md | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product?887kGBgH3AfONFY1X3wVkuYn_5nm8sTKex06X1JSRI9gXQNqJioSBea5sAECwbVhqs510L3YRdVlVLgsavZ9D8PPplk8S_oEvhWEQdS-jfFH9dTKcukF09RkjliWQkcaNHkFzY4uQWbHzXJYekkn                             | SUSE_Linux_Enterprise_Server_15_x86_64
 9 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Source-Pool       | SLE-Product-SLES15-Source-Pool            | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product_source?XhlzrvfoPp1qTZqv1hErqkUwBGOoZMYY4RAS-c78IKoacswAmOXTemuxa8ZiAFfopgedlQfewbcC7_gxUERoKGdlcW7E4WaqpcuSDYh-xlJr2SG9-4OuxPDToPfZ1CgvDDZIAlqIyXDKGcwvl3EjALH9msDNHg            | SUSE_Linux_Enterprise_Server_15_x86_64
10 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Updates           | SLE-Product-SLES15-Updates                | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | https://updates.suse.com/SUSE/Updates/SLE-Product-SLES/15/x86_64/update?j0Qh2SfH11scgFCBmZI3R9B4GMJWAh5l6C0P7_jtUle_3dAATzJ2wwwo3SR_dOpn4bBYL4wSkD9bMuCRJlzcmWSkeh1W06Rz8Jsq1KysLODXqUtsBgeE5Tju1Pf-XTpNJF1RQMRRRmb_Tj8RPA                                 | SUSE_Linux_Enterprise_Server_15_x86_64
11 | openSUSE-Leap-15.0-1                                                        | openSUSE-Leap-15.0-1                      | No      | ----      | ----    |   99     | rpm-md | cd:///?devices=/dev/disk/by-id/ata-VBOX_CD-ROM_VB0-01f003f6                                                                                                                                                                                                |               
12 | repo-debug                                                                  | openSUSE-Leap-15.0-Debug                  | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/distribution/leap/15.0/repo/oss/                                                                                                                                                                                        |               
13 | repo-debug-non-oss                                                          | openSUSE-Leap-15.0-Debug-Non-Oss          | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                    |               
14 | repo-debug-update                                                           | openSUSE-Leap-15.0-Update-Debug           | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/update/leap/15.0/oss/                                                                                                                                                                                                   |               
15 | repo-debug-update-non-oss                                                   | openSUSE-Leap-15.0-Update-Debug-Non-Oss   | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/update/leap/15.0/non-oss/                                                                                                                                                                                               |               
16 | repo-non-oss                                                                | openSUSE-Leap-15.0-Non-Oss                | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                          |               
17 | repo-oss                                                                    | openSUSE-Leap-15.0-Oss                    | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.0/repo/oss/                                                                                                                                                                                              |               
18 | repo-source                                                                 | openSUSE-Leap-15.0-Source                 | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/source/distribution/leap/15.0/repo/oss/                                                                                                                                                                                       |               
19 | repo-source-non-oss                                                         | openSUSE-Leap-15.0-Source-Non-Oss         | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/source/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                   |               
20 | repo-update                                                                 | openSUSE-Leap-15.0-Update                 | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/update/leap/15.0/oss/                                                                                                                                                                                                         |               
21 | repo-update-non-oss                                                         | openSUSE-Leap-15.0-Update-Non-Oss         | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/update/leap/15.0/non-oss/                                                                                                                                                                                                     |               
openleap:~ # zypper mr -d 21
Repository 'repo-update-non-oss' has been successfully disabled.

Check for orphaned packages.

openleap:~ $ zypper rm $(zypper --no-refresh packages --orphaned | gawk '{print $5}'  | tail -n +5 )
Too few arguments.
At least one package name is required.
remove (rm) [OPTIONS] <CAPABILITY> ...

Remove packages with specified capabilities. A capability is NAME[.ARCH][OP<EDITION>], where OP is
one of <, <=, =, >=, >.

  Command options:

-r, --repo <ALIAS|#|URI>    Load only the specified repository.
-t, --type <TYPE>           Type of package (package, patch, pattern, product).
-n, --name                  Select packages by plain name, not by capability.
-C, --capability            Select packages by capability.
-u, --clean-deps            Automatically remove unneeded dependencies.
-U, --no-clean-deps         No automatic removal of unneeded dependencies.
-D, --dry-run               Test the removal, do not actually remove.
    --details               Show the detailed installation summary.
-y, --no-confirm            Don't require user interaction. Alias for the --non-interactive global option.

  Solver options:

    --debug-solver          Create a solver test case for debugging.
    --force-resolution      Force the solver to find a solution (even an aggressive one) rather than asking.
    --no-force-resolution   Do not force the solver to find solution, let it ask.

openleap:~ $ zypper --no-refresh packages --orphaned
Loading repository data...
Reading installed packages...
No packages found.
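The earlier `zypper rm $(…)` call failed with “Too few arguments.” precisely because the orphan list was empty, so the command substitution produced no package names. A defensive sketch of the same idea (the `list_orphans` function is a stub standing in for the real `zypper … | gawk` pipeline, so the snippet is self-contained and runs anywhere):

```shell
# Guard against an empty orphan list before calling 'zypper rm'.
# list_orphans is a stub standing in for:
#   zypper --no-refresh packages --orphaned | gawk '{print $5}' | tail -n +5
list_orphans() {
  printf '%s' ''    # empty: no orphaned packages on this system
}

orphans=$(list_orphans)
if [ -n "$orphans" ]; then
  echo "removing orphaned packages: $orphans"
  # zypper rm $orphans
else
  echo "no orphaned packages to remove"
fi
```

On a freshly migrated system like the one above the guard simply reports that there is nothing to do.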

My whole migration lasted about 30 minutes, but that’s really a small server.

And in the end – you have to reboot, anyway.

openleap:~ $ systemctl reboot

Let’s check whether we are really running a SLES15 server now.

openleap:~ # cat /etc/os-release
PRETTY_NAME="SUSE Linux Enterprise Server 15"

Looks good! The system is running SLES15 now. Now you can enjoy the full support and service of SUSE.

The article Upgrading from OpenLeap to SLES15 appeared first on Blog dbi services.

Oracle 19c Automatic Indexing: My First Auto Index (Absolute Beginners)

Richard Foote - Mon, 2019-08-19 04:21
I am SOOOO struggling with this nightmare block editor but here goes. Please excuse any formatting issues below: I thought it was time to show the new Oracle 19c Automatic Indexing feature in action and what better way than to go through how I created my first ever Automatic Index. To start, I create a […]
Categories: DBA Blogs

Linux: Configuring hosts per ssh in parallel: pssh

Dietrich Schroff - Fri, 2019-08-16 15:11
If you have to set up several hosts in a way that requires a number of commands to be executed on each node, then you should consider PSSH:


The installation is straightforward:
root@zerberus:~# apt-get install pssh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  btrfs-tools geoip-database-extra libcryptui0a libjs-openlayers seahorse-daemon
Use 'apt autoremove' to remove them.
The following NEW packages will be installed:
  pssh
0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
Need to get 29.0 kB of archives.
After this operation, 135 kB of additional disk space will be used.
Get:1 http://de.archive.ubuntu.com/ubuntu bionic/universe amd64 pssh all 2.3.1-1 [29.0 kB]
Fetched 29.0 kB in 0s (71.0 kB/s)
Selecting previously unselected package pssh.
(Reading database ... 488993 files and directories currently installed.)
Preparing to unpack .../archives/pssh_2.3.1-1_all.deb ...
Unpacking pssh (2.3.1-1) ...
Setting up pssh (2.3.1-1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
But executing it on Ubuntu is a little bit tricky:
If you want to do a test, you have to know that Ubuntu has renamed the binary to parallel-ssh; pssh is not available on the command line:

schroff@zerberus:~$ parallel-ssh -i -H "localhost" ls -l .bashrc
[1] 00:04:48 [SUCCESS]
-rw-r--r-- 1 schroff schroff 3815 Jul 14  2017 .bashrc
[2] 00:04:48 [SUCCESS] localhost
-rw-r--r-- 1 schroff schroff 3815 Jul 14  2017 .bashrc
Please note the syntax highlighting, which I found very helpful.

“No Primary Key item has been defined for form region”

Jeff Kemp - Fri, 2019-08-16 02:30

Otherwise reported as “WWV_FLOW_FORM_REGION.NO_PRIMARY_KEY_ITEM” or merely “Internal error”, this bug caused me to waste hours of investigation, twice, because a simple Google search was not returning any results. This post is merely to remedy that situation.

On an APEX 19.1 page with a new Form region combined with an Interactive Grid region, when the user makes a change to a record and Submits the page, they get this error.


Basic Replication -- 1 : Introduction

Hemant K Chitale - Thu, 2019-08-15 23:24
Basic Replication, starting with Read Only Snapshots, has been available in Oracle since V7. This was doable with the "CREATE SNAPSHOT" command.

In 8i, the term was changed from "Snapshot" to "Materialized View"  and the "CREATE MATERIALIZED VIEW" command was introduced, while "CREATE SNAPSHOT" was still supported.

Just as CREATE SNAPSHOT is still available in 19c,  DBMS_SNAPSHOT.REFRESH is also available.

Not that I recommend that you use CREATE SNAPSHOT and DBMS_SNAPSHOT anymore.  DBAs and Developers should have been using CREATE MATERIALIZED VIEW and DBMS_REFRESH since 8i.

In the next few blog posts (this will be a very short series) I will explore Basic Replication.  Let me know if you want to see it in 11.2 and 12c as well.

Categories: DBA Blogs

2019 Public Appearances (What In The World)

Richard Foote - Wed, 2019-08-14 21:29
I’ll be presenting at a number of Oracle events over the remainder of the year. Details as follows: Oracle Open World – San Francisco (16-19 September 2019)
Categories: DBA Blogs

Find Docker Container IP Address?

DBASolved - Wed, 2019-08-14 11:38

This is just a simple post for later reference, if I need it …

In setting up some docker containers for testing Oracle GoldenGate, I needed to find the IP address of the container where my database was running (I keep my database in a separate container in order not to rebuild it every time).

To find the address of my database container, I had to use the docker “inspect” command. This command returns low-level information on Docker objects.

The syntax is as follows:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
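As a quick illustration of using the command’s output: capture the address in a variable and feed it to whatever needs it. In this sketch the `docker` function is a stub returning a sample address (so the snippet is self-contained), and the container name `oradb` is made up:

```shell
# Stub standing in for a real docker daemon; a real run of 'docker inspect'
# with the -f template above would return the container's network address.
docker() { echo '172.17.0.2'; }

DB_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' oradb)
echo "database container IP: ${DB_IP}"
```

With a real daemon you would drop the stub and the captured address could go straight into a connect string or tnsnames.ora entry.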




Categories: DBA Blogs

Identifying the TNS_ADMIN for a deployment in GoldenGate Microservices

DBASolved - Wed, 2019-08-14 10:30

Setting up network routing/access with Oracle GoldenGate Microservices (12.3 and later) can be an interesting experience. As many in the Oracle space know, you simply have to set TNS_ADMIN to point to the location where your sqlnet.ora and tnsnames.ora files are located. This would normally look like this:

export TNS_ADMIN=${ORACLE_HOME}/network/admin


export TNS_ADMIN=${ORA_CLIENT_HOME}/network/admin


These examples still work for Oracle GoldenGate Classic; however, when we look at this through the lens of Microservices, it changes a bit. Within the Oracle GoldenGate Microservices architecture the TNS_ADMIN environment variable has to be set “per deployment”. Depending on the number of deployments within the architecture, it is possible to have 1+N TNS_ADMIN variables.

As an illustration, it would look something like this:

As you can see this is specific to the Microservices architecture and how to setup network routing for individual deployments.


How do you set the TNS_ADMIN environment variable for each deployment? It is quite simple when you are building a deployment with the Oracle GoldenGate Configuration Assistant (OGGCA). Prior to running OGGCA, you can set the TNS_ADMIN variable at the OS level and OGGCA will pick it up for that run and the configuration of that specific deployment.

Alternatively, you don’t have to set it at the OS level: during the OGGCA walkthrough, you can set the variable manually. OGGCA will not move past the environment variables step until it is provided.

Changing TNS_ADMIN

After building a deployment, you may want to change the location of your network-related files. This can be done either from the HTML5 web page for the deployment or via the REST API.

To change TNS_ADMIN from the HTML5 pages within Oracle GoldenGate Microservices, you need to start at the ServiceManager Overview page. At the bottom of this page, there is a section called “Deployments”.

Then select the deployment you want to work with. After clicking on the deployment name, you should be on the “Deployment Information” page. This page has two tabs at the top: the first tab is related to details of the deployment, the second to configuration of the deployment.

The second tab – Configurations – is where you can set or change the environment variables for the deployment. In this case, we want to modify the TNS_ADMIN environment variable.


To the right of the variable in the “Actions” column, click on the pencil icon. This will allow you to edit the environment variable. Change to the new location and save it. You may need to restart the deployment (hint, that step is on the ServiceManager Overview page).

At this point, you should now be able to change the location of your TNS_ADMIN variable. This is also handy for Oracle GoldenGate Microservices on Marketplace as well … just saying.


This same process can be done quickly using the REST API. The code below is only a sample and has not been tested. Use at your own risk!

curl -X PATCH \
  https://<ip_address>/services/v2/deployments/alpha \
  -H 'cache-control: no-cache' \
  -d '{


Categories: DBA Blogs


Jonathan Lewis - Wed, 2019-08-14 08:20

What happens when you execute dbms_stats.gather_system_stats() with the ‘Exadata’ option?

Here’s what my system stats look like (test results) after doing so. (The code to generate the two different versions is at the end of the note).

System Stats
Timed: 13-Aug-2019 15:00:00 - 13-Aug-2019 15:00:00
CPUSPEED        :
CPUSPEEDNW      :          918
IOSEEKTIM       :           10
IOTFRSPEED      :      204,800
MAXTHR          :
MBRC            :          128
MREADTIM        :
SLAVETHR        :
SREADTIM        :

PL/SQL procedure successfully completed.

MBRC       :          128
CPUSPEEDNW :          918
IOSEEKTIM  :           10
IOTFRSPEED :      204,800
MAXTHR     :

PL/SQL procedure successfully completed.

All the code does is set the MBRC, IOSEEKTIM, and IOTFRSPEED to fixed values and the only real gather is the CPUSPEEDNW. The parameters showing blanks are deliberately set null by the procedure – before I called gather_system_stats() every parameter had a value. You could also check the SQL trace file (with bind capture enabled) to see the statements that deliberately set those parameters to null if you want more proof.

What are the consequences of this call (assuming you haven’t also done something with the calibrate_io() procedure)? Essentially Oracle now has information that says a single (8KB) block read request will take marginally over 10 milliseconds, and a multiblock read request of 1MB will take just over 15 milliseconds: in other words “tablescans are great, don’t use indexes unless they’re really precisely targeted”. To give you a quantitative feel for the numbers: given the choice between doing a tablescan of 1GB to pick 1,500 randomly scattered rows and using a perfect index the optimizer would choose the index.

To explain the time calculations: Oracle has set an I/O seek time of 10 ms, and a transfer rate of 204,800 bytes per ms (200 MB/s), with the guideline that a “typical” multiblock read is going to achieve 128 blocks. So the optimizer believes a single block read will take 10 + 8192/204800 ms = 10.04ms, while a multiblock read request for 1MB will take 10 + 1048576/204800 ms = 15.12 ms.
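Those figures are easy to reproduce from the gathered values. A quick awk check, taking the 8KB block size and 1MB multiblock read size assumed in the note:

```shell
# Recompute the optimizer's I/O time estimates from the 'Exadata' system stats:
# IOSEEKTIM = 10 ms, IOTFRSPEED = 204,800 bytes/ms (200 MB/s).
awk 'BEGIN {
        ioseektim  = 10
        iotfrspeed = 204800
        printf "single block read (8KB): %.2f ms\n", ioseektim + 8192    / iotfrspeed
        printf "multiblock read   (1MB): %.2f ms\n", ioseektim + 1048576 / iotfrspeed
}'
```

This prints 10.04 ms for the single block read and 15.12 ms for the 1MB multiblock read, matching the figures above.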

It’s also important to note that Oracle will use the 128 MBRC value in its calculation of the cost of the tablescan – even if you’ve set the db_file_multiblock_read_count parameter for the session or system to something smaller; and if you have set the db_file_multiblock_read_count that’s the maximum size of multiblock read that the run-time engine will use.


Here are the two procedures I used to report the values above. You will only need the privilege to execute the dbms_stats package for the second one, but you’ll need the privilege to access the SYS table aux_stats$ to use the first. The benefit of the first one is that it can’t go out of date as versions change.

rem     Script:         get_system_stats.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2002
rem     Last tested

set linesize 180
set trimspool on
set pagesize 60
set serveroutput on

spool get_system_stats

declare
        m_value         number;
        m_status        varchar2(64);
        m_start         date;
        m_stop          date;
begin
        for r1 in (
                select  rownum rn, pname
                from    sys.aux_stats$
                where   sname = 'SYSSTATS_MAIN'
        ) loop
                dbms_stats.get_system_stats(m_status, m_start, m_stop, r1.pname, m_value);
                if r1.rn = 1 then
                        dbms_output.put_line('System Stats');
                        dbms_output.put_line('Status: ' || m_status);
                        dbms_output.put_line(
                                'Timed: ' ||
                                to_char(m_start,'dd-Mon-yyyy hh24:mi:ss') ||
                                ' - ' ||
                                to_char(m_stop ,'dd-Mon-yyyy hh24:mi:ss')
                        );
                end if;
                dbms_output.put_line(rpad(r1.pname,15) ||  ' : ' || to_char(m_value,'999,999,999'));
        end loop;
end;
/

declare
        m_value         number;
        m_status        varchar2(64);
        m_start         date;
        m_stop          date;
begin
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MBRC', m_value);
        dbms_output.put_line('MBRC       : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MREADTIM', m_value);
        dbms_output.put_line('MREADTIM   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'SREADTIM', m_value);
        dbms_output.put_line('SREADTIM   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEED', m_value);
        dbms_output.put_line('CPUSPEED   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEEDNW', m_value);
        dbms_output.put_line('CPUSPEEDNW : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOSEEKTIM', m_value);
        dbms_output.put_line('IOSEEKTIM  : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOTFRSPEED', m_value);
        dbms_output.put_line('IOTFRSPEED : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MAXTHR', m_value);
        dbms_output.put_line('MAXTHR     : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'SLAVETHR', m_value);
        dbms_output.put_line('SLAVETHR   : ' || to_char(m_value,'999,999,999'));
end;
/

spool off

Menu Popup with Declarative List

Jeff Kemp - Tue, 2019-08-13 23:04

In the past when I’ve needed to add half a dozen or more buttons to a page, I’ve sometimes encased them in a Collapsible region so that the user can slide them out of the way instead of clogging up the screen. Recently however I’ve started (sparingly) using a Menu Popup, as per this tutorial. The issue I have with this method, however, is that the menu items are defined in a shared component (a List) which means it’s not defined on a per-page basis.

Some of the actions simply need to do a Submit on the page, which is simple enough: set the URL Target to something like:

In other cases, the action needs to do something more specific to the page, e.g. show a region:


Or the action might need to navigate to another page, passing parameters based on specific items on the page. This means the list, defined in Shared Components, now has hardcoded elements that are only useful for that one page; more to the point, they are defined outside of the page – I’d rather that everything specific to a page is defined within that page’s definition.

The approach I’m using now is to use a custom trigger. Each list item has its URL Target set to something like:

The third parameter is set to a unique code that the page can use to identify which menu option was chosen. This parameter will be passed to this.data in the custom trigger’s event handler.

On the page, I have a Dynamic Action with the following attributes:

  • Event: Custom
  • Custom Event: menuAction
  • Selection Type: JavaScript Expression
  • JavaScript Expression: document
  • True Action: Execute JavaScript Code, e.g.:
switch(this.data) {
  case 'OPEN_POPUP':


Note that to do a simple Submit on the page, all I need to do is set the request on the third parameter of the menu item’s URL. If I want to do something slightly different for a particular request, I can put an extra “case” in the JavaScript code to handle it.

The benefit of this approach is that this trigger becomes the jumping-off point for all such menu actions for this page. In theory I could re-use the same List on multiple pages (if the items in the list are generic enough) but have different behaviour occur for each menu item specific to each page. The only challenge with this approach might be if you needed some conditions on each menu item, e.g. so they are shown or hidden in specific circumstances. If the condition for a menu item references a particular page item the List will no longer be generic and re-usable. For this reason, I usually still use a separate List for each menu for each page.

Perhaps in a future release of APEX we will gain the ability to define a List on a Page instead of in Shared Components. In the meantime, if you are interested in all the details on this method (including a solution for implementing a redirect to another page, or to open a modal page), refer to this tip.


Jonathan Lewis - Tue, 2019-08-13 07:34

Here’s a problem (and I think it should be called a bug) that I first came across about 6 years ago, then forgot for a few years until it reappeared some time last year and then again a few days ago. The problem has been around for years (getting on for decades), and the first mention of it that I’ve found is MoS Bug 2891576, created in 2003, referring back to Oracle, The problem still exists in Oracle 19.2 (tested on LiveSQL).

Here’s the problem: assume you have a pair of tables (call them parent and child) with a referential integrity constraint connecting them. If the constraint is enabled and not deferred then the following code may fail, and if you’re really unlucky it may only fail on rare random occasions:

insert all
        into parent({list of parent columns}) values({list of source columns})
        into child ({list of child columns})  values({list of source columns})
select
        {list of columns}
from    {source}

The surprising Oracle error is “ORA-02291: integrity constraint ({owner.constraint_name}) violated – parent key not found”, and the reason is simple (and documented in MoS note 265826.1 Multi-table Insert Can Cause ORA-02291: Integrity Constraint Violated for Master-Detail tables): the order in which the insert operations take place is “indeterminate” so that child rows may be inserted before their parent rows (and for the multi-table insert the constraint checks are not postponed until the statement completes as they are, for instance, for updates to a table with a self-referencing RI constraint).

Two possible workarounds are suggested in Doc ID 265826.1

  • drop the foreign key constraint and recreate it after the load,
  • make the foreign key constraint deferrable and defer it before the insert so that it is checked only on commit (or following an explicit call to make it immediate)

The second option would probably be preferable to the first but it’s still not a very nice thing to do and could leave your database temporarily exposed to errors that are hard to clean up. There are some details of the implementation of deferrable constraints in the comments of this note on index rebuilds if you’re interested in the technicalities.

A further option which seems to work is to create a (null) “before row insert” trigger on the parent table – this appears to force the parent into a pattern of single row inserts and the table order of insertion then seems to behave. Of course you do pay the price of an increase in the volume of undo and redo. On the down-side Bug 2891576 MULTITABLE INSERT FAILS WITH ORA-02291 WHEN FK & TRIGGER ARE PRESENT can also be found on MoS, leading 265826.1 to suggest disabling triggers if their correctness is in some way dependent on the order in which your tables are populated. That dependency threat should be irrelevant if the trigger is a “do nothing” trigger. Sadly there’s a final note that I should mention: Bug 16133798 : INSERT ALL FAILS WITH ORA-2291 reports the issue as “Closed: not a bug”.

There is a very simple example in the original bug note demonstrating the problem, but it didn’t work on the version of Oracle where I first tested it, so I’ve modified it slightly to get it working on a fairly standard install. (I suspect the original was executed on a database with a 4KB block size.)

drop table child purge;
drop table parent purge;

create table parent (id number primary key);

create table child  (id number, v1 varchar2(4000),v2 varchar2(3920));
alter table child add constraint fk1 foreign key (id) references parent (id);
create or replace trigger par_bri
before insert on parent
for each row
begin
        null;
end;
/
insert all
        into parent ( id ) values ( id )
        into child  ( id ) values ( id )
select  100 id from dual
;

In the model above, and using an 8KB block in ASSM, the code as is resulted in an ORA-02291 error. When I changed the varchar2(3920) to varchar2(3919) the insert succeeded, and when I kept the varchar2(3920) but created the trigger the insert also succeeded.

Fiddling around in various ways and taking some slightly more realistic table definitions here’s an initial setup to demonstrate the “randomness” of the failure (tested on various versions up to

rem     Script:         insert_all_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem     Last tested 

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             small_vc,
        lpad(rownum,100,'0')            medium_vc,
        lpad(rownum,200,'0')            big_vc
from
        generator       v1
;

create table parent(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint par_pk primary key(id)
)
segment creation immediate
;

create table child(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint chi_pk primary key(id),
        constraint chi_fk_par foreign key (id) references parent(id)
)
segment creation immediate
;

create table child2(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint ch2_pk primary key(id),
        constraint ch2_fk_par foreign key (id) references parent(id)
)
segment creation immediate
;

I’ve created a “source” table t1, and three “target” tables – parent, child and child2. Table parent has a declared primary key and both child and child2 have a referential integrity constraint to parent. I’m going to do a multi-table insert selecting from t1 and spreading different columns across the three tables.

Historical note: When I first saw the “insert all” option of multi-table inserts I was delighted with the idea that it would let me query a de-normalised source data set just once and insert the data into a normalised set of tables in a single statement – so (a) this is a realistic test from my perspective and (b) it has come as a terrible disappointment to discover that I should have been concerned about referential integrity constraints (luckily very few systems had them at the time I last used this feature in this way).

The multi-table insert I’ve done is as follows:

insert all
        into parent(id, small_vc)  values(id, small_vc)
        into child (id, medium_vc) values(id, medium_vc)
        into child2(id, medium_vc) values(id, medium_vc)
--      into child2(id, big_vc)    values(id, big_vc)
select
        id, small_vc, medium_vc, big_vc
from
        t1
where
        rownum <= &m_rows_to_insert
;

You’ll notice that I’ve allowed user input to dictate the number of rows selected for insertion and I’ve also allowed for an edit to change the column that gets copied from t1 to child2. Although it’s not visible in the create table statements, I’ve also tested the effect of varying the size of the big_vc column in t1.

Starting with the CTAS and multi-table insert as shown the insert runs to completion if I select 75 rows from t1, but if I select 76 rows the insert fails with “ORA-02291: integrity constraint (TEST_USER.CHI_FK_PAR) violated – parent key not found”. If I change the order of the inserts into child and child2 the violated constraint is TEST_USER.CH2_FK_PAR – so Oracle appears to be applying the inserts in the order they appear in the statement in some circumstances.

Go back to the original order of inserts for child and child2, but use the big_vc option for child2 instead of the medium_vc. In this case the insert succeeds for 39 rows selected from t1, but fails reporting constraint TEST_USER.CH2_FK_PAR when selecting 40 rows. Change the CTAS to define big_vc as lpad(rownum,195) and the insert succeeds with 40 rows selected and fails on 41 (still on the CH2_FK_PAR constraint); change big_vc to lpad(rownum,190) and the insert succeeds on 41 rows selected, fails on 42.

My hypothesis on what’s happening is this: each table in the multitable insert list gets a buffer of 8KB (maybe matching one Oracle block if we were to try different block sizes). As the statement executes the buffers will fill and, critically, when the buffer is deemed to be full (or full enough) it is applied to the table – so if a child buffer fills before the parent buffer is full you can get child rows inserted before their parent, and it looks like Oracle isn’t postponing foreign key checking to the end of statement execution as it does with other DML – it’s checking as each array is inserted.

Of course there’s a special boundary condition, and that’s why the very first test with 75 rows succeeds – neither of the child arrays gets filled before we reach the end of the t1 selection, so Oracle safely inserts the arrays for parent, child and child2 in that order. The same boundary condition occurs in the first of every other pair of tests that I’ve commented on.

When we select 76 rows from t1 in the first test the child1 and child2 arrays hit their limit and Oracle attempts to insert the child1 rows first – but the parent buffer is far from full, so its rows have not yet been inserted and the attempted insert results in the ORA-02991 error. Doing a bit of rough arithmetic, the insert was for 76 rows, each needing roughly 2 bytes for the id plus a length byte, and 100 bytes for the medium_vc plus a length byte, totalling 76 * 104 = 7,904 bytes.

When we switch to using the big_vc for child2 the first array to fill is the child2 array, and we have 3 sets of results as we shorten big_vc:

  • 40 * ((1 + 2) + (1 + 200)) = 8160
  • 41 * ((1 + 2) + (1 + 195)) = 8159
  • 42 * ((1 + 2) + (1 + 190)) = 8148

While I’m fairly confident that my “8KB array” hypothesis is in the right ballpark, I know I’ve still got some gaps to explain – I don’t like the fact that I’ve got a break point around 7,900 bytes in the first example and something much closer to 8,192 in the other three examples. I could try to get extra precision by running up a lot more examples with different numbers and lengths of columns to get a better idea of where the break point appears – but I’m sufficiently confident that the idea is about right that I can’t persuade myself to make the effort to refine it. An alternative algorithm (which is actually a better fit, though a little unexpected) is to assume that the normal 5-byte row overhead (column count, lock byte, flags and 2-byte row directory entry) has been included in the array sizing code, and that the insert takes place at the point where the incoming row breaks, or just touches, the limit. In this case our 4 results would suggest the following figures:

  • 75 * 109 = 8175
  • 39 * 209 = 8151
  • 40 * 204 = 8160
  • 41 * 199 = 8159

With these numbers we can see 8KB (8,192 bytes) very clearly, and appreciate that one extra row would take us over the critical limit, hence triggering the insert and freeing the array space to hold the incoming row.

Bottom Line

If you’re using the multi-table “insert all” syntax and have referential integrity declared between the various target tables then you almost certainly need to ensure that the foreign key constraints are declared as deferrable and then deferred as the insert takes place, otherwise you may get random (and, until now, surprisingly inexplicable) ORA-02991 foreign key errors.
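A sketch of that approach (constraint and column names are assumed to match the tests described earlier; deferrability cannot be changed with “modify constraint”, so the constraint has to be dropped and recreated):

```sql
-- Recreate the foreign key as deferrable (names are illustrative)
alter table child1 drop constraint chi_fk_par;
alter table child1 add constraint chi_fk_par
        foreign key (id) references parent (id)
        deferrable initially immediate;

-- Then, around the multi-table insert:
set constraints all deferred;
-- insert all ... select ... from t1;
commit;         -- the foreign key is checked only at this point
```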

A possible alternative workaround is to declare a “do nothing” before-row insert trigger on the top-level (parent) table: this seems to switch the process into single-row inserts on the top-most parent, which forces the other array inserts to take place with their parent row in place, using small array sizes and protecting against the foreign key error. This is not an officially sanctioned workaround, though, and may only have worked by accident in the examples I tried.
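The workaround looks like this (a sketch; as noted, not officially sanctioned, and the trigger name is illustrative):

```sql
create or replace trigger parent_bri
before insert on parent
for each row
begin
        null;   -- does nothing: its mere presence changes the insert mechanism
end;
/
```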

It is possible, if the 8KB working array hypothesis is correct, that you will never see the ORA-02991 if the volume of data (number of rows * row length) for the child rows of any given parent row is always less than the size of the parent row – but that would be a fairly risky thing to rely on in a production system. It might be much better to pay the overhead of deferred foreign key checking than to have a rare, unpredictable error appearing.


Obvious But Not For Oracle Obviously

Michael Dinh - Mon, 2019-08-12 13:38

While dropping a RAC database, I found the error ORA-01081: cannot start already-running ORACLE – shut it down first in the dbca log.

Looking up the error: the cause is obvious

$ oerr ora 01081
01081, 00000, "cannot start already-running ORACLE - shut it down first"
// *Cause:  Obvious
// *Action:

Here is the process for 12.1.0:

$ ps -ef|grep pmon
oracle   41777     1  0 Aug09 ?        00:00:30 asm_pmon_+ASM2

$ srvctl config database

$ srvctl status database -d DBFS -v
Instance DBFS1 is not running on node node1
Instance DBFS2 is not running on node node2

$ dbca -silent -deleteDatabase -sourceDB DBFS
Connecting to database
9% complete
14% complete
19% complete
23% complete
28% complete
33% complete
38% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instances and datafiles
66% complete
80% complete
95% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/DBFS.log" for further details.

$ cat /u01/app/oracle/cfgtoollogs/dbca/DBFS.log
The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. 
All information in the database will be deleted. Do you want to proceed?
Connecting to database
ORA-01081: cannot start already-running ORACLE - shut it down first
Updating network configuration files
Deleting instances and datafiles
Database deletion completed.

$ srvctl config database

Patching or reimaging your ODA?

Yann Neuhaus - Mon, 2019-08-12 08:30

One of the key features of the ODA (Oracle Database Appliance) is the ability to patch the entire stack every three months, the goal being to keep all the components up-to-date. Most customers won’t patch that often, but one patch a year is quite a good average. But when the time for patching comes, so does the time for anxiety for the DBA. And it’s totally justified.

Why can ODA patching be a nightmare?

First of all, patching all the products is a complex operation. ODA is not a real appliance: it’s classic hardware composed of parts from various vendors, with nearly standard software, including Linux, Grid Infrastructure, ASM, ACFS and database engines. And all these products need to be patched together. If you were allowed to patch the ODA components separately, it could take quite a long time. Yes, Oracle provides a single patch for the ODA, but it’s just a bundle of dozens of patches. It’s easier to apply, and the bundled patches are certified together, but it’s still a complex operation to bring all the modules to the target patch level. This is why you can encounter multiple problems: for example, if you installed your own RPMs onto the system (the OS update may fail), if you lack free space (the patching may not complete), if your databases have a specific configuration, or if you discover a bug in the patch related to the version you come from and linked to your ODA model.

Also, some of the patches are not cumulative, meaning that you cannot directly upgrade to the latest version. You sometimes need to apply 4 or 5 patches to upgrade, making the patching even more uncertain.

Starting from these facts, you may think about reimaging, and you’re probably right.

What are the advantages and drawbacks of reimaging?

For sure, reimaging has a lot of advantages:

  • Guarantee of success (you start from scratch)
  • A cleaner ODA (no question about that)
  • Proof that you are able to reimage (in case you ever really need it)
  • Proof that your documentation is good (and this is the only way to validate it!)
  • No later problems caused by a patch that didn’t apply correctly

These are the drawbacks:

  • Longer than a single patch successfully applied on the first try (who knows)
  • You need to erase everything and restart as if it were a new ODA
  • You need to know how your ODA was installed and configured (not so simple if someone else did the job for you)
  • You probably need another ODA with Data Guard or DBVisit to limit the downtime

Can reimaging be quicker than patching?

Patching lasts about 3 hours if everything is OK. But that’s for one patch only, and only if everything is OK. From my patching experience, you should plan 1 day for the first ODA you patch.

Reimaging also lasts about 3 hours (more or less depending on your ODA version). But that only covers reinstalling the software, without any database. You will need to restore all your databases, and redo everything you did at the first deployment: copy your scripts, set up your additional software, restore your crontabs and your specific configuration, put monitoring back in place, and so on.

So, reimaging is probably longer, but you are quite sure to redeploy your ODA in a known time. This is a strong argument. “It will take 8 hours” is always better than “it would take between 3 and 8 hours. Or maybe more. If I succeed”.

How to proceed with patches?

If you need to patch regularly, try to apply the patch first on an ODA you can live without. If something goes wrong, you can decide to reimage very quickly instead of opening an SR on MOS. Please don’t get stuck because a patch won’t apply correctly; it’s a waste of time.

If you patch every year, consider redeploying instead of patching. It’s probably more work, but it will take the same amount of time, with success guaranteed (you will love that point). You will also prove that you are able to reimage completely. Reimaging is sometimes needed anyway, for example if you move your ODA to another datacenter with a network change – so you may have to reimage for reasons other than patching.

How to make sure that you are able to reimage?

This is the key: be able to reimage

Rules to follow:

  • restrict the access on your ODA to only people concerned about the appliance
  • document every change you make on the server, even a simple chmod
  • never use the GUI to deploy the appliance: deploy your ODA using odacli and save the deployment json file outside of the ODA
  • never use the GUI to create the databases: create the database with odacli and backup the used parameters in the documentation
  • use scripts to configure your databases (avoid one-shot changes)
  • install other products only if necessary: do you really need a backup tool on ODA? NFS backups are great and easy to configure without installing anything
  • install RPMs manually only from Oracle ISOs, and only if needed
  • do everything from the command line and avoid using vi. Text editors prevent you from being able to repeat the exact same operation. For example, replace vi /etc/fstab by echo "srv-nfs:/orabackups /backup nfs rw,bg,hard,nolock,nointr" >> /etc/fstab
  • always consider your ODA not so critical: keep the possibility of restoring your databases elsewhere (meaning on another ODA), or adopt Data Guard or DBVisit for all the databases that cannot afford to be down for hours (even development databases are production for developers!)
  • keep the install zipfiles corresponding to your version somewhere secured to avoid searching for them on MOS the day you need to reimage

Regarding the scripts, I always create a scripts folder in /home/oracle on the ODA, and each database has 3 dedicated scripts to speed up recreation if needed: create_SID.sh, configure_SID.sql and tbs_SID.sql. The shell script is for the odacli database creation, the first SQL script is for specific configuration (controlfile multiplexing for example, disabling the recycle bin or enabling archive_lag_target, etc.), and the second SQL script is for tablespace creation. The target is to be able to recreate the database even for a datapump-based restore. Make sure to back up these scripts somewhere else.

A few words about RPMs: for me the best way to install additional RPMs on an ODA is to download the Oracle Linux ISO corresponding to the version on your ODA (the ISO you would use to deploy a normal server), mount the ISO on your ODA and pick up only the RPMs you need from it (you can also put these few RPMs in /home/oracle/RPMs).


Reimaging should always be considered as an alternative to patching – or even the best way of patching. Companies that have already integrated this are happier with their ODAs, and get the best from these appliances.

Cet article Patching or reimaging your ODA? est apparu en premier sur Blog dbi services.

Video : Vagrant : Oracle Database Build (19c on OL8)

Tim Hall - Mon, 2019-08-12 02:18

Today’s video is an example of using Vagrant to perform an Oracle database build.

In this example I was using Oracle 19c on Oracle Linux 8. It also installs APEX 19.1, ORDS 19.2, SQLcl 19.2, with ORDS running on Tomcat 9 and OpenJDK 12.

If you’re new to Vagrant, there is an introduction video here. There’s also an article if you prefer to read that.

If you want to play around with some of my other Vagrant builds, you can find them here.

If you want to read about some of the individual pieces that make up this build, you can find them here.

The star of today’s video is Noel Portugal. It’s been far too long since I’ve seen you dude!



Video : Vagrant : Oracle Database Build (19c on OL8) was first posted on August 12, 2019 at 8:18 am.


Jonathan Lewis - Sun, 2019-08-11 15:28

Here’s a question to provoke a little thought if you’ve got nothing more entertaining to do on a Sunday evening.  What threats do you think of when you see a statement like the following in (say) an AWR report, or in a query against v$sql ?

update tableX set
        col001 = :1, col002 = :2, col003 = :3, ...
        -- etc. 
        -- the names are supposed to indicate that the statement updates 302 columns
        -- etc.
        col301 = :301, col302 = :302
where
        pk_col = :303

I’ll be writing up some suggestions tomorrow (Monday, UK BST), possibly linking to a few other articles for background reading.


The first three comments have already hit the high points, but I’m going to jot down a few notes anyway.

The first two things that really (should) make an impact are:

  • There’s a table in the application with (at least) 303 columns – anything over 255 is a potential disaster area
  • An update statement that updates 302 columns is probably machine generated by a non-scalable application

A secondary detail that might be useful is recognising the pattern of the text – lower case for column names, simple “:nnn” for bind variables.  As it stands I don’t recognise the possible source for this style, but I know it’s not SQL embedded in PL/SQL (which would be all capitals with “:Bnnn” as bind variable names), it’s not part of a distributed query from a remote database (which would be in capitals with quoted names, and table aliases like “A1, A2, …”), and it’s not “raw” Hibernate code, which produces meaningless table and column aliases based on numbers, with a “t” for table and “c” for column.

So let’s think about possible problems and symptoms relating to the two obvious issues:

Wide tables

Once you have more than 255 (real) columns in a table – even if that count includes columns that have been marked unused – Oracle will have to split rows into “rowpieces” that do not exceed 255 columns and chain those pieces together. Oracle will try to be as efficient as possible – with various damage-limiting code changes appearing across versions – attempting to store these row pieces together and keeping their number to a minimum, but there are a number of anomalies that can appear that have a significant impact on performance.

Simply having to visit two row pieces to pick up a column in the 2nd row piece (even if it is in the same block) adds to the cost of processing; but when you have to visit a second block to acquire a 2nd (or 3rd, or 4th) rowpiece the cost can be significant. As a quirky little extra, Oracle’s initial generation of row-pieces creates them from the end backwards – so a row with 256 columns starts with a row-piece of one column followed by a rowpiece of 255 columns: you may find that you have to fetch multiple row pieces for virtually every row you access.

It’s worth noting that row splitting is based only on the columns that have been used in the row. If your data is restricted to the first 255 columns of a row then the entire row can be stored as a single row piece (following the basic rule that “trailing nulls take no space”); but as soon as you start to update such a row by populating columns past the 255 boundary, Oracle will start splitting from the end – and it may create a new trailing row-piece each time you populate a column past the current “highest” column.  In an extreme case I’ve managed to show an example of a single row consisting of 746 row pieces, each in a different block (though that’s a bug/feature that’s been addressed in very recent versions of Oracle).

If rows have been split across multiple blocks then one of the nastier performance problems appears with direct path read tablescans. As Oracle follows the pointer to a secondary row piece it will do a physical read of the target block and then immediately forget it so, for example, if you have inserted 20 (short) rows into a block then updated all of them in a way that makes them split, with all their 2nd row pieces going to the same block further down the table, you can find that Oracle does a single direct path read that picks up the head pieces, then 20 “db file sequential read” calls to the same block to pick up the follow-on pieces. (The same effect appears with simple migrated rows.) Conversely, if you did the same tablescan using “db file scattered read” requests then Oracle might record a single, highly deceptive “table fetch continued row” because it managed to pin the target block and revisit it 20 times.

Often a very wide row (large number of columns) means the table is trying to hold data for multiple types of object. So a table of 750 columns may use the first 20 columns for generic data, columns 21 to 180 for data for object type A, 181 to 395 for data for object type B, and so on.  This can lead to rows with a couple of hundred used columns and several hundred null columns in the middle of each row – taking one byte per null column and burning up lots of CPU as Oracle walks the row to find a relevant column. A particularly nasty impact can appear from this type of arrangement when you upgrade an application:  imagine you have millions of rows of the “type A” above which use only the first 180 columns.  For some reason the application adds one new “generic” column that (eventually) has to be populated for each row – as the column is populated for a type A row the row grows by 520 (null-counting) bytes and splits into at least 3 pieces. The effect could be catastrophic for anyone who had been happy with their queries reporting type A data.

One of the difficulties of handling rows that are chained due to very high column counts is that the statistics can be quite confusing (and subject to change across versions). The most important clue comes from “table fetch continued row”; but this can’t tell you whether your “continued” rows are migrated or chained (or both), which table they come from, and whether you’ve been fetching the same small number multiple times or many different rows. Unfortunately the segment statistics (v$segstat / v$segment_statistics) don’t capture the number of continued fetches by segment – it would be nice if they did since it ought to be a rare (and therefore low-cost) event. The best you can do, probably, is to look at the v$sql_monitor report for queries that report tablescans against large tables but report large numbers of single block reads in the tablescan – and for repeatable cases enable SQL trace with wait tracing enabled against suspect queries to see if they show the characteristic mix of direct path reads and repeated db file sequential reads.
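For a repeatable test case, the simplest starting point is your own session’s statistic (a sketch using the standard dynamic performance views; you need select privilege on them):

```sql
-- Check how many continued-row fetches the current session has done
select  sn.name, ms.value
from    v$statname  sn
join    v$mystat    ms
on      ms.statistic# = sn.statistic#
where   sn.name = 'table fetch continued row';
```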

Update every column

The key performance threat in statements that update every column – including the ones that didn’t change – is that Oracle doesn’t compare before and after values when doing the update. Oracle’s code path assumes you know what you’re doing so it saves every “old” value to an undo record (which gets copied to the redo) and writes every “new” value to a redo change vector.  (Fortunately Oracle does check index definitions to see which index entries really have suffered changes, so it doesn’t visit index leaf blocks unnecessarily). It’s possible that some front-end tool that adopts this approach has a configuration option that switches from “one SQL statement for all update” to “construct minimal statement based on screen changes”.

The simple trade-off between these two options is the undo/redo overhead vs. parsing and optimisation overheads as the tool creates custom statements on demand. In the case of the table with more than 255 columns, of course, there’s the added benefit that an update of only the changed columns might limit the update to columns that are in the first rowpiece, eliminating the need (some of the time) to chase pointers to follow-up pieces.
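To make the trade-off concrete, compare the two statement shapes (a sketch based on the statement above; column names are illustrative):

```sql
-- Generated style: every column written, whether it changed or not
update tableX set
        col001 = :1, col002 = :2, -- ... all 302 columns
        col302 = :302
where   pk_col = :303;

-- Minimal style: only the column(s) the user actually changed
update tableX set
        col017 = :1
where   pk_col = :2;
```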

Limiting the update can help with undo and redo, of course, but if the tool always pulls the entire row to the client anyway you still have network costs to consider. With the full row pulled and then updated you may find it takes several SQL*Net roundtrips to transfer the whole row between client and server.  In a quick test on a simple 1,000 column table, with an update that set every column in a single row to null (using bind variables), I found that a default setup couldn’t even get 1,000 NULLs (let alone “real values”) across the network without resorting to one wait on “SQL*Net more data from client”:

variable b1 number
exec :b1 := null;

update t1 set
        col0001 = :b1,
        col0002 = :b1,
        -- etc.
        col1000 = :b1

Although “SQL*Net message to/from client” is typically (though not always correctly) seen as an ignorable wait, you need to be a little more suspicious of losing time to “SQL*Net more data to/from client”. The latter two waits mean you’re sending larger volumes of information across the network and maybe you can eliminate some of the data or make the transfer more efficient – perhaps a simple increase in the SDU (session data unit) in the tnsnames.ora, listener.ora, or sqlnet.ora (for default_sdu_size) might be helpful.


One of the features of trouble-shooting from cold is that you don’t know very much about the system you’re looking at – so it’s nice to be able to take a quick overview of a system looking for any potentially interesting anomalies and take a quick note of what they are and what thoughts they suggest before you start asking questions and digging into the system. This article is just a couple of brief notes along the lines of: “that doesn’t look nice – what questions does it prompt?”.

Oracle JET - How To Reference JavaScript Module in HTML

Andrejus Baranovski - Sun, 2019-08-11 07:06
I will explain how to reference a JavaScript module function in Oracle JET HTML. In previous Oracle JET versions, we were using $root to access the parent appController module. The $root syntax looks a bit like magic; it is better to reference the module through a predefined variable. I will show you how.

The sample app comes with a number converter, which helps format numeric values. For convenience and better reuse, the number formatting function is moved to a separate module. Our goal is to call the function self.convertNumericValue from the converter module inside our module HTML:

Import the converter module into your target module, making sure to define a variable for the import. Then define a local variable and assign it a value pointing to the imported module. This allows calling functions from the imported module anywhere within our target module:
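Since the original screenshots don’t reproduce here, this is a minimal sketch of the idea (module and function names are illustrative, and JET’s RequireJS define() wrapper is reduced to plain functions so the shape is clear):

```javascript
// "converter" module: exposes the formatting function
function ConverterModel() {
  var self = this;
  // e.g. 1234567 -> "1,234,567"
  self.convertNumericValue = function (value) {
    return value.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
  };
}

// target module: import the converter and keep a local reference,
// so the HTML bindings can call converter.convertNumericValue(...)
function TargetViewModel(converter) {
  var self = this;
  self.converter = converter;
}

var vm = new TargetViewModel(new ConverterModel());
```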

To demonstrate how it works, I will include a call to format number for the table column. I have defined a table column template for that reason:

Within the template, I'm calling the converter (the function is called by referencing the local variable) to format the number:

It works well – the column with salary values is formatted by the function from the imported module:

A sample application is available in my GitHub repo.

