Jeff Kemp

Oracle Database: Get it Right

“No Primary Key item has been defined for form region”

Fri, 2019-08-16 02:30

Otherwise reported as “WWV_FLOW_FORM_REGION.NO_PRIMARY_KEY_ITEM” or simply “Internal error”, this bug caused me to waste hours of investigation, twice, because a simple Google search was not returning any results. This post exists merely to remedy that situation.

On an APEX 19.1 page with a new Form region combined with an Interactive Grid region, when the user makes a change to a record and Submits the page, they get this error.

This is caused by bug 29612553 – FORM REGION DML PROCESS THROWS INTERNAL ERROR WHEN INTERACTIVE GRID DML PROCESS IS ON THE SAME PAGE and fixed by APEX patch 29369869.

Menu Popup with Declarative List

Tue, 2019-08-13 23:04

In the past when I’ve needed to add half a dozen or more buttons to a page, I’ve sometimes encased them in a Collapsible region so that the user can slide them out of the way instead of clogging up the screen. Recently, however, I’ve started (sparingly) using a Menu Popup, as per this tutorial. The issue I have with this method is that the menu items are defined in a shared component (a List), which means they are not defined on a per-page basis.

Some of the actions simply need to do a Submit on the page, which is simple enough: set the URL Target to something like:
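A sketch (the request name “SAVE” is purely illustrative):

javascript:apex.page.submit({request:'SAVE',showWait:true});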

In other cases, the action needs to do something more specific to the page, e.g. show a region:

apex.theme.openRegion("popupQuestion")

Or the action might need to navigate to another page, passing parameters based on specific items on the page. This means the list, defined in Shared Components, now has hardcoded elements that are only useful for that one page; more to the point, they are defined outside of the page – I’d rather that everything specific to a page is defined within that page’s definition.

The approach I’m using now is to use a custom trigger. Each list item has its URL Target set to something like:
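A sketch (“OPEN_POPUP” is an illustrative code; the event name matches the Custom Event defined below):

javascript:apex.event.trigger(document,'menuAction','OPEN_POPUP');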

The third parameter is set to a unique code that the page can use to identify which menu option was chosen. This parameter will be passed to this.data in the custom trigger’s event handler.

On the page, I have a Dynamic Action with the following attributes:

  • Event: Custom
  • Custom Event: menuAction
  • Selection Type: JavaScript Expression
  • JavaScript Expression: document
  • True Action: Execute JavaScript Code, e.g.:
switch(this.data) {
  case 'OPEN_POPUP':
    apex.theme.openRegion("popupQuestion");
    break;

  default:
    apex.page.submit({request:this.data,showWait:true});
}

Note that to do a simple Submit on the page, all I need to do is set the request on the third parameter of the menu item’s URL. If I want to do something slightly different for a particular request, I can put an extra “case” in the JavaScript code to handle it.

The benefit of this approach is that this trigger becomes the jumping-off point for all such menu actions for this page. In theory I could re-use the same List on multiple pages (if the items in the list are generic enough) but have different behaviour occur for each menu item specific to each page. The only challenge with this approach might be if you needed some conditions on each menu item, e.g. so they are shown or hidden in specific circumstances. If the condition for a menu item references a particular page item the List will no longer be generic and re-usable. For this reason, I usually still use a separate List for each menu for each page.

Perhaps in a future release of APEX we will gain the ability to define a List on a Page instead of in Shared Components. In the meantime, if you are interested in all the details on this method (including a solution for implementing a redirect to another page, or to open a modal page), refer to this tip.

Conditionally Required Floating Item

Mon, 2019-07-22 03:43

An item in the Universal Theme using the Optional – Floating template looks like this:

An item using the Required – Floating template looks like this:

In addition, if the item is required we would most probably set the Value Required attribute to Yes. What if the item is sometimes required but not always? How do we create a Conditionally Required field?

Firstly, we would make sure there is a Validation on the field that checks that the value is provided if required. This way, regardless of what the form may or may not send to the database, it is validated appropriately.

Secondly, to indicate to the user that the item is required or optional, based on the value of another item, we can use a Dynamic Action that sets the required item property (this triggers the client-side validation) and adds or removes the is-required class from the item’s container (this shows the little red “required” indicator on the page).

For example, let’s say that whether item P1_COST_CENTRE is required or not is dependent on whether a hidden item, P1_COST_CENTRE_REQUIRED, has the value 'Y'.

  • Create a Dynamic Action
    1. Event: Change
    2. Selection Type: Item(s)
    3. Item(s): P1_COST_CENTRE_REQUIRED
    4. Client-side Condition Type: Item = Value
    5. Item: P1_COST_CENTRE_REQUIRED
    6. Value: Y
  • Create a True Action: Execute JavaScript Code
var item = $("#P1_COST_CENTRE");
item.prop("required",true);
item.closest(".t-Form-fieldContainer").addClass("is-required");
  • Create a False Action: Execute JavaScript Code
var item = $("#P1_COST_CENTRE");
item.prop("required",false);
item.closest(".t-Form-fieldContainer").removeClass("is-required");

The above code works for all item templates (“Optional”, “Optional – Above”, “Optional – Floating”, etc.) in the Universal Theme; I’ve tested this on APEX 18.2 and 19.1.

Note: this is custom code for the Universal Theme, so it may or may not work for other themes; and might stop working in a future revision of the theme.

Plugins

UPDATE 29/7/2019: I’ve created some simple Dynamic Action plugins (for APEX 18.2 and later) to implement this; if you’re interested, you can download them from here:

To use these plugins, select them as an Action to take on a Dynamic Action:

EDIT 29/7/2019: modified to use a better method to find the container div.

Older Themes

In other themes, the way that a required field is rendered is different. For example, in Theme 26 (Productivity Applications) the label for a required item is rendered in bold, along with a red asterisk; if the item is optional, no red asterisk is rendered. The way to make an item conditionally mandatory in this theme is:

  1. Set the item to use the Required template (so that the red asterisk is rendered).
  2. In the Dynamic Action JavaScript, execute the following if the item should be optional:
var itemLabel = $("label[for='P1_COST_CENTRE']");
itemLabel.removeClass("uRequired");
itemLabel.addClass("uOptional");

To make the item required again:

var itemLabel = $("label[for='P1_COST_CENTRE']");
itemLabel.removeClass("uOptional");
itemLabel.addClass("uRequired");

Report Google Map Plugin v1.0 Released

Fri, 2019-07-19 11:25

Over the past couple of weeks I’ve been working on an overhaul of my Google Maps region for Oracle Application Express. This free, open-source plugin allows you to integrate fully-featured Google Maps into your application, with a wide range of built-in declarative features including dynamic actions, as well as more advanced API routines for running custom JavaScript with the plugin.

The plugin has been updated to Oracle APEX 18.2 (as that is the version my current system is using). Unfortunately this means that people still on older versions will miss out, unless someone is willing to give me a few hours on their APEX 5.0 or 5.1 instance so I can backport the plugin.

EDIT: Release 1.0.1 includes some bugfixes and a backport for APEX 5.0, 5.1 and 18.1.

The plugin is easy to install and use. You provide a SQL Query that returns latitude, longitude, and information for the pins, and the plugin does all the work to show them on the map.
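The exact column list the plugin expects is described in its documentation; as a rough sketch (the table and column names here are hypothetical):

select latitude   as lat
      ,longitude  as lng
      ,place_name as name
from   my_places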

The plugin has been rewritten to use the jQuery UI widget interface, at the suggestion of Martin D’Souza. This makes for a cleaner integration on any APEX page, and reduces the JavaScript footprint of each instance on the page if you need two or more map regions at the same time. This represented a rather steep learning curve for me personally, but I learned a lot and I’m pleased with the result. Of course, I’m sure I’ve missed a few tricks that the average JavaScript coder would consider obvious.

The beta releases of the plugin (0.1 to 0.10) kept adding more and more plugin attributes until it hit the APEX limit of 25 region-level attributes. This was obviously not very scalable for future enhancements, so in Release 1.0 I ran the scythe through all the attributes and consolidated, replaced, or removed more than half of them – while preserving almost every single feature. This means v1.0 is not backwards compatible with the beta versions; although many attributes are preserved, others (including the SQL Query itself, which is rather important) would be lost in the conversion if the plugin were merely replaced. For this reason I’ve changed the Internal ID of the plugin, so that customers who are currently using a beta version can safely install Release 1.0 alongside it, without affecting all the pages where they are using the plugin. They can then follow the instructions to gradually upgrade each page that uses the plugin.

All of the plugin attributes relating to integrating the plugin with page items have been removed. Instead, it is relatively straightforward to use Dynamic Actions to respond to events on the map, and an API of JavaScript functions can be called to change its behaviour. All of this is fully documented and sample code can be found in the wiki.

New features include, but are not limited to:

  • Marker Clustering
  • Geo Heatmap visualisation (this replaces the functionality previously provided in a separate plugin)
  • Draggable pins
  • Lazy Load (data is now loaded in a separate Ajax call after the page is loaded)

The plugin attributes that have been added, changed or removed are listed here.

If you haven’t used this plugin before, I encourage you to give it a go. It’s a lot of fun and the possibilities presented by the Google Maps JavaScript API are extensive. You do need a Google Maps API Key, which requires a Google billing account, but it is worth the trouble. It is recommended to put an HTTP Referer restriction on your API Key so that people can’t just copy your public key and use it on their own sites. For more information refer to the Installation Instructions.

If you are already using a beta version of the plugin in your application, please review the Upgrading steps before starting. Don’t panic! It’s not quite as simple as just importing the plugin into your application, but it’s not overly complicated. If you were using any of the Page Item integration attributes, you will need to implement Dynamic Actions to achieve the same behaviour. If you had any JavaScript integrations with the plugin, you will need to update them to use the new jQuery UI widget API calls. I am keen for everyone to update to Release 1.0 as soon as possible, so I will provide free support (via email) for anyone needing help with this.

I am very keen to hear from everyone who is using the plugin, and how it is being used – please let me know in the comments below. If you notice a bug or have a great idea to enhance the plugin, please raise an issue on GitHub.


Change Item Icon Dynamically

Tue, 2019-07-09 04:18

The floating item type has an optional “Icon” property that allows you to render an icon next to the item, which can help users quickly identify what the item is for. This is especially helpful when the form has a lot of items.

The icon attribute can be static, e.g. fa-hashtag, or it can be chosen based on the value of another item, e.g. &P1_FA_ICON..

If you want the icon to change dynamically as the user enters or modifies data, it’s a little bit more complicated. I have a list item based on a table of asset categories, and each asset category has an icon assigned to it. When the user selects an asset category from the list I want it to get the icon from the table and show it in the item straight away.

To do this, I use two Dynamic Actions: (1) a PL/SQL action which updates the hidden Pn_FA_ICON item, and (2) a JavaScript action which manipulates the displayed icon next to the list item.

This is my item and its two dynamic actions. The Icon attribute causes the icon to be shown when the page is loaded.

The Execute PL/SQL Code action is a simple PL/SQL block which gets the icon from the reference table for the selected category code. Make sure the “Wait for Result” is “Yes”, and make sure the Items to Submit and Items to Return are set to P260_CATEGORY_CODE and P260_CATEGORY_FA_ICON, respectively.

select x.fa_icon
into   :P260_CATEGORY_FA_ICON
from   asset_categories x
where  x.code = :P260_CATEGORY_CODE;

On examining the source of the page, we see that the select item is immediately followed by a span which shows the icon:

The Execute JavaScript Code action finds the item (in this case, the triggering element), then searches the DOM for the following span with the apex-item-icon class. Once found, it resets the classes on the span with a new set of classes, including the new icon.
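A minimal sketch of that code (the class names assume the Universal Theme markup described above, and should be verified against the page source):

var item = $(this.triggeringElement);
// the icon span immediately follows the select item
var icon = item.next("span.apex-item-icon");
// reset the classes, retaining the base classes and adding the new icon
icon.removeClass().addClass("apex-item-icon fa " + $v("P260_CATEGORY_FA_ICON"));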

It’s a little gimmicky but it’s an easy way to delight users, and it might help them to quickly identify data entry mistakes.

Warning: due to the way the JavaScript manipulates the DOM, this method is not guaranteed to work correctly in future releases of APEX, so it will need to be retested after upgrades.

Wipe APEX mail queue

Thu, 2019-07-04 01:32

Refreshing any of our non-prod environments (e.g. dev, test, etc.) with a clone from production is a fairly regular process at my client. A recurring issue with this is emails: on occasion, users have received a second copy of an email immediately after the clone has completed. This was confusing because they thought the event that had triggered the email had actually occurred twice.

As it turns out, the duplicate emails were caused by the fact that the emails happened to be waiting in the APEX mail queue in production at the time of the export. After the export, the APEX mail queue was processed normally in production and the users received their emails as expected; after the clone was completed, the database jobs were restarted in the cloned environment which duly processed the emails sitting in the cloned queue and the users effectively got the same emails a second time.

What’s worse, if the same export were to be used for multiple clones, the users might get the same emails again and again!

A good way to solve this sort of issue would be to isolate the non-prod environments behind a specially configured mail server with a whitelist of people who want (and expect) to get emails from the non-prod systems. We don’t have this luxury at this client, however.

Instead, we have a post_clone.sql script which is run by the DBAs immediately after creating the clone. They already stop all the jobs by setting job_queue_processes=0.
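That step is a standard parameter change, presumably something like:

alter system set job_queue_processes = 0;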

In case the mail queue happens to have any emails waiting to be sent, the post clone script now includes the following step:

begin
  /*** WARNING: DO NOT RUN THIS IN PRODUCTION! ***/
  for r in (
    select workspace_id
          ,workspace
    from apex_workspaces
    ) loop
    apex_application_install.set_workspace_id (r.workspace_id);
    apex_util.set_security_group_id
      (p_security_group_id => apex_application_install.get_workspace_id);
    delete apex_mail_queue;
  end loop;
  commit;
end;
/

This script is run as SYS but it could also be run as SYSTEM or as APEX_nnnnnn, depending on your preference.

ADDENDUM: Overriding the From Email Address

Christian Neumüller commented that an additional technique that might be useful is to override the From (sender) email address to indicate which environment each email was sent from. To do this, run something like the following:

begin
  apex_instance_admin.set_parameter('EMAIL_FROM_OVERRIDE',
    'apex-' || sys_context('userenv','db_name') || '@mydomain');
end;

I’ve tested this in APEX 19.1 and it seems to work fine. Regardless of the p_from parameter that the code passes to apex_mail.send, the EMAIL_FROM_OVERRIDE email address is used instead.
Note that this is currently undocumented, so this may stop working or change in a future release.

Null Display Value on Read-only List Item

Wed, 2019-07-03 04:00

The updated Universal Theme has added new “Floating” item templates which look great, e.g.:

I had a list item which I wanted to leave optional; if the user leaves it null, I wanted it to show a “default” display value (derived at runtime). To implement this, I added a hidden item (P10_DEPTNO_DEFAULT) and on the list item set Null Display Value to &P10_DEPTNO_DEFAULT..

If the page is shown in read-only mode, however, the list item is rendered as a Display Item, and the Null Display Value attribute is ignored:

To solve this, I added a Dynamic Action which injects the default value into the HTML for display (without affecting the value of the underlying item):

  • Event: Page Load
  • Server-side Condition: <page is readonly> AND :P10_DEPTNO IS NULL
  • Action: Execute JavaScript Code
  • Fire on Initialization: No
  • Code:
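A minimal sketch, assuming the read-only item is rendered as a span whose id is the item name suffixed with “_DISPLAY” (inspect the generated HTML to confirm):

$("#P10_DEPTNO_DISPLAY").text($v("P10_DEPTNO_DEFAULT"));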

This finds the span for the display-only item and injects the default display value for display:

If you want to see this in action for yourself, here is a demo: https://apex.oracle.com/pls/apex/f?p=APEXTEST:DISPSHOWDEFAULT&c=JK64

Disable scheduler jobs during deployment

Mon, 2019-06-24 19:54

Like most active sites our applications have a healthy pipeline of change requests and bug fixes, and we manage this pipeline by maintaining a steady pace of small releases.

Each release is built, tested and deployed within a 3-4 week timeframe. Probably once or twice a month, on a Thursday evening, one or more deployments will be run, and each deployment is fully scripted with as few steps as possible. My standard deployment script has evolved over time to handle a number of cases where failures have happened in the past; failed deployments are rare now.

One issue we encountered some time ago was when a deployment script happened to be run at the same time as a database scheduler job; the job started halfway during the deployment when some objects were in the process of being modified. This led to some temporary compilation failures that caused the job to fail. Ultimately the deployment was successful, and the next time the job ran it was able to recover; but we couldn’t be sure that another failure of this sort wouldn’t cause issues in future. So I added a step to each deployment to temporarily stop all the jobs and re-start them after the deployment completes, with a script like this:

prompt disable_all_jobs.sql

begin
  for r in (
    select job_name
    from   user_scheduler_jobs
    where  schedule_type = 'CALENDAR'
    and    enabled = 'TRUE'
    order by 1
  ) loop
    dbms_scheduler.disable
      (name  => r.job_name
      ,force => true);
  end loop;
end;
/

This script simply marks all the jobs as “disabled” so they don’t start during the deployment. A very similar script is run at the end of the deployment to re-enable all the scheduler jobs. This works fine, except for the odd occasion when a job happens to start running just before the script does, and is still running concurrently with the deployment. The force => true parameter in the script means that those running jobs are allowed to continue.

To solve this problem, I’ve added the following:

prompt Waiting for any running jobs to finish...

whenever sqlerror exit sql.sqlcode;

declare
  max_wait_seconds constant number := 60;
  start_time       date := sysdate;
  job_running      varchar2(100);
begin
  loop

    begin
      select job_name
      into   job_running
      from   user_scheduler_jobs
      where  state = 'RUNNING'
      and    rownum = 1;
    exception
      when no_data_found then
        job_running := null;
    end;

    exit when job_running is null;

    if sysdate - start_time > max_wait_seconds/24/60/60 then

      raise_application_error(-20000,
           'WARNING: waited for '
        || max_wait_seconds
        || ' seconds but job is still running ('
        || job_running
        || ').');

    else
      dbms_lock.sleep(2);
    end if;

  end loop;
end;
/

When the DBA runs the above script, it pauses to allow any running jobs to finish. Our jobs almost always finish in less than 30 seconds, usually sooner. The loop checks for any running jobs; if there are no jobs running it exits straight away – otherwise, it waits for a few seconds then checks again. If a job is still running after a minute, the script fails (stopping the deployment) and the DBA can investigate further to see what’s going on; once the job has finished, they can re-start the deployment.

“Before Header” Branch

Thu, 2019-06-13 23:27

It’s well known that after processing a page one often needs to add a Branch so the user is taken to another page, e.g. to start the next step in a process. It’s less common to need a Branch that is evaluated before the page is shown. This is a “Before Header” Branch and I use it when the user might open a page but need to be redirected to a different one.

A good example is a page that is designed to direct the user to two or more different pages depending on what data they requested.

Another way that a branch like this can be useful is where a user might navigate to a page that should show a record, but if the record ID is not set, I might want the page to automatically redirect the user back to a report page instead.

Of course, you have to take some care with these sorts of branches; if the target page also has its own “Before Header” branch, that will also be evaluated; if the user ends up in a loop of branches, the page will fail to load (with an “ERR_TOO_MANY_REDIRECTS” error).

This is the order in which various steps are done before APEX shows a page:

  1. Authentication check
  2. Verify session
  3. “Before Header” Branches
  4. Authorization check
  5. Computations
  6. “Before Header” Processes
  7. Regions, etc.

One of the implications of the above order is that any computations or processes that set or change application state (e.g. an application item) are not run before APEX evaluates the conditions or attributes of any “Before Header” Branches. This applies regardless of whether the computations or processes are defined on the page, or defined globally in Shared Components. This little detail tripped me up today, because I had a branch that I needed to run based on a condition that relied on state that should have been set by a “Before Header” process defined globally.

If you need to redirect a user to a different page on the basis of any application state that is set by a computation or process, you can use a PL/SQL Process instead that does the redirect. The PL/SQL code to use is something like:

apex_util.redirect_url(p_url => apex_page.get_url(p_page => 1));

Warning: redirect_url raises the exception ORA-20876: Stop APEX Engine, so it’s probably preferable to avoid combining this with other PL/SQL code that might need to be committed first.
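If you do need to combine them, one approach is to commit explicitly before redirecting; a sketch (the DML here is hypothetical):

begin
  update my_table
  set    status = 'VIEWED'
  where  id = :P1_ID;
  commit;
  apex_util.redirect_url(p_url => apex_page.get_url(p_page => 1));
end;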

Showing image thumbnails in Icon View

Thu, 2019-06-13 02:23

I had an interactive report in an old APEX application that I’ve kept maintained for quite a few years, which is able to show an Icon View that shows a thumbnail of the image for each item.

The problem was that the layout sometimes went wonky depending on the size of the image or the size of the label text. How the items were laid out depended on the width of the viewing window as well. I have set Columns Per Row to 5, and I’m using a Custom Icon View with the following Custom Link:

<a href="#WORK_URL#">
<img src="#IMAGE_URL#" width="140px">
<p>
#FULL_TITLE_SORT#
</a>

Each item shows an image, scaled down to width 140 pixels (my images come in all sorts of sizes), plus the title; either the title or the image may be clicked to open the details for it.

This is how it looked:

Depending on a number of variables (length of the label text, and the width of the viewing window) some rows would show their first item in the 5th column instead of over on the left; this would have a run-on effect to following rows. In addition, I wasn’t quite happy with the left-justified layout for each item.

To fix this I added some DIVs and some CSS. My Custom Link is now:

<div class="workcontainer">
  <div class="workicon">
    <a href="#WORK_URL#">
      <img src="#IMAGE_URL#" class="workimg">
    </a>
  </div>
  <div class="worktitle">
    <a href="#WORK_URL#">#FULL_TITLE_SORT#</a>
  </div>
</div>

Each record gets a container div, within which are nested a div for the image and a div for the label.

The width attribute is no longer needed directly on the img; instead, I added the following CSS to the region Header Text:

<style>
.workcontainer {
    text-align:center;
    width:160px;
    height:200px;
  }
.workicon { display:block; }
.workimg {
    max-width:160px;
    max-height:160px;
    width:auto;
    height:auto;
  }
.worktitle {
    display:block;
    font-weight:bold;
    height:40px;
    overflow:hidden;
  }
</style>

Some of the key bits here are that the container div has width and height attributes, and the image is constrained using max-width, max-height, width:auto and height:auto. Also, the work title is constrained to a 40 pixel high block, with overflow:hidden.

My report now looks like this:

The images are scaled down automatically to fit within a consistent size, and both the images and the labels are horizontally centered leading to a more pleasing layout. If an image is already small enough, it won’t be scaled up but will be shown full-size within the available area. Also, the label height is constrained (if an item label is too high the overflow will be hidden) which solves the layout problem I had before. If the image is not very tall, the label appears directly beneath it which is what I wanted.

The only remaining issue is that the icon view feature of the interactive report generates a table with rows of a fixed number of columns (5, in my case), which doesn’t wrap so nicely on a small screen when it can’t fit 5 in a row. I’ve set Columns Per Row to 1 now, and it seems to wrap perfectly; it shows up to a maximum of 6 items per row depending on the viewing window width.

POSTSCRIPT – Lazy Load

Thanks to Matt (see comment below) who pointed out that a report like this would benefit greatly from a Lazy Load feature to reduce the amount of data pulled to the client – for example, if the user requests 5,000 records per page and starts paging through the results, each page view could potentially download a large volume of data, especially if the images are quite large.
I’ve implemented this now and it was quite straightforward:

  1. Download the latest release of the jQuery plugin “Lazyload Any”: https://github.com/emn178/jquery-lazyload-any
  2. Upload the file jquery.lazyload-any.js to Static Application Files
  3. Add the following to the page attribute File URLs:
    #APP_IMAGES#jquery.lazyload-any.js
  4. Add a Dynamic Action to the report region:
    Event = After Refresh
    Action = Execute JavaScript Code
    Code = $(".workicon").lazyload()
    Fire on Initialization = Yes
  5. Modify the Icon View Custom Link code to put the bits to lazy load within a script tag of type “text/lazyload” (in my case, all the html contents within the “workicon” div), e.g.:
<div class="workcontainer">
  <div class="workicon">
    <script type="text/lazyload">
      <a href="#WORK_URL#">
        <img src="#IMAGE_URL#" class="workimg">
      </a>
    </script>
  </div>
  <div class="worktitle">
    <a href="#WORK_URL#">#FULL_TITLE_SORT#</a>
  </div>
</div>

This was an important addition in my case as some of the images were quite large; I don’t have to worry about load on my server because they are hosted on Amazon S3, but I do have to pay a little bit for the transfer of data.

Hiding HTML when downloading an Interactive Report

Thu, 2019-05-23 02:33

A common requirement is to format data for display in an Interactive Report, for example showing an icon, making part of the data a clickable link or a button, or showing it in different colours.

The problem with embedding formatting in the data for the report is that it is not only used for display in the web page; it is also used for the Download option, causing user confusion when HTML code is exposed in the generated CSV file.

If the logic for the formatting is not data-driven, the solution is to enter the HTML in the HTML Expression attribute on the report column definition. The HTML Expression is used only when displaying the data in the interactive report; the underlying column data is used for the Download. The underlying column data is also used to generate the filter list for the column (if Column Filter Type is “Default Based on Column Type”).

If the logic for the formatting is data-driven, a simple solution is to generate the HTML in an underlying column in the query. However, you don’t want this HTML appearing in the Download CSV, so what you can do is generate the HTML in an additional, hidden column in the report; then use the hidden column in the HTML Expression attribute. This works because the HTML Expression attribute is allowed to refer to any column in the query, even hidden ones.

For example, here is a query with a generated column that determines an icon to be displayed within the “job” column:

select ename,
       job,
       job
       || ' <span class="fa '
       || case when mgr is null then 'fa-gear' else 'fa-user' end
       || '"></span>' as job_html
from emp

The following column attributes are set:

  • JOB: HTML Expression = #JOB_HTML#
  • JOB_HTML: Type = Hidden Column

When the report is run, we see the icons displayed:

When the report is downloaded as CSV, the Job column is plaintext as desired:

Escaped strings and PL/SQL Dynamic Actions

Mon, 2019-04-08 02:50

I had a simple form where the user can select a vendor from a select list (using the Select2 plugin), and the form would retrieve the latest purchase details based on a simple string search of that vendor’s name. This worked in most cases; sometimes, however, it failed to find anything even though I knew the data was there.

There was a dynamic action that fired on Change of the list item, executing a PL/SQL procedure to retrieve the Total Amount and the Memo:

transaction_pkg.get_suggestion
  (vendor_name => :P5_VENDOR_NAME
  ,total_amt   => :P5_AMOUNT /*out*/
  ,memo        => :P5_MEMO); /*out*/

This didn’t work in some cases where a vendor name included special characters, such as &. This is because the item had Escape special characters set to the default (Yes), which is good practice to protect against Cross-Site Scripting (XSS) attacks. Therefore, the value sent by the dynamic action to my PL/SQL procedure had the special HTML characters escaped, e.g. “Ben & Jerry’s” was escaped to “Ben &amp; Jerry&#x27;s”. I believe APEX uses the apex_escape.html function to do this.
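You can reproduce the same escaping from SQL (a quick illustration):

select apex_escape.html(q'[Ben & Jerry's]') from dual;
-- Ben &amp; Jerry&#x27;s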

Usually, I would try to rework my code to send a numeric ID instead of a string; but in this particular case the data model does not have surrogate keys for vendors (it’s just a free-text field in the transactions table) so I want to use the name.

If I was doing this properly, I would fix the data model to make vendors first-class entities, instead of using a free-text field. This would allow using a surrogate key for the vendor list and this escaping behaviour would no longer be a problem.

Another alternative here is to use the latest transaction ID as a surrogate key for each vendor; but then I would need to modify the form to translate this back into a vendor name when inserting or updating the table; and this would add unnecessary complexity to this simple form, in my opinion.

Instead, before sending this string to my procedure, I’ve chosen to unescape the data. To do this, I add a call to utl_i18n.unescape_reference:

transaction_pkg.get_suggestion
  (vendor_name => utl_i18n.unescape_reference(:P5_VENDOR_NAME)
  ,total_amt   => :P5_AMOUNT /*out*/
  ,memo        => :P5_MEMO); /*out*/

This converts the escaped data back into a plain string, and my simple data matching procedure now works as expected.

Form Regions – Superclass/Subclass

Tue, 2019-04-02 22:02

The new Form Region feature introduced in Oracle APEX 19.1 is a big improvement to the way that single-record forms are defined declaratively.

In prior versions of APEX, you were effectively limited to a single DML load process and a single DML submit process, and all the items on the page that were linked to a table column were required to be from only a single table. If you wanted to do anything more complicated than that, you had to either split your form into multiple pages or get rid of the DML processes and hand-code the PL/SQL yourself.

The new components/attributes for Form Regions are:

  • Region type “Form” which defines the Source Table (or view, or SQL Query, or PL/SQL returning a SQL query, or Web Source) and associated semantics.
  • Item Source attribute that associates each item with a particular Form Region. The item can optionally be marked as “Query Only” and/or as a “Primary Key”.
  • Pre-render process type “Form – Initialization” that is associated with a Form Region.
  • Page process type “Form – Automatic Row Processing (DML)” that is associated with a Form Region. This can be used to change the Target for the form region (default is the Source Table) to a different table or view, or to replace the DML processing with your custom PL/SQL. You can also modify other semantics such as Prevent Lost Updates and row locking.

If you generate a single-record form page APEX will generate all the above for you and you can then customise them as you need.

Now, I want to create a slightly more complex form – one based on two tables that are related – “Transaction Lines” and “Reserve Lines” have the same Primary Key, and a referential integrity constraint linking them. They act like a “super class / subclass”, where one table (“Reserve Lines”) is optional. This data model eliminates the need to have lots of nullable columns in a base table.

Data model

Here’s the form for editing a single “Transaction Line”:

Form with “Reserve” unticked

The form shows the associated “Reserve Line” if it exists:

Form with “Reserve” ticked

If the “Reserve” checkbox is ticked, the Reserve details region is shown and the user can enter the Reserve Line attributes.

I need the page to perform the following actions:

  1. For a new record, if the user leaves “Reserve” unticked, only a Transaction Line should be inserted.
  2. For a new record, if the user ticks “Reserve”, both a Transaction Line and a Reserve Line should be inserted.
  3. For an existing record, if the user ticks or unticks “Reserve”, the Reserve Line should be inserted or deleted accordingly.

If you want to see this example in action, feel free to try it out here:
https://apex.oracle.com/pls/apex/f?p=JK201904&c=JK64 (login using your email address if you want). The starting page is a blank transaction; click “Create”, then click “Add Line” to open the transaction line page.

The page is composed of two Form Regions. Note that in this example I’ve laid out all the items for each form within its Form Region, but this is not strictly necessary.

To do this, I built the form in the following manner:

  1. Created a Form Region for the Transaction Line with standard Initialize form and Automatic Row Processing (DML) processes.
  2. Created a second Form Region for the Reserve Line with the standard processes (but customised later).
  3. Added the “P2_RESERVE_Y” checkbox item, not linked to any source column. Added a dynamic action to show/hide the Reserve details region if it is ticked/unticked.
  4. Added a “set up reserve line” PL/SQL Code process immediately after the Initialize form Transaction Line and just before the Initialize form Reserve Line process:
  • Server-side Condition Type = Rows returned
  • SQL Query =
    select null from reserve_lines where line_id=:P2_LINE_ID
  • PL/SQL Code:
:P2_RESERVE_LINE_ID := :P2_LINE_ID;
:P2_RESERVE_Y := 'Y';

5. Change the Automatic Row Processing (DML) for the Reserve Line region:

  • Target Type = PL/SQL Code
  • PL/SQL Code to Insert/Update/Delete:
case when :P2_RESERVE_Y is null then

  delete reserve_lines where line_id=:P2_RESERVE_LINE_ID;

when :P2_RESERVE_Y is not null
and :P2_RESERVE_LINE_ID is null then

  insert into reserve_lines
    (line_id,asset,target_amt,target_year)
  values
    (:P2_LINE_ID,:P2_ASSET,:P2_TARGET_AMT,:P2_TARGET_YEAR )
  returning line_id into :P2_RESERVE_LINE_ID;

when :P2_RESERVE_Y is not null
and :P2_RESERVE_LINE_ID is not null then

  update reserve_lines
    set asset=:P2_ASSET
       ,target_amt=:P2_TARGET_AMT
       ,target_year=:P2_TARGET_YEAR
  where line_id=:P2_RESERVE_LINE_ID;

else
  null;
end case;

6. Add a special handler to delete the reserve line if the user clicks the Delete button (this needs to be executed prior to the Process form Transaction Line to avoid a FK violation).

This solution is not quite as “low-code” as I’d like; it would be simpler to call a TAPI here instead of hardcoding the DML statements. The reason we need custom PL/SQL here is that when the user clicks the “Create” or “Save” buttons (which are associated with the SQL Insert and Update actions, respectively), we often need to translate this into a different DML action (insert, update, or delete) for the Reserve Lines table.

Amend commit message in SourceTree

Sun, 2019-03-31 21:10

If you’re using Atlassian SourceTree with a git repository and you do a local commit but then realise the message was incorrect, you can amend it before you push it to remote. To set this up, you can create a “Custom Action” in SourceTree:

  1. Tools -> Options -> Custom Actions
  2. Click Add
  3. Set Menu caption, e.g. “Amend commit message”
  4. Select “Open in a separate window” and unselect “Run command silently”
  5. Set Script to run to “git.exe” including path
  6. Set Parameters to “commit --amend”

Now, whenever you want to correct the message on your latest commit, you right-click on the commit, and select “Custom Actions” -> “Amend commit message”. A terminal window will open with a vi editor showing the current commit message at the top of the file (there will also be some instructions with # at the start of each line; you can leave them alone).

If you know vi, you’ll know what to do. Here’s a quick guide:

  • To start editing, press “i” and then edit the message (on the first line of the file).
  • To delete everything on the line, press ESC, followed by “ddO” (this will delete the line and then return you to edit mode to type the new message)
  • To save and quit, press ESC, followed by “:wq”
  • To quit without making any changes, press ESC, followed by “:q!”

Oracle Workspace Manager + APEX + VPD

Fri, 2019-03-15 20:24

This is an article I wrote quite some time ago and put “on ice” until I completed the client project it was related to. However, for unrelated reasons the project was significantly delayed and later rebooted with an updated set of requirements, which no longer require Oracle Workspace Manager. This means I’m unable to add a triumphant postscript saying “this was a great success and it’s being used to this day”. Instead, I’m adding this preamble to say “this was a very interesting feature of Oracle I learned but didn’t quite get to use, but hopefully someone will find it useful”.

Oracle Workspace Manager (OWM) is a feature of the Oracle database that was built back in the 8i days and installed in Oracle by default since 9i. As described by Tim Hall, it allows multiple transactionally consistent environments to exist within one database.

Confusing Terms
Don’t confuse OWM with Oracle APEX workspaces, or with Analytic Workspace Manager for Oracle OLAP.

OWM allows the developer to take a leap over the complexities involved in a number of use cases, such as:

  • Savepoints – a snapshot of a point in time
  • Workspaces – a private area for modifications that are not visible to normal users
  • Row History – store a history of every change for every record
  • Valid Time – support date/time range validity for each record

The Row History use case is similar to using Flashback Query which is a more modern feature of the database; however, since it can be enabled or disabled individually for each table, it may require less storage space to support querying back as far as the user would like in time.
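Enabling versioning with history retention is a single call, e.g. (option values as per the DBMS_WM documentation):

begin
  -- 'VIEW_WO_OVERWRITE' retains every intermediate version of each row
  dbms_wm.EnableVersioning('MYTABLE', 'VIEW_WO_OVERWRITE');
end;
/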

The Valid Time use case allows your users to set a date/time range on each record; multiple versions of a unique row can be created with non-overlapping date ranges; updates can be done within the context of a given date/time range, which will cause rows that span the boundary of the range to be split into multiple versions. I haven’t tried this myself but it looks like a powerful feature.

Note: the default workspace for a user session is called “LIVE”, and the default savepoint is “LATEST”.

One example of using savepoints is that a user could create a savepoint, make changes, go back and view the database as of the savepoint, and can rollback all changes to a savepoint. They can also run an API command (dbms_wm.SetDiffVersions) to generate a differences report which shows all the inserts, updates, and deletes that have occurred since a savepoint.
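A sketch of that call, assuming the four-argument form (workspace, savepoint, workspace, savepoint); the differences are then exposed via each version-enabled table’s _DIFF view:

begin
  dbms_wm.SetDiffVersions('LIVE', 'MY_SAVEPOINT', 'LIVE', 'LATEST');
end;
/

select * from mytable_diff;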

An example of using workspaces is where a user could create one or two workspaces, each representing a different scenario. The user can jump back and forth between the scenarios and the “live” workspace (which is the default). They can edit any of these workspaces. Changes made by other users to the “live” workspace may, optionally, be automatically propagated into a workspace. Similarly to savepoints, the user can get a differences report between any workspace (including the “live” workspace). In addition, the user can create and view savepoints and additional workspaces within a workspace – OWM maintains a hierarchy of workspaces.

If the user is not happy with their scenario, they can simply delete the workspace and the live data is not affected.

If they are happy with their changes in a workspace, they can choose to do a Merge – which attempts to effect all the inserts, updates and deletes that were made in the workspace to its parent workspace (e.g. the “live” workspace if that was the source). If any change to a row would conflict with a change that another user made to the same row in the parent workspace, the merge stops and the user may be prompted to resolve the conflicts (i.e. for each row, the user can decide to refresh their workspace with the updated data in the parent, or they can force their workspace’s change onto the parent).

I suspect OWM was once known (or internally referred to) as “Long Transactions” or something like that, probably because of its user workspace and merging features. You can see the remnants of this old name in the documentation – many of the document URLs start with “long”. Also, note the title of this slide deck by an Oracle product manager: “Long Transactions with Oracle Database Workspace Manager Feature”.

The features of OWM only affect the tables that you explicitly Enable Versioning on via the API (DBMS_WM).

Limitations/Restrictions of Workspace Manager

If you are looking into using OWM, you must read the Intro to Workspace Manager in the docs. Make sure to review the restrictions that apply, such as:

  • Each version-enabled table must have a primary key.
  • If a parent table is version-enabled, each child table must also be version-enabled (but the opposite does not apply).
  • Referential integrity constraints MUST refer to the primary key in the parent table.
  • Primary key values in a parent table cannot be updated.
  • Only row-level triggers are supported – no per-statement triggers.
  • SQL MERGE statements are not allowed (attempting one results in “ORA-01400 cannot insert NULL into WM_VERSION”).
  • RETURNING clause is not supported (for INSERT or UPDATE statements).
  • Row-level security policies (VPD) are not enforced during workspace operations (such as Merge Workspace, and Rollback to Savepoint).
  • Materialized Views on a version-enabled table can only use the REFRESH COMPLETE method; e.g. no FAST or ON COMMIT.
  • Table names cannot be longer than 25 characters.
  • Column names cannot be longer than 28 characters.
  • Trigger names cannot be longer than 27 characters.
  • Some reserved words and characters apply, e.g. column names cannot start with WM$ or WM_.
  • Most DDL operations cannot be run directly on a version-enabled table (see below).

There are some other restrictions, so make sure to review the intro document carefully and consider the ramifications of each limitation for your particular situation.

I’ve used this simple script to do some basic checks on a table before I enable it for versioning: check_table_for_owm.sql

Database schema changes made by OWM

Internally, when you version-enable a table (e.g. MYTABLE), OWM makes a number of changes to your schema as follows:

  1. Renames MYTABLE to MYTABLE_LT, for internal use only
  2. Adds a number of grants on the object to WMSYS and WM_ADMIN_ROLE
  3. Adds some extra OWM-specific columns (with names prefixed with WM_) and indexes to MYTABLE_LT
  4. Creates the view MYTABLE, for use by your application for querying and DML
  5. Creates an INSTEAD OF trigger (owned by WMSYS) on the view to handle DML
  6. Creates some other triggers on any parent tables for referential integrity
  7. Creates one or two other tables (MYTABLE_AUX, and sometimes MYTABLE_LCK) for internal use only
  8. Creates some other views (MYTABLE_CONF, MYTABLE_DIFF, MYTABLE_HIST, MYTABLE_LOCK, MYTABLE_MW) for application use where needed
  9. Converts any triggers and VPD policies on the table to metadata and then drops them; these are managed by OWM from then on

Based on the grants that OWM gives to WMSYS (such as ON COMMIT REFRESH and QUERY REWRITE) I think the OWM tables are materialized views.

To change the structure of a version-enabled table, e.g. adding/modifying/removing a column, constraint, or trigger, the following steps must be done:

  1. Call dbms_wm.BeginDDL('MYTABLE');
  2. Make the changes to a special table called MYTABLE_LTS
  3. Call dbms_wm.CommitDDL('MYTABLE');
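Putting those steps together, a sketch that adds a hypothetical column:

begin
  dbms_wm.BeginDDL('MYTABLE');
  execute immediate 'alter table MYTABLE_LTS add notes varchar2(4000)';
  dbms_wm.CommitDDL('MYTABLE');
end;
/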

Note that this will fail if the table happens to have any constraints or indexes that are longer than 26 characters – so keep this in mind when naming them.

One of the most pleasing features of OWM is that it is relatively idiot-proof; if you try to perform an operation that is not supported, OWM will simply raise an exception (e.g. “ORA-20061: versioned objects have to be version disabled before being dropped“) instead of leaving you with a mess to clean up. Therefore it’s generally safe to test your scripts by simply running them and seeing what happens. For example, try doing some damage to the underlying tables or views by changing columns or dropping them – OWM will stop you. I would never do this sort of experiment in a production environment, of course!

Column comments on a table that becomes version-enabled are not migrated automatically to the view. They do remain on the renamed table (MYTABLE_LT). You can add comments to the view columns (and you don’t need to execute the BeginDDL/CommitDDL procedures for this); the same applies to the MYTABLE_LT table as well. Note, however, that if you remove the versioning from the table, the view is dropped along with its comments, so if you later re-enable versioning you might want to re-apply the column comments as well.

To copy/synchronize the column comments from the underlying MYTABLE_LT table to the MYTABLE view, I use this script: sync_comments_lt_to_view.sql.

Implementing Workspace Manager

For guidance on how to take advantage of Oracle Workspace Manager in an APEX application, I watched this video by Dan McGhan. He demonstrates it in a JavaScript application, but the real work is all done on the database in PL/SQL, which can be used by any application environment including APEX.

These are some of the API calls that I’ve used to build OWM features into my application:

  • Set up a table for scenarios: dbms_wm.EnableVersioning('MYTABLE', 'VIEW_W_OVERWRITE');
  • Create a scenario: dbms_wm.CreateWorkspace('SCENARIO1');
  • View a scenario: dbms_wm.GotoWorkspace('SCENARIO1');
  • Create a snapshot: dbms_wm.CreateSavepoint('SCENARIO1', 'SCENARIO1_SP1');
  • Rollback: dbms_wm.RollbacktoSP('SCENARIO1', 'SCENARIO1_SP1');
  • Merge a scenario to its parent: dbms_wm.MergeWorkspace('SCENARIO1', create_savepoint=>false, remove_workspace=>true);
  • Drop a scenario: dbms_wm.RemoveWorkspace('SCENARIO1');
  • Go back to live: dbms_wm.GotoWorkspace('LIVE');
  • Compare two scenarios: dbms_wm.SetDiffVersions('SCENARIO1', 'SCENARIO2', onlyModified=>true);
  • Analyze conflicts prior to merge: dbms_wm.SetConflictWorkspace('SCENARIO1');


APEX and VPD

My APEX application already uses VPD (or Row Level Security) to provide a multi-tenant environment for a wide range of users. Groups of users are assigned to one or more Security Groups; when they log in they are assigned to one Security Group at a time, which is set in a Global Application Context. A VPD policy has been applied to most tables like this:

function vpd_policy
  (object_schema in varchar2
  ,object_name in varchar2
  ) return varchar2 is
begin
  return q'[security_group_id=sys_context('CTX','SECURITY_GROUP_ID')]';
end vpd_policy;

The Application Context is associated with the APEX session’s Client Identifier (e.g. JBLOGGS:16630445499603) so each page request will be executed within the correct VPD context. A procedure is called from the Post-Authentication Procedure Name on the authentication scheme which sets the SECURITY_GROUP_ID context variable.
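A sketch of such a post-authentication procedure (the lookup table user_security_groups is hypothetical):

procedure post_authentication is
  l_group_id number;
begin
  -- hypothetical query to find the user's default security group
  select u.security_group_id
  into   l_group_id
  from   user_security_groups u
  where  u.username = v('APP_USER')
  and    rownum = 1;
  -- set the context for this APEX session's client identifier
  dbms_session.set_context
      (namespace => 'CTX'
      ,attribute => 'SECURITY_GROUP_ID'
      ,value     => l_group_id
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));
end post_authentication;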

For more info: Converting an APEX Application to Multi-Tenant

Workspace Manager and VPD

According to the docs, OWM can work along with VPD. However, you need to be aware of a few considerations:

  • Row-level security policies are not enforced during workspace operations, including MergeWorkspace and RollbackToSP.
  • Row-level security policies must be defined on a number of views, not just the view for the version-enabled table.
  • Don’t apply policies to the underlying tables created by OWM.

You can add VPD policies to a table prior to version-enabling it and these will be handled correctly by OWM when you version-enable it. However, if I need to add VPD policies after a table has been version-enabled, this is the code I run:

declare
  target_table varchar2(30) := 'MYTABLE';
begin
  for r in (
    select view_name
    from user_views
    where view_name in (target_table
                       ,target_table||'_LOCK'
                       ,target_table||'_CONF'
                       ,target_table||'_DIFF'
                       ,target_table||'_HIST'
                       ,target_table||'_MW')
  ) loop
    begin
      dbms_rls.add_policy
        (object_name     => r.view_name
        ,policy_name     => 'security_policy'
        ,policy_function => 'security_pkg.security_policy'
        ,update_check    => true
        ,static_policy   => true);
    exception
      when others then
        if sqlcode != -28101 /*policy already exists*/ then
          raise;
        end if;
    end;
  end loop;
end;

It adds the policy to the base view, as well as the five specific associated views (mytable_LOCK, mytable_CONF, mytable_DIFF, mytable_HIST and mytable_MW). Similar code may be used to alter or drop policies.

Note: in relation to VPD policies, the documentation doesn’t specifically mention the Multi-Workspace (_MW) view. As far as I can tell, however, this is required – because when you version-enable a table, a VPD policy on a non-version-enabled table will be applied by Workspace Manager to this _MW view as well as the other views.

Since a user’s changes within a workspace will be restricted to data for their Security Group, a MergeWorkspace works just fine – even though technically the merge occurs across the entire table ignoring the VPD policy, the only changes we expect are those for the user’s Security Group.

However, Savepoints are different; like workspaces, they are a snapshot of the entire table; if you issue a RollbackToSP, it will ignore any VPD policies and rollback all data to the selected savepoint. This means that you can’t use RollbackToSP to give users the ability to rollback their data to a snapshot, while isolating their changes to their VPD context. For this reason I don’t use RollbackToSP in my application.

All together – OWM, VPD and APEX

The users of my application need to be able to create scenarios for people in their Security Group to make proposed changes, see how the changes affect the reports, and merge those changes back to the “live” version. To facilitate this, I create a table to list the workspaces, and apply the VPD policy to it so that users can only view and work with workspaces for their Security Group.

create table scenarios (
  scenario_id         number
    default to_number(sys_guid()
                     ,'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
    not null
 ,scenario_name       varchar2(200) not null
 ,wm_workspace        varchar2(30) not null
 ,security_group_id   number
    default sys_context('CTX','SECURITY_GROUP_ID')
    not null
 ,constraint scenario_pk primary key (scenario_id)
 ,constraint scenario_uk unique (wm_workspace)
);

Each scenario has an internal ID (in this case, a surrogate key generated from a GUID), a user-specified name, and a name for the workspace. I didn’t use the user-entered name for the workspace name because (a) it must be unique across the database, and (b) it is limited to 30 characters.

After a user clicks the “Create Scenario” button and enters a name, the page process calls the following procedure (within the package security_pkg associated with the context):

procedure create_scenario
  (scenario_name  in varchar2
  ,keep_refreshed in boolean := false) is
  l_wm_workspace varchar2(30);
begin

  -- workspace name must be unique and <= 30 chars
  l_wm_workspace := to_basex
   (to_number(sys_guid(),'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
   ,36);

  -- record the new scenario
  insert into scenarios (scenario_name, wm_workspace)
  values (create_scenario.scenario_name, l_wm_workspace);

  -- create the workspace
  dbms_wm.CreateWorkspace
    (workspace   => l_wm_workspace
    ,isrefreshed => keep_refreshed
    ,description => scenario_name
                 || ' ('
                 || sys_context('CTX','SECURITY_GROUP')
                 || ')'
    ,auto_commit => false);

  -- reset the savepoint, if required
  dbms_session.clear_context
      (namespace => 'CTX'
      ,attribute => 'WM_SAVEPOINT'
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

  -- go to the workspace on the next page view
  dbms_session.set_context
      (namespace => 'CTX'
      ,attribute => 'WM_WORKSPACE'
      ,value     => l_wm_workspace
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

end create_scenario;

Note: the to_basex function is based on code copied from this proposed OraOpenSource addition.

In my APEX application Security Attributes, I have the following:

  • Initialization PL/SQL Code: security_pkg.init_apex_session;
  • Cleanup PL/SQL Code: security_pkg.cleanup_apex_session;

The procedures called above are as follows:

procedure wm_init is
begin
  dbms_wm.GotoWorkspace(nvl(sys_context('CTX','WM_WORKSPACE')
                           ,'LIVE'));
  dbms_wm.GotoSavepoint(nvl(sys_context('CTX','WM_SAVEPOINT')
                           ,'LATEST'));
end wm_init;

procedure init_apex_session is
begin
  wm_init;
end init_apex_session;

procedure cleanup_apex_session is
begin
  dbms_wm.GotoWorkspace('LIVE');
  dbms_wm.GotoSavepoint('LATEST');
end cleanup_apex_session;

The effect of this is that for each page request, the user’s selected workspace and/or savepoint is activated, or if they have not yet chosen a workspace or savepoint, the “live” workspace and “latest” savepoint is selected (which are the defaults). At the end of each page request, the session is reset to the “live” workspace and “latest” savepoint.


Create a Snapshot

Here is my code to create a snapshot, using the Workspace Manager SavePoint feature:

procedure create_snapshot
  (snapshot_name in varchar2) is
  l_wm_workspace varchar2(30);
  l_wm_savepoint varchar2(30);
begin

  -- savepoint name must be unique and <=30 chars
  l_wm_savepoint := to_basex
   (to_number(sys_guid(),'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
   ,36);

  l_wm_workspace := nvl(sys_context('CTX','WM_WORKSPACE'), 'LIVE');

  -- record the snapshot
  insert into snapshots
    (snapshot_name
    ,wm_workspace
    ,wm_savepoint)
  values
    (create_snapshot.snapshot_name
    ,l_wm_workspace
    ,l_wm_savepoint);

  -- create the savepoint
  dbms_wm.CreateSavepoint
    (workspace      => l_wm_workspace
    ,savepoint_name => l_wm_savepoint
    ,description    => snapshot_name
                    || ' ('
                    || sys_context('CTX','SECURITY_GROUP')
                    || ')'
    ,auto_commit    => false);

end create_snapshot;

Go to a Scenario

This sets the context for the user’s session so that subsequent page requests will load the specified Workspace. Any DML the user performs on version-enabled tables will be private to the workspace.

procedure goto_scenario (scenario_name in varchar2) is
  l_wm_workspace varchar2(30);
begin

  -- retrieve the workspace name for the given scenario
  select s.wm_workspace into l_wm_workspace
  from scenarios s
  where s.scenario_name = goto_scenario.scenario_name;

  -- reset the savepoint, if required
  dbms_session.clear_context
      (namespace => 'CTX'
      ,attribute => 'WM_SAVEPOINT'
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

  -- go to the workspace on the next page view
  dbms_session.set_context
      (namespace => 'CTX'
      ,attribute => 'WM_WORKSPACE'
      ,value     => l_wm_workspace
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

end goto_scenario;

View a Snapshot

This sets the context for the user’s session so that subsequent page requests will be viewing the specified Savepoint. The version-enabled tables will be read-only; any DML on them will raise an exception.

procedure goto_snapshot (snapshot_name in varchar2) is
  l_wm_workspace varchar2(30);
  l_wm_savepoint varchar2(30);
begin

  -- retrieve the details for the given snapshot
  select s.wm_workspace
        ,s.wm_savepoint
  into   l_wm_workspace
        ,l_wm_savepoint
  from snapshots s
  where s.snapshot_name = goto_snapshot.snapshot_name;

  -- set the workspace and savepoint on the next page request

  dbms_session.set_context
      (namespace => 'CTX'
      ,attribute => 'WM_WORKSPACE'
      ,value     => l_wm_workspace
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

  dbms_session.set_context
      (namespace => 'CTX'
      ,attribute => 'WM_SAVEPOINT'
      ,value     => l_wm_savepoint
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

end goto_snapshot;

Go to Live

When the user is in a Scenario, and they wish to go back to “Live”, they can click a button which executes the following procedure. Their next page request will go to the LATEST savepoint in the LIVE workspace.

procedure goto_live is
begin

  dbms_session.clear_context
      (namespace => 'CTX'
      ,attribute => 'WM_WORKSPACE'
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

  dbms_session.clear_context
      (namespace => 'CTX'
      ,attribute => 'WM_SAVEPOINT'
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

end goto_live;

Go to Latest

When the user is viewing a Snapshot, and they wish to go back to “Latest” (so they can do DML, etc.), they can click a button which executes the following procedure. This works regardless of whether they are in the Live workspace or viewing a scenario. Their next page request will go to the LATEST savepoint.

procedure goto_latest is
begin

  dbms_session.clear_context
      (namespace => 'CTX'
      ,attribute => 'WM_SAVEPOINT'
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

end goto_latest;

Compare two Scenarios

When the user wishes to analyze the differences between two scenarios, or between a scenario and the Live workspace, the process is as follows:

  1. The user selects two Scenarios (or the “Live” workspace) from some select lists; the return value of these select lists should be the wm_workspace column from the table, or 'LIVE' for the Live workspace.
  2. The user clicks a button to execute the find_diffs1 procedure (see below).
  3. The page has a report on the view mytable_DIFF (see the sample query below).

procedure find_diffs1
  (wm_workspace1 in varchar2
  ,wm_workspace2 in varchar2) is
begin

  dbms_wm.SetDiffVersions
    (workspace1   => wm_workspace1
    ,workspace2   => wm_workspace2
    ,onlyModified => true);

end find_diffs1;
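
The _DIFF view (step 3) is generated by OWM with all the columns of the table, plus WM_DIFFVER and WM_CODE; the report can be based on a query like this sketch:

select d.id
      ,d.name
      ,d.wm_diffver -- which workspace/savepoint the row version came from
      ,d.wm_code    -- e.g. 'I' inserted, 'U' updated, 'D' deleted, 'NC' no change
from   mytable_diff d
order by d.id, d.wm_diffver;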

Compare Snapshots

When the user wishes to analyze the differences between two snapshots (in any workspace), or between the latest data and a snapshot, a similar process is used:

  1. The user selects two Snapshots (or the “Latest” snapshot for either the Live workspace or a given Scenario); the return values of these select lists should be the wm_workspace and wm_savepoint, e.g. wm_workspace||':'||wm_savepoint.
  2. The user clicks a button to execute the find_diffs2 procedure (see below).
  3. The page has a report on the view mytable_DIFF (the sample query shown earlier applies here too).

procedure find_diffs2
  (workspace_savepoint1 in varchar2
  ,workspace_savepoint2 in varchar2) is
  l_sep1 number := instr(workspace_savepoint1,':');
  l_sep2 number := instr(workspace_savepoint2,':');
begin

  dbms_wm.SetDiffVersions
    (workspace1   => substr(workspace_savepoint1, 1, l_sep1-1)
    ,savepoint1   => substr(workspace_savepoint1, l_sep1+1)
    ,workspace2   => substr(workspace_savepoint2, 1, l_sep2-1)
    ,savepoint2   => substr(workspace_savepoint2, l_sep2+1)
    ,onlyModified => true);

end find_diffs2;

Merge a Scenario

In my system, I don’t allow users to create scenarios within scenarios, although this is possible with Workspace Manager. If the user wishes to merge all changes in a Scenario to Live, they click a button to execute the following procedure:

wm_error_55 exception;
wm_error_56 exception;
pragma exception_init(wm_error_55, -20055);
pragma exception_init(wm_error_56, -20056);
e_merge_conflict exception;

procedure merge_scenario is
  l_wm_workspace varchar2(30);
begin

  l_wm_workspace := sys_context('CTX','WM_WORKSPACE');

  goto_live;
  wm_init;

  -- merge the workspace
  dbms_wm.MergeWorkspace
    (workspace        => l_wm_workspace
    ,remove_workspace => true
    ,auto_commit      => false);

  -- delete the scenario
  delete scenarios s
  where s.wm_workspace = l_wm_workspace;

exception
  when wm_error_55 or wm_error_56 then
    -- unable to merge due to conflicts

    -- go back into the workspace
    dbms_session.set_context
      (namespace => 'CTX'
      ,attribute => 'WM_WORKSPACE'
      ,value     => l_wm_workspace
      ,client_id => sys_context('userenv','CLIENT_IDENTIFIER'));

    -- caller should redirect user to the "resolve conflicts" page
    raise e_merge_conflict;

end merge_scenario;

This will fail with an exception if there are any conflicting changes in the Live workspace, e.g.:

  • a record was updated in the scenario, but was deleted in Live
  • a record was updated or deleted in the scenario, but also updated in Live
  • a record was inserted in the scenario, but another record with the same PK was inserted in Live

If the workspace had been created with the isrefreshed option, changes made in Live are automatically copied to the workspace, so these conflicts should be minimised; however, conflicts cannot always be avoided. To give the user the ability to analyze the conflicts, you would perform the following steps:

  1. Call dbms_wm.BeginResolve(workspace => sys_context('CTX','WM_WORKSPACE'))
  2. Show a report that queries the view mytable_CONF (see the sample query after this list)
  3. Allow the user to select an action for each record in conflict – either “PARENT” (keep the parent, i.e. discard the change in the scenario) or “CHILD” (keep the child, i.e. discard the change made in Live).
  4. For each record, call dbms_wm.ResolveConflicts (see example below).
  5. COMMIT
  6. Call dbms_wm.CommitResolve(workspace => sys_context('CTX','WM_WORKSPACE'))
  7. Call merge_scenario again.
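
For step 2, the _CONF view is generated with all the columns of the table, plus WM_WORKSPACE and WM_DELETED; a sample query might be:

select c.wm_workspace -- 'BASE', the scenario's workspace, or the parent
      ,c.id
      ,c.name
      ,c.wm_deleted   -- 'YES' if the row was deleted in that version
from   mytable_conf c
order by c.id, c.wm_workspace;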

The ResolveConflicts procedure takes a where clause that identifies the row (or rows) to mark as resolved. In my case, I just call it with the id for each record the user chose:

for r in (
  ...query on something, e.g. an APEX collection...
  ) loop

  dbms_wm.ResolveConflicts
    (workspace    => sys_context('CTX','WM_WORKSPACE')
    ,table_name   => 'MYTABLE'
    ,where_clause => 'id=' || r.id
    ,keep         => r.action /*'PARENT' or 'CHILD'*/);

end loop;

If the conflicts are across multiple tables, the UI will probably be a bit more complicated. You’d have to resolve conflicts on all the affected tables before the Merge can succeed.

“Don’t let a white fence and a promotion end the world for you.”

Delete a Scenario

Deleting a scenario uses RemoveWorkspace:

procedure delete_scenario
  (scenario_name in varchar2) is
  l_wm_workspace varchar2(30);
begin

  -- retrieve the workspace name for the given scenario
  select s.wm_workspace into l_wm_workspace
  from scenarios s
  where s.scenario_name = delete_scenario.scenario_name;

  -- get out of the workspace
  goto_live;
  wm_init;

  -- delete the workspace
  dbms_wm.RemoveWorkspace
    (workspace   => l_wm_workspace
    ,auto_commit => false);

  delete scenarios s
  where s.wm_workspace = l_wm_workspace;

end delete_scenario;

Delete a Snapshot

Deleting a snapshot uses DeleteSavepoint:

procedure delete_snapshot
  (snapshot_name in varchar2) is
  l_wm_workspace varchar2(30);
  l_wm_savepoint varchar2(30);
begin

  -- retrieve the details for the given snapshot
  select s.wm_workspace
        ,s.wm_savepoint
  into   l_wm_workspace
        ,l_wm_savepoint
  from snapshots s
  where s.snapshot_name = delete_snapshot.snapshot_name;

  -- get out of the snapshot/scenario:
  goto_latest;
  wm_init;

  -- delete the savepoint
  dbms_wm.DeleteSavepoint
    (workspace      => nvl(l_wm_workspace,'LIVE')
    ,savepoint_name => l_wm_savepoint
    ,auto_commit    => false);

  delete snapshots s
  where s.wm_savepoint = l_wm_savepoint;

end delete_snapshot;

Row History

One of the requirements of my application was to show a report of the entire history of edits to each record in a table. Since I’m already going to version-enable this table, it makes sense to take advantage of the Row History feature of Oracle Workspace Manager.

When you version-enable a table, OWM creates a view called MYTABLE_HIST which includes all the columns of the table, plus the following columns: WM_WORKSPACE, WM_VERSION, WM_USERNAME, WM_OPTYPE, WM_CREATETIME, and WM_RETIRETIME. By default, when you version-enable a table, OWM keeps only a minimal set of history in order to support other features in use, such as snapshots. In order to retain a complete history of changes to the record, enable versioning with the View Without Overwrite history option:

begin
  dbms_wm.EnableVersioning('MYTABLE'
    ,hist => 'VIEW_WO_OVERWRITE');
end;

This stops OWM from overwriting the history of changes to each record, so it can be queried via the _HIST view.

Now, I wanted to expose the contents of this view to my users, and for each history record show Who did it and When. “When” is easily answered by WM_CREATETIME, which is a TIMESTAMP WITH TIME ZONE. “Who” would normally be answered by WM_USERNAME, but since we’re in APEX, this will always be 'APEX_PUBLIC_USER' which is not very useful. Therefore, I have an ordinary column in my table called DB$UPDATED_BY which is set by the following ordinary trigger:

create trigger MYTABLE$TRG
  before insert or update on MYTABLE
  for each row
begin
  :new.db$updated_by :=
     coalesce(sys_context('APEX$SESSION','APP_USER')
             ,sys_context('USERENV','SESSION_USER'));
end MYTABLE$TRG;

This means my report will show the actual APEX username of who created or updated the record.

The WM_OPTYPE column in the _HIST view will be 'I' (inserted), 'U' (updated), or 'D' (deleted). I can translate this code into user-friendly values for reporting purposes, but there is one problem: when a row is deleted, the _HIST view records the database username that deleted the record, but we never recorded the APEX username. Our db$updated_by column wasn’t changed, so it simply shows whoever last updated the record before it was deleted.

To solve this, we have two options:
(1) Write the application to do a quick no-change UPDATE on each row immediately before deleting it; then do some fancy footwork in the view to show this as a “Delete” operation.
(2) Don’t allow the application to execute actual DELETEs – in other words, use a “soft-delete” design.

I’ve gone with option (2) which is what the users needed anyway – they needed to be able to Delete and Undelete records at will, so I’ve only granted INSERT and UPDATE on the table and the application will translate a request to “DELETE” into an update of a “deleted” flag on the table. A request to Undelete a record is simply another update to set the “deleted” flag to NULL. All reports in the application have a predicate to filter out any deleted records unless the user is viewing the “recycle bin”. In addition, if the user is viewing a deleted record, I put the form into “Readonly” mode – the only thing they can do is Undelete it (thereafter, they can modify the record if they wish).
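
A sketch of the resulting “delete” and “undelete” operations (the procedure names are hypothetical; the deleted_ind column matches the history view below):

procedure soft_delete (id in number) is
begin
  -- a "delete" is just an update of the deleted flag
  update mytable m
  set    m.deleted_ind = 'Y'
  where  m.id = soft_delete.id;
end soft_delete;

procedure undelete (id in number) is
begin
  update mytable m
  set    m.deleted_ind = null
  where  m.id = undelete.id;
end undelete;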

I’ve created the following view in order to show the history of changes to a record:

create or replace view mytable_hist_vw as
select x.id, x.name, etc.
      ,x.wm_createtime as op_timestamp
      ,case
       when x.wm_optype = 'U'
        and x.deleted_ind = 'Y'
        and lag(x.deleted_ind,1)
            over (partition by x.id
                  order by x.wm_createtime) is null
       then 'Deleted'
       when x.wm_optype = 'U'
        and x.deleted_ind is null
        and lag(x.deleted_ind,1)
            over (partition by x.id
                  order by x.wm_createtime) = 'Y'
       then 'Undeleted'
       when x.wm_optype = 'I' then 'Created'
       when x.wm_optype = 'U' then 'Updated'
       when x.wm_optype = 'D' then 'Deleted permanently'
       end as op_desc
      ,case
       when x.wm_optype = 'D'
       then x.wm_username
       else x.db$updated_by
       end as op_by
from   mytable_hist x;

I interpret a WM_OPTYPE of 'D' as “Deleted permanently”. This might occur if we do a delete directly on the table, e.g. via a data fix run directly on the database. In this case I report the username according to the wm_username column, since we won’t have the APEX user name.

Data Dictionary Views

OWM provides a number of data dictionary views:

  • Versioned tables: user / all / dba_wm_versioned_tables
  • Constraints on version-enabled tables: user / all_wm_constraints
  • Triggers on version-enabled tables: user / all_wm_tab_triggers
  • VPD policies on version-enabled tables: user / all_wm_policies
  • Workspaces: user / all / dba_workspaces

The Future

According to Bill Beauregard (Long Transactions with Oracle Database Workspace Manager Feature), we can expect to see some enhancements to OWM in the near future, including:

  • 128 character workspace names
  • Relaxed limitations on table, column and index name length
  • Support for invisible indexes, virtual and identity columns

Related

Oracle Documentation

Did you recognise all the TV shows referenced in this article?

Interactive Grid: Custom Select List on each row

Tue, 2018-02-27 22:19

I had a column in an editable interactive grid based on a Select List, which takes valid values from a table that supports “soft delete” – i.e. records could be marked as “deleted” without affecting existing references to those deleted records.

The SQL Query for the LOV was like this (my example is a view on a reference table of school “year levels”):

select name, id from year_levels_vw
where deleted_ind is null
order by sort_order

The problem is that if a year level is marked as deleted, the select list will not include it due to the where clause; since Display Extra Values is set to “Yes”, the item on the page will instead show the internal ID which is not very useful to the user. Instead, I want to show the name but appended with a string to show it has been deleted:

select name
       || case when deleted_ind = 'Y' then ' (DELETED)' end
       as disp_name, id
from year_levels_vw
order by deleted_ind nulls first, sort_order

So now the select list shows the name, even if it has been deleted. However, once users start using this system and they delete some year levels, each select list will include all the deleted values, even if they will never be used again. We’d prefer to only include a deleted value IF it is currently used by the record being viewed; otherwise, we want to omit it from the list.

If this was an APEX item in a single-record edit form, I’d simply change the SQL Query for the LOV to:

select name
       || case when deleted_ind = 'Y' then ' (DELETED)' end
       as disp_name, id
from year_levels_vw
where deleted_ind is null or id = :P1_FROM_YEAR_LEVEL_ID
order by deleted_ind nulls first, sort_order

This way, the select list will only include the deleted year level if the underlying item was already set to that deleted ID. But we are now using an Interactive Grid – there is no page item to refer to.

The method I’ve used to solve this is to take advantage of the Cascading LOV feature in order to allow the query to refer to the value of the column. The SQL Query for the LOV on my Interactive Grid is:

select name
       || case when deleted_ind = 'Y' then ' (DELETED)' end
       as disp_name, id
from year_levels_vw
where deleted_ind is null or id = :FROM_YEAR_LEVEL_ID
order by deleted_ind nulls first, sort_order

Now, we need to make sure that “FROM_YEAR_LEVEL_ID” is available to the query, so we need to put it in the Items to Submit attribute. To make this attribute available, however, we must set Cascading LOV Parent Column(s) to something; I set it to the PK ID of the table, or some other column which doesn’t get changed by the user and isn’t actually referred to in the LOV Query.
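
To recap, the relevant attribute settings on the grid column end up something like this (names from my example):

Type: Select List
List of Values SQL Query: (the query above)
Items to Submit: FROM_YEAR_LEVEL_ID
Cascading LOV Parent Column(s): ID
Display Extra Values: Yes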

Now, records not referring to a deleted value show only valid values:

And records that refer to a deleted value include the deleted value in the list, as desired:

It should be noted that the design of the select list means that the user is allowed to save changes to the record while preserving the reference to the deleted year level. This is desired, in this case; if it wasn’t, however, I’d consider putting a validation on the page to stop the record being saved unless the user changes it to a valid value.

P.S. Concerned parents should note that this example was just testing the ability to delete a record from a database, and I’d just like to make it totally clear that there are no plans by the department to eliminate year 7 from schools. Honest!

ADDENDUM (19/3/2018):

There are two known issues:

  1. If the item is the child of a Cascading LOV, when the parent item is changed, APEX automatically clears out any value in the child before rendering the list of values – which means the column value submitted will be NULL – which means the “deleted” items disappear from the list immediately. This means the user will not be allowed to save the record with a reference to a deleted value from the list.
  2. The column filter list of values is empty – this is due to a known bug in APEX (Doc ID 2289512.1: FILTER NOT WORKING IN INTERACTIVE GRID WITH CASCADING LOV) – thanks to Dejan for alerting me to this.

AUSOUG Connect 2017 – see you in Perth

Thu, 2017-10-12 09:13

I’ll be presenting a talk about some APEX applications I’ve been building gradually over the past five years at the AUSOUG Connect 2017 conference in Perth, Western Australia on 29 November.

My hope is that you’ll get some ideas on how best to set up the overall architecture for your APEX applications that may need to serve more than one customer – whether different organisations or different business groups within your organisation.

Although it’s just one day, there’s a lot of good content packed in for developers, DBAs and eBusiness professionals so you really don’t want to miss it!

For more info

Site migrated

Wed, 2017-10-04 20:14

I’ve recently migrated this site to a new host. Most of the content has come across unscathed but there are some remaining issues I’m working on. If you notice anything amiss, PLEASE let me know by email to errata@jk64.com – it will be greatly appreciated!

PayWay interface

Mon, 2017-08-28 21:22

This is just to document how my system (which was built quite a few years ago) handles credit card payments via PayWay. The interface is rather dated and the way my system uses it is a bit hackish but for what it’s worth:

Step 1. On the server, generate a unique transaction reference, e.g. id123456

Step 2. On the server, request a token via HTTP Post:

https://www.payway.com.au/RequestToken

Data (all one line):

biller_code=my-vendor-code
&merchant_id=my-merchant-id
&payment_reference_text=Our+Reference
&payment_reference=id123456
&payment_amount_text=Amount
&payment_amount=123.45
&receipt_address=customer@example.com
&username=my-payway-username
&password=my-payway-password
&payment_alert=bookkeeper@example.com
&return_link_url=https://example.com/return/gamesfestid12345
&return_link_redirect=true
&information_fields=CustomField1,CustomField2
&CustomField1=foo
&CustomField2=bar

Make sure all the values are url encoded.

Some of the fields are optional. The “information_fields” parameter is a list of custom attributes that you can add if you want.

Step 3. Read the response

The response data will be something like this:

token=xyz1234567890

If it doesn’t start with “token=”, I interpret it as an error message.
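
From PL/SQL, steps 2 and 3 might look something like this sketch using apex_web_service (only a few of the fields are shown, and all credentials and values are placeholders):

declare
  l_body     varchar2(4000);
  l_response clob;
  l_token    varchar2(100);
begin
  -- build the form-encoded body; each value must be url encoded
  l_body := 'biller_code='   || utl_url.escape('my-vendor-code', true)
    || '&merchant_id='       || utl_url.escape('my-merchant-id', true)
    || '&payment_reference=' || utl_url.escape('id123456', true)
    || '&payment_amount='    || utl_url.escape('123.45', true)
    || '&username='          || utl_url.escape('my-payway-username', true)
    || '&password='          || utl_url.escape('my-payway-password', true);

  apex_web_service.g_request_headers.delete;
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded';

  l_response := apex_web_service.make_rest_request
    (p_url         => 'https://www.payway.com.au/RequestToken'
    ,p_http_method => 'POST'
    ,p_body        => l_body);

  if substr(l_response, 1, 6) = 'token=' then
    l_token := trim(substr(l_response, 7));
  else
    raise_application_error(-20000
      ,'PayWay error: ' || substr(l_response, 1, 200));
  end if;
end;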

Step 4. On the client, redirect browser to this URL:

https://www.payway.com.au/MakePayment?biller_code=my-vendor-code&token=xyz1234567890

Step 5. After the user comes back to your site (or in a separate server job) you have to query PayWay’s system to find out the result of the transaction, using the payment reference you generated earlier. This is a REST-style interface.

https://api.payway.com.au/rest/v1/transactions/search-customer?customerNumber=id123456

This returns a JSON document listing one or more transactions for the given reference, including transactionid (e.g. 12345678901234567890), status, transactionType, paymentAmount, and settlementDate.

Step 6. For each transaction, query PayWay’s system again to get all the details, using the transactionid retrieved earlier.

https://api.payway.com.au/rest/v1/transactions/12345678901234567890

This returns a JSON document with attributes including: receiptNumber, status, responseCode, responseText, transactionType, customerName, customerEmail, principalAmount, creditCard.cardScheme, creditCard.cardNumber, creditCard.expiryDateMonth, creditCard.expiryDateYear, creditCard.cardholderName, transactionDateTime, settlementDate, declinedDate, isRefundable.
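
Steps 5 and 6 can use the same package together with apex_json. A sketch, assuming the REST interface accepts your API key via basic authentication (check PayWay’s current documentation):

declare
  l_response clob;
begin
  l_response := apex_web_service.make_rest_request
    (p_url         => 'https://api.payway.com.au/rest/v1/transactions/12345678901234567890'
    ,p_http_method => 'GET'
    ,p_username    => 'my-rest-api-key' -- placeholder
    ,p_password    => '');

  apex_json.parse(l_response);

  -- pick out a few of the attributes listed above
  dbms_output.put_line('status       = ' || apex_json.get_varchar2('status'));
  dbms_output.put_line('responseCode = ' || apex_json.get_varchar2('responseCode'));
  dbms_output.put_line('amount       = ' || apex_json.get_number('principalAmount'));
end;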

My system keeps track of the payment references generated in step #1, and periodically requeries PayWay to get the results (and I give up after about a day or two – this happens if the customer never completes the transaction).
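
That periodic requery suits a scheduler job; a sketch, where payment_pkg.requery_pending is a hypothetical stand-in for whatever performs steps 5 and 6:

begin
  dbms_scheduler.create_job
    (job_name        => 'payway_requery_job'
    ,job_type        => 'STORED_PROCEDURE'
    ,job_action      => 'payment_pkg.requery_pending' -- hypothetical procedure
    ,repeat_interval => 'FREQ=MINUTELY;INTERVAL=15'
    ,enabled         => true);
end;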

Note that for all this to work, you have to first get the bank to enable both interfaces for your client – the token interface on www.payway.com.au as well as the REST-style interface on api.payway.com.au. It took quite a few emails and phone calls before it was all working.


Filed under: Other

Apex Plugin: Password Strength Estimator

Tue, 2017-08-15 23:26

I needed a simple password strength prompt for users when they need to create or change their password on my website. After a bit of Googling I found the “Low-Budget Password Strength Estimator” which is supposedly used by Dropbox, so you know it’s good :)

This simple javascript library runs entirely within the client’s browser, and when presented with a candidate password, gives a score from 0 (very poor) to 4 (very good). It can also return extra feedback, including a warning message for poor passwords, as well as suggestions for making a password more secure.

[screenshot: password rated “very poor”]

So I’ve created a very simple Dynamic Action plugin (try the demo here) that allows you to add this functionality to any item on your page. You can specify a minimum length for the password, and can override the default messages for each score. You can also select whether or not the feedback warnings or suggestions are shown.

[screenshot: password rated “poor”]

It seems to catch a lot of poor passwords, including ones comprising common words and names, and ones involving a simple sequence or repetition.

[screenshot: password rated “good”]

[screenshot: password rated “strong”]

Obviously it’s only really useful for password entry fields; but don’t use it on your Login page!

Download from: https://github.com/jeffreykemp/jk64-plugin-passwordstrength


Filed under: APEX, Oracle Tagged: APEX, passwords, plug-ins
