The Anti-Kyte

Oracle - for when it was like that when you got there

ORA-01775 – Looping Chain of Synonyms and Epic Football Punditry

Tue, 2019-04-30 04:59

In 1990, Liverpool became English League Champions for the 10th time in 15 seasons.
Despite this impressive track record, my Dad came over all Yoda-esque and confidently predicted that they would not win The Title again in his lifetime.
Since then, Liverpool have won everything else, including the Champions League – OK Dad, the European Cup – but the prediction has held.
In fact, it’s gone on so long that it probably qualifies as a prophecy by now.
Before the start of each season, I can assess Liverpool’s prospects by simply enquiring after his health.
“Mustn’t grumble, ‘cos if you do no-one bloody listens!” can be taken as a synonym for “I’ll be around for a while yet, so don’t waste your money on backing Liverpool to win it this season”.
Which brings us to the subject of this post – namely the apparently random nature of the ORA-01775 error, where synonyms are concerned…

To demonstrate what’s going on and (hopefully) why, I’ve created a table in the MIKE schema and granted SELECT on it to HR :

create table predictions (
    predictor varchar2(128), 
    prediction varchar2(4000))
/

insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
/

grant select on predictions to hr
/

As a result, I can query the table when connected as HR :
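select predictor, prediction
from mike.predictions
/

PREDICTOR            PREDICTION
-------------------- ------------------------------------------------------
MY DAD               Liverpool will not win the League again in my lifetime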

I can also create a local synonym for the table so I don’t have to remember which schema it’s in …

create or replace synonym predictions for mike.predictions;

Now, if we were to drop the table (back in the MIKE schema, remember) …

drop table predictions;

… and then attempt to reference it directly, the result is entirely, well, predictable…
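Connecting as HR (the output format will differ slightly between SQL*Plus and SQLcl) :

select * from mike.predictions;

ERROR at line 1:
ORA-00942: table or view does not exist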

Meanwhile, the synonym is unaffected by the removal of its underlying table …

…which means we get a different error when we use it to try to access the dropped table…
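This time, using HR’s local synonym :

select * from predictions;

ERROR at line 1:
ORA-00980: synonym translation is no longer valid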

I’m using SQLcl here, so we only see the first line of the error. However, we can use SQLcl’s OERR function to get the full error text, which is far more informative :
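Something along these lines (paraphrased here, so treat the Cause and Action text as approximate and check the output on your own system) :

oerr ora 00980

00980, 00000, "synonym translation is no longer valid"
// *Cause:  A synonym did not translate to a legal target object. This
//          could happen because the target object has been dropped or
//          renamed, or because a synonym in the chain no longer resolves.
// *Action: Re-create the target object, or change the synonym definition
//          so that it points to a valid object.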

The error message makes it very clear what might be causing the error and gives you a good idea how to fix it.
Of course, you’ll always get this error if the synonym you’re referencing is pointing to a non-existent table, right ?

When Oracle starts yanking your chain

Let’s re-create the table in MIKE again …

create table predictions (
    predictor varchar2(128), 
    prediction varchar2(4000))
/

insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
/

grant select on predictions to hr
/

This time however, we’re going to create a public synonym (connecting as a user with CREATE PUBLIC SYNONYM privilege) …

create public synonym predictions for mike.predictions
/

…and drop HR’s local synonym…

drop synonym predictions;

…which leaves us with only the Public synonym pointing to the table…

HR can still access the table, but this time it’s using the public synonym…
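The query is the same; only the name resolution has changed :

select predictor, prediction
from predictions
/

PREDICTOR            PREDICTION
-------------------- ------------------------------------------------------
MY DAD               Liverpool will not win the League again in my lifetime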

Of course, if we drop the table again, we’ll get the same error when we attempt to access it via the synonym…
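Or do we ?

select * from predictions;

ERROR at line 1:
ORA-01775: looping chain of synonyms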

Wait, what ? What happened to that nice, explanatory ORA-00980 ?

OERR isn’t saying anything :
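Well, next to nothing :

oerr ora 01775

01775, 00000, "looping chain of synonyms"
// *Cause:
// *Action: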


According to the docs :

Oracle Database attempts to resolve references to objects at the schema level before resolving them at the PUBLIC synonym level

In this instance, it seems that Oracle thinks that the synonym is pointing to itself : with the table gone, HR’s reference to PREDICTIONS no longer resolves to a schema-level object, so it falls through to the public synonym PREDICTIONS; that points to MIKE.PREDICTIONS, which no longer exists, so the name resolves back to the public synonym again…and around we go.

We can confirm this hypothesis by using a public synonym with a different name to that of the table it’s pointing to.

First of all though, we need to drop our current public synonym or we’ll wind up right back here in looping chain lunacy :

drop public synonym predictions;

create table predictions (
    predictor varchar2(128), 
    prediction varchar2(4000))
/

insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
/

grant select on predictions to hr
/

create public synonym prophecies for mike.predictions;

HR now accesses the table using the PROPHECIES public synonym…
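select predictor, prediction
from prophecies
/

PREDICTOR            PREDICTION
-------------------- ------------------------------------------------------
MY DAD               Liverpool will not win the League again in my lifetime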


If we drop the underlying table again and attempt to use the synonym we’re back to a rather more reassuring error :
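select * from prophecies;

ERROR at line 1:
ORA-00980: synonym translation is no longer valid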

The moral of this tale ? Well, if you get an ORA-01775 error then it’s probably worth checking the availability of any tables that are a target of a public synonym as a first step to debugging the issue.

Oh, and my Dad knows best.

Easy Listening with Datapump in the SQLDeveloper DBA Module

Mon, 2019-04-01 12:05

There are a number of ways to transfer data between Oracle Databases, one of which is to use the PL/SQL Datapump API – DBMS_DATAPUMP.
If you wish to avail yourself of this utility but find the syntax a bit fiddly, you always have the option of getting SQLDeveloper to do (most of) it for you.
What we’re talking about here is how to persuade the SQLDeveloper DBA module to :

  • Create and execute a custom Datapump export job
  • Do most of the work creating an import of a subset of the exported data

The Application

I’ve got a simple application which consists of a single table and some data :

select title, artist, original_artist, is_mellow
from mike.tracks
/

TITLE                          ARTIST                ORIGINAL_ARTIST                                    IS_MELLOW
------------------------------ --------------------- -------------------------------------------------- ---------
MOTORHEAD                      COURDUROY             HAWKWIND                                           Y        
MOTORHEAD                      MOTORHEAD             HAWKWIND                                           N        
THUNDERSTRUCK                  STEVE N SEAGULLS      AC/DC                                              Y        
WHISKY IN THE JAR              METALLICA             THIN LIZZY                                         N        
HIGHWAY TO HELL                CARLA BRUNI           AC/DC                                              Y        
SHIPPING UP TO BOSTON          GODDESSES OF BAGPIPES DROPKICK MURPHYS                                   Y        
ANOTHER GIRL ANOTHER PLANET    BLINK 182             THE ONLY ONES                                      N        
SMOKE ON THE WATER             PAT BOONE             DEEP PURPLE                                        Y        
EMERGENCY                      MOTORHEAD             GIRL SCHOOL                                        N        
SMELLS LIKE TEEN SPIRIT        2 CELLOS              NIRVANA                                            Y        
WHOLE LOTTA LOVE               CHICO ARNEZ           LED ZEPPLIN                                        Y        
KING OF TWILIGHT               IRON MAIDEN           NEKTAR                                             N        
PARANOID                       GUNTER NORIS          BLACK SABBATH                                      Y        
C'MON FEEL THE NOISE           OASIS                 SLADE                                              N        

14 rows selected. 

The Datapump Export

Note that I’ve deliberately kept things simple in this example. If you were looking at ways of transferring this kind of data-volume then Datapump is probably overkill.

Anyhow, I want to export this table and all of its data. To do this in SQLDeveloper, connected as a DBA user, I can go to the View menu and select DBA.
In the DBA tree, I need to right-click the Data Pump node and select the Data Pump Export Wizard

This brings up the first screen of the Export Wizard. Slightly confusingly, it’s labelled as Step 2.
The table I want to export is in a different Schema to the one I’m connected as so I select Schemas from the radio group :

After clicking the Next button we get the chance to choose one or more schemas to include in the export. In my case, I just want the one schema :

The next screen affords us the opportunity to include or exclude types of object. The Filter Types are the same as those you’d see in the export log file :

On this occasion however, we’re not going to use these filters, so the screen for this step looks like this :

The next screen allows us to select which specific tables we may want to include in the export.

If I enter a search string in the Name field and click the Lookup button, I will eventually get a list of matching tables :

I can then highlight the required table(s) and add them to the list for export by clicking the down arrow…

In this case I want to export all of the data in all of the columns in the table so I have not specified a column list or a predicate.

Next, I can specify the name and location of a logfile for the export.
I’ve left the other options on this screen at their default setting…

I now need to specify the name and location of the dump file. Once again, all other settings are left at the default :

I’d like to run the job immediately so I don’t need to change any of the scheduling information :

Finally, I can see a summary of the job :

Interestingly, if I click on the PL/SQL tab, I can see the code that SQLDeveloper has generated for this export :
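The generated code varies between SQLDeveloper versions, but it amounts to a series of DBMS_DATAPUMP calls along these lines (the job name and file names here are illustrative rather than what the wizard will actually generate) :

declare
    h1 number;
begin
    -- a schema-mode export job
    h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'MIKE_TRACKS_EXP');
    dbms_datapump.set_parallel(handle => h1, degree => 1);
    -- the log and dump files land in DATA_PUMP_DIR
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks.log', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_log_file);
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks%U.dmp', directory => 'DATA_PUMP_DIR', filetype => dbms_datapump.ku$_file_type_dump_file);
    -- limit the export to the MIKE schema and the TRACKS table
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MIKE'')');
    dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN(''TRACKS'')', object_path => 'TABLE');
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.detach(handle => h1);
end;
/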

That’s quite interesting. Maybe this feature will come in handy in a bit.
For now though, I want to run the export immediately, so I just click the Finish button and SQLDeveloper starts the job whilst providing me a nice interactive view of it :

If I go to the Data Pump/Export Jobs node in the DBA tree, I can see a record of the job.
It will also display the contents of the log file :

If we now check the directory on the OS – the path for which can be found with…

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

DIRECTORY_PATH                                              
------------------------------------------------------------
/opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C

… we can see that the export file has been created, with a timestamp appended in the filename :

cd /opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C

ls -l
-rw-r-----. 1 oracle oinstall 331776 Mar 28 12:30 mike_tracks-12_30_11.dmp

We need to make a note of the file name because we’ll need it when we come to do…

The Datapump Import

I want to import this table into the HR schema. However, I don’t want all of the data, just the mellow tracks because I really need something to chill out to. Oh, and drown out the sound of the country going down the toilet.

Fortunately, SQLDeveloper can get us most of the way.

Once again, we need to right-click the Data Pump node of the DBA tree and select Data Pump Import Wizard.
We need to start by telling the Wizard the type of Import we want to do ( Tables) and the name of the dump file ( the export file we’ve just created) :

Next, we select the table using the shuttle buttons :

We want to import into the HR schema rather than the original schema :

I only want to create the table if it does not already exist :

Once again, I don’t want to schedule the job :

Finally, we have the summary :

Unlike the export, you’ll notice that the import Wizard has not given us the option to restrict the actual table rows we will be importing.
However, we do still get the code that has been generated in the PL/SQL tab of the summary window :

This time, rather than executing the job, I’m going to copy the code into an editor and add a manual “tweak” – namely :

    --
    -- Make sure we only get the mellow tracks because I really need to chill out...
    --
    dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');

The full script now looks like this :

set scan off
set serveroutput on
set escape off
whenever sqlerror exit 
DECLARE
    s varchar2(1000); 
    h1 number;
    errorvarchar varchar2(100):= 'ERROR';
    tryGetStatus number := 0;
begin
    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'TABLE', job_name => 'mike_playlist', version => 'COMPATIBLE'); 
    tryGetStatus := 1;
    dbms_datapump.set_parallel(handle => h1, degree => 1); 
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks_imp.log', directory => 'DATA_PUMP_DIR', filetype=>DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE); 
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 1); 
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MIKE'')'); 
    dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN(''TRACKS'')'); 
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks-12_30_11.dmp', directory => 'DATA_PUMP_DIR', filetype => 1); 
    dbms_datapump.metadata_remap(handle => h1, name => 'REMAP_SCHEMA', old_value => 'MIKE', value => UPPER('HR') ); 
    
    --
    -- Make sure we only get the mellow tracks because I really need to chill out...
    --
    dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1); 
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC'); 
    dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP'); 
    dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0); 
    dbms_datapump.detach(handle => h1); 
    errorvarchar := 'NO_ERROR'; 
EXCEPTION
    WHEN OTHERS THEN
    BEGIN 
        IF ((errorvarchar = 'ERROR')AND(tryGetStatus=1)) THEN 
            DBMS_DATAPUMP.DETACH(h1);
        END IF;
    EXCEPTION 
    WHEN OTHERS THEN 
        NULL;
    END;
    RAISE;
END;
/

I can now run this code in a SQLDeveloper Worksheet…

Whilst the feedback is not quite as good in this environment, you can still find the job itself in the DBA module, where you can see all of the details…

If I now check the new HR.TRACKS table :
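select count(*)
from hr.tracks
/

  COUNT(*)
----------
         8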

Appropriately enough given current events, eight tracks is the precise number needed to form a Desert Island Discs playlist.

Adding an item to the Ubuntu Unity Launcher

Sun, 2019-03-24 17:20

I would begin this post by saying something pithy about the latest Brexit crisis gripping the nation. However, watching any news or current affairs program at the moment leaves me feeling rather like this :

Fortunately, I had a new version of SQLDeveloper to install on my Ubuntu 16.04 laptop to take my mind off things.
After installing the software, I forgot – as I almost always do – how to add a new item to the Unity Launcher, so I thought I’d write down the steps this time.

Unity is similar to Gnome, KDE and – as it turns out – XFCE in that it seems to follow the freedesktop.org Desktop Entry Specification.
So, all I need to do is :

  • create a .desktop file in a suitable location to tell Unity how to run SQLDeveloper
  • add it to the Unity Launcher
Creating the desktop file

The desktop files in this Ubuntu version are in /usr/share/applications so…

sudo nano /usr/share/applications/sqldeveloper184.desktop

The file itself looks like this :

[Desktop Entry]
Name=SQLDeveloper 18.4
Exec=/opt/sqldeveloper184/sqldeveloper/sqldeveloper.sh
Icon=/opt/sqldeveloper184/sqldeveloper/icon.png
Type=Application
Categories=Utility;Development;IDE;

It’s probably worth briefly covering some of the entries in the file…

  • Name – the string which appears in the tooltip on the Launcher
  • Exec – the fully qualified path of the command to run for this menu item
  • Type – mandatory; must be one of Application, Link, or Directory
  • Categories – optional, but may be relevant if you are using a desktop which offers access to applications via a menu

Now we’ve created the file, we need to add it to the Launcher.


Adding the application to the launcher

The easiest way to do this, is to simply open the directory where the file is located in the File Manager…

nautilus /usr/share/applications

Sorting the files by Modified date (latest first), I can easily find my new file :

Then, I just need to drag it over to the Launcher…

Useful Links

As is so often the case, this sort of thing is easy when you know how…or when someone is kind enough to tell you. So…

Installing APEX and ORDS on Oracle 18cXE on CentOS

Fri, 2019-03-01 12:06

It’s been rather a trying week.
Wales beat England in the Rugby on Saturday and every Welsh person alive has been observing the ancient tradition of rubbing English noses in it ever since.
My claim to Welsh heritage by marriage has been given short shrift by Deb, whose accent has become rather more pronounced ever since the final whistle.

All in all, the onslaught of Welsh chauvinism has left me feeling rather like this :

Until things blow over, I’ve decided to spend more time in the shed. Fortunately, the Wifi signal is still pretty good so I’ve decided to use the free time by installing APEX 18.2 into an Oracle 18c RDBMS. As I’ve got time on my hands ( celebrations are unlikely to fizzle out for a couple of months yet), I’ve decided to follow Oracle’s recommendation and configure it to run on ORDS 18.4.
Specifically, what I’ll be covering here is :

  • installing APEX 18.2
  • installing ORDS 18.4
  • configuring APEX to run on ORDS
  • configuring ORDS to run on HTTPS with self-signed SSL certificates
  • using systemd to start ORDS automatically on boot

That should keep me occupied for a while…

The Environment

The server is running CentOS 7, a Linux distro that is functionally identical to the corresponding RHEL (Red Hat Enterprise Linux) release.
The Oracle Database is 18c. In this context, the edition doesn’t really matter, but I’m using Express Edition.
The APEX (Application Express) version is 18.2.
The ORDS (Oracle Rest Data Services) version is 18.4.

Whilst using ORDS with APEX makes your application architecturally N-tier – the ORDS server is a separate piece of software from the RDBMS hosting APEX – you can physically run ORDS on the same server as the database itself and this is what I’m doing here.
Once again, this should make little (if any) difference to the steps required to complete the installation.

I’m assuming that the server you want to install ORDS on will be headless. Therefore, all of the server-side steps described here are performed on the command line.

Oracle Recommendations

There are several recommendations spread through the relevant Oracle documentation which I have followed :

  • I’m running a multi-tenant database so APEX is installed in a PDB
  • I’ve installed APEX before ORDS
  • I’ve configured ORDS to run on HTTPS

I’ll link to the relevant documentation for each recommendation as and when we get to it.

Helpful Links

I was going to put these at the end but then I realised you might want to refer to them before you get thoroughly confused by my ramblings. So…

First up then…

Installing APEX

APEX itself sits entirely within the Oracle RDBMS.

Checking Requirements

According to the Installation Guide, the database requirements for APEX 18.2 are :

  • Oracle Database 11.2.0.4 or higher
  • a database MEMORY_TARGET of at least 300MB
  • At least 220MB plus 60MB for each additional language in the “Apex” tablespace
  • At least 100MB free in the SYSTEM tablespace
  • If installing the development environment, Oracle XML DB

Note that “Apex” tablespace is in quotes because, by default, the APEX users tend to get installed into the SYSAUX tablespace.

Let’s have a quick check to make sure that all these requirements are met on our system before we go any further.

For the Database version, we can run the following query :

select banner
from v$version
/

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production             

The MEMORY_TARGET setting is a bit more convoluted as it’s set to 0 by default :

select display_value, isdefault, description
from v$parameter
where name = 'memory_target'
/

DISPLAY_VALUE        ISDEFAULT  DESCRIPTION                             
-------------------- ---------- ----------------------------------------
0                    TRUE       Target size of Oracle SGA and PGA memory

The description of MEMORY_TARGET in this query provides a clue as to how we can check the memory allocation when the parameter is left at its default of 0 – we can simply add together the SGA and PGA targets :

  • the SGA_TARGET parameter holds the target size of the SGA
  • the PGA_AGGREGATE_TARGET parameter holds the “Target size for the aggregate PGA memory consumed by the instance”

So…

select sum(value)/1024/1024 as "Total Size (MB)"
from v$parameter
where name in ('sga_target', 'pga_aggregate_target')
/

Total Size (MB)
---------------
            378

Alternatively, if you’re running 12c or later, you can simply use Enterprise Manager Express :

As for the tablespace space availability :

select tablespace_name, 
    round((sum(maxbytes) - sum(bytes))/1024/1024) as "MB Free"
from dba_data_files
where tablespace_name in ('SYSTEM', 'SYSAUX')
group by tablespace_name;

TABLESPACE_NAME                   MB Free
------------------------------ ----------
SYSTEM                              32398
SYSAUX                              32148

Finally, we can check that Oracle XML DB is present with :

select comp_name, version_full
from dba_registry
where upper(comp_name) like 'ORACLE XML%'
/

COMP_NAME            VERSION_FULL                  
-------------------- ------------------------------
Oracle XML Database  18.4.0.0.0                    

Now that’s all done, we can go and get the software.

Downloading APEX

Head over to the APEX Download Page and pick up the latest version ( 18.2.0.0.12 at the time of writing). Note that there’s no OS specific options because APEX sits entirely within the RDBMS.

You can choose between the “All languages” version (705MB uncompressed) or “English language only” (310MB uncompressed). I’ve gone for the latter and therefore ended up with this file :

-rw-rw-r-- 1 mike mike 94421975 Feb 20 11:51 apex_18.2_en.zip

First we need to change the ownership of the file to the oracle user as that’s the OS user we’ll be running the install as :

sudo chown oracle:oinstall apex_18.2_en.zip

Now we can switch to the oracle user and unzip the file to what is usually the $ORACLE_BASE directory (/opt/oracle) :

sudo su oracle
unzip -d /opt/oracle apex_18.2_en.zip

echo $ORACLE_BASE
/opt/oracle
cd $ORACLE_BASE/apex

Before we connect to the database, it’s worth noting that the Installation Guide has this to say about APEX on a Multitenant Database :

“Oracle recommends removing Oracle Application Express from the root container database for the majority of use cases, except for hosting companies or installations where all pluggable databases (PDBs) utilize Oracle Application Express and they all need to run the exact same release and patch set of Oracle Application Express. ”

In my case I’m installing into an 18cXE database which does not have APEX pre-installed. Either way, I want to install into a PDB rather than the CDB.
It’s also worth noting that you’ll be prompted for the following when you run the installation script :

  • The tablespace for the APEX application user (usually SYSAUX)
  • The tablespace for the APEX files user (SYSAUX)
  • The temporary tablespace (TEMP)
  • The virtual images directory (“/i/”)

So, still as the oracle user, from /opt/oracle/apex :

sqlplus /nolog
conn / as sysdba
alter session set container = xepdb1;

If you want to make sure that you are where you should be :

select sys_context('userenv', 'session_user') as session_user,
    sys_context('userenv', 'con_name') as container
from dual
/

SESSION_USER		       CONTAINER
------------------------------ ------------------------------
SYS			       XEPDB1

Next, we need to check that the Default Profile’s password complexity function is disabled :

select limit
from dba_profiles
where profile = 'DEFAULT'
and resource_type = 'PASSWORD'
and resource_name = 'PASSWORD_VERIFY_FUNCTION'
/

LIMIT
----------------------------------------
NULL

If there is a password complexity function assigned, you’ll need to disable it.
Remember to make a note of its name first so that you can put it back once the installation is complete.
To unset it :

alter profile default password_verify_function null;

Finally, we can start the installation. We want the full development environment so…

@apexins.sql SYSAUX SYSAUX TEMP /i/

This causes screens of messages and can run for some considerable time.
Eventually though, you should end up with :

Thank you for installing Oracle Application Express 18.2.0.00.12

Oracle Application Express is installed in the APEX_180200 schema.

The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex/apex_admin     (Oracle REST Data Services)

The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex     (Oracle REST Data Services)

timing for: Phase 3 (Switch)
Elapsed: 00:00:06.44
timing for: Complete Installation
Elapsed: 00:05:51.38

PL/SQL procedure successfully completed.

1 row selected.

...null1.sql
SYS> 

According to the Installation Guide, we should now have three new users. However, it seems that four are actually created…

select username, 
    default_tablespace as default_ts, 
    temporary_tablespace as temp_ts
from cdb_users
where trunc(created) = trunc(sysdate)
/

USERNAME                  DEFAULT_TS                     TEMP_TS
------------------------- ------------------------------ ------------------------------
APEX_PUBLIC_USER          USERS                          TEMP
FLOWS_FILES               SYSAUX                         TEMP
APEX_180200               SYSAUX                         TEMP
APEX_INSTANCE_ADMIN_USER  USERS                          TEMP

4 rows selected.

APEX_INSTANCE_ADMIN_USER is not mentioned in the documentation but seems to have been created in addition to the three expected accounts.

Setting up the APEX Admin User

The apxchpwd.sql script we run for this purpose will prompt for a password. The script enforces the following password complexity rules :

  • Password must contain at least 6 characters
  • New password must differ from old password by at least 2 characters
  • Password must contain at least one numeric character (0123456789)
  • Password must contain at least one punctuation character (!"#$%&()``*+,-/:;?_)
  • Password must contain at least one upper-case alphabetic character

So…
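…still connected as SYSDBA in the PDB, we run the script and answer the prompts (paraphrased here from an 18.2 install, so expect minor differences) :

@apxchpwd.sql

Enter the administrator's username [ADMIN]
User "ADMIN" does not yet exist and will be created.
Enter ADMIN's email [ADMIN]
Enter ADMIN's password []
Changed password of instance administrator ADMIN.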

Setting up the APEX_PUBLIC_USER database account

As we saw, the APEX_PUBLIC_USER account has been created as part of the installation.
At this point, it has been created with a randomly generated password, which we’ll need to change to something we know.
Additionally, you may feel it prudent to make sure that the password, once reset, won’t expire as, if it does, your application will stop working until you change it again.
Note that this is something you need to consider carefully – does the convenience of not having to worry about password expiration for this account outweigh the security risks raised by never changing it ? In my case I think it’s fine because I’m messing about with a VM on my laptop. If you’re in a more formal environment, you may have a somewhat different risk appetite.

If you’re horrified by the flagrant disregard for password security that I’m about to demonstrate, look away now…

First, we need to create a profile where the password does not expire :

create profile apex_pu limit password_life_time unlimited;

Note that all of the other profile properties will have default values :

select resource_name, limit
from dba_profiles
where profile = 'APEX_PU'
order by resource_type, resource_name
/

RESOURCE_NAME		       LIMIT
------------------------------ --------------------
COMPOSITE_LIMIT 	       DEFAULT
CONNECT_TIME		       DEFAULT
CPU_PER_CALL		       DEFAULT
CPU_PER_SESSION 	       DEFAULT
IDLE_TIME		       DEFAULT
LOGICAL_READS_PER_CALL	       DEFAULT
LOGICAL_READS_PER_SESSION      DEFAULT
PRIVATE_SGA		       DEFAULT
SESSIONS_PER_USER	       DEFAULT
FAILED_LOGIN_ATTEMPTS	       DEFAULT
INACTIVE_ACCOUNT_TIME	       DEFAULT
PASSWORD_GRACE_TIME	       DEFAULT
PASSWORD_LIFE_TIME	       UNLIMITED
PASSWORD_LOCK_TIME	       DEFAULT
PASSWORD_REUSE_MAX	       DEFAULT
PASSWORD_REUSE_TIME	       DEFAULT
PASSWORD_VERIFY_FUNCTION       DEFAULT

Next, we assign this profile to APEX_PUBLIC_USER :

alter user apex_public_user profile apex_pu;

To confirm that the profile has been assigned :

select profile
from dba_users
where username = 'APEX_PUBLIC_USER';

PROFILE
------------------------------
APEX_PU

Security conscientious objectors can look again now.

To change the password :

alter user apex_public_user identified by Land0fmyfath3rs;

…replacing Land0fmyfath3rs with your own choice of non-rugby referencing password.

Finally, if you’ve unset the password verify function before starting, you now need to put it back :

alter profile default password_verify_function myfunc;

…where myfunc was the original password verify function.

At this point, the APEX installation is pretty much done.

Until now, I’ve been content to use the PL/SQL Gateway to serve my APEX pages. This involves using the Web Listener that is embedded in the database.
If you want to go down this route, the installation steps can be found in Appendix A of the Installation Guide.

However, the Installation Guide has this to say on the subject of choosing a web listener :

“Oracle HTTP Server and Embedded PL/SQL gateway are considered legacy web listeners and their configuration instructions are in the appendix.”

This time, I’m going to go with Oracle’s recommendation and use ORDS.

Installing ORDS

ORDS – or Oracle Rest Data Services to give it its full name – is a Java EE based Web Listener. For this installation, we’ll be running it standalone using its built-in Jetty Web Server.
The official Installation and Configuration Guide can be found here.

Pre-Requisites

ORDS 18.4 requires an Oracle Database running release 11.1 or later. As the APEX install we’ve just done requires a slightly later minimum database version, we should be fine.

If you really feel the need, you can confirm that we’re good to go with the following query :

select banner
from v$version
/

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production             

The other requirement for ORDS is the presence of a JDK for version 8 or later.
This can be ascertained from the server command line by running :

java -version

Downloading ORDS

Back to Oracle we go, this time to the ORDS Download Page.
Unsurprisingly given that we’re downloading a Java application, the download is not OS specific. The current version is 18.4.

A short time later, you should now be the proud owner of…

-rwxrwx---. 1 mike mike 59777214 Feb 20 12:01 ords-18.4.0.354.1002.zip

As with the APEX install, we’ll want to transfer ownership of the file to oracle…

sudo chown oracle:oinstall ords-18.4.0.354.1002.zip

…as this is the account we’ll be using for the installation…

sudo su oracle

echo $ORACLE_BASE
/opt/oracle

mkdir $ORACLE_BASE/ords

Now we extract the file into the new directory…

unzip -d /opt/oracle/ords ords-18.4.0.354.1002.zip

…which produces screens of output ending with…

...
inflating: /opt/oracle/ords/examples/soda/getting-started/indexSpec1.json  
  inflating: /opt/oracle/ords/examples/db_auth/index.html  
  inflating: /opt/oracle/ords/examples/pre_hook/sql/install.sql  
  inflating: /opt/oracle/ords/examples/pre_hook/sql/uninstall.sql  
  inflating: /opt/oracle/ords/examples/plugins/plugin-echo-cmd/src/EchoMessages.properties  
  inflating: /opt/oracle/ords/examples/plugins/plugin-echo-cmd/src/EchoCommand.java  
  inflating: /opt/oracle/ords/examples/plugins/plugin-demo/build.xml  
  inflating: /opt/oracle/ords/examples/plugins/plugin-demo/.classpath  
  inflating: /opt/oracle/ords/examples/plugins/plugin-demo/.project  
  inflating: /opt/oracle/ords/examples/plugins/lib/javax.inject-1.jar  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.2.json  
  inflating: /opt/oracle/ords/examples/db_auth/sql/install.sql  
  inflating: /opt/oracle/ords/examples/pre_hook/sql/custom_auth_api.pls  
  inflating: /opt/oracle/ords/examples/soda/getting-started/poUpdated.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.3.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.5.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/poPatchSpec.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.1.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.4.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/po.json  
  inflating: /opt/oracle/ords/examples/pre_hook/README.md  
  inflating: /opt/oracle/ords/examples/soda/getting-started/qbePatch.json  
  inflating: /opt/oracle/ords/examples/soda/getting-started/POList.json  
  inflating: /opt/oracle/ords/examples/pre_hook/index.html  
Make sure the PL/SQL Gateway is disabled

You can check whether the PL/SQL Gateway is currently enabled by running…

select dbms_xdb.gethttpport
from dual;


GETHTTPPORT
-----------
          0

If this query returns something other than zero then you can disable the PL/SQL Gateway as follows :

exec dbms_xdb.sethttpport(0)
Copy the APEX images

Before we configure ORDS, we need to copy the APEX images somewhere that is visible to the ORDS server so…

cd /opt/oracle/ords
mkdir apex

…then…

cd apex
pwd 
/opt/oracle/ords/apex
cp -r $ORACLE_BASE/apex/images .
ls -l
total 44
drwxr-xr-x. 33 oracle oinstall 28672 Feb 20 16:57 images
Initial ORDS Installation

To start with, we’re going to install ORDS and configure it to run on HTTP. This is simply so that we can sanity check that ORDS and APEX are working together as expected.
Note that I’m accepting the default location for the default tablespace of the two new users that will be created as part of the installation. If you’re planning to do the same then you should make sure that you have a USERS tablespace available in your PDB.

Finally, we can now run the installation. Still connected as oracle :

cd $ORACLE_BASE/ords
java -jar ords.war install advanced
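The advanced install then asks a series of questions – paraphrased here, as the exact prompts vary between ORDS versions : the location to store configuration data, the database hostname, listen port and service name, the password for ORDS_PUBLIC_USER, whether to configure the PL/SQL Gateway, passwords for the APEX RESTful Services accounts, and finally something like…

Enter 1 if you wish to start in standalone mode or 2 to exit [1]: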

At this point, we now have the option to start ORDS…

…which causes a lot of feedback…

It’s perhaps unsurprising that we hit the ORA-28000 error (the account is locked) at this stage, given the current state of the relevant accounts…

alter session set container = xepdb1;

select username, account_status
from dba_users
where username like 'ORDS%'
or username like 'APEX%'
/

USERNAME                       ACCOUNT_STATUS                
------------------------------ ------------------------------
APEX_180200                    LOCKED                        
APEX_INSTANCE_ADMIN_USER       OPEN                          
APEX_PUBLIC_USER               LOCKED                        
ORDSYS                         EXPIRED & LOCKED              
ORDS_METADATA                  EXPIRED & LOCKED              
ORDS_PUBLIC_USER               OPEN                          

6 rows selected. 

We’ll sort that out in a bit. For now though, let’s just check that ORDS’ Jetty server is accessible.
As ORDS is running in the foreground, we’ll need to leave it running and start a separate session.
Then we can test it with :

curl -ISs http://frea.virtualbox:8080

…which should return the HTTP header :
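HTTP/1.1 200 OK
Content-Type: text/html
Transfer-Encoding: chunked

…give or take the odd header.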

Now we’re happy that ORDS itself is running, we can stop it by pressing [CTRL]+C in the Terminal session it’s running in.

Next, we need to run ords with the validate option :

cd $ORACLE_BASE/ords
java -jar ords.war validate 

The output looks innocuous enough :

However, if we look at the log file that has been written, we can see that there’s been a fair bit of activity…

[*** script: ords_alter_session_script.sql] 

PL/SQL procedure successfully completed.

[*** script: ords_schema_mapping.sql] 
INFO: Configuring ORDS_PUBLIC_USER to map APEX Workspaces and ORDS schemas

Session altered.

Configuring APEX and ORDS schemas for url mapping
Made APEX_PUBLIC_USER proxiable from ORDS_PUBLIC_USER
APEX_REST_PUBLIC_USER does not exist
APEX_LISTENER.POOL_CONFIG synonym does not exist, stubbing out
ORDS_METADATA.APEX_POOL_CONFIG


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


Grant succeeded.

INFO: Completed configuring ORDS_PUBLIC_USER to map APEX Workspaces and ORDS Schemas

Session altered.

[*** script: ords_repair_proxy_connect.sql] 
INFO: Checking ords enabled schemas and its proxy user

Session altered.


PL/SQL procedure successfully completed.

[*** script: ords_migrate_grant_priv.sql] 

Session altered.

INFO: Verify if Application Express exists to setup the migration privileges for
ORDS.
INFO: Completed setting up the APEX REST migration privileges for ORDS.


PL/SQL procedure successfully completed.

[*** script: ords_validate_objects.sql] 

Session altered.

INFO: 15:25:18 Validating objects for Oracle REST Data Services.
VALIDATION: 15:25:18 Starting validation for schema: ORDS_METADATA
VALIDATION: 15:25:18 Validating objects
VALIDATION: 15:25:19 Validating ORDS Public Synonyms
VALIDATION: 15:25:20 Total objects: 262, invalid objects: 0
VALIDATION: 15:25:20     72  INDEX
VALIDATION: 15:25:20      1  JOB
VALIDATION: 15:25:20     12  PACKAGE
VALIDATION: 15:25:20     12  PACKAGE BODY
VALIDATION: 15:25:20     44  PUBLIC SYNONYM
VALIDATION: 15:25:20      1  SEQUENCE
VALIDATION: 15:25:20     14  SYNONYM
VALIDATION: 15:25:20     27  TABLE
VALIDATION: 15:25:20     26  TRIGGER
VALIDATION: 15:25:20     20  TYPE
VALIDATION: 15:25:20      6  TYPE BODY
VALIDATION: 15:25:20     27  VIEW
VALIDATION: 15:25:20 Validation completed.
INFO: 15:25:20 Completed validating objects for Oracle REST Data Services.


PL/SQL procedure successfully completed.


Session altered.


Commit complete.

[*** script: ords_alter_session_script.sql] 

PL/SQL procedure successfully completed.

In case you’re wondering, the scripts referenced in this log file are located in ords.war itself.

Now we’re ready to…

Configure APEX to run on ORDS

As oracle…

cd $ORACLE_BASE/apex
sqlplus / as sysdba

Once connected to the database…

alter session set container = xepdb1;

@apex_rest_config.sql

This will create two new user accounts :

  • APEX_LISTENER
  • APEX_REST_PUBLIC_USER

You will be prompted for a password for each of them.

Once the script is completed, you should be able to confirm that two further accounts have been created :

select username, account_status
from dba_users
where username in ('APEX_LISTENER', 'APEX_REST_PUBLIC_USER')
order by 1
/

USERNAME                       ACCOUNT_STATUS                
------------------------------ ------------------------------
APEX_LISTENER                  OPEN                          
APEX_REST_PUBLIC_USER          OPEN                          

Granting access to the APEX owner via ORDS

Once again, connect to the database as sysdba :

alter session set container = xepdb1;

begin
    dbms_network_acl_admin.append_host_ace(
        host => 'localhost',
        ace => xs$ace_type(
            privilege_list => xs$name_list('connect'),
            principal_name => 'APEX_180200',
            principal_type => xs_acl.ptype_db));
end;
/

alter user apex_public_user account unlock
/

Now if we re-start ORDS…

java -jar ords.war standalone

…and in a separate session we should be able to get a sensible header from the apex_admin URL :

curl -ISs http://frea.virtualbox:8080/ords/apex_admin

We could just call it a day at this point. However, if you like your applications to be a little more secure than an England three-quarter under a Dan Biggar garryowen, you’ll want to follow Oracle’s recommendation :
“If you want to use RESTful services that require secure access, you should use HTTPS.”

In order to do this, we’re going to have to do some messing about with SSL certificates.

Generating a self-signed SSL Certificate

We need to connect as root and create a directory to hold the key :

sudo -s
mkdir -p /etc/ssl/private/frea.virtualbox
cd /etc/ssl
chmod -R 700 private/frea.virtualbox

Now we can generate the key and the certificate :

cd /etc/ssl/private/frea.virtualbox
openssl req -newkey rsa:2048 -nodes -keyout frea.virtualbox.key -x509 -days 3650 -out frea.virtualbox.crt

Note that the days value I’m using will create a certificate that does not expire for 10 years. Whilst this does mean I won’t have to worry about the certificate expiring and stopping my application from working at an unexpected moment, it’s probably not strictly in line with security best practices. If you find yourself doing this in a production environment, you may want to consider a rather shorter lifetime for your certificate.

Anyhow, we will be prompted to supply some values. The ones I’m using are :

  • Country Name : UK
  • State or Province Name : England
  • Organization Name : The Anti-Kyte
  • Organizational Unit Name : Mike
  • Common Name : frea.virtualbox

…all of which looks like this :
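…approximately, at least – openssl’s exact prompts vary a little between versions and distros :

Generating a 2048 bit RSA private key
................+++
writing new private key to 'frea.virtualbox.key'
-----
Country Name (2 letter code) [XX]:UK
State or Province Name (full name) []:England
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:The Anti-Kyte
Organizational Unit Name (eg, section) []:Mike
Common Name (eg, your name or your server's hostname) []:frea.virtualbox
Email Address []: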

You should now have two new files :

ls -l
total 8
-rw-r--r--. 1 root root 1363 Feb 22 11:45 frea.virtualbox.crt
-rw-r--r--. 1 root root 1708 Feb 22 11:45 frea.virtualbox.key

OK, we can stop being root now.

Incidentally, if you want to verify the expiry date of your new certificate :

sudo openssl x509 -text -noout -in /etc/ssl/private/frea.virtualbox/frea.virtualbox.crt |grep 'Not After'
 Not After : Feb 19 11:45:07 2029 GMT

The easiest way to reconfigure ORDS to use the certificate – and HTTPS – is to stop any running instances of ORDS, connect as oracle and then start it again, using the appropriate command line parameters :

cd $ORACLE_BASE/ords

java -jar ords.war standalone --secure-port 8443 --secure-host frea.virtualbox --secure-cert-path /etc/ssl/private/frea.virtualbox/frea.virtualbox.crt --secure-cert-key-path /etc/ssl/private/frea.virtualbox/frea.virtualbox.key

If we test using curl…

curl -ISs https://frea.virtualbox:8443/ords/apex_admin

The presence of a self-signed certificate will cause comment :
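curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html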

…so we’ll have to use a bit of TLC…

curl -kISs https://frea.virtualbox:8443/ords/apex_admin

This does mean that your web browser is also likely to object to the cert the first time we point it at this site. We’ll come onto that in a bit.

For now though, we can see that the SSL settings have been added to the properties file in the standalone sub-directory :

cd $ORACLE_BASE/ords/ords/standalone
cat standalone.properties 

The file now looks like this :

#Fri Feb 22 11:48:35 GMT 2019
jetty.secure.port=8443
ssl.cert=/etc/ssl/private/frea.virtualbox/frea.virtualbox.crt
ssl.cert.key=/etc/ssl/private/frea.virtualbox/frea.virtualbox.key
ssl.host=frea.virtualbox
standalone.context.path=/ords
standalone.doc.root=/opt/oracle/ords/ords/standalone/doc_root
standalone.scheme.do.not.prompt=true
standalone.static.context.path=/i
standalone.static.path=/opt/oracle/ords/apex/images

From now on, when ORDS starts, it will use these properties.

Now ORDS is installed and configured, we need to get it to start when the server boots…

Creating a Systemd service for ORDS

The documentation mentions the fact that there is a limit on the size of POST data when running standalone and suggests increasing this limit by starting ORDS like this :

java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=3000000 -jar ords.war

We will implement this suggestion in our service.

So, as root :

cd /etc/systemd/system
nano ords.service

The file should look something like this :

[Unit]

Description=Oracle Rest Data Services (ORDS) Embedded Jetty WEB Server for APEX
After=network.target

[Service]
User=oracle
TimeoutStartSec=0

Type=simple
KillMode=process
ExecStart=/usr/bin/java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=3000000 -jar /opt/oracle/ords/ords.war standalone
Restart=always
RestartSec=2
LimitNOFILE=5555

[Install]

WantedBy=multi-user.target

We now need to make sure that the file has sensible permissions – unit files only need to be readable, and systemd will grumble if the file is marked executable :

chmod 644 ords.service

Finally, we need to add the ORDS service to systemctl :

systemctl daemon-reload
systemctl enable ords
systemctl start ords

Now we can check that ORDS is up :

systemctl status ords
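…which should show something like :

● ords.service - Oracle Rest Data Services (ORDS) Embedded Jetty WEB Server for APEX
   Loaded: loaded (/etc/systemd/system/ords.service; enabled; vendor preset: disabled)
   Active: active (running) ...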

It’s alive !!!

We can test once again with…

curl -kISs https://frea.virtualbox:8443/ords/apex_admin

…which should return something like :

HTTP/1.1 302 Found
Date: Thu, 28 Feb 2019 22:27:34 GMT
Content-Type: text/html;charset=utf-8
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
Cache-Control: no-store
Pragma: no-cache
Expires: Sun, 27 Jul 1997 13:00:00 GMT
Set-Cookie: ORA_WWV_USER_250198699356158=ORA_WWV-R7rbCSQ886zYN9Q6CXIOpnb2; path=/ords; secure; HttpOnly
Location: https://frea.virtualbox:8443/ords/f?p=4550:10:14143531583026:::::
Transfer-Encoding: chunked
Making APEX available to remote machines

Now we’ve got everything configured, we simply need to update the server firewall to allow traffic to the HTTPS port :

sudo firewall-cmd --zone=public --permanent --add-port=8443/tcp
sudo firewall-cmd --reload

We can now confirm that the port is available :

sudo firewall-cmd --list-ports
1522/tcp 5500/tcp 8443/tcp

Finally, we can now access APEX from a remote machine.

When we first hit the APEX URL, Firefox is understandably skeptical of my self-signed certificate…

…so I need to convince it that I’m trustworthy (or just add an exception)…

…before I can finally see APEX in the browser :

That’s it, I can now leave my sanctuary safe in the knowledge that APEX and ORDS are now configured and that the Welsh Nationalist fervour has abated…except that it’s now St. David’s Day. On the plus side, it looks like I’m having Cheese and Leek Pie for tea rather than the Humble Pie I’ve been eating all week.

Oracle Create Schema – multiple DDL statements in a single transaction

Thu, 2019-02-14 16:08

I was going to begin with some extended Brexit metaphor to illustrate the chaos and confusion that can ensue when you first encounter Oracle’s CREATE SCHEMA command.
Fortunately, the Dutch Government saved me the trouble :

[Image: dutch_brexit_monster]

Much as I’d like to believe that the Cookie Monster has finally overcome his Sesame Street typecasting, I can’t help noticing that the Brexit Monster never seems to be in the same room as this guy…

[Image: boris]

In Oracle, the term “schema” is used interchangeably with the term “user”. Creating a user in Oracle automatically creates a schema of the same name belonging to that user.
The process is so seamless that it’s almost impossible to tell where one begins and the other ends.
You may therefore be somewhat confused the first time you encounter Oracle’s CREATE SCHEMA command…

Contrary to what you might expect, CREATE SCHEMA does not enable you to create a database user. What it does do is to enable you to run multiple DDL statements inside a single transaction.
Now, if you thought that any DDL statement in Oracle would end with an implicit commit of the current transaction, well that makes two of us (at least).

To demonstrate why this is not necessarily true, let’s say we have a user created like this :

create user brexit_monster identified by lets_go_dutch 
    default tablespace users quota 5M on users
/

grant create session, create table, create view, create procedure to brexit_monster
/

…and a role created like this :

create role erg
/

If we want to create a table in the brexit_monster schema, then a view on that table, followed by a grant to the ERG role, we could achieve this with three separate statements in a script…

create table brexit_types( brexit_cid varchar2(30),  classification varchar2(65000))
/

create or replace view promised_land as select brexit_cid from brexit_types where classification = 'HARD'
/

grant select on promised_land to erg
/

…however, if the first statement fails, the next two will also fail leaving something rather messy…

[Image: script_errors]
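For the record, the mess looks something like this – the first statement fails because VARCHAR2(65000) exceeds the maximum column length, and the other two fall over because the table was never created :

create table brexit_types( brexit_cid varchar2(30),  classification varchar2(65000))
ERROR at line 1:
ORA-00910: specified length too long for its datatype

create or replace view promised_land as select brexit_cid from brexit_types where classification = 'HARD'
ERROR at line 1:
ORA-00942: table or view does not exist

grant select on promised_land to erg
ERROR at line 1:
ORA-00942: table or view does not exist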

If instead, we were to wrap these statements into a single CREATE SCHEMA…

create schema authorization brexit_monster
    create table brexit_types( 
        brexit_cid varchar2(30),  
        classification varchar2(65000))
    create view promised_land as 
        select brexit_cid 
        from brexit_types 
        where classification = 'HARD'
    grant select on promised_land to erg
/   
    

…the error stack is somewhat more manageable

[Image: cs_err1]

Note however, that the View statement has changed a bit. It’s now a straight CREATE rather than CREATE OR REPLACE.
In fact, if you try to plug any DDL statement into CREATE SCHEMA that is not either a CREATE TABLE, CREATE VIEW or GRANT then you will get :

ORA-02422: missing or invalid schema element

If we issue the correct statement…

create schema authorization brexit_monster
    create table brexit_types( 
        brexit_cid varchar2(30),  
        classification varchar2(5))
    create view promised_land as 
        select brexit_cid 
        from brexit_types 
        where classification = 'HARD'
    grant select on promised_land to erg
/   


Schema created.


…we can see that all of the DDL has been executed…

select object_name, object_type
from user_objects
order by 2
/

OBJECT_NAME		       OBJECT_TYPE
------------------------------ -----------------------
BREXIT_TYPES		       TABLE
PROMISED_LAND		       VIEW

select grantee, privilege
from user_tab_privs
where owner = 'BREXIT_MONSTER'
and table_name = 'PROMISED_LAND'
/

GRANTEE 		       PRIVILEGE
------------------------------ ------------------------------
ERG			       SELECT

At this point you may still be somewhat skeptical about whether all this really happens in a single transaction.

I mean, how do we know that Oracle isn’t just parsing each statement to make sure they’re all valid and then trusting it won’t hit a runtime error ?
One way to find out is, of course, to engineer a runtime error.

You remember when I created the BREXIT_MONSTER user and you thought that I was a bit stingy with the tablespace quota allocation ? Well…

set serverout on
exec dbms_output.put_line('Current transaction = '||dbms_transaction.local_transaction_id(true));

create schema authorization brexit_monster
    create table wto_terms(is_problem varchar2(3) default 'NO')
    create table little_objects as select * from all_objects fetch first 5 rows only
    create table my_objects as select * from all_objects
/   
exec dbms_output.put_line('Current transaction='||dbms_transaction.local_transaction_id(true));

When we run this we get …

[Image: runtime_err]

We can see that the local_transaction_id has changed. So the transaction that the CREATE SCHEMA was running in has ended. Question is – has it been committed or rolled back ?
Now to check if any of the tables have been created…

select table_name
from user_tables
where table_name in ('WTO_TERMS', 'LITTLE_OBJECTS')
/

no rows selected

We could go into great detail here and do some digging around in trace files.
Then again, there’s been quite enough procrastination around this whole Brexit business already.
As we can see, CREATE SCHEMA really does execute multiple DDL statements in a single database transaction – i.e. all statements succeed or none do.
In other words, if it walks like a Brexit Monster and talks about sunlight uplands it’s probably Boris Johnson.

Pi in a time of Brexit – Remote Controlling Raspberry Pi from Ubuntu using VNC

Sun, 2019-01-27 14:52

What with Larry the Downing Street Cat and Palmerston, his counterpart at the Foreign Office, Teddy suspects he knows the real reason for the Country’s current travails.
Here he is, doing his best Boris Johnson impression :

“No wonder Brexit’s a Cat-astrophe !”

In an attempt to distract myself from the prospect of the country being ruined by this feline conspiracy, I’ve been playing with my new Raspberry Pi 3 Model B.
At some point, I’m going to want to connect remotely to the Desktop on the Pi. What follows is how I can do this using VNC…

Why VNC ?

Regular readers (hello Mum!) may be wondering why I’m returning to this topic, having previously used RDP to remote to a Pi.

Well, this newer model Raspberry Pi is running Raspbian Stretch (version 9) as opposed to the older machine, which was running Jessie (version 8).
Stretch has VNC included by default so it makes sense to use this protocol for connecting to the desktop remotely.

Now, the more observant among you will notice that you can simply and easily enable VNC in the same way as you can enable SSH during initial setup.
You can see this option in the Preferences/Raspberry Pi Configuration menu when you click on the Interfaces tab :


If, like me, you don’t discover that this is the way to go until after you’ve put away all those old peripherals you had to dig out of the attic to setup your Pi then fear not, you can also do this from the command line…

On the Pi

First of all, we want to make sure that we do, in fact, have the required VNC software on the Pi.
So, once I’ve connected to the Pi via SSH, I can run this from the command line :

apt list realvnc*

…which should come back with :

Now we want to configure VNC on the pi so, on the command line we need to enter …

sudo raspi-config

This will bring up the Software Configuration Tool screen below.
Using the arrow keys on the keyboard, navigate to the line that starts 5 Interface Options and hit the [Enter] key.


…which brings up a sub-menu. Here, you need to navigate to P3 VNC and hit [Enter]


…and [Enter] again to confirm you want to enable VNC…

…before you receive a message confirming that VNC is now enabled :

To exit, hit [Enter]

I’m not sure if it’s strictly necessary, but at this point, I re-started the pi by entering :

sudo reboot
In Ubuntu

Meanwhile, on the Ubuntu machine (I’m running Ubuntu 16.04), we need to head over to the VNC Viewer download site.
As I’m on a 64-bit version of Ubuntu, I chose the DEB x64 version to download.

Incidentally, if you want to tell whether you’re running a 32-bit or 64-bit Linux distro, you can run :

uname -i

If this returns x86_64 then you’re on a 64-bit platform.

Anyhow, when prompted, I opted to open the downloaded file – VNC-Viewer-6.19.107-Linux-x64.deb with Software Install


…which results in…


Now we simply click Install and enter our password when prompted.

Once the installation is completed we’re ready to connect remotely.

Running VNC Viewer

To start the viewer, you can simply open a Terminal and run :

vncviewer

After you’ve accepted the licence, enter the address of the server to connect to (in my case pithree) :

You’ll then be prompted to enter the username and password of a user on the Pi :


Press OK and…


You can tweak the display to make it a bit more practical.
In the VNC Window, move the cursor to the top of the screen so that the Viewer menu slides down then select the cog-wheel icon (second from the right) :

In the Options tab, set Picture Quality to High and Scaling to Scale to fit window :

After this, the VNC viewport should scale to the size of the VNC window itself.

Now all I need is to something else to distract myself from the ongoing battle between Project Fear and Project Farce.

Using a Proxy User for Code Deployment in Oracle

Mon, 2019-01-14 12:54

“Proxy Users !” exclaimed Debbie.
“I say, that’s rather harsh don’t you think ?” came the rather startled reply from her boss.
Debbie sighed. They were in the midst of a discussion on the subject of how best to deploy database changes to multiple schemas.
“I meant”, she replied with iron patience, “that we could set up a proxy user to connect as each application owner in turn. That way, we wouldn’t have to grant those CREATE ANY privileges that get auditors so worried”.
“Oh, I see”, said Mike, who didn’t.
Not for the first time, Debbie wondered whether she had been lumbered with a less competent man as her boss simply in order to imbue this post with a semblance of social realism.
“I think”, she said, “that it’s time to move on to the techie bit.”
Debbie is right, as usual…

In order to make a change in Oracle ( or any database for that matter), you need at some point to connect to the database and run some SQL.
This is relatively straightforward if you are using the schema that is – or will be – the owner of the objects you are creating or changing.

However, this may not be possible if the account is identified externally or – in more recent releases – if it’s a schema-only account.
So, what is the best way to set up and use an account to make such changes in other schemas ?

The changes

NOTE : whilst the code examples that follow are written using Oracle 18c, this technique will also work in releases as far back as 10g.
I’ll point out any container specific syntax as it comes up.

We want to create a new table and a view in the HR schema :

create table new_dept as select * from departments
/

create or replace view it_vw as 
    select * from departments where department_id in (60, 210, 230)
/

As you’d expect, HR itself has the required permissions to do this.
In 18c, you may have to switch to the container in which the HR user is installed first…

alter session set container = xepdb1;

…but the privileges are pretty much the same…

select privilege 
from dba_sys_privs
where grantee = 'HR'
union
select privilege
from dba_sys_privs
where grantee in ( 
    select granted_role 
    from dba_role_privs 
    where grantee = 'HR')
order by 1
/


PRIVILEGE                               
----------------------------------------
ALTER SESSION                           
CREATE CLUSTER                          
CREATE DATABASE LINK                    
CREATE INDEXTYPE                        
CREATE OPERATOR                         
CREATE PROCEDURE                        
CREATE SEQUENCE                         
CREATE SESSION                          
CREATE SYNONYM                          
CREATE TABLE                            
CREATE TRIGGER                          
CREATE TYPE                             
CREATE VIEW                             
UNLIMITED TABLESPACE                    

14 rows selected. 

Oracle’s system privileges are not really that granular. You can either have the privileges to create objects in your own schema, or you can have privileges to create them in any schema.
So, if we follow Mike’s suggestion, we’d end up with something like…

create user sledgehammer identified by nutcracker
/

grant create session,
    create any table,
    create any view,
    select any table
    to sledgehammer
/

Which would do what we need…

-- If we're using a container database, check that we're in the right one...
select sys_context('userenv', 'con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')                                                                                                                                      
---------------------------------
XEPDB1                                                                                                                                                                 

alter session set current_schema = HR;

Session altered.

create table new_dept as select * from departments;

Table created.

create or replace view it_vw as
    select *
    from departments
    where department_id in (60, 210, 230)
/

View created.

The problem here is that it doesn’t so much ignore the principle of least privilege as walk up to it and make a rude gesture…

create table system.oops(unexpected varchar2(1));

Table created.

Fortunately, as Debbie knows, there is a better way…

The Proxy Schema

To start with, it’s probably worth taking a quick peek at the documentation for Proxy User which suggests that the only privilege such a user should be granted is CREATE SESSION.

Therefore, we can create our proxy user like this :

grant create session to app_deploy identified by debbieknowsbest
/

Next, we need to configure the HR user to allow the proxy user to connect to it :

alter user hr grant connect through app_deploy
/

Now we can connect to the HR schema using the proxy…

connect app_deploy[hr]@xepdb1
Enter password: 
Connected.
SQL> 
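
Incidentally, the same proxy syntax works when launching SQL*Plus straight from the OS prompt – although, depending on your shell, you may need to quote the square brackets :

sqlplus 'app_deploy[hr]@xepdb1'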

…which we can confirm with this query…

select sys_context('userenv', 'session_user') as "Session User",
    sys_context('userenv', 'proxy_user') as "Proxy User" 
from dual;

Session User  Proxy User
------------ ------------------------------
HR           APP_DEPLOY

Now we can run our DDL statements in the HR schema…

create table new_dept as select * from departments
/

Table created.

create or replace view it_vw as
    select *
    from departments
    where department_id in (60, 210, 230)
/

View created.

If we now connect as a sufficiently privileged user, we can confirm that these objects have been created in the HR schema :

select owner, object_name, object_type
from dba_objects
where object_name in ('NEW_DEPT', 'IT_VW')
/

OWNER                          OBJECT_NAME                    OBJECT_TYPE            
------------------------------ ------------------------------ -----------------------
HR                             IT_VW                          VIEW                   
HR                             NEW_DEPT                       TABLE                  

We can now use the APP_DEPLOY schema for any code releases. All we need to do to make an application owner account accessible in this way is :

alter user <application_owner> grant connect through app_deploy
/
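
…and, once a release is done, access can be switched off again just as easily :

alter user <application_owner> revoke connect through app_deploy
/
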
Further Reading

Debbie isn’t the only person to wear clogs of cleverness when it comes to Proxy Users.

There’s this article by Norm on the subject.
Additionally, it’s covered in the context of Schema Only accounts in this post on the Pythian site by Simon Pane.

Installing and Configuring Oracle 18cXE on CentOS

Thu, 2019-01-03 11:50

After seven years, the much anticipated Oracle 18c Express Edition (XE) has finally seen the light of day.
Reaction to the new version can be summed up as…

It’s the Dog’s Wotsits !

Gerald Venzl, the person we have to thank for this new and vastly improved version of XE, has already published an installation guide.

At this point you may well ask yourself what – apart from gratuitous puppy pics and cheesy-snack-based puns – is the difference between that post and this.

Well, if you’re a long-time user of 11gXE and you’re looking to upgrade then you will find 18cXE a rather different proposition.
The introduction of Multitenant databases aside, 18cXE differs greatly from its predecessor in terms of its functional scope.
Whereas 11gXE was – broadly speaking – functionally equivalent to Oracle Standard Edition, the approach for 18cXE has been to shoe-horn in as many Enterprise Edition features as possible.
No doubt, this will leave you anxious to play with the new version. However, there are some “home comforts” that were present in the old version that you’ll need to configure yourself this time around.
What I’m going to go through is :

  • Installing 18cXE on a Red Hat compatible distro (CentOS7)
  • Connecting to the database and exploring the containers
  • Checking the TNS Listener
  • Manual and Automatic Startup and Shutdown of the database and listener
  • Setting and persisting the Oracle environment variables
  • Accessing Enterprise Manager Express
  • Installing the HR demo application in a Pluggable Database (PDB)
  • Configuring the firewall to allow remote access to Oracle

The steps documented here have been performed on a vanilla installation of CentOS7. As such, they should work pretty much unaltered for other Red Hat based distros based on or similar to Red Hat Enterprise Linux (RHEL) version 7.

Before all of that though…

Resource Limits for 18cXE

As with previous versions of XE, there are limitations on the system resources that will be used. These include :

  • 2 CPU cores ( up from 1 in 11gXE)
  • 2 GB RAM ( 1 GB in 11g)
  • 12GB of User Data ( 11GB in 11g)
  • A maximum of 3 PDBs

In addition, you can only install one instance of XE per host. However, it does seem to be possible to install XE alongside other Oracle Editions on the same host.

One final point to note – the amount of space taken up by the initial installation is not insignificant. The space usage in the $ORACLE_BASE is :

sudo du -h -d1
5.3G	./product
76K	./oraInventory
0	./checkpoints
12M	./diag
20M	./cfgtoollogs
4.9M	./admin
0	./audit
3.4G	./oradata
8.7G	.

This is worth bearing in mind when sizing your environment.
Additionally, if you’re tight on space, you may also consider removing the rpm files once the installation is complete as this frees up 2.4GB (although not in $ORACLE_BASE).
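For the record, once everything is up and running, reclaiming that space is just a matter of deleting the two files we’re about to download :

rm oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm oracle-database-xe-18c-1.0-1.x86_64.rpm
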
Speaking of rpm files…

Downloading and installation

Head over to the Downloads page and download the Oracle Database 18c Express Edition for Linux x64 version.

If you’re running a Red Hat compatible distro that’s not Oracle Linux, you’ll also need the Oracle Database Preinstall RPM for RHEL and CentOS. I’m running on CentOS7 so I’ll get the Release 7 version of this file.

At this point, we should now have two rpm files :

-rw-rw-r--. 1 mike mike      18244 Dec 25 17:37 oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
-rw-rw-r--. 1 mike mike 2574155124 Dec 25 17:37 oracle-database-xe-18c-1.0-1.x86_64.rpm

Next, we need to become root for a bit. If you’re worried that all this power might go to your head, fear not, I’ll let you know when we can become mere mortals again. For now though :

sudo -s
[sudo] password for mike: 
#

Now we can install the RPMs. The preinstall first (note that you need to have an internet connection available when running this)…

yum localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm 

…This results in :

Loaded plugins: fastestmirror, langpacks
Examining oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm: oracle-database-preinstall-18c-1.0-1.el7.x86_64
Marking oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-preinstall-18c.x86_64 0:1.0-1.el7 will be installed
--> Processing Dependency: compat-libcap1 for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
Loading mirror speeds from cached hostfile
 * base: mirrors.vooservers.com
 * extras: mirror.sov.uk.goscomb.net
 * updates: mirrors.vooservers.com
base                                                                                     | 3.6 kB  00:00:00     
extras                                                                                   | 3.4 kB  00:00:00     
updates                                                                                  | 3.4 kB  00:00:00     
--> Processing Dependency: compat-libstdc++-33 for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Processing Dependency: ksh for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Processing Dependency: libaio-devel for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64
--> Running transaction check
---> Package compat-libcap1.x86_64 0:1.10-7.el7 will be installed
---> Package compat-libstdc++-33.x86_64 0:3.2.3-72.el7 will be installed
---> Package ksh.x86_64 0:20120801-139.el7 will be installed
---> Package libaio-devel.x86_64 0:0.3.109-13.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================
 Package                        Arch   Version           Repository                                        Size
================================================================================================================
Installing:
 oracle-database-preinstall-18c x86_64 1.0-1.el7         /oracle-database-preinstall-18c-1.0-1.el7.x86_64  55 k
Installing for dependencies:
 compat-libcap1                 x86_64 1.10-7.el7        base                                              19 k
 compat-libstdc++-33            x86_64 3.2.3-72.el7      base                                             191 k
 ksh                            x86_64 20120801-139.el7  base                                             885 k
 libaio-devel                   x86_64 0.3.109-13.el7    base                                              13 k

Transaction Summary
================================================================================================================
Install  1 Package (+4 Dependent packages)

Total size: 1.1 M
Total download size: 1.1 M
Installed size: 4.0 M
Is this ok [y/d/N]: 

Enter ‘y’ and…

Downloading packages:
(1/4): compat-libcap1-1.10-7.el7.x86_64.rpm                                              |  19 kB  00:00:00     
(2/4): compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm                                       | 191 kB  00:00:00     
(3/4): libaio-devel-0.3.109-13.el7.x86_64.rpm                                            |  13 kB  00:00:00     
(4/4): ksh-20120801-139.el7.x86_64.rpm                                                   | 885 kB  00:00:00     
----------------------------------------------------------------------------------------------------------------
Total                                                                           1.8 MB/s | 1.1 MB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : compat-libstdc++-33-3.2.3-72.el7.x86_64                                                      1/5 
  Installing : libaio-devel-0.3.109-13.el7.x86_64                                                           2/5 
  Installing : compat-libcap1-1.10-7.el7.x86_64                                                             3/5 
  Installing : ksh-20120801-139.el7.x86_64                                                                  4/5 
  Installing : oracle-database-preinstall-18c-1.0-1.el7.x86_64                                              5/5 
  Verifying  : ksh-20120801-139.el7.x86_64                                                                  1/5 
  Verifying  : compat-libcap1-1.10-7.el7.x86_64                                                             2/5 
  Verifying  : oracle-database-preinstall-18c-1.0-1.el7.x86_64                                              3/5 
  Verifying  : libaio-devel-0.3.109-13.el7.x86_64                                                           4/5 
  Verifying  : compat-libstdc++-33-3.2.3-72.el7.x86_64                                                      5/5 

Installed:
  oracle-database-preinstall-18c.x86_64 0:1.0-1.el7                                                             

Dependency Installed:
  compat-libcap1.x86_64 0:1.10-7.el7   compat-libstdc++-33.x86_64 0:3.2.3-72.el7 ksh.x86_64 0:20120801-139.el7
  libaio-devel.x86_64 0:0.3.109-13.el7

Complete!

Now for the main event…

yum localinstall oracle-database-xe-18c-1.0-1.x86_64.rpm

…which results in ( after quite a while) …

Loaded plugins: fastestmirror, langpacks
Examining oracle-database-xe-18c-1.0-1.x86_64.rpm: oracle-database-xe-18c-1.0-1.x86_64
Marking oracle-database-xe-18c-1.0-1.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-xe-18c.x86_64 0:1.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================
 Package                        Arch           Version       Repository                                    Size
================================================================================================================
Installing:
 oracle-database-xe-18c         x86_64         1.0-1         /oracle-database-xe-18c-1.0-1.x86_64         5.2 G

Transaction Summary
================================================================================================================
Install  1 Package

Total size: 5.2 G
Installed size: 5.2 G
Is this ok [y/d/N]: 

Once again, enter ‘y’…then go and get a coffee (or other beverage if you prefer), this next bit takes a while…

Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-database-xe-18c-1.0-1.x86_64                                                          1/1 
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure Oracle Database XE, optionally modify the parameters in '/etc/sysconfig/oracle-xe-18c.conf' and then execute '/etc/init.d/oracle-xe-18c configure' as root.
  Verifying  : oracle-database-xe-18c-1.0-1.x86_64                                                          1/1 

Installed:
  oracle-database-xe-18c.x86_64 0:1.0-1                                                                         

Complete!

Finally, we need to run the configuration.
NOTE : I quit the previous session and began a new one as root before running this.

sudo -s
/etc/init.d/oracle-xe-18c configure

…once again this takes a while to complete, which is fair enough because there’s a bit going on :

Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: 
*********
Enter SYSTEM user password: 
*********
Enter PDBADMIN User Password: 
**********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.

Connect to Oracle Database using one of the connect strings:
     Pluggable database: frea.virtualbox:1522/XEPDB1
     Multitenant container database: frea.virtualbox:1522
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE

At this point we can stop being root.

Connecting to the database

First up, we need to make sure that the appropriate environment variables are set. So run the following, entering XE when prompted for the ORACLE_SID…

 . oraenv
ORACLE_SID = [mike] ? XE
ORACLE_BASE environment variable is not being set since this
information is not available for the current user ID mike.
You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base has been set to /opt/oracle/product/18c/dbhomeXE

Now we should be able to connect to the database via sqlplus :

sqlplus system

We can now confirm that the database is up :

select instance_name, version, status
from v$instance;

INSTANCE_NAME    VERSION	      STATUS
---------------- ----------------- ------------
XE		    18.0.0.0.0	      OPEN

One significant new feature of 18c XE as compared with its predecessor is the capability to use the database as a container (CDB) for zero or more Pluggable Databases (PDBs).
In the case of XE, you can have up to three PDBs and we can see that one has already been created as part of the installation :

select con_id, name
from v$containers;

    CON_ID NAME
---------- ------------------------------
	 1 CDB$ROOT
	 2 PDB$SEED
	 3 XEPDB1

In this case :

  • CDB$ROOT is the Container Database
  • PDB$SEED is a read-only template for creating PDBs
  • XEPDB1 is a PDB

In the CDB, we can see details of the PDB seed database and the PDB itself :

select con_id, name, open_mode 
from v$pdbs;

    CON_ID NAME 			  OPEN_MODE
---------- ------------------------------ ----------
	 2 PDB$SEED			  READ ONLY
	 3 XEPDB1			  READ WRITE

However, if we switch to the PDB…

alter session set container = XEPDB1;

…the same query returns information only about the current PDB…

select con_id, name, open_mode 
from v$pdbs;

    CON_ID NAME 			  OPEN_MODE
---------- ------------------------------ ----------
	 3 XEPDB1			  READ WRITE

If you want to check which PDB you are in you can use :

select sys_context('userenv', 'con_name') from dual;

In the CDB this should return :

CDB$ROOT

In our PDB, however, we should get :

XEPDB1

Checking the Listener

For ongoing administration operations from the OS, you’ll need to add your user to a couple of groups. In my case, my user is “mike” :

sudo usermod -a -G dba mike
sudo usermod -a -G oinstall mike

Once you’ve added these groups to your user you need to log off and log on again for them to take effect.
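When you log back in, you can confirm that the new memberships have been picked up :

id mike

…which should now include dba and oinstall in the list of groups.
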
You should now be able to check the status of the Net Listener by means of the lsnrctl utility.

Having first run oraenv as before to set your environment…

lsnrctl status

When the listener is up, you should get something like :

LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 26-DEC-2018 20:38:31

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=frea.virtualbox)(PORT=1522)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 18.0.0.0.0 - Production
Start Date                26-DEC-2018 19:24:54
Uptime                    0 days 1 hr. 13 min. 37 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /opt/oracle/product/18c/dbhomeXE/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/frea/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=frea.virtualbox)(PORT=1522)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1522)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=127.0.0.1)(PORT=5500))(Security=(my_wallet_directory=/opt/oracle/product/18c/dbhomeXE/admin/XE/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "7de2a3259d9c3747e0530f84f25ce87c" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "xepdb1" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

If, however, it’s not running, you’ll get :


LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 26-DEC-2018 20:40:30

Copyright (c) 1991, 2018, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=frea.virtualbox)(PORT=1522)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1522)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 2: No such file or directory

This brings us onto…

Starting and Stopping Oracle

The first time you restart the server after the installation, you will find that neither the database nor the TNS Listener are running.

To start them up from the command line you can run :

sudo /etc/init.d/oracle-xe-18c start

To shut them down, it’s :

sudo /etc/init.d/oracle-xe-18c stop

If, like me, you are configuring your server for the sole or main purpose of running Oracle, then you may want the database and listener to start when the server does.

To do this, switch to root…

sudo -s

…and set the oracle-xe-18c service to start on boot…

systemctl daemon-reload
systemctl enable oracle-xe-18c

The output will probably be something like :

oracle-xe-18c.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig oracle-xe-18c on
[root@frea mike]# systemctl status oracle-xe-18c
 oracle-xe-18c.service - SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services.
   Loaded: loaded (/etc/rc.d/init.d/oracle-xe-18c; bad; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:systemd-sysv-generator(8)

If you then reboot the server, you should be able to confirm that the service is up by running…

systemctl status -l oracle-xe-18c

…which should return something like …

 oracle-xe-18c.service - SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services.
   Loaded: loaded (/etc/rc.d/init.d/oracle-xe-18c; bad; vendor preset: disabled)
   Active: active (exited) since Fri 2018-12-28 13:20:23 GMT; 1min 48s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 3475 ExecStart=/etc/rc.d/init.d/oracle-xe-18c start (code=exited, status=0/SUCCESS)
    Tasks: 0

Dec 28 13:19:59 frea.virtualbox systemd[1]: Starting SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services....
Dec 28 13:19:59 frea.virtualbox oracle-xe-18c[3475]: Starting Oracle Net Listener.
Dec 28 13:19:59 frea.virtualbox su[3510]: (to oracle) root on none
Dec 28 13:20:02 frea.virtualbox oracle-xe-18c[3475]: Oracle Net Listener started.
Dec 28 13:20:02 frea.virtualbox oracle-xe-18c[3475]: Starting Oracle Database instance XE.
Dec 28 13:20:02 frea.virtualbox su[3864]: (to oracle) root on none
Dec 28 13:20:23 frea.virtualbox oracle-xe-18c[3475]: Oracle Database instance XE started.
Dec 28 13:20:23 frea.virtualbox systemd[1]: Started SYSV: This script is responsible for taking care of configuring the RPM Oracle XE Database and its associated services..

Setting Oracle Environment Variables

You’ll have noticed that, up until now, we’ve had to set some environment variables every time we want to interact with the database from the server command line.
To do this, we need to run :

. oraenv

When you run this script, you will always get asked for the ORACLE_SID value :

. oraenv
ORACLE_SID = [mike] ? XE
The Oracle base has been set to /opt/oracle

We can see that the oraenv script affects four environment variables :

echo $ORACLE_SID
XE
echo $ORACLE_HOME
/opt/oracle/product/18c/dbhomeXE
echo $ORACLE_BASE
/opt/oracle
echo $PATH
...snip.../opt/oracle/product/18c/dbhomeXE/bin

You have some options as to how you can manage these environment variables.
One option is to set up some environment variables to prevent oraenv prompting for the SID every time it’s run.
Another is to set the environment variables automatically for all sessions.

Stopping oraenv prompting for input

To do this we need to ensure that the ORACLE_SID environment variable is set prior to invoking the script and also that the ORAENV_ASK variable is set to NO.
We can see the result of this with the following quick test :

export ORACLE_SID=XE
export ORAENV_ASK=NO
. oraenv
The Oracle base has been set to /opt/oracle

To set these environment variables automatically, we can simply define them in the /etc/profile.d/sh.local script :

sudo nano /etc/profile.d/sh.local

Add the two variable assignments :

export ORACLE_SID=XE
export ORAENV_ASK=NO

You will need to logout and login again for this change to take effect.

Setting the environment variables automatically

If you want to dispense with the need to call the oraenv script altogether, you can simply add a script with a .sh extension to the /etc/profile.d directory…

nano /etc/profile.d/set_oraenv.sh

…which should contain…

export ORACLE_SID=XE
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export ORACLE_BASE=/opt/oracle
export PATH=$PATH:/opt/oracle/product/18c/dbhomeXE/bin

Next time you fire up a shell ( assuming you’re using bash – the default on CentOS), these environment variables will be set.

Enterprise Manager Express

Whilst 11gXE came with a database home page which consisted of an Apex application to administer XE, 18c does not come with Apex.
The good news is that, in common with all other 18c Editions, it comes with Enterprise Manager Express – a purpose built Admin tool.
Accessing it should be fairly simple. You just open a web browser (Firefox being the default on CentOS) and point it at the address specified in the output from our configuration run earlier. In our case this is :

https://localhost:5500/em

However, you may be ever so slightly disappointed…

Fortunately for us, someone was good enough to document the solution for this particular problem.

In short, we need to follow the link to the Adobe download site and select the .tar.gz option for the Flash Download :

After this, we should now have the following :

ls -l *.tar.gz
-rwxrwx---. 1 mike mike 9045426 Dec 22 15:02 flash_player_npapi_linux.x86_64.tar.gz

Next, we extract the libflashplayer.so file from the archive…

tar -xf flash_player_npapi_linux.x86_64.tar.gz *libflashplayer.so
ls -l libflashplayer.so
-rw-rw-r--. 1 mike mike 16607528 Nov 29 23:06 libflashplayer.so

…and copy it to the location that Firefox expects it to be…

sudo cp libflashplayer.so /usr/lib64/mozilla/plugins/.

…before finally setting the file ownership and permissions…

cd /usr/lib64/mozilla/plugins
sudo chmod 755 libflashplayer.so
sudo chgrp root libflashplayer.so
sudo chown root libflashplayer.so

Our file should now look like this :

ls -l libflashplayer.so
-rwxr-xr-x. 1 root root 16607528 Dec 31 17:05 libflashplayer.so

If we go to the EM page now :

Activate the plugin and login as sys (as sysdba) :

Eventually, you should see the Enterprise Manager Home Page :

I’ll leave you to explore for a bit.

Installing the HR demo application

Unlike its predecessor, 18cXE does not come with the HR demo application pre-installed. However, it does include the scripts that enable us to perform this installation ourselves.

As this is an application as opposed to a system-wide utility, we’re going to install it in the PDB rather than the main CDB.

We’ll need to switch to the oracle OS user so that we have permissions to write to the log file that we’re going to specify. Then we connect to the database…

sudo su oracle
sqlplus system

Once connected :

alter session set container = XEPDB1;
select sys_context('userenv', 'con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
XEPDB1

Now we’ve confirmed that we’re in the PDB, simply run :

@$ORACLE_HOME/demo/schema/human_resources/hr_main.sql

This script will prompt for :

  1. the password for the HR user – enter an appropriate password and remember it as you will need it to access the new HR schema
  2. the default tablespace to use for the HR user – enter USERS
  3. the temporary tablespace to use for the HR user – enter TEMP
  4. the path of the log file written by this installation script – enter $ORACLE_HOME/demo/schema/log

NOTE – the script does not obfuscate the password you enter but echoes it to the screen. In any case, you may consider that changing it shortly after installation is a wise move.
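
Since the script picks its inputs up as the positional substitution variables &1 to &4 (as the prompts in the output below suggest), you can also sidestep the prompts – and the password echo – by passing the values on the @ line. A sketch, using the same values as below :

@$ORACLE_HOME/demo/schema/human_resources/hr_main.sql mysupersecretpassword USERS TEMP $ORACLE_HOME/demo/schema/log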

The output will look something like this :


specify password for HR as parameter 1:
Enter value for 1: mysupersecretpassword

specify default tablespeace for HR as parameter 2:
Enter value for 2: USERS

specify temporary tablespace for HR as parameter 3:
Enter value for 3: TEMP

specify log path as parameter 4:
Enter value for 4: $ORACLE_HOME/demo/schema/log


PL/SQL procedure successfully completed.


User created.


User altered.


User altered.


Grant succeeded.


Grant succeeded.


Session altered.

...snip...
Comment created.


Comment created.


Comment created.


Commit complete.


PL/SQL procedure successfully completed.

We should now see that we have a “local” user called HR :

select account_status, default_tablespace, temporary_tablespace, common
from dba_users
where username = 'HR';

ACCOUNT_STATUS                   DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE           COM
-------------------------------- ------------------------------ ------------------------------ ---
OPEN                             USERS                          TEMP                           NO

As the account is not locked, we can connect to it from SQL*Plus. Note that we’ll have to use the connect string for the PDB (as specified in the installation feedback earlier) as the schema does not exist in the CDB :

sqlplus hr/mysupersecretpassword@frea.virtualbox:1522/xepdb1

Alternatively we could use a method which doesn’t record the password in the bash history…

sqlplus /nolog

SQL*Plus: Release 18.0.0.0.0 - Production on Wed Jan 2 16:55:31 2019
Version 18.4.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

SQL> conn hr/mysupersecretpassword@frea.virtualbox:1522/xepdb1
Connected.
SQL> 

If you want to see the difference, just try both connection methods in the same Terminal session and then run :

history |grep sqlplus

It’s probably worth remembering this if you are running on a shared environment.

Anyway, we can now see that the HR schema has the following objects :

Accessing the database from remote machines

Up to this point we’ve been working on the database server itself. This is fine if you’re running your Red-Hat based system as your desktop ( although in that case it’s more likely to be Fedora than CentOS), but if you want to be able to access it remotely, you’ll need to configure the firewall to allow remote access to specific ports.

Our objectives here are :

  1. to allow access to the database from a client machine via TNS
  2. to allow access to the Enterprise Manager Express site

For CentOS 7 the default firewall is firewalld :

systemctl status firewalld

 firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-01-01 14:53:08 GMT; 4min 30s ago
     Docs: man:firewalld(1)
 Main PID: 2842 (firewalld)
    Tasks: 2
   CGroup: /system.slice/firewalld.service
           └─2842 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Jan 01 14:53:07 frea.virtualbox systemd[1]: Starting firewalld - dynamic fir....
Jan 01 14:53:08 frea.virtualbox systemd[1]: Started firewalld - dynamic fire....
Hint: Some lines were ellipsized, use -l to show in full.

On my client machine, I’ve added the following entries to the $ORACLE_HOME/network/admin/tnsnames.ora file :

XE18 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = frea.virtualbox)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )
  
xepdb1  =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = frea.virtualbox)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = xepdb1)
    )
  )  

XE18 will allow me to connect to the CDB and xepdb1 will let me connect to the PDB.

At the moment, when we try to connect to the database from a client machine we hit…

sqlplus system@XE18

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 1 15:10:34 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 
ERROR:
ORA-12543: TNS:destination host unreachable

Back on the server, we can remedy this by issuing the following command to open the port that the TNS Listener is listening on ( in my case 1522) :

sudo firewall-cmd --permanent --add-port=1522/tcp
success

…and verify with :

sudo firewall-cmd --list-ports
1522/tcp
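
One thing to watch : rules added with the --permanent flag are saved to the firewall’s configuration but – as far as I can tell – don’t take effect in the running firewall until it’s reloaded. If the connection still fails, try :

sudo firewall-cmd --reload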

This then allows the remote connection :

sqlplus system@XE18

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 1 15:12:44 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 
Last Successful login time: Mon Dec 31 2018 23:22:40 +00:00

Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production

SQL> 

As for EM Express, we need to do the same for the port it’s running on (5500 in this case) :

sudo firewall-cmd --permanent --add-port=5500/tcp

However, we also need to connect to the database as SYSTEM and allow remote access by running:

exec dbms_xdb_config.SetListenerLocalAccess(false);

Once this is done we should now be able to access the EM Express home page remotely …

References

As ever, I’ve found a number of articles that have proved useful in writing this post. I’d like to make a point of saying thank you to the people who have essentially provided free consultancy to me (and you, if you’ve followed this guide). So thanks to…

Debbie Saves Christmas – Database Development in a Devops Wonderland : Re-runnable DDL

Wed, 2018-12-19 11:45

Debbie felt a shiver run down her spine. To be fair, that wasn’t much of a surprise since Lapland at this time of the year does tend to be a little chilly.
However, it wasn’t the weather that was the cause of her discomfort. Someone high up in the IT Department of her employer, The National Elf ( aka Santa’s Grotto) had decided that Continuous Integration was the way to go and had decreed that it should be used forthwith across all projects and technologies in the Company.
This included the application that Debbie was responsible for.
Written around 15 years ago, this Stock Control Application had already survived one major database upgrade but was now resolutely “stuck” on Oracle 11g.
The thing about so many modern software development techniques was that they were based on the premise that code was file based. Of course, this was also true ( or at least, true enough) for some database objects, but tables were a little different.
You couldn’t simply “replace” a table like you could any other program as doing so would destroy any data in that table. For this reason, any changes required to tables for a mature application such as this would be applied by means of DDL ALTER statements.
Of course, there are tools around for this sort of thing. Liquibase, FlexDeploy – these were just two of the tools that Debbie had no chance of getting approval to use in the face of a bureaucracy that made the Vogon Civil Service look like it was following Extreme Programming.
If she was going to get her changes through by her Christmas Eve deadline, she would have to get creative…

Object types and their inherent “re-runnability”

Debbie began by making a list of the types of database object in her application and whether or not they could be generated by a re-runnable DDL statement – i.e. a CREATE OR REPLACE statement.
Those that could included :

  • packages (specifications and bodies)
  • functions
  • procedures
  • views
  • triggers
  • types (specifications and bodies)

These were more of a challenge :

  • tables
  • sequences
  • indexes
  • constraints
  • materialized views

The code changes that Debbie needed to deploy mostly consisted of object types of the latter group…

Elf and Safety Gone Mad !

The staff in the warehouse had gotten fed up with loading parcels that didn’t actually need to be delivered. Elf and Safety had gotten involved, saying something about repetitive heavy lifting and… well, Debbie had come up with the following code.

First of all, there were a couple of new tables :

create table countries(
    iso_code varchar2(3),
    country_name varchar2(4000)
)
/

…and…

create table deliveries(
    id number constraint del_pk primary key,
    recipient_name varchar2(4000) not null,
    date_of_birth date not null,
    country_code varchar2(3),
    belief_end_date date, 
    constraint del_uk unique ( recipient_name, date_of_birth)
)
/

…then a sequence for the DELIVERIES.ID values (this is 11g remember, identity columns are not yet a thing)…

create sequence del_id_seq
    start with 1
    increment by 1
    nocycle
/

In subsequent iterations of her development, Debbie decided to add a Primary Key…

alter table countries add constraint coun_pk primary key (iso_code)
/

…rename a column…

alter table deliveries 
    rename column country_code to coun_iso_code
/

… add a Foreign Key…

alter table deliveries add constraint 
    del_coun_fk foreign key (coun_iso_code) references countries(iso_code)
/

…and a Not Null constraint…

alter table deliveries modify coun_iso_code not null
/

…a physical column…

alter table deliveries add naughty_flag varchar2(1) not null
/

…a check constraint…

alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))
/

…and a virtual column…

alter table deliveries add 
    active_flag generated always as ( 
        case when belief_end_date is not null then 'Y' else 'N' end) virtual visible
/

Finally, there was a view on the table

create or replace view deliveries_vw as
    select del.recipient_name, coun.country_name
    from deliveries del
    inner join countries coun 
        on coun.iso_code = del.coun_iso_code
    where del.active_flag = 'Y'
    and del.naughty_flag = 'N'
/

After all of that, Debbie then needed to add some static (reference) data :

-- Furthest stop on the round !
insert into countries(iso_code, country_name)
values('ATA', 'ANTARCTICA')
/

insert into countries(iso_code, country_name)
values('CAN', 'CANADA')
/

insert into countries(iso_code, country_name)
values('COL', 'COLOMBIA')
/

-- Company Head Office is here...
insert into countries(iso_code, country_name)
values('FIN', 'FINLAND')
/


insert into countries(iso_code, country_name)
values('DEU', 'GERMANY')
/


insert into countries(iso_code, country_name)
values('IND', 'INDIA')
/


insert into countries(iso_code, country_name)
values('MDG', 'MADAGASCAR')
/

insert into countries(iso_code, country_name)
values('NZL', 'NEW ZEALAND')
/


-- Can't find the ISO code for Wales for some reason !
insert into countries(iso_code, country_name)
values('GBR', 'UNITED KINGDOM')
/


insert into countries(iso_code, country_name)
values('USA', 'UNITED STATES OF AMERICA')
/

commit;

It’s probably worth noting that Debbie was following the company’s coding standards which – among other things – specified that :

  1. Primary Key, Foreign Key and Check constraints were always explicitly named (although not NOT NULL constraints)
  2. The definitions of tables, constraints etc would not change between runs – i.e. if the Primary Key columns on a table were to change then the appropriate DDL would be written to drop the existing Primary Key

These changes were executed via a master release script :

prompt Creating new tables

@countries_tbl.sql
@deliveries_tbl.sql

prompt Creating a sequence

@del_id_seq.sql

prompt Adding PK to a table

@coun_pk.sql

prompt Renaming a column 

@rename_del_coun_code.sql

prompt Adding Foreign Key

@del_coun_fk.sql

prompt adding a Not Null constraint

@del_iso_code_nn.sql

prompt adding new column 

@naughty_flag.sql

prompt creating check constraint

@del_nf_chk.sql

prompt adding virtual column

@del_active_virt.sql

prompt creating view

@deliveries_vw.sql

prompt creating COUNTRIES records...

@add_countries.sql

prompt Deployment completed.

Whilst the first run of this script would work as expected…

…second and subsequent runs would cause the CI server to light up like a Christmas Tree ( and not in a good way)…

So, armed with a roll of PL/SQL and a sprinkling of SQL*Plus fairy dust, Debbie prepared to ensure a soft Irish border… er, write some re-runnable DDL…

Option 1 – Exceptional Scripting

The first option was simply to anticipate the errors that might come up when the same DDL statement was executed multiple times.
The general format of such a script would be something like :

declare
    e_obj_exists exception;
    
    -- ORA-00955: name is already used by an existing object
    pragma exception_init( e_obj_exists, -955);    
begin
    -- nested blocks so that each exception can be handled individually and the script can then continue
    begin
        execute immediate 'create table sleeps_till_xmas( sleeps number)';
    exception when e_obj_exists then
        dbms_output.put_line('Someone started counting early this year !');
    end;
end;
/

Using this pattern, Debbie could knock up something like…

set serveroutput on size unlimited
spool rerun_master1.log

declare
    --
    -- Create exceptions for the errors we may anticipate in the event of second or subsequent
    -- execution of DDL. 
    --

    -- Table and object exceptions
    e_obj_exists exception;
    e_no_such_tab exception;
    
    -- Sequence exceptions
    e_no_such_seq exception;

    -- Column change exceptions
    e_dup_col_name exception;
    e_col_exists exception;
    e_no_such_col exception;
    e_col_already_not_null exception;

    -- Constraint exceptions
    e_tab_has_pk exception;
    e_cons_exists exception;
    e_fk_exists exception;
    
    -- ORA-00955: name is already used by an existing object
    pragma exception_init( e_obj_exists, -955);

    -- ORA-00942: table or view does not exist 
    pragma exception_init( e_no_such_tab, -942);

    -- ORA-02289: sequence does not exist
    pragma exception_init( e_no_such_seq, -2289);

    -- ORA-00957: duplicate column name
    pragma exception_init( e_dup_col_name, -957);
    
    -- ORA-01430: column being added already exists in table
    pragma exception_init( e_col_exists, -1430);

    -- ORA-00904: "%s": invalid identifier
    pragma exception_init( e_no_such_col, -904);

    -- ORA-01442: column to be modified to NOT NULL is already NOT NULL
    pragma exception_init( e_col_already_not_null, -1442);

    -- ORA-02260 : table can have only one primary key
    pragma exception_init( e_tab_has_pk, -2260);

    -- ORA-02264: name already used by an existing constraint
    pragma exception_init( e_cons_exists, -2264);

    -- ORA-02275: such a referential constraint already exists in the table
    pragma exception_init( e_fk_exists, -2275);

begin
    dbms_output.put_line('Creating new tables');
    -- each DDL statement will need to be in its own block so we can handle the exceptions separately
    begin
        dbms_output.put_line('COUNTRIES');
        execute immediate 
            'create table countries(
                iso_code varchar2(3),
                country_name varchar2(4000))';
    exception when e_obj_exists then 
        dbms_output.put_line('Table exists - skipping.');
    end;

    begin
        dbms_output.put_line('DELIVERIES');
        execute immediate
            'create table deliveries(
                id number not null,
                recipient_name varchar2(4000) not null,
                country_code varchar2(3),
                belief_end_date date,
                date_of_birth date)';
    exception when e_obj_exists then 
        dbms_output.put_line('Table exists - skipping.');
    end;

    dbms_output.put_line('Creating Sequence');
    begin
        execute immediate 
            'create sequence del_id_seq
                start with 1
                increment by 1
                nocycle';
        exception when e_obj_exists then
            dbms_output.put_line('Sequence exists - skipping');
    end;

    dbms_output.put_line('Adding PK to a table');
    begin
        execute immediate 'alter table countries add constraint coun_pk primary key (iso_code)';
    exception when e_tab_has_pk then
        dbms_output.put_line('PK already exists - skipping');
    end;

    dbms_output.put_line('Renaming a column');
    begin
        execute immediate 'alter table deliveries rename column country_code to coun_iso_code';
    exception when e_dup_col_name then
        dbms_output.put_line('Column already renamed - skipping');
    end;

    dbms_output.put_line('Adding a Foreign Key');
    begin
        execute immediate 
            'alter table deliveries add constraint 
                del_coun_fk foreign key (coun_iso_code) references countries(iso_code)';
    exception when e_fk_exists then
        dbms_output.put_line('FK already exists - skipping');
    end;

    dbms_output.put_line('adding a Not Null constraint');
    begin
        execute immediate 'alter table deliveries modify coun_iso_code not null';
    exception when e_col_already_not_null then
        dbms_output.put_line('Column is already Not Null - skipping');
    end;

    dbms_output.put_line('adding new column');
    begin
        execute immediate 'alter table deliveries add naughty_flag varchar2(1) not null';
    exception when e_col_exists then
        dbms_output.put_line('Column already exists - skipping');
    end;

    dbms_output.put_line('creating check constraint');
    begin
        execute immediate q'[alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))]';
    exception when e_cons_exists then 
        dbms_output.put_line('Constraint already exists - skipping');
    end;

    dbms_output.put_line('adding virtual column');
    begin
        execute immediate 
            q'[alter table deliveries add 
                active_flag generated always as ( 
                    case when belief_end_date is not null then 'Y' else 'N' end) virtual visible]';
    exception when e_col_exists then
        dbms_output.put_line('Column already exists - skipping');
    end;

end;
/

rem 
rem View statement is inherently re-runnable (CREATE OR REPLACE) so just execute the script...
rem

prompt Creating View
@deliveries_vw.sql


prompt Creating COUNTRIES records

declare

    procedure ins( i_code in countries.iso_code%type, i_name countries.country_name%type) 
    is
    begin
        merge into countries
        using dual 
        on ( iso_code = i_code)
        when not matched then 
            insert( iso_code, country_name)
            values( i_code, i_name);
    end ins;

begin
    -- Furthest stop on the round !
    ins('ATA', 'ANTARCTICA');
    ins('CAN', 'CANADA');
    ins('COL', 'COLOMBIA');
    -- Company Head Office is here...
    ins('FIN', 'FINLAND');
    ins('DEU', 'GERMANY');
    ins('IND', 'INDIA');
    ins('MDG', 'MADAGASCAR');
    ins('NZL', 'NEW ZEALAND');
    -- Can't find the ISO code for Wales for some reason !
    ins('GBR', 'UNITED KINGDOM');
    ins('USA', 'UNITED STATES OF AMERICA');
    commit;
end;
/

prompt Deployment completed.
spool off

On first execution, this script would run pretty much in the same way as the original :

Subsequent runs, however, would be a little smoother…

Whilst it did the job, Debbie felt that this approach had some shortcomings.

For one thing, table creation statements could get quite lengthy and complex so having them in-line as literals could get a bit fiddly.
For another, it was necessary to anticipate which exceptions you would run into and handle them accordingly. This was not always straightforward.
For example, trying to add an existing column to a table would result in :

ORA-01430: column being added already exists in table

However, renaming a column that already exists would give you :

ORA-00957: duplicate column name

On top of that, this approach required the re-coding of the exception handlers every time you wrote a new master release script.
It was for this reason that you would find exceptions that were declared but not used in a script ( as there are in this one).
Additionally, there was the temptation to throw all of the DDL into a single script to allow re-use of the exception declarations. This was likely to lead to a script which would quickly become quite large.

Debbie thought that there was an alternative that, with a bit of preparation, would require her to do rather less typing in the long run…

Using the Data Dictionary

To start with, Debbie created a package containing functions to check for the existing state of objects :

create or replace package ddl_checks
    authid current_user
as

    function object_exists( 
        i_name in user_objects.object_name%type,
        i_type in user_objects.object_type%type)
        return boolean;

    function column_exists(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean;

    function column_is_nullable(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean;

    function constraint_exists(
        i_table user_constraints.table_name%type,
        i_cons_name user_constraints.constraint_name%type,
        i_cons_type user_constraints.constraint_type%type)
        return boolean;
end ddl_checks;
/

create or replace package body ddl_checks
as

    -- PRIVATE package members
    function is_valid_type( i_type in user_objects.object_type%type)
        return boolean deterministic
    --
    -- Returns true if the object type is one that would be included in user_objects
    -- and can be used as the direct subject of a CREATE statement
    --
    is
    begin
        return upper(i_type) in (
            'TABLE', 'SEQUENCE', 'SYNONYM', 'INDEX', 'MATERIALIZED VIEW', 'VIEW',
            'FUNCTION', 'PROCEDURE', 'TRIGGER',
            'PACKAGE', 'PACKAGE BODY',
            'TYPE', 'TYPE BODY');
    end is_valid_type;

    -- PUBLIC package members
    function object_exists( 
        i_name in user_objects.object_name%type,
        i_type in user_objects.object_type%type)
        return boolean
    is
        dummy pls_integer;
        e_invalid_type exception;
    begin
        if not is_valid_type( i_type) then
            raise e_invalid_type;
        end if;
        select null into dummy 
        from user_objects
        where object_name = upper( i_name)
        and object_type = upper( i_type);

        return true;
    exception 
        when no_data_found then
            return false;
        when e_invalid_type then
            raise_application_error(-20900, 'Cannot verify the existence of this type of object');
    end object_exists;

    function column_exists(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean
    is
        dummy pls_integer;
    begin
        select null into dummy
        from user_tab_columns
        where table_name = upper(i_table)
        and column_name = upper(i_column);

        return true;
    exception when no_data_found then
        return false;
    end column_exists;

    function column_is_nullable(
        i_table in user_tab_columns.table_name%type,
        i_column in user_tab_columns.column_name%type)
        return boolean
    is
        l_nullable user_tab_columns.nullable%type;
    begin 
        select nullable into l_nullable  
        from user_tab_columns 
        where table_name = upper(i_table)
        and column_name = upper( i_column);

        return( l_nullable = 'Y');
    end column_is_nullable;

    function constraint_exists(
        i_table user_constraints.table_name%type,
        i_cons_name user_constraints.constraint_name%type,
        i_cons_type user_constraints.constraint_type%type)
        return boolean
    is
        dummy pls_integer;
        e_invalid_type exception;
    begin
        if upper(i_cons_type) not in ('P', 'U', 'R', 'C', 'V') then
            -- valid constraint types in 11g are :
            -- P(rimary Key)
            -- U(nique Key)
            -- R(eferential Integrity Constraint or Foreign Key)
            -- C(heck constraint)
            -- V(iew - usually a check option)
            raise e_invalid_type;
        end if;

        select null into dummy
        from user_constraints
        where table_name = upper(i_table)
        and constraint_name = upper(i_cons_name)
        and constraint_type = upper(i_cons_type);

        return true;
    exception
        when no_data_found then
            return false;
        when e_invalid_type then
            raise_application_error( -20901, 'Not a valid constraint type value');
    end constraint_exists;

end ddl_checks;
/

Nothing too exotic here – Debbie simply uses the data dictionary to work out the existence or state of database objects in the current schema.
The use of invoker’s rights ensures that these functions will only report on objects in the calling schema.
This fitted in with the deployment practices for this application where the application owner schema was used to run the deployment.
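
As a quick sanity check, the package can be exercised from an anonymous block – a sketch, using objects from the scripts above :

set serveroutput on
begin
    if ddl_checks.object_exists('DELIVERIES', 'TABLE') then
        dbms_output.put_line('DELIVERIES already exists');
    end if;
    if not ddl_checks.constraint_exists('deliveries', 'del_nf_chk', 'C') then
        dbms_output.put_line('Check constraint still to be created');
    end if;
end;
/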

Now, Debbie may have been tempted at this point to make use of some of SQLCL’s nifty new features. Unfortunately the Vogons had put the kybosh on that particular option, so she just had to do her best with good old SQL*Plus…

set serverout on
var command varchar2(4000)
set verify off

declare
    i_type varchar2(30) := '&1';
    i_name varchar2(30) := '&2';
    i_path varchar2(4000) := '&3';

begin    
    if ddl_checks.object_exists( i_name, i_type) then
        :command := 'prompt '||i_type||' '||i_name||' already exists - skipping';
    else
        :command := '@'||i_path;
    end if;   
end;
/
set heading off 
set feedback off 
set termout off

spool create_object.tmp
print :command
spool off

set termout on
set feedback on
@create_object.tmp

undef command

It’s probably worth pausing here to go through what this script is doing.
First, Debbie declared a SQL*Plus variable called command.

This variable is then populated in an anonymous PL/SQL block based on whether the object specified by the first two arguments passed in already exists. If it does then command will simply be set to output a message to this effect. Otherwise it will be set to call the script specified in the third argument.

The value of command is written to a file called create_object.tmp which is then executed.

So, to use this script for the COUNTRIES table, which has its DDL in the script ddl/tables/countries_tbl.sql, we can run…

@create_object.sql TABLE COUNTRIES ddl/tables/countries_tbl.sql

The first time this is run (i.e. when the table does not already exist), the output file – create_object.tmp – looks like this :

@ddl/tables/countries_tbl.sql

As a result, the script is executed and the table is created :


The second and subsequent runs produce a file containing…

prompt table countries already exists - skipping

which produces the output :

A similar script can then be used for column creation :

set serverout on
var command varchar2(4000)
set verify off
declare
    i_table varchar2(30) := '&1';
    i_column varchar2(30) := '&2';
    i_path varchar2(4000) := '&3';

begin    
    if ddl_checks.column_exists( i_table, i_column) then
        :command := 'prompt Column '||i_table||'.'||i_column||' already exists - skipping';
    else
        :command := '@'||i_path;
    end if;   
end;
/
set heading off 
set feedback off 
set termout off

spool add_column.tmp
print :command
spool off

set termout on
set feedback on
@add_column.tmp

undef command

To add the new COUNTRIES records, Debbie preferred the option of simply moving the merge statement into a package :

create or replace package manage_countries as
    procedure save_country( i_code countries.iso_code%type, i_name countries.country_name%type);
end manage_countries;
/

create or replace package body manage_countries as
    procedure save_country( i_code countries.iso_code%type, i_name countries.country_name%type)
    is 
    begin
        merge into countries
            using dual
            on( iso_code = i_code)
            when matched then update
                set country_name = i_name
            when not matched then 
                insert( iso_code, country_name)
                values( i_code, i_name);
    end save_country;
end manage_countries;
/

With these changes in place, her individual scripts could remain largely recognisable (and in some cases, unchanged) as simple DDL statements.
The scripts that she would have to change were those to do with constraints…

The Primary Key :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'countries', 'coun_pk', 'P') then
        msg := 'PK already exists - skipping';
    else
        execute immediate 'alter table countries add constraint coun_pk primary key (iso_code)';
        msg := 'PK Constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

The Foreign Key :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'deliveries', 'del_coun_fk', 'R') then
        msg := 'FK already exists - skipping';
    else
        execute immediate 
            'alter table deliveries add constraint del_coun_fk foreign key (coun_iso_code) references countries(iso_code)';
        msg := 'FK Constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

The Not Null constraint :

declare
    msg varchar2(4000);
begin
    if ddl_checks.column_is_nullable( 'deliveries', 'coun_iso_code') then
        execute immediate 'alter table deliveries modify coun_iso_code not null';
        msg := 'Column made mandatory';
    else 
        msg := 'Column is already Not Null - skipping';
    end if;
    dbms_output.put_line(msg);
end;
/

The check constraint :

declare
    msg varchar2(4000);
begin
    if ddl_checks.constraint_exists( 'deliveries', 'del_nf_chk', 'C') then
        msg := 'Check constraint already exists - skipping';
    else
        execute immediate 
            q'[alter table deliveries add constraint del_nf_chk check (naughty_flag in ('Y', 'N'))]';
        msg := 'Check constraint added';
    end if;
    dbms_output.put_line(msg);
end;
/

…and the insert script :

begin
    manage_countries.save_country('ATA', 'ANTARCTICA');
    manage_countries.save_country('CAN', 'CANADA');
    manage_countries.save_country('COL', 'COLOMBIA');
    manage_countries.save_country('FIN', 'FINLAND');
    manage_countries.save_country('DEU', 'GERMANY');
    manage_countries.save_country('IND', 'INDIA');
    manage_countries.save_country('MDG', 'MADAGASCAR');
    manage_countries.save_country('NZL', 'NEW ZEALAND');
    manage_countries.save_country('GBR', 'UNITED KINGDOM');
    manage_countries.save_country('USA', 'UNITED STATES OF AMERICA');

    commit;
end;
/

With these changes in place, Debbie was now able to run the following master release script, confident that any errors reported would be genuine…

prompt Creating new tables

@create_object.sql TABLE COUNTRIES ddl/tables/countries_tbl.sql
@create_object.sql Table Deliveries ddl/tables/deliveries_tbl.sql

prompt Creating a sequence

@create_object.sql sequence del_id_seq ddl/sequences/del_id_seq.sql

prompt Adding PK to a table

@ddl/table_alter/coun_pk.sql

prompt Renaming a column 

@add_column.sql DELIVERIES coun_iso_code ddl/table_alter/rename_del_coun_code.sql

prompt Adding Foreign Key

@ddl/table_alter/del_coun_fk.sql

prompt adding a Not Null constraint

@ddl/table_alter/del_iso_code_nn.sql

prompt adding new column 
@add_column.sql deliveries Naughty_Flag ddl/table_alter/naughty_flag.sql

prompt creating check constraint

@ddl/table_alter/del_nf_chk.sql

prompt adding virtual column

@add_column.sql Deliveries ACTIVE_FLAG ddl/table_alter/del_active_virt.sql

prompt creating view

@ddl/views/deliveries_vw.sql

prompt Creating package

@ddl/packages/manage_countries.pks
@ddl/packages/manage_countries.pkb

prompt creating COUNTRIES records...

@static_data/add_countries.sql

prompt Deployment completed.

The initial run looked like this :

Subsequent runs worked as expected :

“Well”, thought Debbie, “I think I’ve really earned my Christmas Bonus this year !”

NOTE : This story is fictional. Any resemblance to any Debbies, living or Welsh, is purely intentional.

I’ll drink to that !

VirtualBox – configuring a Host-Only Network

Sat, 2018-11-17 08:34

I’m currently indulging in the pastime that’s sweeping the country – trying not to think about Brexit.
It’s a craze that’s even spread as far as our political elite. In their case, it manifests itself in slightly different ways.
On the one hand, there are those who are refusing to accept any solution offered to maintain a “soft” border on the island of Ireland. As far as I can tell, they haven’t managed to offer any practical solution that they would accept as that would involve thinking about Brexit.
On the other hand there are those who are pushing for a new referendum because, apparently, some politicians lied when campaigning. Maybe someone was “Putin” ’em up to it ?

For my part, as I don’t quite have the space for a bunker at the bottom of my garden, I’ve decided to hide out in a world of make-believe…well, Virtual Machines at any rate.

I want to set up a CentOS Virtual Machine (VM) that I can then clone to create environments hosting the various software stacks that I may want to play with.
I’d like to be able to connect to these VMs directly from my host OS, just like a real-world server. However, I’d also like to be able to connect the VM to the outside world occasionally so I can run package updates via yum.
The specific steps I’m going to go through are :

  • Install CentOS7 into a Virtualbox VM
  • Setup Host Only Network in VirtualBox
  • Create a Network Interface on the Guest to use the Host Only Network
  • Assign a static IP address to the Guest

The software I’m using for this is :

Before we get cracking, it’s probably a good idea to have a quick look at…

VirtualBox Networking

VirtualBox supports several networking modes, of which Network Address Translation (NAT) is the default.
Whilst this is perfectly fine for allowing internet access directly from the Guest, it does have some limitations, including the inability to connect via ssh from the Host machine.

To overcome this, we can setup a Host Only Network.
The catch here is that, using this networking mode, the Guest will not be able to see the internet so updates via yum will be problematic.
Fortunately, VirtualBox allows you to configure multiple Network Cards (NICs) on a VM so it’s possible to toggle between the two modes depending on what’s required.
This is the approach I’m going to take here.
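
As an aside, this toggling doesn’t have to be done through the GUI. With the VM powered off, vboxmanage can switch an adapter’s mode from the host. A rough sketch (the VM name is the one that appears later in this post; vboxnet0 is VirtualBox’s usual default network name and is an assumption here) :

# switch Adapter 2 to the Host Only network (VM must be powered off)
vboxmanage modifyvm CentOS7_HO1 --nic2 hostonly --hostonlyadapter2 vboxnet0

# ...or back to NAT when internet access is needed for yum
vboxmanage modifyvm CentOS7_HO1 --nic2 nat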

Incidentally, the VirtualBox documentation includes a chapter on Virtual Networking which you may find useful.

Right, onwards…

Installing CentOS7 into a VirtualBox VM

Essentially, I’ve followed these steps to do the basic installation. However there are some changes around configuring the Guest Additions, which I’ll come onto shortly.

Installation Options

When you first fire-up CentOS in the VM, the installer kicks in.
At this point, I’ve selected Server with GUI as the Base Environment along with the Development Tools add-on :

The next step is to rename the server to something a bit more memorable. Now, I’ve known data centres where server names followed a theme – they could all be artists or even race horses.
In this case I’ve decided to call this server “thor” – as it’s the only one you’re going to see in this article, you’ll be left to ponder whether I’m following a high-brow intellectual theme ( Norse Gods), or whether I’m just using characters from the Marvel Comic Universe.
We also need to make sure that networking is enabled by ensuring the default NIC ( enp0s3 in this case) is on :

No, it’s not pronounced “phwoar”…even in Welsh.

Note that, at this point, the domain name “virtualbox” is arbitrary.

After the installation is complete, you’ll need to restart the machine and accept the license.
Before we go any further we should now see that networking is enabled and that we have access to the internet :

Getting sudo access

In order to keep things simple, I’m going to give my user sudo privileges. In CentOS, these are derived from the wheel group. So, in a Terminal:

su root
usermod -aG wheel mike

NOTE – you’ll need to logout and log back in (or simply restart the VM) for this change to take effect.

Once the server has restarted, you can check that things have worked as expected :

groups mike
mike : mike wheel

Update packages with yum

However recently (or otherwise) you downloaded your CentOS iso, it’s always a good idea to update the packages before we get too much further :

sudo yum update

Installing Guest Additions

There’s a whole chapter on Guest Additions in the documentation.
For this VM, I’m installing Guest Additions 5.1.38. This is done in the usual way – i.e.
With the Guest running, go to the VirtualBox Devices Menu and select Insert Guest Additions CD Image.
When prompted, hit the Run button.

For my part, I’m installing Guest Additions to take advantage of the bi-directional clipboard and the ability to cut and paste between Guest and Host.
If you’re tempted to go further and try to maximise the viewport for the GUI, a word of warning : enabling 3D Acceleration for the VM’s display caused the VM to fail to start.
This may simply be an issue with the versions of CentOS/Virtualbox/Guest Additions that I’m using, but I thought I should mention it, just in case.

Now we’ve completed the initial setup of the CentOS VM, the next step is to…

Create a Host Only Network

In VirtualBox itself, go to the File menu and select Preferences.
Then choose the Network icon and go to the Host-only Networks tab.
Click on the Add icon on the right-hand side and create a new network :

Click on the screwdriver icon and you should be able to see Details of the new network, including its starting IP address :

Initially, we’re going to use DHCP to confirm that our configuration has worked. Therefore, we need to go to the DHCP Server tab and check Enable Server.

Note that, in this example, I’ve set the Lower and Upper Address bounds manually.
Of course, using DHCP means that an IP address will be allocated to the VM each time it starts. We’ll come onto how to configure a fixed IP address in a bit. For now though, we just want to make sure everything is working.

Add a Host Only NIC to the VM

For our VM to use our new network, we need to add a Host Only Network Interface to it.

To do this, we need to open the Network settings for that VM in Virtualbox and click on the Adapter 2 tab.
Make sure that Enable Network Adapter is checked.
Then set Attached to to Host Only Adapter and the Name to that of the network we’ve just created :

If we now start the VM again, we can see that we have a second NIC, which is now connected :


However, we no longer have access to the internet from within the VM :

We can, however, connect to the running VM via ssh.
First of all, we need to determine the IP address that’s been allocated to the VM by the DHCP server. We can do this from the host using :

vboxmanage guestproperty enumerate CentOS7_HO1 |grep VirtualBox/GuestInfo/Net/0/V4/IP
Name: /VirtualBox/GuestInfo/Net/0/V4/IP, value: 192.168.57.100, timestamp: 1541956114954604000, flags: 

Now, using this IP address, we can connect from the host …

If we look in the VM itself, we can see that there is now a second Network Interface – enp0s8

We want to make this the default NIC so we need to go to the settings…

…and set it to Connect Automatically

We then need to uncheck the Connect Automatically checkbox for the original NIC ( enp0s3 in my case), so that it does not connect unless we want it to.
Now, when the VM starts up, it will be using the Host Only Network by default.
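
If you’d rather make those changes from a Terminal, nmcli should be able to do the same job. A sketch (connection names don’t always match device names, so check with the first command) :

nmcli connection show

sudo nmcli connection modify enp0s8 connection.autoconnect yes
sudo nmcli connection modify enp0s3 connection.autoconnect no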

Right, we’ve got the Host Only Network up and running but finding out the IP address for the VM every time we start it up is going to be a bit of messing about.
Fortunately we can dispense with that by simply…

Assigning a Static IP Address

To be on the safe side, I had the VM powered down when I started these steps.

First, we need to go back to the Host Only Network Settings in Virtualbox for our network, go to the DHCP Server tab and uncheck the Enable Server option :

Next we need to fire up the VM and create a config file for the NIC we’re using for the Host Only Network (enp0s8) :

sudo nano /etc/sysconfig/network-scripts/ifcfg-enp0s8

The file should look like this. Well, actually it’s probably more accurate to say that my file looks like this :

TYPE=ETHERNET
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.57.123
PREFIX=24
GATEWAY=192.168.57.254
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_PRIVACY=no

NOTE :

  • DEVICE is the name of the Host-only NIC we created
  • IPADDR is the static IP address you want to assign to the VM
  • GATEWAY is the upper IP address in the range where IPADDR is located (as far as I can tell)

Restart networking on the VM …

sudo systemctl restart network

…and confirm that the static IP address is now being used…

ip address show dev enp0s8

…in my case returns…

3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:5b:02:a1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.57.123/24 brd 192.168.57.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::ad98:f0f4:9406:5348/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

At this point I’ve chosen to re-start the VM completely to allow it to pick up the new networking changes.
In fact, as I’m feeling brave, I’ve started the VM headless.

Anyway, we can now ping the new static IP address, and connect to the server using it :

I don’t want to have to remember the IP address every time I want to connect to the server so I’ve simply added an entry for it in the hosts file on the Host :

sudo nano /etc/hosts

At the end of the file I’ve added the lines…

#VirtualBox Host Only Network hosts
192.168.57.123 thor.virtualbox

I can now use the alias I’ve set to hit the VM :
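
…for example, via ssh (mike being the user we created on the VM earlier) :

ssh mike@thor.virtualbox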

Using the Host-Only network should be fine for me for the most part. However, if I do want to update the packages on the CentOS VM, all I need to do is to switch to the NAT network card and I’ll be able to run an update with no issues.

Well, that’s taken my mind off things for a bit, now I just need to find something else to distract myself until the end of March when Brexit will (might) finally be over.

New Dog, Old Tricks – how to save yourself some typing with sed

Sun, 2018-11-04 10:07

We have a new addition to our household –

Teddy


Cute and fluffy he may be, but he’s got to earn his keep. He can start making himself useful by helping me with this post.

It begins one Friday afternoon when an urgent request lands on my desk with a large splat.

The requirement is that some csv files be uploaded into the Oracle 11g Database serving the UAT environment to facilitate some testing.
There are around 20 files, each with a slightly different set of attributes.
The files are currently sitting on the Red Hat Linux Server hosting the database.
I have sufficient OS permissions on the server to move them to a directory that has a corresponding database object in the UAT instance.
Nevertheless, the thought of having to knock out 20-odd external tables to read these files might leave me feeling a bit like this…


Fortunately, a certain Lee E. McMahon had the foresight to predict the potential risk to my weekend and wrote the Stream Editor (sed) program.

The File

The file I’m using as an example is here on the Database Server :

and its contents are :

employee_id,first_name,last_name,email,salary,commission%
145,"John","Russell","JRUSSEL",14000,0.4
146,"Karen","Partners","KPARTNER",13500,0.3
147,"Alberto","Errazuriz","AERRAZUR",12000,0.3
148,"Gerald","Cambrault","GCAMBRAU",11000,0.3
149,"Eleni","Zlotkey","EZLOTKEY",10500,0.2
150,"Peter","Tucker","PTUCKER",10000,0.3
151,"David","Bernstein","DBERNSTE",9500,0.25
152,"Peter","Hall","PHALL",9000,0.25
153,"Christopher","Olsen","COLSEN",8000,0.2
154,"Nanette","Cambrault","NCAMBRAU",7500,0.2
155,"Oliver","Tuvault","OTUVAULT",7000,0.15
156,"Janette","King","JKING",10000,0.35
157,"Patrick","Sully","PSULLY",9500,0.35
158,"Allan","McEwen","AMCEWEN",9000,0.35
159,"Lindsey","Smith","LSMITH",8000,0.3
160,"Louise","Doran","LDORAN",7500,0.3
161,"Sarath","Sewall","SSEWALL",7000,0.25
162,"Clara","Vishney","CVISHNEY",10500,0.25
163,"Danielle","Greene","DGREENE",9500,0.15
164,"Mattea","Marvins","MMARVINS",7200,0.1
165,"David","Lee","DLEE",6800,0.1
166,"Sundar","Ande","SANDE",6400,0.1
167,"Amit","Banda","ABANDA",6200,0.1
168,"Lisa","Ozer","LOZER",11500,0.25
169,"Harrison","Bloom","HBLOOM",10000,0.2
170,"Tayler","Fox","TFOX",9600,0.2
171,"William","Smith","WSMITH",7400,0.15
172,"Elizabeth","Bates","EBATES",7300,0.15
173,"Sundita","Kumar","SKUMAR",6100,0.1
174,"Ellen","Abel","EABEL",11000,0.3
175,"Alyssa","Hutton","AHUTTON",8800,0.25
176,"Jonathon","Taylor","JTAYLOR",8600,0.2
177,"Jack","Livingston","JLIVINGS",8400,0.2
179,"Charles","Johnson","CJOHNSON",6200,0.1

The database object for this directory is :

select directory_name
from dba_directories
where directory_path = '/u01/app/oracle/myfiles'
/

DIRECTORY_NAME
------------------------------
MYFILES

Building the External Table DDL

As you can see from the file, external table columns can be taken from the header record, with a couple of tweaks :

  1. all columns will be defined as varchar2(4000)
  2. the “%” in “commission%” needs to be replaced with “_pct” to make the column name legal in Oracle

First up then, we get sed to replace each of the commas in the header row…

head -1 emps.dat|sed s/,/" varchar2(4000),"/g

employee_id varchar2(4000),first_name varchar2(4000),last_name varchar2(4000),email varchar2(4000),salary varchar2(4000),commission%

sed’s s command means “substitute string1 with string2”.
The trailing g flag means “apply this to every occurrence of string1 on the line”.
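
If that’s a little hard to see amongst all the varchar2s, the same pattern on a trivial string looks like this :

echo "a,b,c" | sed s/,/" - "/g

a - b - c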

We can use the same technique to replace the “%” signs…

$ head -1 emps.dat|sed s/%/_pct/g
employee_id,first_name,last_name,email,salary,commission_pct

This means that we now have the basis for a simple shell script to do the External Table creation legwork for us…

#!/bin/sh
# Script to generate an external table based on the .dat file supplied as $1
# e.g. emps.dat generates emps_xt.sql, which creates the table EMPS_XT

# derive the table and output script names from the data file name
baseFname=`basename -s .dat $1`
tsuff=_xt
tname=$baseFname$tsuff
fname=$tname.sql

# column list - one varchar2(4000) column per header field, % becomes _pct
echo "create table $tname (" >$fname
echo -n `head -1 $1`|sed s/,/" varchar2(4000),\n"/g|sed s/%/_pct/g >>$fname
echo " varchar2(4000))">>$fname

# access parameters boilerplate
echo "organization external ( type oracle_loader">>$fname
echo "default directory MYFILES access parameters (">>$fname
echo -e "\t records delimited by newline">>$fname
echo -e "\t\t badfile '$baseFname.bad'">>$fname
echo -e "\t\t logfile '$baseFname.log'">>$fname
echo -e "\t\t skip 1">>$fname
echo -e "\t\t fields terminated by ',' (">>$fname

# field list - same trick as the column list, but char(4000) this time
echo -ne "\t\t\t">>$fname
echo -n `head -1 $1`|sed s/,/" char(4000),\n"/g|sed s/%/_pct/g >>$fname
echo -e " char(4000)">>$fname
echo -e "\t\t)">>$fname
echo -e "\t ) location ( '$1' )">>$fname
echo -e ") reject limit unlimited;">>$fname

Running this for our file…

. ./gentxt.sh emps.dat

… gives us a complete External Table definition in a file called emps_xt.sql …

create table emps_xt (
employee_id varchar2(4000),
first_name varchar2(4000),
last_name varchar2(4000),
email varchar2(4000),
salary varchar2(4000),
commission_pct varchar2(4000))
organization external ( type oracle_loader
default directory MYFILES access parameters (
	 records delimited by newline
		 badfile 'emps.bad'
		 logfile 'emps.log'
		 skip 1
		 fields terminated by ',' (
			employee_id char(4000),
first_name char(4000),
last_name char(4000),
email char(4000),
salary char(4000),
commission_pct char(4000)
		)
	 ) location ( 'emps.dat' )
) reject limit unlimited;

OK, the formatting could use some work. However, the SQL itself is valid…

SQL> @emps_xt.sql

Table created.

With the external table in place, we can now upload the data from the file…
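
For example, a quick sanity check that the table can actually read the file :

select count(*)
from emps_xt
/

…should come back with 34 – one row for each data line in emps.dat.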

Hours of drudgery have been avoided, which is just as well because someone gets a bit grumpy when they don’t get their walkies !

Cocktails and Traffic Cones – party time with DVDs and Blu-Rays in Ubuntu

Wed, 2018-08-29 14:37

This title may evoke images of a rumbustious night out filled with exotic drinks and highjinks followed by a morning waking up in possession of a traffic cone, the acquisition of which has somehow escaped the wreckage of your short-term memory.
If this is the case, you may be a tiny bit disappointed. This is all about how to play and rip DVDs and Blu-rays on Ubuntu.
Whilst that may not sound like quite as much fun, it’s less likely to leave you with a raging hangover. It should, however, enable you to enjoy your video on your OS of choice.
What cocktails and traffic cones have to do with all of this will become apparent shortly.

What I’m going to cover here is :

  • How to Decode and Play DVDs using VLC
  • How to Convert DVD and Blu-ray files to mp4 video using Handbrake
  • How to Transcode DVD and Blu-ray discs to Matroska (mkv) format using MakeMKV

This should give you all of the steps required to watch and – if required – copy movies, tv shows etc from an optical disc.

First of all though…

The Legal Disclaimer
The legality of ripping copyrighted material differs across jurisdictions. You may want to check the situation where you are before you follow any of the steps detailed in this article.

Whilst we’re on the subject of disclaimers…

The Taste Disclaimer
The subject matter at hand means that there is a strong temptation to include quotes and (possibly) oblique references to movies here and there. Of course I wouldn’t dream of stooping so low just to get cheap laughs…much.

Oh, one more thing…

Efficacy disclaimer – The steps described here will work for most discs. The rare instances where this is not the case do not seem to follow any discernible pattern.
For example, the same steps that persuade a dark comedy to present you with a Marmalade Sandwich (in mp4 format) may cause a loveable cartoon bear to fix you with a stare that’s harder than a coffin nail.

Moving swiftly on…

Required Libraries

In order to read DVDs and Blu-rays there are a number of packages that you’ll need. As I don’t know which desktop you’re running (I’m currently on Unity on Ubuntu 16.04 LTS), I’ll do this bit in the Terminal (the CLI, not the Tom Hanks film)…

To start with, you should find that most of the packages we need are there already…

apt list libdvd* libaacs* libbluray* --installed

…should confirm that the following packages are installed :

There are a few more packages required which we can acquire in two stages. First of all…

sudo apt-get install libdvd-pkg

…which produces the following output


[sudo] password for mike: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  autoconf automake autopoint autotools-dev debhelper dh-autoreconf dh-strip-nondeterminism libfile-stripnondeterminism-perl libltdl-dev libmail-sendmail-perl libsigsegv2
  libsys-hostname-long-perl libtool m4 po-debconf
Suggested packages:
  autoconf-archive gnu-standards autoconf-doc dh-make libtool-doc gfortran | fortran95-compiler gcj-jdk libmail-box-perl
The following NEW packages will be installed
  autoconf automake autopoint autotools-dev debhelper dh-autoreconf dh-strip-nondeterminism libdvd-pkg libfile-stripnondeterminism-perl libltdl-dev libmail-sendmail-perl libsigsegv2
  libsys-hostname-long-perl libtool m4 po-debconf
0 to upgrade, 16 to newly install, 0 to remove and 496 not to upgrade.
Need to get 2,897 kB of archives.
After this operation, 8,428 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y

Enter Y to continue.

Next up :

sudo apt-get install libbluray-bdj

…initially the output will be something like :

The following additional packages will be installed:
  ca-certificates-java default-jre-headless java-common libasm4-java openjdk-8-jre-headless
Suggested packages:
  default-jre fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei fonts-wqy-zenhei fonts-indic
The following NEW packages will be installed
  ca-certificates-java default-jre-headless java-common libasm4-java libbluray-bdj openjdk-8-jre-headless
0 to upgrade, 6 to newly install, 0 to remove and 496 not to upgrade.
Need to get 27.8 MB of archives.
After this operation, 101 MB of additional disk space will be used.
Do you want to continue? [Y/n] 

This time, hitting Y will cause a number of things to happen. Apart from a Java Runtime being installed ( if one is not already present on the system), you are likely to find yourself presented with this screen :

Say Yes ( hit Enter) and you’ll then get :

Once again, say Yes (hit Enter).

Now that’s all done, if we re-run our initial check…

apt list libdvd* libaacs* libbluray* --installed

we should now get :

Before we leave the Command Line (for now, at least) there’s one more step required to facilitate reading Blu-rays…

mkdir -p ~/.config/aacs/
cd ~/.config/aacs/ && wget http://vlc-bluray.whoknowsmy.name/files/KEYDB.cfg

After all that, it’s probably a good idea to check that you can now play any discs that you’re planning to rip.
Whilst Totem is the default tool for this on my desktop, in the wider Linux world – and pretty much everywhere else for that matter – the best app for playing video is undoubtedly…

VLC

The Video Lan Media Player (VLC) is available across multiple platforms and will pretty much play anything. As a clincher, its versions are named after Discworld characters. I’ve used Weatherwax (2.2.2) here.
Oh, and its icon is a traffic cone, which goes some way to explaining the title of this article.

To see if you already have VLC installed, you can run :

which vlc

If this returns something like…

/usr/bin/vlc

…then you’re good to go. Otherwise, you can either install from the Terminal…

sudo apt-get install vlc

…or via the Software centre

Either way, you can now use VLC to play your DVDs.

With a DVD disc in the drive, start VLC and go to the Media menu and select Open disc…

Playing a Blu-Ray requires some slightly different settings when opening the disc in VLC :

When playing blu-rays with VLC there are a couple of points to note –

  1. don’t open the disc by launching VLC directly from the disc itself, because VLC will assume it’s a DVD. Instead, open VLC first, then open the disc from the VLC Media menu
  2. make sure that “No disc menus” is checked when you open the disc in VLC

If all you need is a means of playing discs on your Ubuntu machine, then VLC and the aforementioned packages do the job.
If however, you’ve ever opened a DVD case in anticipation of an hour-and-a-half of explosions and mayhem, only to find a Disney Musical, and reflected that high spirits are just no substitute for 800 rounds-a-minute, you’ll be interested in finding a way to store all your movies in one place on file.

Fortunately, as well as playing video from disc, VLC allows you to see which Title holds the movie (on DVDs at least) :


Whilst this is usually fairly obvious, there are discs that contain several titles of the same or similar length.

Using VLC enables you to determine which track you need to rip.

All we need now is a cocktail tool to do the ripping …

Handbrake

Of all the ripping tools available in Linux, Handbrake appears to be the most well-supported and long-lived.
Whilst it is included in the Ubuntu repositories, the documentation on the Handbrake Website suggests that sourcing the package from there may not be the best approach.
If you want to follow this advice, but persist with the package route, you can add the Ubuntu handbrake repository to your system…

 
sudo add-apt-repository ppa:stebbins/handbrake-releases 
sudo apt-get update

The screenshots here are taken from version 1.1.0 (64-bit version) :

Handbrake is a converter – it will take your source video and convert it to another format. In this case, we’re going to convert to MP4.
The MP4 file you end up with is likely to be a fraction of the size of the source video file(s) you start with.
The act of conversion is rather CPU-intensive. As a result, you will find that the time taken to rip a movie will be as dependent on the speed of the available hardware as it is on the settings you choose.

The other point to note is that prolonged high CPU activity can cause your machine to run quite hot. Therefore, I’ve found that it’s a good idea to :

  • Ensure that your machine is sitting somewhere that is well-ventilated
  • Allow the machine to cool down between prolonged bouts of ripping

If you want to get an idea of the internal temperature of your machine, you could use the sensors command :
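
If sensors isn’t already on your system, it’s part of the lm-sensors package (the detection step will ask you a few questions on first run) :

sudo apt-get install lm-sensors
sudo sensors-detect
sensors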

By now, you’ll probably have noticed Handbrake’s distinctive icon. This should give you a hint that, even with an extremely fast processor, transcoding is an activity that you can kick off and leave to run for a while.

When you start Handbrake, the first decision you are presented with is which preset to use.
These presets provide default settings for how you want to rip your movie.

Not being particularly fussy about such things, I usually select Very Fast 1080p30 because it makes the transcoding process…well…a bit faster than the default (Fast 1080p30).

Once you’ve made your selection, simply close the Presets Window.
You then need to click the Open Source button to select the location of the source you want to transcode. If there is an optical disc available, Handbrake should default to this.
Simply click Open

The disc will then be scanned (which can take a while) before Handbrake presents you with its choice of which title to encode :

…which you can then change, if you wish, using the drop-down.

Once you have chosen which title, you may want to have a look at the Subtitles tab.
By default, Handbrake will only attempt to rip “forced” subtitles. These are subtitles for parts of the movie that aren’t in the main language of the movie. If the movie you are working on does not include these, then you can save yourself one encode pass (and a fair bit of time) by removing them with the ‘X’ button.

Once done, you can return to the Summary Tab and hit the Start Encoding button

If you do select subtitles, Handbrake will do two encoding passes. The first appears to be for the subtitles and has no appreciable impact on system resource usage.

When the second pass starts, however, you’ll observe a spike in the CPU usage…

For most discs, this approach should work perfectly well.
But what can you do when a disc decides to do an impression of a barnacle covered in bioluminescent algae ?

MakeMKV

MakeMKV is a transcoder – i.e. it converts video into MKV format.
MakeMKV has also been in Beta since at least 2008.

Usually, I’m a bit reluctant to risk beta programs on my machine. On this occasion however, I’ve decided that I just need to chill out and Let It Go…

Also, whilst Blu-Ray processing will be an additional cost feature when (if) the tool ever moves to a “production” version, it is currently free.

The installation route I followed was :

sudo add-apt-repository ppa:heyarje/makemkv-beta
sudo apt-get update
sudo apt-get install makemkv-oss makemkv-bin

Note that you may occasionally be prompted for a license key when starting this tool. If so, you can find it on this MakeMKV official forum post.

Fire up MakeMKV and it will automatically scan any optical drives.
Once the tool recognises that there is a disc in the drive, click on the drive graphic to open the disc :

MakeMKV will then scan the disc and then present you with a list of titles.
You will need to check/uncheck the appropriate titles to copy…

For DVDs this should simply correspond to the title selected by VLC (see above).
For Blu-Rays, things are slightly more complex.

If you check the right-hand pane for each Title, it will show the duration. You’re looking for one that is the approximate runtime of the film.
If there are multiple titles that fit the bill then check the number of chapters as well. This should also match the number of chapters in the Title auto-selected by VLC.
Note – if the Title does not contain any chapters, the Chapters row will be omitted from the Title details pane.

Once you’re happy with your selection – uncheck the checkboxes for all of the other titles.
Expand your chosen title and make sure that you’re happy with the sub-menu selections.

Once all that’s done, hit the Save button.

By default the tool will save the file to a newly created sub-directory under $HOME/Videos. It will prompt you before creating this. In my case, I just say yes at this point.
Whilst the transcoding is not resource intensive, it also does not tend to be that fast. You may be twiddling your thumbs for a bit…

Current size is shown as Output size

MakeMKV’s demand on system resources is modest…

…and you will eventually end up with an mkv file…although it’s quite big (over 20GB in this case).

The good news is that you can now point handbrake at said file…

Converting mkv to mp4 using Handbrake

Now we’ve got our rather large .mkv file, we can turn it into a more reasonably sized mp4 file by using Handbrake.
Simply open Handbrake, and select Open Source as before.
This time however, instead of using a disc directly, navigate to the .mkv file and select it.

Handbrake should then work as before, down to the resource intensive conversion. Ultimately however, you will be left with an mp4 file which is a rather more manageable size. For movies originally copied from Blu-Ray it’s usually no more than 2GB, depending on the Handbrake settings you’ve chosen.

Wrap-up

Hopefully, the tools and techniques covered here will work for most (if not all) of the discs you want to view or copy on your Ubuntu device.
Once you have your videos on file, you may find it useful to use some kind of media management software, such as Plex.

There are a number of articles around that may also be of use in your continuing Ubuntu video adventures…

As for me, I’m off in search of some cookie robots.

Read Only Access for providing backend support for an Oracle Application

Wed, 2018-07-25 15:59

The World Cup is finally over and “It’s Coming Home !”
For quite a long time, we English laboured under the illusion that “it” was football.
Fortunately for Scots everywhere, “It” turned out to be the World Cup which, like so many international sporting competitions, was conceived in France.

Another area that is often subject to flawed assumptions is what privileges are required to provide read-only access for someone to provide support to an Oracle Application.
So, for any passing auditors who may be wondering why “read only” access to an Oracle application sometimes means Write, or even Execute on certain objects…

The Application

We’re using the standard HR sample application provided with any Oracle database. For the purposes of this post, we’ve added a couple of enhancements.
The application has use of a directory object called HR_FILES :

create directory hr_files as '/u01/app/oracle/hr_files'
/

grant read, write on directory hr_files to hr
/

There is an External Table which is used for an ETL process…


create table countries_xt
(
    iso_code varchar2(2),
    country_name varchar2(100)
)
    organization external
    (
        type oracle_loader
        default directory hr_files
        access parameters
        (
            records delimited by newline
            badfile 'countries.bad'
            logfile 'countries.log'
            skip 1
            fields terminated by ','
            (
                iso_code char(2),
                country_name char(100)
            )
        )
            location('countries.csv')
    )
reject limit unlimited
/

In addition to its standard grants, HR also has create any context :

grant create any context to hr
/

…which means the application can use contexts in a view.

The context package created for this is :

create or replace package hr_deptno_ctx as
    procedure set_ctx_val( i_dept_id departments.department_id%type);
    function get_ctx_val return departments.department_id%type;
end hr_deptno_ctx;
/

create or replace package body hr_deptno_ctx as

    procedure set_ctx_val( i_dept_id departments.department_id%type) 
    is
    begin
        dbms_session.set_context('hr_deptno', 'department_id', i_dept_id);
    end set_ctx_val;
    
    function get_ctx_val 
        return departments.department_id%type 
    is
    begin   
        return sys_context('hr_deptno', 'department_id');
    end get_ctx_val;
end hr_deptno_ctx;
/

The context itself is created as follows :

create context hr_deptno using hr_deptno_ctx
/

…and used in a view…

create or replace view emp_restricted as
    select *
    from employees
    where department_id = hr_deptno_ctx.get_ctx_val
/    

Remember, what we want to do here, is give “Read Only” access to this application to a database user. Let’s start with something simple…

Creating a Read Only Role

To begin with, let’s simply create a role called HR_READONLY and grant select on all HR tables and views to that role :

create role hr_readonly
/

grant select on hr.countries to hr_readonly;
grant select on hr.countries_xt to hr_readonly;
grant select on hr.departments to hr_readonly;
grant select on hr.employees to hr_readonly;
grant select on hr.jobs to hr_readonly;
grant select on hr.job_history to hr_readonly;
grant select on hr.locations to hr_readonly;
grant select on hr.regions to hr_readonly;
grant select on hr.emp_details_view to hr_readonly;
grant select on hr.emp_restricted to hr_readonly;

Now we can simply grant this role to our read only user …

set verify off
accept pwd prompt 'Enter password for new user MIKE_RO : ' hide
create user mike_ro identified by &pwd;
grant create session, hr_readonly to mike_ro
/

…and this is a really short post…

Selecting from External Tables

Let’s just connect as MIKE_RO and confirm that all is well…

select region_id, region_name
from hr.regions
order by 1
/

REGION_ID REGION_NAME              
--------- -------------------------
        1 Europe                   
        2 Americas                 
        3 Asia                     
        4 Middle East and Africa  
        

…see, no problem. Let’s try the new external table…

…so, it looks like our read only user will need READ access on the directory. No biggie, it’s still “READ” only…

grant read on directory hr_files to mike_ro
/

It’s when we have this privilege and then attempt to access the external table again, where things get interesting…

select *
from hr.countries_xt
/

Error starting at line : 1 in command -
select *
from hr.countries_xt
Error report -
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04074: no write access to directory object HR_FILES

The problem here is that the act of selecting from an external table will cause one or more files to be written (logfile, badfile, discardfile). Therefore, WRITE permissions are required on the directories to which these files will be written. In our case, the files are all written to HR_FILES so…

grant write on directory hr_files to mike_ro
/

…means that we can now select from the table :

select *
from hr.countries_xt
/

ISO_CODE                  COUNTRY_NAME             
------------------------- -------------------------
FR                        FRANCE                   
HR                        CROATIA                  
BE                        BELGIUM                  

NOTE – it’s always an outstandingly good idea to check the privileges a user already has on a Directory object before granting more.
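
One way to do that check, assuming you can see DBA_TAB_PRIVS (directory grants show up there, with SYS as the owner) :

select grantee, privilege
from dba_tab_privs
where table_name = 'HR_FILES'
order by grantee, privilege
/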

“That’s not a problem”, I hear you say, “after all, our read only user won’t have EXECUTE permissions on anything.” Well…

Changing context values in a session

Now let’s have a look at our new view. Connected as HR, we can see that the context value must be set for any rows to be returned…

select hr.hr_deptno_ctx.get_ctx_val 
from dual
/

GET_CTX_VAL
-----------


select count(*) 
from hr.emp_restricted
/

 COUNT(*)
----------
         0
         
exec hr.hr_deptno_ctx.set_ctx_val(60)

PL/SQL procedure successfully completed.


select count(*) 
from hr.emp_restricted
/


  COUNT(*)
----------
         5

So, in order for our Read-Only account to be able to “read” this view, it will require execute privileges on a package.
Now, you might wonder why we can’t simply grant execute on DBMS_SESSION, which is the package called by HR_DEPTNO_CTX to get and set the context values.
The answer can be found in the Oracle docs for the SET_CONTEXT procedure in DBMS_SESSION which state :

“The caller of SET_CONTEXT must be in the calling stack of a procedure that has been associated to the context namespace through a CREATE CONTEXT statement. ”

Therefore, our read only user needs to be granted execute on the HR package itself :

grant execute on hr.hr_deptno_ctx to mike_ro
/

Viewing stored source code

The next requirement for our read only user is to be able to see the source code that’s actually in the data dictionary (as opposed to say, in a source control repository somewhere).
Yes, I know that your Source Control Repo master/trunk/main branch should be a faithful copy of your production code. However, bitter experience to the contrary leaves me reluctant to make this assumption. It’s much safer to see the actual code that’s being executed, not what it probably is.

Unfortunately, as things stand, we do not even have access to DBA_SOURCE.

At this point though, we can give our auditor a moment’s respite, we only want to grant the SELECT_CATALOG_ROLE role.

grant select_catalog_role to mike_ro
/

This now enables our Read Only account to view the source for HR’s objects…

set lines 130
set heading off
set feedback off
select text
from dba_source
where owner = 'HR'
and name = 'SECURE_DML'
and type = 'PROCEDURE'
order by line
/


PROCEDURE secure_dml
IS
BEGIN
  IF TO_CHAR (SYSDATE, 'HH24:MI') NOT BETWEEN '08:00' AND '18:00'
        OR TO_CHAR (SYSDATE, 'DY') IN ('SAT', 'SUN') THEN
	RAISE_APPLICATION_ERROR (-20205, 
		'You may only make changes during normal office hours');  
  END IF;
END secure_dml;

An additional benefit of this role is that it now gives us access to the dynamic performance views which in turn allows us to do some performance investigations.
For example, we can now run the Session Monitor in SQLDeveloper.
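Or, for the more command-line inclined, a simple sketch of the sort of thing that’s now possible :

select sid, serial#, username, status, program
from v$session
where username = 'HR'
/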

Conclusion

As we’ve demonstrated, the definition of Read Only access to an application running on an Oracle database is usually dependent on the features being used in that application.
Hopefully our imaginary auditor now has some understanding of this and won’t get their SOX in a twist when they find out that the developers providing Level 3 support have these privileges.

You have chosen not to trust… – Citrix Receiver and SSL error 61 on Ubuntu

Fri, 2018-06-08 14:01

After months of trouble-free operation, Citrix Receiver decided to wreak some havoc one morning last week.
Connecting to work (using Firefox on Ubuntu and Citrix Receiver for Linux 13.8) was trouble free as usual.
However, when I then tried to select a PC to remote into, Citrix informed me that …

“You have chosen not to trust Entrust Root Certification Authority – G2. SSL error 61”

At that point, I reflected that what I knew about Citrix and SSL certificates would fit on the back of a fag packet.
After some intensive “research” it should now fit into a short blog post…

Citrix Receiver for Linux has a bug…again

A quick internet search lead me to the Citrix Support site.
Their advice ( perhaps understandably), was to upgrade Citrix Receiver to the latest version.
After some fiddling around – steps for installing on Ubuntu can be found here, if you’re interested – I had the latest version.
Now, I’m not sure whether it’s just an unfortunate coincidence or whether I’ve upset the good people at Citrix, but it seems that whenever I install the latest version, there is a Linux specific bug.
So, after about half an hour of messing about, I was back where I started with the same error and the same version of Citrix Receiver.

Attempting to connect via Chrome gave exactly the same result.

Re-installing my certificates

Re-reading the error message, I noticed that it was a specific certificate that was the problem.
Running a search, I was able to confirm that the said certificate is available from Entrust.

Once I’d downloaded the certificate in question, it was simply a matter of putting it where Citrix Receiver could see it.
So…

sudo cp entrust_g2_ca.cer /opt/Citrix/ICAClient/keystore/cacerts/.

Magically, Citrix Receiver was happy again and I was able to connect.

Some points to note for next time

A colleague of mine had the same issue. He is running Debian.
His solution was to :

– delete the files in the Citrix Receiver certs directory :

/opt/Citrix/ICAClient/keystore/cacerts/

– create a symlink in the directory from the certificates in

/etc/ssl/certs

If you’re reading this because you have a similar problem and the first solution doesn’t work, then perhaps this may be worth a try (back up the certificate files before deleting them, though !)
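For reference, a rough sketch of that Debian approach might look like this (untested on my machine, hence the backup; paths are as above) :

# backup the existing Citrix certs first !
sudo cp -r /opt/Citrix/ICAClient/keystore/cacerts ~/citrix_certs_backup
sudo rm /opt/Citrix/ICAClient/keystore/cacerts/*

# symlink the system certificates into the Citrix keystore
sudo ln -s /etc/ssl/certs/* /opt/Citrix/ICAClient/keystore/cacerts/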
I’m still not sure of the root cause of this issue, although I suspect it may be something to do with browser updates.
On the plus side I’ve avoided having to drag myself into the office…for now.

First Steps in SQLDeveloper Data Modeler

Wed, 2018-05-30 13:12

It’s true, Oracle are giving away free stuff. “Oracle ?”, I hear you say, “as in Larry’s Database Emporium and Cloud base ?” The very same.
It’s been going on for quite a while and includes relatively hidden gems such as SQLDeveloper Data Modeler.

There is some confusion around this particular tool for a couple of reasons.
When it was first released (sometime around 2009 as I recall), Data Modeler was an additional cost option. However, that didn’t last long.
At present (and for a number of years now), it is available either as a completely standalone tool, or as a fully integrated component of the SQLDeveloper IDE.
Either way, it costs exactly the same as the SQLDeveloper IDE – i.e. nothing.

I can tell you like the price, want to take it for a spin ?

I’m going to focus here on using the integrated version of Data Modeler. This is because

  • I want to use it for small-scale modelling of the type you might expect to find when using an Agile Methodology
  • I’m a developer and don’t want to leave the comfort of my IDE if I don’t need to

What I’m going to cover is :

  • Viewing a Table Relationship Diagram (TRD) for an existing database table
  • Creating a Logical Data Model and Entity Relationship Diagram (ERD)
  • Generating a physical model from a logical model
  • Generating DDL from a Physical Model (including some scripting tweaks to suit your needs)
  • Using a Reporting Schema and pre-canned SQLDeveloper Reports to explore your models

Disclaimer
This post is about introducing the features of Data Modeler in the hope that you may find them useful.
It’s not intended as a paragon of data modelling virtue.
Come to that, it’s not intended as a definitive guide on how to use this tool. I’m no expert with Data Modeler (as you are about to find out). Fortunately, there are people out there who are.
If, after reading this, you want to explore further, then you could do worse than checking out words of Data Modeler wisdom from :

Let’s get started…

The Model Tab

Say I’m connected to the database as HR and I’m looking at the DEPARTMENTS table.
I’d really like to see a TRD for this table.
The bad news is that the data model originally created when building the application fell into disuse long ago and is now hopelessly outdated.
The good news is that I’m using a recent version of SQLDeveloper ( 18.1, since you ask), so I just need to navigate to the DEPARTMENTS table in the tree, hit the Model Tab and wait a few seconds…

That’s Data Modeler doing its thing – in this case, reading information from the Data Dictionary and building the resulting diagram on the fly.
Note that, you may wait some time for the tab to initialize and then be greeted with an apparently empty space. I’ve found that this is especially true when looking at Fact tables in a star-schema.
The tables are there, just not where you can see them.
If you want to explore the diagram, tweak the layout, or even just export it to a file, you can click on the Copy to Data Modeler button :

Incidentally, if you do have a diagram where nothing is visible, you should now be able to use the Fit Screen button to make sure you can see everything.

You should then be able to drag objects around and amend the layout to suit your purposes.
Once you’re finished, if you’d like to save the diagram into a separate file, you can use the right-click menu and select Print Diagram :

Whilst it’s extremely useful to be able to generate an accurate and up-to-date TRD from the Data Dictionary, where Data Modeler comes into its own is when you want to …er…create a Data Model.

Creating a Data Model for a new Application

The traditional approach to Data Modelling was to create a full sized, complete model before moving on to write the rest of the application. The extended timelines for this approach may not be practical if you are following one of the Agile development methodologies. However, having a properly designed data model is rather important for an Application running on the Oracle platform.
We can go some way to squaring this circle by creating models for sub-sections of the application in one sprint for the developers to code against in the next sprint.
This is the context in which I imagine the modelling that follows to be taking place.

In this case, I’m using my trusty World Cup Wallchart application as the example. This is a “new” application rather than being an enhancement to an existing one.
The Entities I’ve identified at this point are :

  • A COMPETITION is an international football competition, such as the World Cup or the Copa America
  • A TOURNAMENT is an instance of a COMPETITION (e.g. Russia 2018 is a World Cup Finals Tournament).
  • A TEAM is a competitor in international football and may compete in one or more TOURNAMENTs, such as Brazil in all World Cups or England (briefly) in some

Right, let’s get cracking then, starting with…

The Logical Model

The first challenge is to choose the correct Data Modeler sub-menu to open to start creating our model. There are three separate entries in the IDE – in the File menu, the View menu, and the Tools menu.

In this case, we want to open the Data Modeler Browser tree using View/Data Modeler/Browser

…which opens the browser in the left-hand pane.

Note – You can find Jeff Smith’s useful guide to the Integrated Data Modeler menus here.

If we expand the Untitled_1 node we can see some items which may be of interest…

We can now right-click the Logical Model node and select Show from the pop-up menu and we can see that a Diagram area, together with a widget toolbar appears…

We need to use the toolbar widgets to create our model objects.
The toolbar looks like this :

The widgets are :

  • Select
  • New Entity
  • New View
  • New M:N Relation
  • New 1:N Relation
  • New 1:N Relation Identifying
  • New Type Substitution
  • New Arc
  • Add Foreign Key to Arc
  • Remove Foreign Key from Arc
  • New Note
  • New Picture
  • Delete
  • Engineer to Relational Model
  • Zoom in
  • Zoom Out
  • Fit Screen
  • Default Size
  • Search

For now, we’re going to click on New Entity and then click in the diagram area, which presents us with :

Our first Entity is COMPETITION. We specify this in the Name field in the General Tab.
Next we select Attributes section of the Entity Properties so that we can start specifying the COMPETITION attributes.

To add an attribute, simply click the green plus button in the Attributes toolbar that appears and then fill in the details.
In this case, I’m adding a column called COMP_CID which is a VARCHAR with a maximum length of 25 characters.
The value in this column will uniquely identify an instance of a COMPETITION so I’ve ticked the Primary UID box.
I’ve also selected the Comments in RDBMS Tab and added a comment for what will ultimately become a column in the finished database table.
Finally, I’ve hit the Apply button and can now see this :

If we now navigate to the Unique Identifiers section we can see that a Primary Key has been created for us

Once we’ve returned to Attributes and added a few more, the finishing touch for this entity is to navigate to the Comments in RDBMS section.
This time, we’re entering comments for what will ultimately be the database table :

Once we’re finished, we can now see our finished entity in the diagram :

Having added the other Entities, our model now looks like this :

We can now save the design by going to the File menu and clicking Data Modeler/Save.
We need to save the design as a file with a .dmd extension.
Notice that once you’ve saved the file, the name of the Untitled_1 node in the Data Modeler browser changes to the base name of the file you’ve just created.
Notice also that at no point have we been connected to the database during the creation of our model.
In fact, it’s quite possible to progress from a Logical Model to a Physical model and even generate the DDL required to implement it without connecting to a database at any point.
However, SQLDeveloper does offer some useful Data Modeler reports which we can take advantage of if we decide to create a Reporting Schema.

Using a Reporting Schema

To start with, we need to create the Reporting Schema, in this case, DM_REPORT_REPOS :

set verify off
accept passwd prompt 'Password for DM_REPORT_REPOS : ' hide
create user dm_report_repos identified by &passwd
/

alter user dm_report_repos default tablespace users
/

alter user dm_report_repos quota unlimited on users
/

Now we need to edit and run the Reporting_Schema_Permissions.sql script, which is located in the datamodeler\datamodeler\reports folder.

In my case, running on an Ubuntu client with SQLDeveloper installed in /opt/sqldeveloper181, the path to this script is :

/opt/sqldeveloper181/sqldeveloper/sqldeveloper/extensions/oracle.datamodeler/reports/Reporting_Schema_Permissions.sql

Before running the script, we need to edit the file and replace its two placeholders with the desired values :

  • the repository schema placeholder – the schema to hold the reporting repository (DM_REPORT_REPOS in this example)
  • the directory placeholder – a directory on the database server

Incidentally, I’ve also created a new directory on the Operating System so that when the script creates the directory object in the database, it will be pointing to an existing directory on the OS :

sudo su oracle
mkdir /u01/app/oracle/dm_reports

With my changes, the script now looks like this :

CREATE OR REPLACE DIRECTORY OSDDM_REPORTS_DIR AS '/u01/app/oracle/dm_reports';
GRANT READ, WRITE ON DIRECTORY OSDDM_REPORTS_DIR TO dm_report_repos;
GRANT CREATE SESSION TO dm_report_repos;
GRANT RESOURCE TO dm_report_repos;
GRANT CREATE TABLE TO dm_report_repos;
GRANT CREATE SEQUENCE TO dm_report_repos;
GRANT CREATE VIEW TO dm_report_repos;
GRANT CREATE PROCEDURE TO dm_report_repos;
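
Once the script has run, we can sanity-check the directory object with a quick query ( run as a suitably privileged user) :

select directory_name, directory_path
from dba_directories
where directory_name = 'OSDDM_REPORTS_DIR'
/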

Once we’ve created the Repository schema and added a connection for it in SQLDeveloper, we can then report on our model.
In order to do this, we first need to export our model into the repository.

So, go to the File Menu and select Data Modeler/Export/To Reporting Schema.

You should be rewarded with :

Select the DM_REPORT_REPOS connection and hit OK. You should eventually get :

If the Reports Tree is not already visible, open it by selecting View/Reports.

If you now expand the Data Modeler Reports node, you’ll see a number of pre-built reports available.
For example, I can see which of my Logical entities are missing relationships :

It’s probably worth remembering that, after you’ve made changes to your data model, you will need to export it again to the reporting repository for those changes to be reflected in the reports.

Anyway, it’s clear that we need to finish off our Logical model with some relations.

First, we create a 1:N relationship between COMPETITION and TOURNAMENT by clicking on the 1:N widget, then clicking in the COMPETITION entity and then in the TOURNAMENT entity.
Once the line appears on the diagram we can then name the relationship :

If we now go into the TOURNAMENT entity’s properties, we can see that the COMP_CID column has been added automatically :

We can now also add it to TOURNAMENT’s existing Unique Key :

One final point to note is that, although it’s not yet apparent, we are going to have a Surrogate Key for TOURNAMENT :

Finally, we’re going to add a Many-to-Many ( M:N) relationship between TEAM and TOURNAMENT, which leaves our logical model looking like this :

Let’s see how smart Data Modeler is when we move on to…

Engineering a Relational Model from the Logical Model

With the Logical Model Diagram displayed, if we hit the Engineer to Relational Model button in the toolbar we’ll get this window :

Now hit the Engineer button and you should see :

The main things that jump out when looking at this are :

  • the layout could do with some work
  • TOURNAMENT has had a TOURNAMENT_ID column generated, which is now its Primary Key
  • a join table – tourn_team_mm – has been generated to resolve the many-to-many relationship

Using options on the right-click menu, or even just dragging objects around, you can adjust the diagram to be more to your liking.
In this case I’ve used the Layout/Resize Objects to Visible option as well so that we can see everything for each table :


Before we can turn this into physical database objects, we probably want to do a bit of tweaking…

Refining the Relational Model Templates

First of all, we want to make some changes to our table and column names.
By convention, Entity names in a Logical model are singular, but once they become tables, they become plural.
This may not be a convention you necessarily feel obliged to follow, but it does give me the chance to demonstrate one method of changing this in the Data Modeler.
The other thing we want to do is to change column names which include a table name ( e.g. the TOURNAMENT_ID column in the TOURNAMENT table) to use an abbreviation instead.
To do this, we first need to edit the template used to generate names for these objects.
In the Data Modeler Tree, right-click the Design and select Properties.

Expand the tree in the left-hand side of the pop-up window and you should get to the Templates :

Using the Add Variable buttons we can amend these templates to look something like this :

I’ve also specified abbreviations for each of the tables. For example :

In order to implement our improved object naming we need to right-click the relational model and then click Apply Naming Standards to Keys and Constraints

Hit OK and…

So, the TOURNAMENT_ID column has now been renamed to TOURN_ID. However, there are still some things that I’ll need to change manually ( probably because I haven’t figured out the proper way to do it).
Bear with me, I’ll be back in a minute…

Right, that’s better…

Now, let’s sort out those table names.

Transformation Scripts

First off, I’d like to make all of my table names uppercase.

One way of doing this, as described by Heli, is to use a pre-supplied transformation script.
To do this, go to the Tools menu and select Data Modeler/Design Rules and Transformations/Transformations.
Then simply select the appropriate script ( in this case, Tables to upper case – Rhino) and hit Apply.

The table names should now show in uppercase on this diagram.

I also wanted to pluralize the names ( e.g. COMPETITIONS rather than COMPETITION).

Whilst there are a number of ways to do this, we’re going to do a little light hacking to produce our own Transformation script to accomplish this.

So, back to the menu and select Tools/Data Modeler/Design Rules and Transformations/Transformations.

If I hit the Green Plus button, I can then add a script of my own.

Now, I know that my model contains table names which can all be pluralized simply by adding an “S”. I also know that this will not make any of the table names too long ( i.e. over 30 characters).
This makes the script fairly simple.

// Append an "S" to the name of every table in the Relational model
tables = model.getTableSet().toArray();
for (var t = 0; t < tables.length; t++) {
    table = tables[t];
    name = table.getName() + "S";
    table.setName(name);
    // flag the table as modified so that the change is picked up
    table.setDirty(true);
}

I’ve named the script Append upper case ‘S’ to table names. I’ve also specified the object as relational and the engine as Oracle Nashorn.
Finally, I hit Save :

Now hit the Apply button and…

Table DDL Transformation Script

We’re almost ready to generate the DDL scripts to translate our Relational Model into a Physical one.
As I’m planning to use Editions in this application, I need to make sure that each table has an Editioning View created.
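
Incidentally, Editioning Views can only be created in a schema that has editions enabled. If that’s not already the case for your application owner ( I’m using the hypothetical name APP_OWNER here), it’s a one-liner :

alter user app_owner enable editions
/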

So, once more unto the Tools/Data Modeler/Design Rules and Transformations menu and this time we select Table DDL Transformation Scripts.

If we then navigate to the After Create scripts for the Journal Tables, we come across a script that we can plagiarise…

We need to create a new Script Set by clicking the Green Plus button, filling out some details and then, in the After Create section, adding the following code :

var ddl;
var lname;
// Editions Based Redefinition View suffix
ebrSuf = "#EBR";
// pick up the relevant DDL generation settings
prompt = model.getAppView().getSettings().isIncludePromptInDDL();
useSchema = model.getAppView().getSettings().isIncludeSchemaInDDL();
// use the physical table name if the Storage Design is open,
// otherwise fall back to the relational table name
if(model.getStorageDesign().isOpen()){
    if(useSchema){
        lname = tableProxy.getLongName();
    }else{
        lname = tableProxy.getName();
    }
}else{
    if(useSchema){
        lname = table.getLongName();
    }else{
        lname = table.getName();
    }
}
if(prompt){
    ddl = "PROMPT Creating Editioning View for '"+lname+"';\n";
}else{
    ddl = "";
}
// append the Editioning View DDL for this table
ddl = ddl + "create or replace editioning view "+lname+ebrSuf+" as select * from "+lname+";\n";
ddlStatementsList.add(new java.lang.String(ddl));

Finally, Save the new Script.

Now, we can test our script on a single table ( in this case COMPETITIONS), by clicking the Test button :
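
Given the script above, the statement generated for COMPETITIONS should come out looking something like this ( assuming prompts are being included in the generated DDL) :

PROMPT Creating Editioning View for 'COMPETITIONS';
create or replace editioning view COMPETITIONS#EBR as select * from COMPETITIONS;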

Generating DDL from a Relational Model

Let’s find out if our Relational Model is ready to go out into the big wide world.
With the Relational Model Diagram in focus, hit the Generate DDL toolbar button.

Now click the Generate button.

Go to the Include Table DDL scripts tab, select our script from the drop-down, and ensure that all tables are checked :

Click OK and…

We can see that Data Modeler has found a couple of errors. Taking a closer look at the script it’s generated we can find the culprits :

The easiest way to fix this is to edit the script directly before hitting Save.

At last, we have a script containing the DDL statements required to deploy our relational model to a real database.

Conclusion

I realise that this has not been so much a stroll through the well-kept garden as a hack through the undergrowth of Data Modeler in its integrated form.
Hopefully though, it’s enough for you to consider exploring further.
After all, a well-designed relational data model is the bedrock of a good Oracle Database Application and this is the tool that could well help you achieve this…and at minimal cost.

utPLSQL 3.0 – How to have your cake and eat it

Wed, 2018-04-25 14:13

“You can’t have your cake and eat it !” This seems to be a regular refrain from the EU in the ongoing Brexit negotiations.
They also seem to be a bit intolerant of “cherry picking”.
I’ve never really understood the saying, “You can’t have your cake and eat it”.
What’s the point in having the cake unless you are going to eat it ?
Fortunately, I’m not alone in my perplexity – just ask any Brexiteer member of the British Cabinet.
For those who want to make sense of it ( the saying, not Brexit), there is a handy Wikipedia page that explains all.

When it comes to Unit Testing frameworks for PL/SQL, compromise between cake ownership and consumption is usually required.
Both utPLSQL 2.0 and ruby-plsql-spec have their good points, as well as some shortcomings.
Of course, if you want a more declarative approach to writing Unit Tests, you can always use TOAD or SQLDeveloper’s built-in tools.

Recently, a new player has arrived on the PL/SQL testing scene.
Despite its name, utPLSQL 3.0 appears to be less an evolution of utPLSQL 2.0 than a new framework all of its own.
What I’m going to do here is put utPLSQL 3.0 through its paces and see how it measures up to the other solutions I’ve looked at previously.
Be warned, there may be crumbs…

Installation and Setup

If you’re comfortable on the command line, you can follow the instructions in the utPLSQL 3.0 documentation.
On the other hand, if you’re feeling old-fashioned, you can just head over to the Project’s GitHub page and download the latest version.
At the time of writing this is 3.0.4.

The downloaded file is utPLSQL.zip.

Now to unzip it. In my case, on Ubuntu, things look like this…

unzip utPLSQL.zip
Archive:  utPLSQL.zip
980af88b62c3c75b11a8f81d6ad96d1c835021b8
   creating: utPLSQL/
  inflating: utPLSQL/CONTRIBUTING.md  
  inflating: utPLSQL/LICENSE         
 extracting: utPLSQL/VERSION         
   creating: utPLSQL/docs/
...
***snip***
...
   creating: utPLSQL/test/ut_suite_manager/
  inflating: utPLSQL/test/ut_suite_manager/test_suite_manager.pkb  
  inflating: utPLSQL/test/ut_suite_manager/test_suite_manager.pks  
   creating: utPLSQL/test/ut_utils/
  inflating: utPLSQL/test/ut_utils/test_ut_utils.pkb  
  inflating: utPLSQL/test/ut_utils/test_ut_utils.pks  

The archive will have unzipped into a directory called utPLSQL.

We now have some decisions to make in terms of how we want to install the framework.
To save a bit of time, I’m going to go with the default.
Essentially, this is the “headless” install, which creates a dedicated schema ( UT3) to own the framework objects.

DBMS_PROFILER is used by the framework to provide testing coverage statistics, more of which later.

Note that the documentation includes setup steps that provide you with a bit more control. However, if you’re happy to go with the default then you simply need to run the appropriate script as a user connected as SYSDBA…

cd utPLSQL/source
sqlplus berrym@bakeoff_tent as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Mar 6 17:26:37 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

SQL> @install_headless.sql

no rows selected

Creating utPLSQL user UT3
--------------------------------------------------------------
Installing utPLSQL v3 framework into UT3 schema
--------------------------------------------------------------
Switching current schema to UT3
--------------------------------------------------------------
Installing component UT_DBMS_OUTPUT_CACHE
Installing component UT_EXPECTATION_PROCESSOR
--------------------------------------------------------------
...
***snip***
...

Installing PLSQL profiler objects into UT3 schema
PLSQL_PROFILER_RUNS table created
PLSQL_PROFILER_UNITS table created
PLSQL_PROFILER_DATA table created
Sequence PLSQL_PROFILER_RUNNUMBER created
Installing component UT_FILE_MAPPER
--------------------------------------------------------------
...
***snip***
...
Synonym created.


Synonym created.


Synonym created.

We should now have a schema called UT3 which owns lots of database objects…

select object_type, count(*)
from dba_objects
where owner = 'UT3'
group by object_type
order by object_type
/
  
OBJECT_TYPE   COUNT(*)  
-----------   ---------
INDEX         13        
LOB            1         
PACKAGE       16        
PACKAGE BODY  16        
SEQUENCE       3         
SYNONYM       13        
TABLE          9         
TYPE          71        
TYPE BODY     53        
VIEW           2         


10 rows selected. 

One subtle difference that you may notice between utPLSQL 3.0 and its predecessor is the fact that the default application owner schema has a fairly “modest” set of privileges :

select privilege
from dba_sys_privs
where grantee = 'UT3'
/

PRIVILEGE         
---------
CREATE SESSION    
CREATE TYPE       
CREATE VIEW       
CREATE SYNONYM    
CREATE SEQUENCE   
CREATE PROCEDURE  
CREATE TABLE      
ALTER SESSION     


8 rows selected. 

However, the default password for this account is known…

SQL> connect ut3/XNtxj8eEgA6X6b6f@centos_xe
Connected.
SQL> show user
USER is "UT3"
SQL> 

Whilst it’s true that, as a testing framework, utPLSQL should be deployed only in non-production environments you may nevertheless find it prudent to lock the account immediately after installation…

alter user ut3 account lock
/

…and possibly even change the password for good measure.
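
Something like this should do it ( the password below is just a placeholder – substitute your own) :

alter user ut3 identified by "S0me0therPassword"
/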

Annotations and Matchers

There are two main component types in a utPLSQL 3.0 unit test – Annotations and Matchers.

Annotations allow the framework to identify packaged procedures as tests and (if required), group them into suites. This obviates the need for separate storage of configuration information.
Matchers are used to validate the results from a test execution.

This explanation would probably benefit from an example…

create or replace package ut3_demo
as
    -- %suite(Demonstrate Framework)
    
    -- %test(Will always pass)
    procedure perfect_cake;
    
    -- %test( Will always fail)
    procedure dontlike_cake;
  
end ut3_demo;  
/

The package begins with the suite annotation to identify it as a package that contains unit tests.

-- %suite(Demonstrate Framework)

The text in brackets displays when the test suite is executed.
The positioning of this annotation is important. It needs to be the first thing in the package after the CREATE OR REPLACE statement.
Also, as it’s a package level annotation, it needs to have one or more blank lines between it and any procedure level annotations.

Each of the procedures in the package is identified as an individual test

-- %test(Will always pass)
-- %test( Will always fail)

Once again the text will display when the test is executed.

In the package body, we can see the matchers come into play :

create or replace package body ut3_demo
as

    procedure perfect_cake is
    begin
        ut.expect( 1).to_( equal(1) );
    end;
    
    procedure dontlike_cake is
    begin
        ut.expect(1, 'Oops').to_( equal(0) );
    end;
end ut3_demo;
/

First impressions are that the code seems to have more in common with ruby-plsql-spec than it does with utPLSQL 2.0.
This impression is reinforced when we execute the tests…
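
In case you’d like to follow along, the tests can be run with ( much more on ut.run later) :

set serveroutput on size unlimited
exec ut.run('ut3_demo')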

I was going to re-introduce the Footie app at this point as I’ve used it to demonstrate all of the other PL/SQL testing frameworks I’ve looked at so far.
However, in these unprecedented times, I feel that an unprecedented (and very British) example is called for.
Therefore, I humbly present…

The Great Brexit Bake-Off Application

The application owner is one hollywoodp ( the observant among you will have already noticed that Mary Berry is the DBA)…

The application consists of some tables :

alter session set current_schema = hollywoodp
/

create table brexit_bake_off
(
    id number primary key,
    contestant varchar2(100),
    show_stopper varchar2(100),
    notes varchar2(4000)
)
/

--
-- Create an error table for bulk loads - ERR$_BREXIT_BAKE_OFF
--
exec dbms_errlog.create_error_log('brexit_bake_off');


--
-- External table for ETL process to upload records to the application
--
create table contestants_xt
(
    id number,
    contestant varchar2(100),
    show_stopper varchar2(100),
    notes varchar2(4000)
)
    organization external
    (
        type oracle_loader
        default directory my_files
        access parameters
        (
            records delimited by newline
            badfile 'contestants.bad'
            logfile 'contestants.log'
            skip 1
            fields terminated by '|'
            (
                id integer external,
                contestant char(100),
                show_stopper char(100),
                notes char(4000)
            )
        )    
        location ( 'contestants.csv')
    )
    reject limit unlimited
/

…and a package :

create or replace package bake_off as
    procedure add_contestant( 
        i_id number,
        i_contestant varchar2,
        i_show_stopper varchar2,
        i_notes varchar2 default null);

    function get_contestant_id( i_contestant varchar2)
        return number;

    function get_show_stopper( i_id number)
        return varchar2;

    procedure list_contestants( io_contestant_list in out sys_refcursor);

    procedure upload_contestants;
end bake_off;
/

create or replace package body bake_off as
    procedure add_contestant( 
        i_id number,
        i_contestant varchar2,
        i_show_stopper varchar2,
        i_notes varchar2 default null)
    is
    begin
        insert into brexit_bake_off( id, contestant, show_stopper, notes)
        values( i_id, i_contestant, i_show_stopper, i_notes);
    end add_contestant;    

    function get_contestant_id( i_contestant varchar2)
        return number
    is
        l_rtn number;
    begin
        select id
        into l_rtn
        from brexit_bake_off
        where upper(contestant) = upper(i_contestant);
        
        return l_rtn;
    exception when no_data_found then
        raise_application_error(-20900, 'This contestant is not in The Tent at the moment.');
    end get_contestant_id;
    
    function get_show_stopper( i_id number)
        return varchar2
    is
        l_rtn varchar2(100);
    begin
        select show_stopper
        into l_rtn
        from brexit_bake_off
        where id = i_id;
        
        return l_rtn;
    exception when no_data_found then
        raise_application_error(-20901, 'Soggy Bottom Error !');
    end get_show_stopper;    
        
    procedure list_contestants( io_contestant_list in out sys_refcursor)
    is
    begin
        open io_contestant_list for
        select id, contestant, show_stopper, notes
        from brexit_bake_off
        order by id;
    end list_contestants;
    
    procedure upload_contestants
    is
    begin
        insert into brexit_bake_off( id, contestant, show_stopper, notes)
        select id, contestant, show_stopper, notes
        from contestants_xt
        log errors reject limit unlimited;
    end upload_contestants;    
end bake_off;
/

Now, whilst Mr Hollywood is a renowned TV Chef, his PL/SQL coding skills do leave a little to be desired.
Also, the application in its current state is just about the minimum he could come up with to demonstrate the framework, which is, after all, why we’re here.
Therefore, I’d ask you to overlook the lack of anchored declarations etc. because, before we put the oven on, we need to make a fairly important design decision.

Where should I put my tests ?

According to the documentation, the default for utPLSQL is to have the tests located in the same schema as the code they are to run against. However, you may well have good reasons for wanting to keep the tests in a separate schema.

For one thing, you may want to ensure that the process to promote your codebase through to Test and Production environments remains consistent, without having to take specific steps to ensure that your test code doesn’t end up somewhere it shouldn’t.
Additionally, you may find it useful to create “helper” packages for your unit tests. These packages won’t themselves contain tests but will need to be treated as part of your test codebase rather than the application codebase.

If you decide to go down this route with utPLSQL, then you will have to ensure that the schema that owns your tests has the CREATE ANY PROCEDURE privilege if you want to avail yourself of the code coverage reporting provided by the framework.

This privilege does not need to be granted if the application owning schema also holds the tests.

I really would prefer to have my tests in an entirely separate schema. So, I’ve created this schema as follows :

set verify off
accept passwd prompt 'Enter password for UTP_BAKEOFF : ' hide

create user utp_bakeoff identified by &passwd
    default tablespace users
    temporary tablespace temp
/

grant create session, create view, create sequence, create table,  
    create any procedure to utp_bakeoff
/

alter user utp_bakeoff quota unlimited on users
/

--
-- Application specific grants required to generate test file for data load to
-- external table

grant read, write on directory my_files to utp_bakeoff
/

grant execute on utl_file to utp_bakeoff
/

The test schema also requires privileges on all of the Application’s database objects :

set serveroutput on size unlimited

declare
    l_priv varchar2(30);
begin
 
    for r_object in
    (
        select object_name, object_type
        from dba_objects
        where owner = 'HOLLYWOODP'
        and object_type in ('PACKAGE', 'PROCEDURE', 'SEQUENCE', 'TABLE', 'VIEW')
    )
    loop
        l_priv :=
        case r_object.object_type
            when 'PACKAGE' then 'EXECUTE'
            when 'PROCEDURE' then 'EXECUTE'
            when 'TABLE' then 'ALL'
            else 'SELECT'
        end;
        dbms_output.put_line('Granting '||l_priv||' on '||r_object.object_name);
        execute immediate 'grant '||l_priv||' on hollywoodp.'||r_object.object_name||' to UTP_BAKEOFF';
    end loop;
end;
/

Run this and we get …

Granting ALL on BREXIT_BAKE_OFF
Granting EXECUTE on BAKE_OFF
Granting ALL on ERR$_BREXIT_BAKE_OFF
Granting ALL on CONTESTANTS_XT

Finally, we’re ready to start testing our application…

Testing Single Row Operations

First, we’re going to write some tests for the BAKE_OFF.ADD_CONTESTANT procedure. So, in the utp_bakeoff schema, we create a package

create or replace package add_contestant_ut
as

    -- %suite(add_contestant)      
    -- %suitepath(brexit_bake_off.bake_off)
    
    -- helper function to generate a single contestant record
    function setup_contestant return hollywoodp.brexit_bake_off%rowtype;
    
    -- %test(Add a new contestant)
    procedure add_contestant;
    
    -- %test( Add existing contestant)
    procedure add_duplicate_contestant;
end add_contestant_ut;
/

Before we take a look at the package body, it’s worth pausing to take note of the %suitepath annotation.
This Annotation allows separate test packages to be grouped together. In this instance, I’ve defined the path as Application Name/Package Name.
Note that if you want to use this annotation then it must be on the line directly after the %suite annotation in the package header. Otherwise utPLSQL won’t pick it up.

Now for the test package body…

create or replace package body add_contestant_ut
as

    function setup_contestant return hollywoodp.brexit_bake_off%rowtype
    is
        rec_contestant hollywoodp.brexit_bake_off%rowtype;
    begin
        select nvl(max(id), 0) + 1 as id,
            'David Davis' as contestant,
            'Black Forest Gateaux' as show_stopper,
            'Full of cherries to pick' as notes
        into rec_contestant
        from hollywoodp.brexit_bake_off;
        
        return rec_contestant;
    end setup_contestant;    
    
    
    function contestant_exists( i_id in number)
        return boolean
    is
        dummy pls_integer;
    begin
        select 1
        into dummy
        from hollywoodp.brexit_bake_off
        where id = i_id;
        
        return true;
    exception when no_data_found then return false;
    end contestant_exists;
    
    -- %test(Add a new contestant)
    procedure add_contestant is
    
        rec_contestant hollywoodp.brexit_bake_off%rowtype;
    begin
        -- Test setup phase
        rec_contestant := setup_contestant;
        
        -- Test execution
        hollywoodp.bake_off.add_contestant( 
            i_id => rec_contestant.id, 
            i_contestant => rec_contestant.contestant,
            i_show_stopper => rec_contestant.show_stopper,
            i_notes => rec_contestant.notes);
            
        -- Verify result
        ut.expect( contestant_exists(rec_contestant.id)).to_( be_true() );
    end add_contestant;
    
    -- %test( Add existing contestant)
    procedure add_duplicate_contestant is
        
        rec_contestant hollywoodp.brexit_bake_off%rowtype;
    begin
        -- Test setup phase
        rec_contestant := setup_contestant;
        insert into hollywoodp.brexit_bake_off( id, contestant, show_stopper, notes)
        values( rec_contestant.id, rec_contestant.contestant, rec_contestant.show_stopper, rec_contestant.notes);
        
        -- Test execution - use a nested block as we're expecting an error...
    
        begin
            hollywoodp.bake_off.add_contestant( 
                i_id => rec_contestant.id, 
                i_contestant => rec_contestant.contestant,
                i_show_stopper => rec_contestant.show_stopper,
                i_notes => rec_contestant.notes);
            -- Validation
            ut.fail('Expected unique key violation error but none raised');
        exception when others then
            ut.expect( sqlcode).to_( equal( -1));
        end;
    end add_duplicate_contestant;
    
end add_contestant_ut;
/

The structure of the tests is quite familiar in that there are four distinct phases, the first three of which are explicit :

  • Setup – prepare the system for the test
  • Execute – run the code to be tested
  • Verify – check the result
  • Teardown – reset the system to the state it was in prior to the test being run

Note that, in this instance, we are using the default behaviour of the framework for the teardown. This involves a savepoint being automatically created prior to each test being run and a rollback to that savepoint once the test completes. Later on, we’ll have a look at circumstances where we need to handle the Teardown phase ourselves.

The first test – add_contestant – uses a helper function and a boolean matcher :

ut.expect( contestant_exists(rec_contestant.id)).to_( be_true() );

The second test is checking both that we get an error when we try to add a duplicate record and that the error returned is the one we expect, namely :

ORA-00001: unique constraint (constraint_name) violated

As we’re expecting the call to the application code to error, we’re using a nested block :

begin
    hollywoodp.bake_off.add_contestant( 
        i_id => rec_contestant.id, 
        i_contestant => rec_contestant.contestant,
        i_show_stopper => rec_contestant.show_stopper,
        i_notes => rec_contestant.notes);
    -- Validation
    ut.fail('Expected unique key violation error but none raised');
exception when others then
    ut.expect( sqlcode).to_( equal( -1));
end;

If we now run the test, we can see that our code works as expected.
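
For the record, “running the test” here is simply ( we’ll dig into ut.run properly in a little while) :

set serveroutput on size unlimited
exec ut.run('add_contestant_ut')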

Incidentally, we can also see how utPLSQL recognises the hierarchy we’ve defined in the suitepath.

Whilst this approach works just fine for single-row operations, what happens when the framework is confronted with the need for …

Testing Ref Cursor values

This is always something of an ordeal in PL/SQL test frameworks – at least, in all of the ones I’ve looked at up until now. Fortunately, utPLSQL’s equality matcher makes testing Ref Cursors as simple as you feel it really should be…

create or replace package list_contestants_ut as

    -- %suite(list_contestants)      
    -- %suitepath(brexit_bake_off.bake_off)

    -- %test( List all the contestants)
    procedure list_contestants;
end list_contestants_ut;
/

create or replace package body list_contestants_ut as
    procedure list_contestants
    is
        l_rc_expected sys_refcursor;
        l_rc_actual sys_refcursor;

    begin
    
        -- setup
        insert into hollywoodp.brexit_bake_off 
        with recs as
        (
            select nvl(max(id), 0) + 1 as id, 
                'David Davis' as contestant, 
                'Black Forest Gateau' as show_stopper, 
                'Lots of cherries' as notes 
            from hollywoodp.brexit_bake_off
            union all
            select nvl(max(id), 0) + 2, 
                'Michel Barnier', 
                'Chocolate Eclair', 
                'No cherries to pick' 
            from hollywoodp.brexit_bake_off
            union all
            select nvl(max(id), 0) + 3, 
                'Jacob Rees-Mogg', 
                'Victoria Sponge', 
                'Traditional and no need for cherries'
            from hollywoodp.brexit_bake_off
            union all
            select nvl(max(id), 0) + 4, 
                'Tony Blair', 
                'Jaffa Cake', 
                'Definitely not a biscuit and a new referendum is required to settle this'
            from hollywoodp.brexit_bake_off
        )
            select * from recs;
    
        -- Get expected results
        open l_rc_expected for 
            select id, contestant, show_stopper, notes 
            from hollywoodp.brexit_bake_off
            order by 1;

        -- execute
        hollywoodp.bake_off.list_contestants(l_rc_actual);
    
        -- Verify
        ut.expect( l_rc_actual).to_equal( l_rc_expected);
        close l_rc_actual;
        close l_rc_expected;
    end list_contestants;

end list_contestants_ut;
/

Run this and we get :

Incidentally, you may notice that the call to ut.run in this instance is a little different to what I was using previously.
There are a number of ways to execute one or more utPLSQL tests through the ut.run procedure and we’ll be taking a look at some of these in a little while.

Testing across Transaction Boundaries

In this case, we’re testing the bulk upload of records from a file into the application tables via an external table.
The load itself makes use of the LOG ERRORS clause which initiates an Autonomous Transaction in the background.
This means we’re going to need to handle the teardown phase of the tests ourselves as utPLSQL’s default rollback-to-savepoint operation will not do the job.
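
If you want to see why for yourself, here’s a minimal sketch ( separate from the test code) showing that rows written by LOG ERRORS survive a rollback :

insert into hollywoodp.brexit_bake_off( id, contestant, show_stopper, notes)
select id, contestant, show_stopper, notes
from hollywoodp.contestants_xt
log errors reject limit unlimited;

-- roll back the load itself...
rollback;

-- ...but any rejected rows are still sitting in the error table
select count(*) from hollywoodp.err$_brexit_bake_off;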

First of all, here’s a quick reminder of the BAKE_OFF.UPLOAD_CONTESTANTS procedure that we want to test :

...
procedure upload_contestants
is
begin
    insert into brexit_bake_off( id, contestant, show_stopper, notes)
    select id, contestant, show_stopper, notes
    from contestants_xt
    log errors reject limit unlimited;
end upload_contestants;    
...

As part of the setup and teardown for the test, we’ll need to do a number of file operations – i.e.

  • backup the existing data file
  • create a test file for the external table
  • remove the test file
  • move the original file (if any) back into place

As we may have other loads we want to test this way in the future, then it would seem sensible to separate the code for these file operations into a helper package :

create or replace package test_file_utils as

    function file_exists( i_dir all_directories.directory_name%type, i_fname varchar2)
        return boolean;
        
    procedure backup_file( i_dir all_directories.directory_name%type, i_fname varchar2, o_backup_fname out varchar2);

    procedure revert_file( i_dir all_directories.directory_name%type, i_fname varchar2, i_backup_fname varchar2);
end test_file_utils;
/

create or replace package body test_file_utils as

    function file_exists( i_dir all_directories.directory_name%type, i_fname varchar2)
        return boolean
    is
        fh utl_file.file_type;
        e_no_file exception;
        -- ORA-29283 is returned if file does not exist or is not accessible.
        -- If the latter then the whole thing will fall over when we try to overwrite it.
        -- For now then, we can assume that this error means "file does not exist"
        pragma exception_init(e_no_file, -29283);
    begin
        fh := utl_file.fopen( i_dir, i_fname, 'r');
        utl_file.fclose(fh);
        return true;
    exception when e_no_file then
        return false;
    end file_exists;
    
    procedure backup_file( 
        i_dir all_directories.directory_name%type, 
        i_fname varchar2, 
        o_backup_fname out varchar2)
    is
        backup_fname varchar2(100);
    begin
        backup_fname := i_fname||systimestamp||'.bak';
        utl_file.frename( i_dir, i_fname, i_dir, backup_fname);
        o_backup_fname := backup_fname;
    end backup_file;
    
    procedure revert_file( 
        i_dir all_directories.directory_name%type, 
        i_fname varchar2, 
        i_backup_fname varchar2)
    is
    begin
        -- delete i_fname - the file created for the test
        utl_file.fremove(i_dir, i_fname);
        -- if a backup filename exists then put it back
        if i_backup_fname is not null then
            utl_file.frename( i_dir, i_backup_fname, i_dir, i_fname);
        end if;
    end revert_file;
end test_file_utils;
/

Remember, as we’ve decided to hold all of our test code in a separate schema, we don’t have to worry about distinguishing this package from the application codebase itself.

Now for the test. In the package header, we’re using the rollback annotation to let utPLSQL know that we’ll look after the teardown phase manually for any test in this package :

create or replace package upload_contestants_ut as
    
    -- %suite(upload_contestants)     
    -- %rollback(manual)
    -- %suitepath(brexit_bake_off.bake_off)
    
    -- %test( bulk_upload_contestants)
    procedure upload_contestants;
end upload_contestants_ut;
/

Now for the test code itself. There’s quite a bit going on here.
In the setup phase we :

  • backup the target application table and its associated error table
  • generate the file to be uploaded
  • populate ref cursors with the expected results

In the verification phase, we use the to_equal matcher to compare the expected results refcursors with the actual results ( also ref cursors).

Finally, we reset the application to its state prior to the test being executed by :

  • removing test records from the application and error tables
  • dropping the backup tables
  • tidying up the data files

All of which looks something like this :

create or replace package body upload_contestants_ut as

    --
    -- Private helper procedures
    --
    procedure backup_tables is

        pragma autonomous_transaction;

    begin
                
        execute immediate 'create table brexit_bake_off_bu as select rowid as bu_rowid, tab.* from hollywoodp.brexit_bake_off tab';
        execute immediate 'create table err$_brexit_bake_off_bu as select rowid as bu_rowid, tab.* from hollywoodp.err$_brexit_bake_off tab';
    end backup_tables;
    
    procedure create_contestant_file( i_dir all_directories.directory_name%type, i_fname varchar2)
    is
        fh utl_file.file_type;
    begin
        fh := utl_file.fopen(i_dir, i_fname, 'w');
        utl_file.put_line(fh, 'id|contestant|show_stopper|notes');
        for r_contestant in
        (
            select nvl(max(id), 0) + 1||chr(124)||'David Davis'
                ||chr(124)||'Black Forest Gateau'||chr(124)||null||chr(124) as rec
            from hollywoodp.brexit_bake_off 
            union all
            select nvl(max(id), 0) + 2||chr(124)||'Michel Barnier'
                ||chr(124)||'Chocolate Eclair'||chr(124)||'Leave my cherries alone !'||chr(124)
            from hollywoodp.brexit_bake_off
            union all
            -- Duplicate records (by ID)
            select nvl(max(id), 0) + 1||chr(124)||'Jacob Rees-Mogg' 
                ||chr(124)||'Victoria Sponge'||chr(124)||null||chr(124)
            from hollywoodp.brexit_bake_off
            union all
            select nvl(max(id), 0) + 2||chr(124)||'Tony Blair'
                ||chr(124)||'Jaffa Cakes'||chr(124)||'Tough on brexit, tough on the causes of Brexit'||chr(124)
            from hollywoodp.brexit_bake_off
        )
        loop
            utl_file.put_line(fh, r_contestant.rec);
        end loop;
        utl_file.fflush(fh);
        utl_file.fclose(fh);
    end create_contestant_file;
    
    procedure cleardown_test_records is
    
        pragma autonomous_transaction;
    begin
        execute immediate 
            'delete from hollywoodp.brexit_bake_off
            where rowid not in (select bu_rowid from brexit_bake_off_bu)';
        
        execute immediate
            'delete from hollywoodp.err$_brexit_bake_off
            where rowid not in (select bu_rowid from err$_brexit_bake_off_bu)';
            
        commit;    
        
    end cleardown_test_records;
    
    procedure drop_backup_tables is
    
        pragma autonomous_transaction;
    begin
    
        execute immediate 'drop table brexit_bake_off_bu';
        execute immediate 'drop table err$_brexit_bake_off_bu';
    end drop_backup_tables;
    
    -- The test itself
    procedure upload_contestants
    is
    
        target_dir constant all_directories.directory_name%type := 'MY_FILES';
        fname constant varchar2(100) := 'contestants.csv';
        
        backup_fname varchar2(100);
        
        expected_load sys_refcursor;
        expected_err sys_refcursor;
        actual_load sys_refcursor;
        actual_err sys_refcursor;
    begin
        --
        -- Setup Phase
        --
        backup_tables;
        -- Backup the external table file 
        if test_file_utils.file_exists( target_dir, fname) then
            test_file_utils.backup_file( target_dir, fname, backup_fname);
        end if;
        -- Create a load file
        create_contestant_file( target_dir, fname);
        
        -- Populate the expected results
        open expected_load for 
            select *
            from hollywoodp.contestants_xt
            where upper(contestant) in ('DAVID DAVIS', 'MICHEL BARNIER')
            order by id;
            
        open expected_err for    
            select 1 as ora_err_number$, 
                id, contestant, show_stopper, notes 
            from hollywoodp.contestants_xt
            where upper( contestant) in ('JACOB REES-MOGG', 'TONY BLAIR')
            order by id;
            
        --    
        -- Execute
        --
        hollywoodp.bake_off.upload_contestants;
        
        --
        -- Verify
        --
        open actual_load for
            select *
            from hollywoodp.brexit_bake_off
            order by id;
            
        open actual_err for
            select ora_err_number$, 
                id, contestant, show_stopper, notes
            from hollywoodp.err$_brexit_bake_off
            order by id;
            
        ut.expect( expected_load).to_equal( actual_load);
        ut.expect( expected_err).to_equal( actual_err);
        
        --
        -- Teardown
        --
        cleardown_test_records;
        drop_backup_tables;
        test_file_utils.revert_file( target_dir, fname, backup_fname);

    end upload_contestants;
end upload_contestants_ut;
/

Running tests and reporting results

The framework does offer an API for executing tests programmatically. However, whilst you’re writing the tests themselves, you’ll probably want something a bit more interactive.

You can simply run all of the tests in the current schema as follows :

set serveroutput on size unlimited
exec ut.run

However, there are times when you’ll probably need to be a bit more selective.
Therefore, it’s good to know that utPLSQL will let you execute tests interactively in a number of different ways :

set serveroutput on size unlimited

alter session set current_schema = utp_bakeoff
/

-- single test passing in (package name.procedure name)
exec ut.run('add_contestant_ut.add_contestant')

-- all tests in a package (package name)
exec ut.run('add_contestant_ut')

-- all suites in a suitepath (owning schema:suitepath) 
exec ut.run('utp_bakeoff:brexit_bake_off.bake_off')
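
As far as I can tell from the documentation, a couple of further variants also work in this version – running everything owned by a schema by passing just the schema name, and invoking ut.run as a table function :

-- all tests in a schema
exec ut.run('utp_bakeoff')

-- ut.run as a table function
select *
from table(ut.run('add_contestant_ut'))
/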

If we run this for the application tests we’ve written, the output looks like this :

By default, ut.run uses the ut_documentation_reporter to format the output from the tests.
However, there are other formats available, which you can invoke by passing a reporter as the second argument to ut.run.

For example…

exec ut.run('add_contestant_ut', ut_xunit_reporter());

…outputs…

<testsuites tests="2" skipped="0" error="0" failure="0" name="" time=".002972" >
<testsuite tests="2" id="1" package="brexit_bake_off"  skipped="0" error="0" failure="0" name="brexit_bake_off" time=".002894" >
<testsuite tests="2" id="2" package="brexit_bake_off.bake_off"  skipped="0" error="0" failure="0" name="bake_off" time=".002859" >
<testsuite tests="2" id="3" package="brexit_bake_off.bake_off.add_contestant_ut"  skipped="0" error="0" failure="0" name="add_contestant" time=".00279" >
<testcase classname="brexit_bake_off.bake_off"  assertions="1" skipped="0" error="0" failure="0" name="Add a new contestant" time=".001087" >
</testcase>
<testcase classname="brexit_bake_off.bake_off.add_contestant_ut"  assertions="1" skipped="0" error="0" failure="0" name="Add existing contestant" time=".001175" >
</testcase>
</testsuite>
</testsuite>
</testsuite>
</testsuites>

By contrast, if you want something slightly more colourful…

set serveroutput on size unlimited
exec ut.run('add_contestant_ut', a_color_console => true)

…or even…

set serveroutput on size unlimited
exec ut.run(a_color_console => true)

Note that, unlike the previous executions, the a_color_console parameter is being passed by name rather than by position.

Provided your terminal supports ANSI colour codes, you are rewarded with…

Test Coverage reporting

As mentioned a couple of times already, utPLSQL does also provide coverage reporting functionality.
In this case, we’re going to look at the HTML report.

set serveroutput on size unlimited
alter session set current_schema = utp_bakeoff;
set feedback off
spool add_contestant_coverage.html
exec ut.run('add_contestant_ut', ut_coverage_html_reporter(), a_coverage_schemes => ut_varchar2_list('hollywoodp'))
spool off

Opening the file in a web browser we can see some summary information :

Clicking on the magnifying glass allows us to drill-down into individual program units :

Of course, you’ll probably want to get an overall picture of coverage in terms of all tests for the application code. In this case you can simply run :

set serveroutput on size unlimited
alter session set current_schema = utp_bakeoff;
set feedback off
spool brexit_bake_off_coverage.html
exec ut.run(ut_coverage_html_reporter(), a_coverage_schemes => ut_varchar2_list('hollywoodp'))
spool off

When we look at this file in the browser, we can see that at least we’ve made a start :

Keeping track of your Annotations

Whilst annotations provide a method of identifying and organising tests in a way that avoids the need for storing large amounts of metadata, it can be easy to “lose” tests as a result.
For example, if you have a fat-finger moment and mis-type a suitepath value, that test will not execute when you expect it to ( i.e. when you run that suitepath).

Fortunately, utPLSQL does keep track of the annotations under the covers, using the UT_ANNOTATION_CACHE_INFO and UT_ANNOTATION_CACHE tables. Despite their names, these are permanent tables :

So, if I want to make sure that I haven’t leaned on the keyboard at an inopportune moment, I can run a query like :

select aci.object_name, ac.annotation_text
from ut3.ut_annotation_cache_info aci
inner join ut3.ut_annotation_cache ac
    on ac.cache_id = aci.cache_id
    and ac.annotation_name = 'suitepath'
    and aci.object_owner = 'UTP_BAKEOFF'
order by 1
/

…which in my case returns…


OBJECT_NAME                    ANNOTATION_TEXT                         
------------------------------ ----------------------------------------
ADD_CONTESTANT_UT              brexit_bake_off.bake_off                
LIST_CONTESTANTS_UT            brexit_bake_off.bake_off                
UPLOAD_CONTESTANTS_UT          brexit_bake_off.bake_off                

Final Thoughts

I’ve tried to give some flavour of what the framework is capable of, but I’ve really just scratched the surface.
For more information, I’d suggest you take a look at the framework’s excellent documentation.

Also, Jacek Gebal, one of the authors of the framework, has shared a presentation which you may find useful.

The utPLSQL 3.0 framework is a very different beast from its predecessor. The ground-up re-write has brought it bang up to date in terms of both functionality and ease of use.
If you’re looking for a PL/SQL testing framework that’s contained entirely within the database then look no further…unless you’re allergic to cherries.

Streaming Videos with Plex on Raspberry Pi

Sun, 2018-04-08 05:52

The recent Bank Holiday weekend in England provided me with a perfect opportunity to get on with some D.I.Y.

We have a collection of movie files, which I’ve stored on an external USB hard-drive. At the moment, these files are only accessible from the smart TV it’s plugged into.
I want to be able to stream these movies to the various Connected Devices we have around the house.

Time, once again, to call on my trusty Raspberry Pi 2 b-spec, running on Raspbian Jessie.

What I’m going to do is :

  • Mount my USB Drive on the Pi
  • Install Plex Server on the Pi to facilitate streaming
  • Install Plex Client on relevant Connected Devices
  • Create a Library containing all of my movies
  • Stream a movie whilst I wait for it to stop raining

Hopefully after all that, I’ll be looking at something like this :


Before we do any of that however, we need to get the USB drive to work on the Pi…

Hardware setup

The hard drive is powered through the USB port it’s connected to.
Unfortunately, the Pi does not provide sufficient power on its own.
Therefore, we need to use a powered USB hub to connect the two.

So, we plug the USB hub into the mains and then connect it to one of the USB ports on the Pi.
Then we plug the hard-drive into the hub :


Now that’s done, we can get on with making the files on the hard-drive available to the Pi.

A word on Software versions

The first thing to note is that I’m connected as pi for all of the steps that follow.

It would also be helpful to know the version of Raspbian that’s running on the Pi.
One way of determining this is to run the following in a terminal :

cat /etc/os-release |grep 'PRETTY_NAME'

…which, in my case, returns :

PRETTY_NAME="Raspbian GNU/Linux 8 (jessie)"

If you are using a different version of Raspbian (e.g. Wheezy), you may need to amend some of the commands that follow to use the appropriate repository for your version.

Finally, before we go making any software changes, it’s probably a good idea to make sure that we’re all up-to-date on the Pi :

sudo apt-get update
sudo apt-get upgrade

Mounting the USB Drive on the Pi

In order to properly interact with the files on the external disk, we need to mount it.
First of all, we need to create a directory on which to mount the drive.

So, in the terminal…

cd /mnt
sudo mkdir usbstorage

sudo chown -R pi:pi /mnt/usbstorage
sudo chmod -R 775 /mnt/usbstorage

We should now have a directory that looks like this :

ls -l
total 4
drwxrwxr-x 2 pi pi 4096 Apr  1 13:52 usbstorage

Next, we need to know what type of filesystem the external disk is using.

To find this out we can run…

sudo blkid

Alternatively, if you only have a single USB device connected to your Pi, you can filter out some of the noise…

sudo blkid |grep /dev/sda1

In my case, I get back :

/dev/sda1: LABEL="Seagate Backup Plus Drive" UUID="EC26E71826E6E316" TYPE="ntfs" PARTUUID="945986dc-01"

The “TYPE=” part of the output identifies the file system type you will need to specify when mounting the drive.
The “/dev/sda1:” part of the output confirms that our disk is currently mounted as this device.

Note that you may need to install an additional package to be able to mount ntfs disks ( although I already had this on my Pi). To make sure, you can simply run :

sudo apt-get update
sudo apt-get install ntfs-3g -y

To test mounting the disk on the mount point we’ve just created…

sudo umount /dev/sda1
sudo mount /dev/sda1 /mnt/usbstorage

We should now be able to see the files on the disk…


In order to make sure that the disk is mounted in this way every time the Pi starts, we need to add an entry to the /etc/fstab file.

One way to specify the device to mount is to use the UUID, which you can find by running :

sudo ls -l /dev/disk/by-uuid/

…and looking for your device name (in my case /sda1) :

lrwxrwxrwx 1 root root 15 Apr  1 13:49 0A23-A597 -> ../../mmcblk0p5
lrwxrwxrwx 1 root root 15 Apr  1 13:49 5a632192-64d9-41fd-afe4-55a4d570cd8c -> ../../mmcblk0p3
lrwxrwxrwx 1 root root 15 Apr  1 13:49 accadb67-46be-46e3-9573-62f80f0cb652 -> ../../mmcblk0p6
lrwxrwxrwx 1 root root 15 Apr  1 13:49 BB43-8E16 -> ../../mmcblk0p1
lrwxrwxrwx 1 root root 10 Apr  1 14:14 EC26E71826E6E316 -> ../../sda1

The UUID is the hexadecimal number at the start of the listing ( in this case EC26E71826E6E316).

Now we edit the fstab…

sudo nano /etc/fstab

…by adding the line…

UUID=EC26E71826E6E316    /mnt/usbstorage    ntfs   nofail,uid=pi,gid=pi    0   0

To make sure that you’ve entered everything correctly you can run :

sudo mount -a

If you get any errors from this command then you will need to check your fstab changes.
Otherwise, you can go ahead and reboot by running the following ( NOTE – I’m doing this via an ssh session on another computer) :

sudo reboot

When the Pi comes back up, you should still be able to see the files on the disk in the /mnt/usbstorage directory.

Installing the Plex Server

First of all, we need to enable secure installations via https :

sudo apt-get install apt-transport-https -y

Now we need to get the key for the Plex download site…

wget -O -  https://dev2day.de/pms/dev2day-pms.gpg.key   | sudo apt-key add -

…and add the repository to the sources list on the Pi…

echo "deb https://dev2day.de/pms/ jessie main" | sudo tee /etc/apt/sources.list.d/pms.list

…which should output…

deb https://dev2day.de/pms/ jessie main

Now make sure that the plex server is included in sources…

sudo apt-get update

NOTE – this should produce some output similar to …

Get:1 http://mirrordirector.raspbian.org jessie InRelease [14.9 kB]
Hit http://archive.raspberrypi.org jessie InRelease  
Get:2 https://dev2day.de jessie InRelease [1,803 B]     
***edited for brevity***

Ign http://mirrordirector.raspbian.org jessie/rpi Translation-en_GB
Ign http://mirrordirector.raspbian.org jessie/rpi Translation-en
Fetched 9,683 kB in 32s (297 kB/s)
Reading package lists... Done

…and install it…

sudo apt-get install -t jessie plexmediaserver -y

…which produces output something like…

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  plexmediaserver-installer
The following NEW packages will be installed:
  plexmediaserver plexmediaserver-installer
0 upgraded, 2 newly installed, 0 to remove and 16 not upgraded.
Need to get 8,468 B of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://dev2day.de/pms/ jessie/main plexmediaserver-installer armhf 1.12.1.4885-1046ba85f-1 [7,520 B]
Get:2 https://dev2day.de/pms/ jessie/main plexmediaserver all 0.9.16.4.1911-ee6e505-2~jessie [948 B]
Fetched 8,468 B in 1s (7,776 B/s)   
Selecting previously unselected package plexmediaserver-installer.
(Reading database ... 132214 files and directories currently installed.)
Preparing to unpack .../plexmediaserver-installer_1.12.1.4885-1046ba85f-1_armhf.deb ...
Unpacking plexmediaserver-installer (1.12.1.4885-1046ba85f-1) ...
Selecting previously unselected package plexmediaserver.
Preparing to unpack .../plexmediaserver_0.9.16.4.1911-ee6e505-2~jessie_all.deb ...
Unpacking plexmediaserver (0.9.16.4.1911-ee6e505-2~jessie) ...
Processing triggers for systemd (215-17+deb8u7) ...
Setting up plexmediaserver-installer (1.12.1.4885-1046ba85f-1) ...
Downloading readynas package ...
######################################################################## 100.0%
Passed checksum test.
Extracting readynas.deb ...
rm: cannot remove ‘/etc/systemd/system/plexmediaserver.service.d/override.conf’: No such file or directory
Synchronizing state for plexmediaserver.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d plexmediaserver defaults
Executing /usr/sbin/update-rc.d plexmediaserver disable
insserv: warning: current start runlevel(s) (empty) of script `plexmediaserver' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `plexmediaserver' overrides LSB defaults (0 1 6).
Synchronizing state for plexmediaserver.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d plexmediaserver defaults
insserv: warning: current start runlevel(s) (empty) of script `plexmediaserver' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `plexmediaserver' overrides LSB defaults (0 1 6).
Executing /usr/sbin/update-rc.d plexmediaserver enable
Synchronizing state for plexmediaserver.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d plexmediaserver defaults
Executing /usr/sbin/update-rc.d plexmediaserver enable
Setting up plexmediaserver (0.9.16.4.1911-ee6e505-2~jessie) ...

Now to re-start the pi…

sudo reboot

Configuring the Plex server

All of this can be done via the Plex web interface.

To confirm the web address for our newly installed Plex Server…

hostname -i

In my case I get :

127.0.1.1

Therefore, I’ll need to use the following URL to access the Plex Web Interface via Epiphany :

127.0.1.1:32400/web

Note – alternatively, you can access the web interface via another computer if your pi is accessible from the network.
If the network name is raspberrypi.local, you can use :

http://raspberrypi.local:32400/web

When you first land on the page you will be asked to sign in :


At this point you can either use an existing account, or create a new one specifically for Plex.

Note that if you’re using a Google account that you have not previously used from the Pi then Plex may “hang” silently until you verify that this is a legitimate connection ( i.e. Google drop a mail to your backup mail address for you to logon and verify the connection).

The reason you need an account is eloquently explained in the next screen – i.e. to get all of that lovely metadata to make your library look pretty …


Click the Got it button and Plex will then look for your Plex server…

I don’t want to access my media from outside of my home network, so I’ve left the box unchecked and hit Next

Yes, we do want to Add Library

…for Movies, so click the Movies icon and then Next

…and Browse for Media Folder

…where we point it at the disk we mounted earlier and click Add

…before finally, clicking Done.

Plex will now go through the files at the location you’ve specified and try to match up with the appropriate metadata. It seems to do this rather well, but it doesn’t always get it right.
We’ll come back to that shortly.

One final configuration step is to set the transcoder directory to be on the external disk, where there’s more space available than on the memory card in the Pi :

Now we need to install the Plex client on our connected devices…

Installing the Plex Client

Of necessity, this section is somewhat generic. I have two devices that I want to stream to. The first is a Samsung Smart TV running Tizen.
The second is an Amazon Fire TV stick plugged into an elderly – and definitely not very smart – Toshiba.

In each case, you need to go to the relevant App Store and download the Plex client onto the device.
When you first open Plex on your TV/Stick, you’ll be instructed to use a separate device to visit :

https://plex.tv/link

…and enter the displayed 4-digit code.

Once you do this, your device will be associated with your Plex account :

…and you will then be able to see the name of your Plex server on your device.

When Plex gets perplexed

If Plex has mismatched the metadata for a given film, you can persuade it to change its mind.

For example, the actual version of A Christmas Carol that’s on the disk is the 2009 animated version. However, Plex has matched it to a different one.
To fix this, we can select Fix Match… from the menu

…and wait for Plex to present us with a list of alternatives….

…before we select the correct one…

There are, of course, several ways to do this but this method should work as a first attempt for most mismatches.

Useful Links

Addictive Tips have a rather good guide to installing Plex on a Pi, in case you get stuck following this one (or just prefer theirs).

There’s also a useful article on the Plex site about how to name your media files.

As for me, well it seems simply ages since I’ve sung along to Frozen…

ORA-06592 and the Case of the Happy Australians

Fri, 2018-01-26 14:21

Another Ashes Tour to Australia has come and gone and the home team once again hold The Urn.
For any non-cricket fans, I should probably explain.
Every four years, England sends their Men’s and Women’s Cricket Teams to Australia on a goodwill mission.
The object of the exercise is to make Australians feel good about their country as their teams inevitably triumph.

These recently concluded contests provide the theme for the following illustration of the less-than-straightforward circumstances surrounding the ORA-06592 error.
When encountering this error, you’ll probably see something like

ORA-06592: CASE not found while executing CASE statement

06592. 00000 -  "CASE not found while executing CASE statement"
*Cause:    A CASE statement must either list all possible cases or have an
           else clause.
*Action:   Add all missing cases or an else clause.

Despite this apparently definitive advice, you don’t always need to cover every possible case, or include an ELSE clause…

The buggy code

Say we have the following table and records…

create table ashes_big_inns (
    batter varchar2(50),
    team varchar2(10),
    runs number,
    not_out_ind varchar2(1))
/

insert into ashes_big_inns( batter, team, runs, not_out_ind)
values('STEVE SMITH', 'AUSTRALIA', 239, 'N')
/

insert into ashes_big_inns( batter, team, runs, not_out_ind)
values('ELLYSE PERRY', 'AUSTRALIA', 213, 'Y')
/

insert into ashes_big_inns( batter, team, runs, not_out_ind)
values('ALISTAIR COOK', 'ENGLAND', 244, 'Y')
/

commit;
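
As a quick sanity check before we start, we can confirm what’s in the table :

select batter, team, runs, not_out_ind
from ashes_big_inns
order by runs desc;

…which should show Alistair Cook’s unbeaten 244 at the top of the list, followed by the two Australians.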

Now, we may want to celebrate these achievements by means of the following :

set serveroutput on size unlimited
declare
    message varchar2(100);
begin
    for r_player in (select team from ashes_big_inns) loop
        case r_player.team 
            when 'AUSTRALIA' then message := q'[C'mon Aussie, C'mon !]'; 
        end case;
        dbms_output.put_line(nvl(message, 'Where did that urn go ?'));
    end loop;
end;
/

Things start promisingly enough (especially if you’re Australian) …

C'mon Aussie, C'mon !
C'mon Aussie, C'mon !

…before falling apart like England’s middle order…

...
Error report -
ORA-06592: CASE not found while executing CASE statement
ORA-06512: at line 5
06592. 00000 -  "CASE not found while executing CASE statement"
*Cause:    A CASE statement must either list all possible cases or have an
           else clause.
*Action:   Add all missing cases or an else clause.

According to this error then, a CASE statement must either list all possible cases or have an else clause…

An unsolved CASE

Let’s change things around a bit. This time, we’re going to put the CASE on the right-hand side of an assignment…

set serveroutput on size unlimited
declare
    message varchar2(100);
begin
    for r_player in (select team from ashes_big_inns) loop
        message :=
            case r_player.team 
                when 'AUSTRALIA' then q'[C'mon Aussie, C'mon !]' 
            end;
        dbms_output.put_line(nvl(message, 'Where did that urn go ?'));
    end loop;
end;
/

The CASE statement is still lacking a case for the team being ‘ENGLAND’ as well as an ELSE clause. So when we run it we should get the error, right?
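
In fact, the block runs quite happily, with output along these lines :

C'mon Aussie, C'mon !
C'mon Aussie, C'mon !
Where did that urn go ?


PL/SQL procedure successfully completed.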

Well, that was unexpected.

So, on the face of it, is this…

Before we get too indignant about the seemingly blatant inaccuracy in the Oracle Error message, it’s probably worth remembering that there are two distinct processing engines at work when PL/SQL is running on an Oracle Database – SQL and PL/SQL.

Reading the Manual

According to the PL/SQL documentation for the CASE Statement :

“Without the ELSE clause, if no boolean_expression has the value TRUE, the system raises the predefined exception CASE_NOT_FOUND.”

By contrast, the SQL docs for the CASE expression say that :

“If no condition is found to be true, and an ELSE clause exists, then Oracle returns else_expr. Otherwise, Oracle returns null. ”

Let’s take a fresh look at our original effort…

set serveroutput on size unlimited
declare
    message varchar2(100);
begin
    for r_player in (select team from ashes_big_inns) loop
        case r_player.team 
            when 'AUSTRALIA' then message := q'[C'mon Aussie, C'mon !]'; 
        end case;
        dbms_output.put_line(nvl(message, 'Where did that urn go ?'));
    end loop;
exception when CASE_NOT_FOUND then
    dbms_output.put_line('I was expecting that !');
end;
/

Sure enough, when we run this, we get something like…
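
C'mon Aussie, C'mon !
C'mon Aussie, C'mon !
I was expecting that !


PL/SQL procedure successfully completed.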

From this, we can conclude that we’re executing the PL/SQL CASE statement, hence the raising of the predefined CASE_NOT_FOUND exception.
One other way of distinguishing between the PL/SQL CASE statement and the SQL CASE expression is the minor syntactic difference in their endings.
In PL/SQL you need to terminate the statement with END CASE. In SQL, you simply type END.

So, whilst the error message is correct in what it says about a CASE statement, the solution may well be to use a CASE expression instead.

Private Functions and ACCESSIBLE BY Packages in 12c

Mon, 2018-01-15 07:48

My recent post about PLS-00231 prompted an entirely reasonable question from Andrew :

“OK so the obvious question why [can’t you reference a private function in SQL] and doesn’t that defeat the objective of having it as a private function, and if so what about other ways of achieving the same goal ?”

I’ll be honest – that particular post was really just a note to self. I tend to write package members as public initially so that I can test them by calling them directly.
Once I’ve finished coding the package, I’ll then go through and make all of the helper package members private. My note was simply to remind myself that the PLS-00231 error when compiling a package usually means that I’ve referenced a function in a SQL statement and then made it private.

So, we know that a PL/SQL function can only be called in a SQL statement if it’s a schema-level object or it’s defined in the package header, because that’s the definition of a Public function in PL/SQL. Or at least it was…

In formulating an answer to Andrew’s question, it became apparent that the nature of Private functions has evolved a bit in 12c.

So, what I’m going to look at here is :

  • What are Private and Public package members in PL/SQL and why you might want to keep a package member private
  • How 12c language features change our definition of private and public in terms of PL/SQL objects
  • Hopefully provide some up-to-date answers for Andrew

Private and Public in the olden days

As most real-world PL/SQL functions are written within the context of a package, this is where we’ll focus our attention.

From the time that PL/SQL stored program units were introduced into Oracle, right up to and including 11g, the definition was simple.

A PL/SQL package member (function or procedure) was public if its specification was declared in the package header.
Otherwise, it was private.
A private package member can only be referenced from inside its own package.

A private package member might be used to encapsulate some functionality that is used in multiple places inside your package but not outside of it.
These “helper” functions tend to be quite common.
Another reason for using a private function would be to reduce clutter in the package signature. If your package is serving as an API to some business functionality, having few public members as entry points helps to ensure that the API is used as intended.

Of course, a private package member cannot be referenced in a SQL query, even from inside the package…
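
To illustrate (the package and member names here are invented for the purpose), referencing a private helper in a query inside its own package body should fail at compile time :

create or replace package pundit as
    procedure hot_take;
end pundit;
/

create or replace package body pundit as
    -- private helper : not declared in the package header
    function verdict return varchar2 is
    begin
        return 'Not this season either';
    end verdict;

    procedure hot_take is
        msg varchar2(100);
    begin
        -- referencing the private function in SQL is the problem...
        select verdict into msg from dual;
        dbms_output.put_line(msg);
    end hot_take;
end pundit;
/

…with the compiler complaining along the lines of PLS-00231: function 'VERDICT' may not be used in SQL.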

Changes in 12c and (probably) beyond

The ability to use PL/SQL constructs in SQL WITH clauses, introduced in 12c, takes some of the certainty out of our definition of public and private. For example…

with function catchphrase return varchar2 is
    begin
        return 'I have a cunning plan which cannot fail';
    end;
select catchphrase 
from dual
/

…in 12c rewards you with :

CATCHPHRASE                                       
--------------------------------------------------
I have a cunning plan which cannot fail

Possibly more significant is the ability to create packages that are usable only by certain other stored program units, using the ACCESSIBLE BY clause.

Using this new feature, we can split out our helper package members from the main package :

create or replace package baldrick_helper 
    accessible by (package baldrick)
as
    function catchphrase return varchar2;
end baldrick_helper;
/

create or replace package body baldrick_helper 
as    
    function catchphrase return varchar2
    is
    begin
        return 'I have a cunning plan which cannot fail';
    end catchphrase;
end baldrick_helper;
/
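
Incidentally, we can see the whitelist doing its job if we try to call the helper from somewhere that isn’t on the list – an anonymous block, say (a quick sketch) :

begin
    dbms_output.put_line(baldrick_helper.catchphrase);
end;
/

…which should be rejected at compile time with something like PLS-00904: insufficient privilege to access object BALDRICK_HELPER.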

As well as reducing the size of individual packages, this should also mean that we can now reference the catchphrase function directly in a SQL statement, right ? After all, it’s declared in the package header.

create or replace package baldrick 
as
    procedure cunning_plan;
end baldrick;
/

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
        select baldrick_helper.catchphrase
        into optimism
        from dual;
        
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

This compiles without error. However, when we try to run it we get :

set serveroutput on size unlimited
exec baldrick.cunning_plan;

ORA-06553: PLS-904: insufficient privilege to access object BALDRICK_HELPER
ORA-06512: at "MIKE.BALDRICK", line 5
ORA-06512: at line 1

Although the function is declared in the package header, it appears to remain private due to the use of the ACCESSIBLE BY whitelist. Therefore, if you want to reference it, you need to do it in straight PL/SQL :

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
        optimism := baldrick_helper.catchphrase;
        
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

This works as expected :

set serveroutput on size unlimited
exec baldrick.cunning_plan;

I have a cunning plan which cannot fail


PL/SQL procedure successfully completed.

Answers for Andrew

If your goal is to reference a PL/SQL package member in a SQL statement then it must be public.
In 12c this means it must be declared in the header of a package which is not defined using an ACCESSIBLE BY clause.

On the other hand, if your goal is to keep your package member private then you cannot reference it in a SQL statement.
In 12c, you do have the option of re-defining it in a with clause as mentioned earlier. However, this only works in straight SQL.
As far as code in a package is concerned, you can’t use an in-line with clause as a wrapper for the call to the private function like this…

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
        with function cheating return varchar2 is
        begin
            return baldrick_helper.catchphrase;
        end;
        select cheating
        into optimism
        from dual;
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

…because it’s not currently supported in PL/SQL.
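
If you really do want a with clause function from inside PL/SQL, one workaround is dynamic SQL, since the statement is then parsed by the SQL engine at runtime rather than the PL/SQL compiler. A sketch (with the catchphrase simply hard-coded, rather than calling the whitelisted helper) :

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
        -- WITH FUNCTION is acceptable to the SQL engine, which is what
        -- parses this statement at runtime
        execute immediate
            q'[with function cheating return varchar2 is
               begin
                   return 'I have a cunning plan which cannot fail';
               end;
               select cheating from dual]'
        into optimism;
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/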

Cloning around with VirtualBox

Wed, 2017-12-27 12:54

I’m in a slight trough of a week at the moment.
The excitement of seeing what Santa brought has begun to fade but I’ve yet to summon the requisite enthusiasm for seeing in the New Year.
So this post is really one of those little “Notes to self” so that I can save myself some time when next I need to spin up an Oracle database on a VirtualBox VM…

I’ve got a CentOS7 VM on which I’ve installed and configured Oracle XE.
Now, if I want to experiment on XE, I can simply create a temporary copy of this configuration by cloning it, thus saving myself the trouble of going through the basic configuration all over again.

Opening VirtualBox, we can see that I have a number of VMs already set up

To create an exact copy of ( clone) an existing VM, simply select it and either right-click and select Clone, or hit CTRL+O


…then provide the name of the new VM…

…and click Next.

NOTE – I usually don’t bother re-initialising the MAC address as I don’t run these VMs concurrently. If you’re planning to run the original and its clone at the same time, you should re-initialise it so that the two VMs don’t end up with the same MAC address on your network.

For the clone type, I select Full Clone as I want a completely standalone environment.

Finally, I hit the Clone button.

After a short interval, I am rewarded with an exact copy of the VM, with the new name I specified for it.
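
Incidentally, if you’d rather not click through the wizard each time, the same full clone can be scripted with VBoxManage (the VM names here are just examples) :

VBoxManage clonevm "CentOS7_XE" --name "CentOS7_XE_scratch" --register

…where --register saves you having to add the new VM to VirtualBox manually afterwards.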

Now I’m all set to do something creative and daring in my new environment…or will be after another mince pie.
