Feed aggregator

Networking 102

Pat Shuff - Mon, 2016-08-22 13:08
This week we are going to go through some basic networking tutorials. It is important to understand how to do some simple tasks before we can do more complex ones. We are going to start out by deploying an Oracle Linux instance, installing the Apache httpd service, and opening up port 80 to the world. This is a simple task but it helps us understand how to find the IP address of your instance, how to open a port in the operating system as well as in the cloud compute console, and how to connect to the instance from your desktop and from a second instance that we will spin up. Later this week we will spin up two Oracle Linux instances and configure an Apache httpd service on one, but configure it to talk only to the other instance in the cloud.

Step 1: Open the Oracle Compute Console and Create Instance. We are going to create an Oracle Linux 6.6 instance. We are not going to do anything special but accept the defaults. We will call our server WebServer1 and give it the default network connection so that we can ssh into the instance.

After a few minutes we should have a Linux instance with just port 22 open, and we can ssh into the server. We don't have an Apache web server installed yet, and even if we did, port 80 is locked down in both the operating system and the cloud networking interface.

Step 2: Connect to our instance with ssh to verify that we have command line access. We connect as opc so that we can execute commands as root. In this example we do this with the ssh command in a terminal window on macOS. We could just as easily have used PuTTY from a Windows box to make the connection.
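From a terminal the connection looks something like this; the IP address and key file name are placeholders for the values shown in your Compute Console, not values from this tutorial:

```shell
# Connect as opc using the private key whose public half was uploaded
# when the instance was created. 203.0.113.10 is a placeholder address;
# substitute the public IP reported by the Compute Console.
ssh -i ~/.ssh/cloud_key opc@203.0.113.10
```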

Step 3: Add the Apache httpd software to the Linux instance with yum. We could have downloaded the software from apache.org and installed it that way, but yum lets us do this quickly and easily in one step. You need to make sure that you logged in as opc in the previous step, because the sudo command will not work if you logged in as oracle. Note that the first time you run this command it will take a while, because yum has to download the manifests for the different kernel versions and check for dependencies. The httpd package does not need many extras, so once the manifests are downloaded the actual install should not take long.
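The install itself is a one-liner, run as opc so that sudo is allowed:

```shell
# Install Apache httpd from the standard Oracle Linux yum repository;
# -y answers the confirmation prompt automatically.
sudo yum install -y httpd
```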

Step 4: Configure the httpd software to run by editing the index.html file and starting the service. Note that at this point we can only see the service from this computer, because we still need to enable port 80 in the operating system and in the cloud service to pass requests from the client through to the operating system.
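A minimal sketch of this step on Oracle Linux 6; the page content is just the marker text we will look for from a browser later:

```shell
# Replace the default home page with a simple marker ("I am here!").
sudo bash -c 'echo "I am here!" > /var/www/html/index.html'
# Start the service now and have it start automatically on reboot.
sudo service httpd start
sudo chkconfig httpd on
# Verify locally before opening any ports to the outside world.
curl http://localhost/
```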

Step 5: Configure the cloud service to pass port 80 from the public internet to our instance. This is done in the Compute Console by clicking on the Networking tab and creating a new Security List. In this example we are going to create a new list that includes http and ssh as the protocols that we will pass through. We first create a Security List, which we will call WebServer.

Step 6: Configure port 80 as a Security Rule for the Security List that we just created. We create a rule for http and a rule for ssh, then verify that the new rules have been created. Note that our instance is still associated with the default rule; we need to change that in the next step.

Step 7: Associate our new Security List with our instance. This is done by going into the Instance tab and clicking on View Instance. We want to see which Security List is associated with our instance and change it. We are initially connected to the default list, which only contains ssh. We want to add the WebServer list and then delete the default list. The resulting association should contain only our WebServer list, which enables ssh and http. We could now easily add https or sftp if we wanted, to help maintain our web server without affecting any other instances that are using the default rule/list.

Step 8: We now need to open up the ports in the operating system. This is done by modifying the SELinux configuration and the iptables rules. We want to let traffic come into the server on port 80, so we can either turn off these services or add an iptables rule to allow everything on port 80 to pass through. We can disable all firewall rules by turning off SELinux and iptables as shown below. This is not recommended, because it opens up all ports and makes your operating system vulnerable to attack from other machines that can reach it, including machines inside the same rack that you are running in. You can either watch a video or execute the commands shown on a tutorial web site that disables SELinux and iptables. The important thing is to set SELINUX=disabled and turn off the iptables service for all of this to work.
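The quick and insecure route can be sketched as follows, assuming an Oracle Linux 6 instance where the service and chkconfig tools are in use:

```shell
# Insecure shortcut: disable SELinux and the firewall entirely.
# Make SELINUX=disabled persistent across reboots:
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Switch SELinux to permissive mode for the current session
# (a full "disabled" state takes effect after the next reboot):
sudo setenforce 0
# Stop iptables now and keep it off across reboots:
sudo service iptables stop
sudo chkconfig iptables off
```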

Step 9: To test the changes, open a browser and try to attach to the Apache server. We should be able to go to the public IP address with a simple web client and get the index.html file back with the message "I am here!" on the web page. Again, this is the insecure way of doing this. We really want to customize iptables to allow port 80 to pass and deny everything else that is not ssh.
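A sketch of that tighter iptables configuration (an assumption about how tomorrow's setup might look, not the commands used in this article):

```shell
# Allow replies to existing connections, loopback traffic, ssh (22)
# and http (80); drop everything else arriving at this host.
sudo iptables -F INPUT
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -j DROP
# Persist the rules so they survive a reboot (Oracle Linux 6):
sudo service iptables save
```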

In summary, we configured a Linux server, installed the Apache httpd service, and configured the network rules at the cloud console and at the operating system to allow traffic to pass from the public internet into our compute instance. We are blocking all traffic at the cloud interface other than ports 80 and 22. Even though it is poor practice, we disabled the firewall on the compute operating system, allowing all traffic in and using our cloud instance as the firewall. This is not good practice because other compute services in the data center can access these open ports. We will dive deeper into that tomorrow and look at turning the operating system firewall back on and configuring it properly. We will also look at inter-server communications inside the data center, which let us hide services from public access while allowing our front-end public-facing server to access them securely.

Basicfile LOBS

Jonathan Lewis - Mon, 2016-08-22 11:56

In the previous article in this mini-series I described how the option for setting freepools N when defining Basicfile LOBs was a feature aimed at giving you improved concurrency for inserts and deletes that worked by splitting the LOBINDEX into 2N sections: N sections to index the current LOB chunks by LOB id, alternating with N sections to map the reusable LOB chunks by deletion time.

In this article we’ll look a little further into the lifecycle of the LOB segment but before getting into the details I’ll just throw out a couple of consequences of the basic behaviour of LOBs that might let you pick the best match for the workload you have to deal with.

  • If you have enabled storage in row, the first 12 chunks of a LOB will be identified by the LOB locator stored in the row, so if all your LOBs are sized between 4KB and 96KB (approximately) the LOBINDEX will consist only of entries for the reusable LOB space due to deleted LOBs, even though the LOBs themselves will be stored out of line. This makes it look like a good idea to enable storage in row even when you expect all of your (smallish) LOBs to be stored out of row.
  • It’s quite nice to cache LOBs (at least for a little while) if your pattern of use means you access a specific LOB for a little while before it ceases to be interesting; but LOBs can swamp a large fraction of the buffer cache if you’re not careful. If you expect to follow this pattern of behaviour you might define a RECYCLE cache and then assign the LOB to that cache so that you get the benefits of caching while still protecting the main volume of your buffer cache.
  • Depending on the expected size of your LOBs you may have a good justification for creating a tablespace of a non-standard size for the LOB segment so that it takes fewer block reads to read the entire LOB. If (for example) you have a LOB which is always in the range of 62KB then a tablespace with a blocksize of 32KB would be a good choice because the LOB could be read with just two block reads. A fringe benefit of the non-standard block size, of course, is that you have to define a non-standard cache, which separates the LOB activity from the rest of the buffer cache. (Note: Oracle reads LOBs one chunk at a time, so the number of LOB reads – as opposed to block reads – for a 32KB chunk is the same whether the block size is 8KB or 32KB)
  • If you’re going to be logging your LOBs then remember that nocache LOBs will write entire chunks into the redo logs – think about how much extra redo this might generate: it might be better to have a small recycle cache and cache your LOBs, as cached LOBs are logged at the byte level. (You don’t want a 32KB block size with nocache and logging if your LOBs are all 33KB.)
The LOB lifetime

Before deciding on the suitability of a feature the first thing to do is define what you’re trying to achieve so that you can think realistically about where the threats may be and what tests are going to be important – so I’m going to describe a scenario, then talk about what threats might appear based on the current details I’ve given about Basicfile LOBs and freepools.

  • We have many processes inserting “small” (16KB to 24KB) LOBs concurrently in bursts during the day.
  • Typically we peak at about 20 processes inserting at the same moment, and we end up with about 100K new LOBs per day though this varies between 50K and 200K.
  • The inserts are all “insert one row; commit”.
  • The LOBs have to be kept for 28 days, after which they (the rows that hold them) are deleted by an overnight batch job.
  • The LOBs have to be logged and the database is running in archivelog mode.

As soon as you see the “aged 28 days” you might immediately think “partitioning” (though perhaps your first thought might be that restaurant in Cincinnati airport that hangs its beef to air-dry for 28 days before cooking). Unfortunately not everyone has licensed the partitioning option, so what do you have to worry about when you start to design for this requirement? (We’re also going to assume that securefiles are going to be saved for another blog mini-series.)

Clearly we should make use of multiple freepools to avoid the insert contention on the LOBINDEX. With about 20 concurrent processes we might immediately go for freepools 20, but we might decide that a smaller number like 4 or 8 is sufficient. We probably ought to do some tests to see if we can discover any penalties for larger numbers of freepools, and to see what sort of contention we get with a smaller number of freepools.

We got a hint from the previous article that when a process deletes a LOB it indexes the reusable chunks in the same freepool as it inserts LOBs – at least, that’s what seemed to happen in our little test case in the previous article. Does Oracle always follow this pattern, or will a multi-row delete, or a large number of single “delete;commit;” cycles, spread the reusable chunks evenly across all the available freepools ? If you do a single large delete do you end up with all the reusable space in one freepool – if so, does it matter or should we have multiple processes do our “big batch delete” ?

On second thoughts, my little demo showed that when you insert a LOB into freepool X and then delete it the reusable space goes into freepool X. Maybe I’ve misinterpreted the test and need to do a better test; maybe the reusable space goes into the freepool that the LOB was originally attached to, not into the freepool dictated by the process id. That would mean that a bulk delete would tend to spread the LOBs across all the freepools – which means if you used multiple processes to delete data they might cause contention on the “reusable” segments of the LOBINDEX.

If we do a single large delete and all the reusable chunks go into the same freepool what happens when we start inserting new LOBs ? If the LOB segment is “full” is it only the processes associated with that one freepool that can use the reusable space, or will EVERY process start to raid the freepool that has the only reusable space ? If the latter, then all we’ve done by using multiple freepools is postpone (by roughly 28 days) the moment when we start to get contention on our LOBINDEX.

Fortunately if we’ve made some poor choices in the original design Oracle does allow us to “rebuild freepools”, and even change the number of freepools:

alter table t1 modify lob (c1) (rebuild freepools);
alter table t1 modify lob (c1) (freepools (3));

Mind you, there is a little note on MoS that rebuilding freepools “may take some time” and locks the table in exclusive mode while it’s going on. So perhaps we should check to see how the rebuild works, and try to figure out how long it might take. A maxim for dealing with very large objects is that you really want to get it right first time because it’s hard to test the effects of change especially since you probably end up wanting to do your final tests on a backup copy of the production system.

Getting Started

I’ve specified 100K LOBs per day, sized between 16KB and 24KB, kept for 28 days – that’s about 50 GB, and I don’t really want to sit waiting for Oracle to build that much data while running 20 concurrent processes that are logging and generating archived redo log. (Especially since I may want to repeat the exercise two or three times with different numbers of freepools.) I’m going to start small and grow the scale when it’s necessary.

I’ll start with 4 concurrent processes inserting 1,000 LOBs each, sized at 12KB, with freepools 4, and I’ll rig the system very carefully so that each process uses a different freepool. After that I’ll run a single batch delete to delete the first 3,000 LOBs – I’ll pick a process that ought to use freepool 1 or 2 (i.e. not 0 or 3, the “end” freepools); then I’ll repeat the insert cycle but insert just 750 LOBs per process. At various points in this sequence of events I’ll stop and dump some index blocks and look at some stats to see if I can spot any important patterns emerging.

Once I’ve got through that cycle I’ll decide what to do next – the first set of results may produce some important new questions – but I’m guessing that I’ll probably end up repeating the “delete / insert” cycle at least one more time.

Here’s a little code to create a suitable table,

create sequence s1 cache 10000;

create table t1(
        id      number constraint t1_pk primary key,
        c1      clob
)
lob (c1)
store as basicfile (
            disable storage in row
            chunk 8k
            freepools 4
            tablespace test_8k_assm
)
;

declare
        m_v1 varchar2(32767) := rpad('x',12000,'x');
begin
        for i in 0..0 loop
                insert into t1 values (i, m_v1);
        end loop;
        commit;
end;
/

I’ve inserted a row to make sure that all the objects appear in all the right places. The code I’ve used to do this insert is a version of the code that I’m going to use for the concurrency testing but restricted to insert one row with an id of zero. In the concurrency test I’ll make use of the sequence I’ve created to act as the primary key to the table.

Having created the table I then start four more sessions, carefully ensuring that they will each pick a different freepool. To make sure I had one session per freepool I just kept connecting sessions and running a silly little check for each session’s process id (pid) until I had four that returned each of the values from 0 to 3:

select mod(pid,4) from v$process where addr = (
        select paddr from v$session where sid = (
                select sid from v$mystat where rownum = 1
        )
);

Once I had the four extra sessions set up, I issued a simple “lock table t1 in exclusive mode” from my original session then started the following script in each of the other four:

spool temp&1

declare
        m_v1 varchar2(32767) := rpad('x',12000,'x');
begin
        for i in 1..1000 loop
                insert into t1 values (s1.nextval, m_v1);
                commit;
        end loop;
end;
/

spool off

(I supplied A, B, C, and D as the first parameter to the script so that I got four sets of output, but I haven’t included the code I used to get a snapshot of the session stats, session waits, and system enqueues recorded by each session.)

First check – did I get all four freepools evenly used (which is what I had assumed would happen when I chose the 4 process ids so carefully)? I can check this by doing a block dump of the LOBINDEX root block because, with 4,001 entries, I’m (almost certainly) going to get a root block, no further branch levels, and a few dozen leaf blocks.

As with all B-tree indexes the “row directory” of the root block will contain a list of “truncated” key values that allow Oracle to search down to the correct block in the next layer of the index so I’m going to extract just the key values, and only the first column of those keys in the same way that I did with the previous article. This means every line in the following output shows you, in order, the first LOB id (with a few of them truncated) in each leaf block:

col 0; len 10; (10):  00 00 00 01 00 00 09 da fe a7
col 0; len 10; (10):  00 00 00 01 00 00 09 db 00 24
col 0; len 10; (10):  00 00 00 01 00 00 09 db 01 6f
col 0; len 10; (10):  00 00 00 01 00 00 09 db 02 ec
col 0; len  9; ( 9):  00 00 00 01 00 00 09 db 04
col 0; len 10; (10):  00 00 00 01 00 00 09 db 05 7c
col 0; len 10; (10):  00 00 00 01 00 00 09 db 07 2b
col 0; len 10; (10):  00 00 00 01 00 00 09 db 07 e0
col 0; len 10; (10):  00 00 00 01 00 00 09 db 09 8f
col 0; len  9; ( 9):  00 00 00 01 00 00 09 db 0b
col 0; len 10; (10):  00 00 00 01 00 00 09 db 0b bc

col 0; len 10; (10):  00 02 00 01 00 00 09 da fb 74
col 0; len 10; (10):  00 02 00 01 00 00 09 da fe 81
col 0; len 10; (10):  00 02 00 01 00 00 09 db 00 62
col 0; len 10; (10):  00 02 00 01 00 00 09 db 01 ad
col 0; len 10; (10):  00 02 00 01 00 00 09 db 02 94
col 0; len 10; (10):  00 02 00 01 00 00 09 db 04 11
col 0; len 10; (10):  00 02 00 01 00 00 09 db 04 f8
col 0; len 10; (10):  00 02 00 01 00 00 09 db 06 11
col 0; len 10; (10):  00 02 00 01 00 00 09 db 07 f2
col 0; len 10; (10):  00 02 00 01 00 00 09 db 08 d9
col 0; len 10; (10):  00 02 00 01 00 00 09 db 09 c0
col 0; len  9; ( 9):  00 02 00 01 00 00 09 db 0b

col 0; len 10; (10):  00 04 00 01 00 00 09 da fd fb
col 0; len 10; (10):  00 04 00 01 00 00 09 da fe 4c
col 0; len 10; (10):  00 04 00 01 00 00 09 da ff c9
col 0; len  9; ( 9):  00 04 00 01 00 00 09 db 01
col 0; len 10; (10):  00 04 00 01 00 00 09 db 01 f8
col 0; len 10; (10):  00 04 00 01 00 00 09 db 03 75
col 0; len 10; (10):  00 04 00 01 00 00 09 db 04 5c
col 0; len 10; (10):  00 04 00 01 00 00 09 db 06 3d
col 0; len  9; ( 9):  00 04 00 01 00 00 09 db 07
col 0; len 10; (10):  00 04 00 01 00 00 09 db 08 38
col 0; len 10; (10):  00 04 00 01 00 00 09 db 0a 19
col 0; len  9; ( 9):  00 04 00 01 00 00 09 db 0b

col 0; len  2; ( 2):  00 06
col 0; len 10; (10):  00 06 00 01 00 00 09 da fe d4
col 0; len 10; (10):  00 06 00 01 00 00 09 db 00 ca
col 0; len 10; (10):  00 06 00 01 00 00 09 db 03 24
col 0; len 10; (10):  00 06 00 01 00 00 09 db 05 4c
col 0; len 10; (10):  00 06 00 01 00 00 09 db 07 a6
col 0; len  9; ( 9):  00 06 00 01 00 00 09 db 0a
col 0; len 10; (10):  00 06 00 01 00 00 09 db 0c 5a

As you can see, we have the expected pattern (for 4 freepools) of entries starting with (00 00), (00 02), (00 04), and (00 06); but you might wonder why there are 11 leaf blocks for 00, 12 leaf blocks for 02 and 04, and only 8 leaf blocks for 06. We can answer the 11/12 anomaly by remembering that any branch blocks will have a “leftmost child” entry that won’t appear in the row directory – so the 12th leaf (or rather the 1st leaf) block for 00 is being pointed to by the “LMC”. But what about the missing blocks for 06 ? A treedump shows the answer:

branch: 0x1800204 25166340 (0: nrow: 44, level: 1)
   leaf: 0x1800225 25166373 (-1: nrow: 81 rrow: 81)
   leaf: 0x180022d 25166381 (0: nrow: 81 rrow: 81)
   leaf: 0x1800231 25166385 (1: nrow: 81 rrow: 81)
   leaf: 0x1800235 25166389 (2: nrow: 81 rrow: 81)
   leaf: 0x1800239 25166393 (3: nrow: 75 rrow: 75)
   leaf: 0x180023d 25166397 (4: nrow: 81 rrow: 81)
   leaf: 0x1800206 25166342 (5: nrow: 81 rrow: 81)
   leaf: 0x180020a 25166346 (6: nrow: 81 rrow: 81)
   leaf: 0x180020e 25166350 (7: nrow: 81 rrow: 81)
   leaf: 0x1800212 25166354 (8: nrow: 76 rrow: 76)
   leaf: 0x1800216 25166358 (9: nrow: 81 rrow: 81)
   leaf: 0x180021a 25166362 (10: nrow: 132 rrow: 132)

   leaf: 0x1800226 25166374 (11: nrow: 81 rrow: 81)
   leaf: 0x180022a 25166378 (12: nrow: 81 rrow: 81)
   leaf: 0x180022e 25166382 (13: nrow: 81 rrow: 81)
   leaf: 0x1800232 25166386 (14: nrow: 81 rrow: 81)
   leaf: 0x1800236 25166390 (15: nrow: 81 rrow: 81)
   leaf: 0x180023a 25166394 (16: nrow: 81 rrow: 81)
   leaf: 0x180023e 25166398 (17: nrow: 81 rrow: 81)
   leaf: 0x1800207 25166343 (18: nrow: 81 rrow: 81)
   leaf: 0x180020b 25166347 (19: nrow: 81 rrow: 81)
   leaf: 0x180020f 25166351 (20: nrow: 81 rrow: 81)
   leaf: 0x1800213 25166355 (21: nrow: 77 rrow: 77)
   leaf: 0x1800217 25166359 (22: nrow: 111 rrow: 111)

   leaf: 0x1800229 25166377 (23: nrow: 81 rrow: 81)
   leaf: 0x180022f 25166383 (24: nrow: 81 rrow: 81)
   leaf: 0x1800233 25166387 (25: nrow: 78 rrow: 78)
   leaf: 0x1800237 25166391 (26: nrow: 81 rrow: 81)
   leaf: 0x180023b 25166395 (27: nrow: 81 rrow: 81)
   leaf: 0x180023f 25166399 (28: nrow: 81 rrow: 81)
   leaf: 0x1800208 25166344 (29: nrow: 81 rrow: 81)
   leaf: 0x180020c 25166348 (30: nrow: 76 rrow: 76)
   leaf: 0x1800210 25166352 (31: nrow: 81 rrow: 81)
   leaf: 0x1800214 25166356 (32: nrow: 81 rrow: 81)
   leaf: 0x1800230 25166384 (33: nrow: 81 rrow: 81)
   leaf: 0x1800238 25166392 (34: nrow: 107 rrow: 107)

   leaf: 0x180022b 25166379 (35: nrow: 78 rrow: 78)
   leaf: 0x1800218 25166360 (36: nrow: 152 rrow: 152)
   leaf: 0x180021c 25166364 (37: nrow: 152 rrow: 152)
   leaf: 0x1800220 25166368 (38: nrow: 152 rrow: 152)
   leaf: 0x1800224 25166372 (39: nrow: 152 rrow: 152)
   leaf: 0x1800228 25166376 (40: nrow: 152 rrow: 152)
   leaf: 0x180022c 25166380 (41: nrow: 152 rrow: 152)
   leaf: 0x1800234 25166388 (42: nrow: 11 rrow: 11)

Although there are little glitches along the way the leaf blocks for freepools 0, 1, and 2 have 81 entries per leaf block, the leaf blocks for freepool 3 have 152 – it’s the difference between inserting rows at the end of an index and getting a “90/10” split compared with inserting somewhere in the middle and getting a “50/50” split. In fact, although Oracle uses the idgen1$ sequence to drive the generation of LOB ids, the way it “batches” IDs (the sequence cache size is 50) means that LOB ids can appear out of order when generated by multiple sessions – even in the same freepool: it is possible for the top freepool to suffer 50/50 splits though these are likely to appear relatively rarely. (As a side effect, the LOB ids in my table and the id column on the underlying table are out of order with respect to each other.)

You’ll notice that I’ve left blank lines in the treedump list corresponding to the breaks between the free pools (remember there is a “-1” entry in the treedump for the “leftmost child” that doesn’t appear in the row directory). This helps to explain the few leaf blocks with unusual row counts – they’re the ones at (or very near) the boundaries between freepools.

Bulk Delete

After loading some data in with concurrent inserts and multiple freepools I’m going to do a single big delete from the table to get rid of a lot of “old” data. In fact all I’ll do is delete the rows where id <= 3000. The big question is this – will a single delete put all the reusable blocks into a single freepool, or will it put the reusable space for each LOB into the freepool that the LOB was originally in, or will it find some other way to spread the reusable space evenly across all the free pools ? One freepool or many – both options have good points, both options have bad points.

Here’s what I got as the treedump after the delete:

branch: 0x1800204 25166340 (0: nrow: 72, level: 1)
   leaf: 0x1800225 25166373 (-1: nrow: 81 rrow: 0)
   leaf: 0x180022d 25166381 (0: nrow: 81 rrow: 0)
   leaf: 0x1800231 25166385 (1: nrow: 81 rrow: 0)
   leaf: 0x1800235 25166389 (2: nrow: 81 rrow: 0)
   leaf: 0x1800239 25166393 (3: nrow: 75 rrow: 0)
   leaf: 0x180023d 25166397 (4: nrow: 81 rrow: 0)
   leaf: 0x1800206 25166342 (5: nrow: 81 rrow: 0)
   leaf: 0x180020a 25166346 (6: nrow: 81 rrow: 0)
   leaf: 0x180020e 25166350 (7: nrow: 81 rrow: 22)
   leaf: 0x1800212 25166354 (8: nrow: 76 rrow: 76)
   leaf: 0x1800216 25166358 (9: nrow: 81 rrow: 81)
   leaf: 0x180021a 25166362 (10: nrow: 132 rrow: 120)

   leaf: 0x1800226 25166374 (11: nrow: 81 rrow: 0)
   leaf: 0x180022a 25166378 (12: nrow: 81 rrow: 0)
   leaf: 0x180022e 25166382 (13: nrow: 81 rrow: 0)
   leaf: 0x1800232 25166386 (14: nrow: 81 rrow: 0)
   leaf: 0x1800236 25166390 (15: nrow: 81 rrow: 0)
   leaf: 0x180023a 25166394 (16: nrow: 81 rrow: 0)
   leaf: 0x180023e 25166398 (17: nrow: 81 rrow: 0)
   leaf: 0x1800207 25166343 (18: nrow: 81 rrow: 0)
   leaf: 0x180020b 25166347 (19: nrow: 81 rrow: 0)
   leaf: 0x180020f 25166351 (20: nrow: 81 rrow: 64)
   leaf: 0x1800213 25166355 (21: nrow: 77 rrow: 77)
   leaf: 0x1800217 25166359 (22: nrow: 111 rrow: 101)

   leaf: 0x1800229 25166377 (23: nrow: 81 rrow: 0)
   leaf: 0x180022f 25166383 (24: nrow: 81 rrow: 0)
   leaf: 0x1800233 25166387 (25: nrow: 78 rrow: 0)
   leaf: 0x1800237 25166391 (26: nrow: 81 rrow: 0)
   leaf: 0x180023b 25166395 (27: nrow: 81 rrow: 0)
   leaf: 0x180023f 25166399 (28: nrow: 81 rrow: 0)
   leaf: 0x1800208 25166344 (29: nrow: 81 rrow: 0)
   leaf: 0x180020c 25166348 (30: nrow: 76 rrow: 0)
   leaf: 0x1800210 25166352 (31: nrow: 81 rrow: 0)
   leaf: 0x1800214 25166356 (32: nrow: 81 rrow: 36)
   leaf: 0x1800230 25166384 (33: nrow: 81 rrow: 81)
   leaf: 0x1800238 25166392 (34: nrow: 81 rrow: 81)
   leaf: 0x180023c 25166396 (35: nrow: 139 rrow: 139)

   leaf: 0x1800227 25166375 (36: nrow: 138 rrow: 138)
   leaf: 0x1800205 25166341 (37: nrow: 126 rrow: 126)
   leaf: 0x1800219 25166361 (38: nrow: 82 rrow: 82)
   leaf: 0x1800272 25166450 (39: nrow: 95 rrow: 95)
   leaf: 0x1800209 25166345 (40: nrow: 118 rrow: 118)
   leaf: 0x180021f 25166367 (41: nrow: 143 rrow: 143)
   leaf: 0x180020d 25166349 (42: nrow: 81 rrow: 81)
   leaf: 0x1800243 25166403 (43: nrow: 90 rrow: 90)
   leaf: 0x1800222 25166370 (44: nrow: 147 rrow: 147)
   leaf: 0x1800211 25166353 (45: nrow: 81 rrow: 81)
   leaf: 0x1800247 25166407 (46: nrow: 73 rrow: 73)
   leaf: 0x1800223 25166371 (47: nrow: 98 rrow: 98)
   leaf: 0x180026a 25166442 (48: nrow: 98 rrow: 98)
   leaf: 0x180021d 25166365 (49: nrow: 127 rrow: 127)
   leaf: 0x1800266 25166438 (50: nrow: 131 rrow: 131)
   leaf: 0x1800215 25166357 (51: nrow: 133 rrow: 133)
   leaf: 0x180026e 25166446 (52: nrow: 141 rrow: 141)
   leaf: 0x180021b 25166363 (53: nrow: 82 rrow: 82)
   leaf: 0x180024b 25166411 (54: nrow: 93 rrow: 93)
   leaf: 0x1800276 25166454 (55: nrow: 109 rrow: 109)
   leaf: 0x180024f 25166415 (56: nrow: 77 rrow: 77)
   leaf: 0x180021e 25166366 (57: nrow: 143 rrow: 143)
   leaf: 0x180027e 25166462 (58: nrow: 126 rrow: 126)
   leaf: 0x1800221 25166369 (59: nrow: 93 rrow: 93)
   leaf: 0x1800253 25166419 (60: nrow: 82 rrow: 82)
   leaf: 0x180027a 25166458 (61: nrow: 97 rrow: 97)
   leaf: 0x1800257 25166423 (62: nrow: 84 rrow: 84)

   leaf: 0x180022b 25166379 (63: nrow: 78 rrow: 0)
   leaf: 0x1800218 25166360 (64: nrow: 152 rrow: 0)
   leaf: 0x180021c 25166364 (65: nrow: 152 rrow: 0)
   leaf: 0x1800220 25166368 (66: nrow: 152 rrow: 0)
   leaf: 0x1800224 25166372 (67: nrow: 152 rrow: 0)
   leaf: 0x1800228 25166376 (68: nrow: 152 rrow: 72)
   leaf: 0x180022c 25166380 (69: nrow: 152 rrow: 152)
   leaf: 0x1800234 25166388 (70: nrow: 11 rrow: 11)

The number of leaf blocks has gone up from 44 to 70 (but that shouldn’t be too much of a surprise – index leaf block space can’t be reused until after the commit, so we were bound to grow the index to insert the entries for reusable chunks).

As before I’ve inserted a few blank lines to break the list into the separate index sections, and you can see that the first few blocks in each of the first three freepools have nrow = 81 and (typically) rrow = 0. These are the leaf blocks where all the LOB entries have been marked as deleted. There are a couple of variations – leaf block 10, for example, shows nrow = 132, rrow = 120: this is the leaf block where freepool 0 (LOB section) overlapped with freepool 1 (LOB section), and the first 12 LOBs in freepool 1 have been marked as deleted. The LOB section for freepool 3 follows the same sort of pattern, though nrow = 152 in most of the blocks.

The important detail is in leaf blocks 36 to 62 – which show nrow = rrow throughout, but with a degree of randomness as to the actual number of index entries. These are the leaf blocks that record the “reusable chunks”, and they’ve all been associated with freepool 2 (counting from zero). There are several details that combine to explain why the numbers of entries per leaf block vary so much, but I don’t want to get too distracted by them now; remember, though, that I pointed out that the LOB ids and table id column weren’t in synch with each other so part of what you’re seeing here is 50/50 leaf node splits followed by a little back-filling.

Again I’ve extracted the “col 0” values from the block dump of the root block – I won’t show all of them, I’ll just show you the entries from entries 35 to 63 so that you can see the leaf block pointers for the “reusable” section of freepool 2, and the LOB section of freepools 2 and 3:

col 0; len 10; (10):  00 04 00 01 00 00 09 db 0c 7d

col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00
col 0; len 10; (10):  00 05 57 b4 d3 7d 00 00 00 00

col 0; len 02; (02):  00 06

The starting “00 05” tells us that these are the reusable chunks associated with freepool 2 (2 * 2 + 1 = 5), and the fact that the next four bytes are identical across the entries tells you that I managed to delete my 3,000 LOBs in less than one second.
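The section-prefix arithmetic generalises; as a quick sketch of the numbering used in this mini-series (current-LOB entries for freepool n carry prefix 2n, reusable-chunk entries carry 2n+1):

```shell
# Print the LOBINDEX section prefixes for freepools 0..3:
# LOB-id entries for freepool n begin with 2n, reusable-chunk entries
# with 2n+1 - so "00 05" belongs to freepool 2's reusable section.
for n in 0 1 2 3; do
  printf "freepool %d: LOB prefix %02d, reusable prefix %02d\n" \
         "$n" $((2*n)) $((2*n+1))
done
```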

After seeing the effect of this monolithic delete you should now be asking yourself a few questions, such as:

  • Why would the Oracle developer think that this use of one freepool is a good idea ?
  • Why might it be a bad idea ?
  • What happens when we start inserting more data ?



OOW16: All Things DX

WebCenter Team - Mon, 2016-08-22 10:32

Each year at OOW, we drive the best and most innovative sessions for our attendees. This year is no different. With over 20 speaking sessions reflecting strategy and perspectives from our executives, technology experts, customers, partners, user groups and industry thought leaders; 5 live product demos; and 5 hands-on labs devoted to Oracle WebCenter, Digital Experience and our Content and Experience Management solutions, we are excited to showcase our product innovations, customer successes and vision at OOW 2016!

This Year’s Highlights:

Over 20 sessions featuring customers and partners like Cox Communications, EMO Transportation, McKesson, OMNI, PricewaterhouseCoopers LLP, Fishbowl Solutions, Hellosign, IntraSee, TekStream Solutions, and more.

A Meet the Experts session on Tuesday, Sep 20, from  6:15 p.m. - 7:00 p.m. at Moscone South—302. This is your chance to meet the gurus from the Content and Experience Management team as they discuss how to “Transform Businesses with Content and Experience Management”.

5 live product demo stations; 5 hands-on labs sessions, including Cloud labs focusing on Content and Experience Management solutions, highlighting Sites Cloud Service, Documents Cloud Service, Process Cloud Service and Mobile Cloud Service.

Innovation Awards ceremony with winners and representation from our Content and Experience Management solutions. 

OpenWorld Content and Experience Management CVC available for account teams and customers. 

Must Attend Sessions:

Don’t miss the Strategy and Vision sessions for the overall Content and Experience Management portfolio and each of the respective solution and cloud technologies. Invite your prospects and customers to attend these not-to-be-missed sessions that can help organizations plan their roadmaps. Attendees will also get an opportunity to hear from customer organizations on implementation successes.

Monday, Sep 19

  • Content and Experience Management: Roadmap and Vision  [CON7256]
    • 11:00 a.m. - 11:45 a.m. | Moscone West—2014
  • Oracle Process Cloud Service: Digital Process Apps Made Simple [HOL7613]
    • 11:00 a.m. - 12:00 p.m. | Hotel Nikko—Nikko Ballroom III (3rd Floor)
  • Content Management in the Cloud: Strategy & Roadmap [CON7257]
    • 12:30 p.m. - 1:15 p.m. | Moscone West—2014
  • Digital Experience in the Cloud: Strategy & Roadmap [CON7258] 
    • 1:45 p.m. - 2:30 p.m. | Moscone West—3000
  • Create an Employee Portal Using Oracle Documents Cloud Service/Oracle Sites Cloud Service [HOL7607]
    • 2:30 p.m. - 3:30 p.m. | Hotel Nikko—Nikko Ballroom II (3rd Floor)

Tuesday, Sep 20

  • Drive Productivity in Your Enterprise with Oracle Documents Cloud Service [HOL7611]
    • 8:00 a.m. - 9:00 a.m. | Hotel Nikko—Nikko Ballroom II (3rd Floor)
  • WebCenter Content, Imaging, Capture & Forms Recognition: Roadmap & Strategy [CON7259]
    • 4:00 p.m. - 4:45 p.m. | Moscone West—2014
  • Oracle WebCenter Digital Experience: Sites & Portal Strategy and Roadmap [CON7260]
    • 5:15 p.m. - 6:00 p.m. | Moscone West—2014

Wednesday, Sep 21

  • Deep Dive into Oracle's Content and Experience Cloud Platform [HOL7612]
    • 11:30 a.m. - 12:30 p.m. | Hotel Nikko—Nikko Ballroom II (3rd Floor)
  • Cox Enterprises Reimagines the Digital Workplace with Oracle WebCenter [CAS4789] 
    • 3:00 p.m. - 3:45 p.m. | Marriott Marquis—Golden Gate C3

Thursday, Sep 22

  • Create Rich Solution Templates with Oracle's Content and Experience Cloud Platform [HOL7616]
    • 9:45 a.m. - 10:45 a.m. | Hotel Nikko—Nikko Ballroom II (3rd Floor)

There are PaaS General Sessions and many more sessions highlighting customer successes, product deep dives, partner discussions, persona- and/or industry-based discussions, Cloud/PaaS lessons, live product demonstrations and Hands-On-Labs (HOL) sessions, so do bookmark the following link for a complete, up-to-date listing:

Note that My Schedule will be going live this week for Oracle OpenWorld. Customer and partner attendees can use My Schedule to plan and optimize their time during the conferences by building personalized conference schedules beforehand. We recommend adding the sessions mentioned above and the others of interest from the Focus On document.

Your Conference on Mobile:

Attendees with Apple iOS or Android devices can access Oracle conference information via the Oracle OpenWorld mobile app (available mid-September). Download the app from the Apple iTunes store or Google Play, and get news and updates, access your schedule and the Oracle OpenWorld Content Catalog, explore interactive maps, find exhibitors and sponsors, and discover social channels.

Social Media Communications:

We will be highlighting our key sessions and other important information on the Oracle WebCenter blog periodically. In addition, please use the following hashtags to discuss OOW on your respective channels and follow along on @oraclewebcenter!

#OOW16 | #OracleDX | #OracleDOCS | #OraclePCS | #OracleSCS | #OracleMCS 

We are looking forward to a successful #OOW16!

Analysing Social Media Data for the Lightyear Foundation - Part 1

Rittman Mead Consulting - Mon, 2016-08-22 08:49

Outside of my job at Rittman Mead, I'm fortunate enough to be involved with a project called the Lightyear Foundation. We are a charitable organisation which aims to promote the techniques and philosophies of science, working with children and teachers in the UK and Ghana. In particular, we try to exemplify the fact that fundamental science principles can be demonstrated without the need for formal labs and expensive equipment. We believe that increasing the accessibility of science will increase the prevalence of broadly applicable scientific skills such as critical thinking, communication of information, and the perception of failure.

Lightyear in Ghana

As a charity, we have need for marketing and donations and are thus required to understand our donor base, just as any business might try to understand their customers. Which brings us neatly to this blog. It is common for businesses to analyse the efficacy of marketing campaigns with regards to revenue, which in the modern world can range from TV to Twitter. The collection and analysis of this data has then formed a multi-million dollar industry in and of itself. Unfortunately as a charity, we don't have multi-millions to spend on this analysis. Fortunately, we do have a search engine, a bit of ingenuity, and some elbow grease. Also, Rittman Mead was kind enough to donate a server to tinker with.

Looking for a Wheel

Before re-inventing the wheel, it seemed like a good idea to see if any wheels existed and indeed what they looked like. As you might expect, there are a great many analytics tools aimed at social media. The problem is that most of them require some sort of payment, and the vast majority of the free ones only allow analysis of a specific source. For example, Facebook provide an extensive insights platform for your page data, but this does not access anything from Twitter.

A notable exception is Cyfe, an absolutely fantastic platform that is free to use if you don't require historic analysis (queries are limited to the last 30 days). This is a very impressive application, and I will be revisiting this in a later blog. However, historic data is essential for holistic analysis rather than simply reporting, so I'm going to take a look at how difficult it would be to build something to allow that.

Exploring the Data

With any analytical project, exploring, collecting, and processing the data is the most important and time consuming part. To start with, a few data sources were identified as being useful for the investigation:

  • Website
  • Facebook
  • Twitter
  • YouTube
  • MailChimp

This would give us information about our reach across the most important social media platforms as well as the website statistics and e-mail subscriptions. One notable omission from this blog is the donation system itself which is currently being changed. Each of the sources has an API for accessing various levels of information, with differing limitations. The first step was to try out each of the APIs and see what kind of data was retrievable.

Website Data

The easiest way to collect website data is to use Google Analytics, a free and excellent service that can be used to monitor and analyse traffic to our WordPress site. It works by embedding a small amount of JavaScript in each page, which then executes Google's tracking function, sending a variety of information about the session.

Google Analytics

The above image is an excerpt of the kind of data provided by Google Analytics, showing the number of sessions activated on the website over a period of a few months. The platform has an incredibly detailed and vast store of data, including (among others):

  • Session
  • Clicks
  • Geographic
  • User Information (although this is often missing or incomplete)
  • Referrals
  • Network
  • Browser

Naturally it also has an API, allowing online access in real time or offline access secured by a key on the server. For my initial testing, I set up an API key and wrote a python script to query the service:

>>> import lyf
>>> service = lyf.google_api('analytics', 'v3', ['https://www.googleapis.com/auth/analytics.readonly'])
>>> results = lyf.ga.query(service, '2016-07-21', '2016-07-24', 'ga:sessions', 'ga:date')
>>> for result in results:
...     print(result)
[u'20160721', u'6']
[u'20160722', u'7']
[u'20160723', u'26']

The authentication is performed using OAuth2 and is well documented. There is also a web interface for the API complete with handy auto-complete fields for each of the parameters.

While the platform is very fast and detailed, there is one limitation, which is that you can only choose up to 8 dimensions at a time for a query. For most use cases this is perfectly acceptable, but is worth noting.
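
If you ever need more dimensions than a single query allows, one workaround is to run several queries with batched dimensions and join the results on a shared dimension afterwards. A minimal sketch of the batching half (the limit of 8 is taken from the text above, not re-verified against the current API documentation):

```python
def batch_dimensions(dims, limit=8):
    # Split a long dimension list into per-query batches of at most `limit`.
    return [dims[i:i + limit] for i in range(0, len(dims), limit)]

dims = ['ga:date', 'ga:country', 'ga:city', 'ga:browser', 'ga:deviceCategory',
        'ga:source', 'ga:medium', 'ga:hour', 'ga:pagePath']
batches = batch_dimensions(dims)
assert len(batches) == 2 and len(batches[0]) == 8
```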


Facebook Data

Facebook provides their Graph API for querying data in this fashion as well as an explorer tool for testing. Authorisation can be granted and stored in the form of an API key. There is a permissions system of which the most important features are Manage Pages and Read Insights which give access to Facebook page information and reach data respectively. Some insights data is publicly available but appears to be limited to viewing a page's fanbase by country. Also, they store your data for two years, purging the rest. This means if you wish to store longer historic trends, you will need to siphon off the desired data and store it manually (we'll get to this later). Result sets are paginated, but feature a useful key-based system which allows easy retrieval of subsequent pages. As with Google, the performance of querying is very impressive, returning detailed data about 500 posts made over the last two years in under 30 seconds.
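
The key-based pagination mentioned above can be drained with a small helper. This is a sketch: `fetch` stands in for a real HTTP call (e.g. `requests.get(url).json()`), and the `paging`/`next` field names follow the Graph API's documented response shape.

```python
def fetch_all(fetch, url):
    """Collect every page of a Graph API style result set."""
    results = []
    while url:
        payload = fetch(url)
        results.extend(payload.get("data", []))
        # The Graph API supplies a fully-formed "next" link until the last page.
        url = payload.get("paging", {}).get("next")
    return results

# Offline demonstration with canned pages standing in for HTTP responses:
pages = {
    "page1": {"data": [1, 2], "paging": {"next": "page2"}},
    "page2": {"data": [3], "paging": {}},
}
assert fetch_all(pages.get, "page1") == [1, 2, 3]
```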

Once again, it was simple to produce the API HTTP requests using python's requests library. This library is able to automatically parse the returned JSON payload into a useful Python object. Below is an example of some simple queries run once the API was configured:

>>> results = lyf.fb_insights_query(['page_impressions'], 'day', since='2016-07-26')
>>> print(results['data'][0]['values'][0]['value'])

Twitter Data

Twitter is a little trickier to work with. They have a public API which can be used to interface with Twitter quite extensively, but it is not necessarily appropriate for analysis. Specifically, I was looking for metrics about reach and the number of impressions, similar to those obtained from Facebook's Insights API. These are available from Twitter's website, but only for a given time frame, as seen below.

Twitter Analytics

They do provide an analytical API for their Ads Service, but this does not appear to be publicly or freely available. Additionally, Gnip provides the official commercial outlet for analytical Twitter data.

Nevertheless, it was quite easy to get a Python integration set up for the public API by storing the consumer keys and access tokens for our Twitter account. Then I used Tweepy as a wrapper for Twitter's REST API. At the very least, follower and tweet count information is accessible, which can give some indication of popularity. Also, we'll be able to use this API to automate some tweet management if we need to at a later date.


YouTube Data

YouTube data is, as you might expect, accessible using the same Google framework as before. This allows us to access publicly available YouTube data with only a small amount of additional set up. YouTube Analytics also has a very rich data platform similar to Google Analytics which can be seen below.

YouTube Analytics

However, while it has an API, it cannot be used with a service account, the server-to-server setup I used earlier. So for now, we'll have to settle for collecting simple statistics (views and likes per video), but if we want the richer data set (watch time, demographics, audience retention) we will need to set up a live OAuth2 connection.


MailChimp Data

Last but not least, MailChimp also provides some analytics capability from within the site itself.

MailChimp Analytics

In addition, there is also a well documented API for looking at your subscription lists and campaigns. This can be simply queried using the requests library and supplying an API key.
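
One wrinkle worth knowing before querying: the MailChimp v3 endpoint depends on the data centre, which is encoded as the suffix of your API key. A small sketch (the key shown is made up, and the `requests` call is illustrative only):

```python
def mailchimp_base_url(api_key):
    # MailChimp API keys end in "-<dc>", e.g. "-us6"; requests then go to
    # https://<dc>.api.mailchimp.com/3.0, authenticated with Basic auth
    # using any username and the API key as the password.
    dc = api_key.rsplit("-", 1)[1]
    return "https://%s.api.mailchimp.com/3.0" % dc

base = mailchimp_base_url("0123456789abcdef-us6")   # hypothetical key
assert base == "https://us6.api.mailchimp.com/3.0"

# A real call would then look something like:
# resp = requests.get(base + "/lists", auth=("anystring", api_key))
```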

Once again, as well as the ability to perform maintenance tasks, a decent amount of information is available:

  • Click Rates
  • Subscription Numbers
  • Open Rates
  • Geographical Information
  • Recipients
  • Devices
  • Revenue
  • Monthly History

That wraps up this first exploratory part, just dipping my toes into the various data streams out there. It's clear that there's a lot of data available, but there is still some complexity in tying it together for meaningful analysis. The next part of this blog series will look at collecting some of this data on a single platform so it can all be queried together.

If you'd like to have a go at using the code from this blog, it's all on GitHub. Bear in mind that you will need to fill in a config.ini file from the sample, and configure it for authentication with your own social media accounts.

If you'd like to know more about how Rittman Mead can help your organisation make the most of this kind of data, please get in touch via the website.

Categories: BI & Warehousing

65 Business Jargon Phrases to Stop Using and What to Use Instead

Complete IT Professional - Mon, 2016-08-22 06:00
If you’ve worked in an office for any length of time, you’ve probably heard one of these phrases. A phrase that makes no sense at all unless you explain it. A word that is used by management in not quite the correct way. A sentence that has a bunch of words but doesn’t really say […]
Categories: Development

PeopleSoft Jolt Security

Jolt, along with Tuxedo, supports PeopleSoft web requests. Specifically, Jolt is the layer between the application server and the web server. It is also described as a Java-enabled version of Tuxedo.

When performing a PeopleSoft security audit, Integrigy reviews the PeopleSoft Jolt security settings in detail to ensure they are set per best-practice recommendations. To do this yourself, use the settings below to review your configuration. These settings should also be reviewed regularly to guard against configuration drift.



  • Disconnect Timeout: seconds to wait before disconnecting an Oracle Jolt connection. Zero (0) means no limit.
  • Send Timeout: maximum number of seconds the servlet is allowed to send a request.
  • Receive Timeout: maximum number of seconds the servlet will wait for a response.

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP


PeopleSoft Database Security

PeopleSoft Security Quick Reference

Auditing, Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

Stop Constantly Adding Disks To Your ASM Disk Groups. Resize Your ASM Disks On All-Flash Array Storage. Adding Disks Is Really “The Y2K Way.” Here’s Why.

Kevin Closson - Sun, 2016-08-21 23:29

This blog post is centered on All-Flash Array(AFA) technology. I mostly work with EMC XtremIO but the majority of my points will be relevant for any AFA. I’ll specifically call out an array that doesn’t fit any of the value propositions / methods I’m writing about in this post.

Oracle Automatic Storage Management (ASM) is a very good volume manager and since it is purpose-built for Oracle Database it is the most popular storage presentation model DBAs use today. That is not to say alternatives such as NFS (with optional Direct NFS) and simple non-clustered file systems are obsolete. Not at all. However, this post is about adding capacity to ASM disk groups in an all-flash storage environment.

Are You Adding Capacity or Adding I/O Performance?

One of the historical strengths of ASM is the fact that it supports adding a disk even though the disk group is more or less striped and mirrored (in the case of normal or high redundancy). After adding a disk to an ASM disk group there is a rebalancing of existing data to spread it out over all of the disks–including the newly-added disk(s). This was never possible with a host volume manager in, for example, RAID-10. The significant positive effect of an ASM rebalance is realized, first and foremost, in a mechanical storage environment. In short, adding a disk historically meant adding more read/write heads over your data, therefore, adding capacity meant adding IOPS capability (presuming no other bottlenecks in the plumbing).

The historical benefit of adding a disk was also seen at the host level. Adding a disk (or LUN) means adding a block device and, therefore, more I/O queues at the host level. More aggregate queue depth means more I/O can be “in-flight.”

With All-Flash Array technology, neither of these reasons for rebalance make it worth adding ASM disks when additional space is needed. I’ll just come out and say it in a quotable form:

If you have All-Flash Array technology it is not necessary to treat it precisely the same way you did mechanical storage.

It Isn’t Even A Disk

In the All-Flash Array world the object you are adding as an ASM disk is not a disk at all, and it certainly has nothing like arms, heads and actuators that need to scale out in order to handle more IOPS. All-Flash Arrays allow you to create a volume of a particular size. That’s it. You don’t toil with particulars such as what the object “looks like” inside the array. When you allocate a volume from an All-Flash Array you don’t have to think about which controller within the array, which disk shelf, nor what internal RAID attributes are involved. An AFA volume is a thing of a particular size. That’s it. These words are 100% true of EMC XtremIO and, to the best of my knowledge, most competitors’ offerings work this way as well. The notable exception is the HP 3PAR StoreServ 7450 All-Flash Array, which burdens administrators with details more suited to mechanical storage, as is clearly evident in the technical white paper available on the HP website (click here).

What About Aggregate Host I/O Queue Depth?

So, it’s true that adding a disk to an ASM disk group in the All-Flash Array world is not a way to make better use of the array–unlike an array built on mechanical storage. What about the host-level benefit of adding a block device and therefore increasing host aggregate I/O queue depth? As it turns out, I just blogged a rather in-depth series of posts on the matter. Please see the following posts where I aim to convince readers that you really do not need to assemble large numbers of block devices in order to get significant IOPS capacity on modern hosts attached to low-latency storage such as EMC XtremIO.

What’s It All Mean?

To summarize the current state of the art regarding adding disks to ASM disks groups:

  • Adding disks to ASM disk groups is not necessary to improve All Flash Array “drive” utilization.
  • Adding disks to ASM disk groups is not necessary to improve aggregate host I/O queue depth–unless your database instance demands huge IOPS–which it most likely doesn’t.

So why do so many–if not most–Oracle shops still do the old add-a-disk-when-I-need-space thing? Well, I’m inclined to say it’s because that’s how they’ve always done it. By saying that I am not denigrating anyone! After all, if that’s the way it’s always been done then there is a track record of success, and in today’s chaotic IT world I have no qualms with doing something that is proven. But loading JES3 card decks into a card reader to fire off an IBM 370 job was proven and we don’t do much of that these days.

If doing something simpler has no ill effect, it’s probably worth consideration.

If You Need More Capacity, Um, Why Not Make Your Disk(s) Larger?

I brought that up on Twitter recently and was met with a surprising amount of negative feedback. I understood the face value of the objections and that’s why I’m starting this section of the post with objection-handling. The objections all seemed to revolve around the number of “changes” involved with resizing disks in an ASM disk group when more space is needed. That is, the consensus seemed to believe that resizing, say, 4 ASM disks accounts for more “changes” than adding a single disk to 4 existing disks. Actually, adding a disk makes more changes. Please read on.

Note: Please don’t forget that I’m writing about resizing disks in an All-Flash Array like EMC XtremIO or even competitive products in the same product space.

A Scenario

Consider, for example, an ASM disk group that is comprised of 4 LUNs mapped to 4 volumes in an All Flash Array like (like XtremIO). Let’s say the LUNs are each 128GB for a disk group capacity of 512GB (external redundancy of course). Let’s say further that the amount of space to be added is another 128GB–a 25% increase and that the existing space is nearly exhausted. The administrators can pick from the following options:

  1. Add a new 128GB disk (LUN). This involves a) creating the volume in the array, b) discovering the block device on the host, c) editing udev rules configuration files for the new device, d) adding the disk to ASM and, finally, e) performing a rebalance.
  2. Resize the existing 4 LUNs to 160GB each. This involves a) modifying 4 volumes in the array to increase their size and b) discovering the block device on the host and c) updating the run-time multipath metadata (runtime command, no config file changes) and d) executing the ASM alter diskgroup resize all command (merely updates ASM metadata).
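
A quick sketch checking the scenario's arithmetic (sizes in GB; external redundancy, so disk group capacity is the plain sum of the LUN sizes):

```python
luns = [128] * 4                      # four 128GB LUNs
assert sum(luns) == 512               # the 512GB disk group

# Option 2: grow each LUN to 160GB instead of adding a fifth disk.
resized = [size + 32 for size in luns]
added = sum(resized) - sum(luns)
assert added == 128                   # the extra 128GB the scenario calls for
assert added / sum(luns) == 0.25      # the 25% increase
```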

Option #1 in the list makes a change in the array (adding a volume deducts from fixed object counts) and two Operating System changes (you are creating a block device and editing udev config files) and–most importantly–ASM will perform significant physical I/O to redistribute the existing data to fan it out from 4 disks to 5 disks.

Option #2 in the list actually makes no changes.

If doing something simpler has no ill effect, it’s probably worth consideration.

The Resizing Approach Really Involves No Changes?

How can I say resizing 4 volumes in an array constitutes no changes? OK, I admit I might be splitting hairs on this but bear with me. If you create a volume in an array you have a new object that has to be associated with the ASM disk group. This means everything from naming it to tagging it and so forth. Additionally, arrays do not have an infinite number of volumes available. Moreover, arrays like XtremIO support vast numbers of volumes and snapshots but if your ASM disk groups are comprised of large numbers of volumes it takes little time to exhaust even the huge supported limit of snapshots in a product like XtremIO. If you can take the leap of faith with me regarding the difference between creating a volume in an All-Flash Array versus increasing the size of a volume then the difference at the host and ASM level will only be icing on the cake.

The host in Option  #2 truly undergoes no changes. None. In the case study below you’ll see that resizing block devices on modern Linux hosts is an operation that involves no changes. None.

But It’s Really All About The Disruption

If you add a disk to an ASM disk group you are making storage and host changes and you are disrupting operations due to the rebalancing. On the contrary the resize disks approach is clearly free of changes and is even more clearly free of disruption. Allow me to explain.

The Rebalance Is A Disruption–And More

The prime concern about adding disks should be the overhead of the rebalance operation. But so many DBAs say they can simply lower the rebalance power limit (throttle the rebalance to lessen its toll on other I/O activity).

If administrators wish to complete the rebalance operation as quickly as possible then the task is postponed for a maintenance window. Otherwise production I/O service times can suffer due to the aggressive nature of ASM disk rebalance I/O. On the other hand, some administrators add disks during production processing and simply set the ASM rebalance POWER level to the lowest value. This introduces significant risk. If an ASM disk is added to an ASM disk group in a space-full situation the only free space for new data being inserted is in the newly added disk. The effect this has on data distribution can be significant if the rebalance operation takes significant time while new data is being inserted.

In other words, with the add-disk method administrators are a) making changes in the array, b) making changes in the Operating System and c) physically rebalancing existing data, and they are doing so either in a maintenance window or with a low rebalance power limit that likely causes data placement skew.

The resize-disk approach makes no changes and causes no disruption and is nearly immediate. It is a task administrators can perform outside maintenance windows.

What If My Disks Cannot Be Resized Because They are Already Large?

An ASM disk in 11g can be 2TB and in 12c, 4PB. Now, of course, Linux block devices cannot be 4PB but that’s what Oracle documentation says they can (obviously theoretically) be. If you have an ASM disk group where all the disks have been resized to 2TB then you have to add a disk. What’s the trade-off? Well, as the disks were being resized over time to 2TB you made no changes in the array or the operating system, and you never once suffered a rebalance operation. Sure, eventually a disk needed to be added, but that is a much less disruptive evolution for a disk group.

Case Study

The following section of this blog post shows a case study of what’s involved when choosing to resize disks as opposed to constantly adding disks. The case study was, of course, conducted on XtremIO so the array-level information is specific to that array.

Every task necessary to resize ASM disks can be conducted without application interruption on modern Linux servers attached to XtremIO storage array. The following section shows an example of the tasks necessary to resize ASM disks in an XtremIO environment—without application interruption.

Figure 1 shows a screen shot of the ASM Configuration Assistant (ASMCA). In the example, SALESDATA is the disk group that will be resized from one terabyte to two terabytes.


Figure 1

Figure 2 shows the XtremIO GUI with focus on the four volumes that comprise the SALESDATA disk group. Since all of the ASM disk space for SALESDATA has been allocated to tablespaces in the database, the Space in Use column shows that the volume space is entirely consumed.


Figure 2

Figure 3 shows the simple, non-disruptive operating system commands needed to determine the multipath device name that corresponds to each XtremIO volume. This is a simple procedure. The NAA Identifier (see Figure 2) is used to query the Device Mapper metadata. As Figure 3 shows, each LUN is 256GB and the corresponding multipath device for each LUN is reported in the left-most column of the xargs(1) output.


Figure 3

The next step in the resize procedure is to increase the size of the XtremIO volumes. Figure 4 shows the screen output just prior to resizing the fourth of four volumes from the original size of 256GB to the new size of 512GB.


Figure 4

Once the XtremIO volume resize operations are complete (these operations are immediate with XtremIO), the next step is to rescan SCSI busses on the host for any attribute changes to the underlying LUNs. As figure 5 shows, only a matter of seconds is required to rescan for changes. This, too, is non-disruptive.


Figure 5

Once the rescan has completed, the administrator can once again query the multipath devices to find that the LUNs are, in fact, recognized as having been resized as seen in Figure 6.


Figure 6

The final operating system level step is to use the multipathd(8) command to resize the multipath device (see Figure 7). This is non-disruptive as well.


Figure 7

As Figure 8 shows, the next step is to use the ALTER DISKGROUP command while attached to the ASM instance. The execution of this command is nearly immediate and, of course, non-disruptive. Most importantly, after this command completes the new capacity is available and no rebalance operation was required!


Figure 8

Finally, as Figure 9 shows, ASM Configuration Assistant will now show the new size of the disk group. In the example, the SALESDATA disk group has been resized from 1TB to 2TB in a matter of seconds—with no application interruption and no I/O impact from a rebalance operation.


Figure 9


If you have an All-Flash Array, like EMC XtremIO, take advantage of modern technology. Constantly adding disks to ASM disk groups all over your datacenter can fade into vague memory–just like loading those JES2 decks into the card reader of your IBM 370. And, yes, I’ve written and loaded JES2 decks for an IBM 370, but I don’t feel compelled to do that sort of thing any more. Just like constantly adding disks to ASM disk groups, some of the old ways are no longer the best ways.


Filed under: oracle

Introduction to data Artisans and Flink

DBMS2 - Sun, 2016-08-21 16:15

data Artisans and Flink basics start:

  • Flink is an Apache project sponsored by the Berlin-based company data Artisans.
  • Flink has been viewed in a few different ways, all of which are similar to how Spark is seen. In particular, per co-founder Kostas Tzoumas:
    • Flink’s original goal was “Hadoop done right”.
    • Now Flink is focused on streaming analytics, as an alternative to Spark Streaming, Samza, et al.
  • Kostas seems to see Flink as a batch-plus-streaming engine that’s streaming-first.

Like many open source projects, Flink seems to have been partly inspired by a Google paper.

To this point, data Artisans and Flink have less maturity and traction than Databricks and Spark. For example: 

  • The first line of Flink code dates back to 2010.
  • data Artisans and the Flink open source project both started in 2014.
  • When I met him in late June, Kostas told me that Data Artisans had raised $7 million and had 15 employees.
  • Flink’s current SQL support is very minor.

Per Kostas, about half of Flink committers are at Data Artisans; others are at Cloudera, Hortonworks, Confluent, Intel, at least one production user, and some universities. Kostas provided about 5 examples of production Flink users, plus a couple of very big names that were sort-of-users (one was using a forked version of Flink, while another is becoming a user “soon”).

The technical story at data Artisans/Flink revolves around the assertion “We have the right architecture for streaming.” If I understood data Artisans co-founder Stephan Ewen correctly on a later call, the two key principles in support of that seem to be:

  • The key is to keep data “transport” running smoothly without interruptions, delays or bottlenecks, where the relevant sense of “transport” is movement from one operator/operation to the next.
  • In this case, the Flink folks feel that modularity supports efficiency.

In particular:

  • Anything that relates to consistency/recovery is kept almost entirely separate from basic processing, with minimal overhead and nothing that resembles a lock.
  • Windowing and so on operate separately from basic “transport” as well.
  • The core idea is that special markers — currently in the ~20 byte range in size — are injected into the streams. When the marker gets to an operator, the operator snapshots the then-current state of its part of the stream.
  • Should recovery ever be needed, consistency is achieved by assembling all the snapshots corresponding to a single marker, and replaying any processing that happened after those snapshots were taken.
    • Actually, this is oversimplified, in that it assumes there’s only a single input stream.
  • A lot of Flink’s cleverness, I gather, is involved in assembling a consistent snapshot despite the realities of multiple input streams.
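The marker/snapshot mechanism can be illustrated with a toy sketch. This is not Flink code, just a minimal Python model of the idea: an operator with two input channels takes its snapshot only once it has seen the barrier for a checkpoint on both channels, buffering records from any channel that is already past the barrier until alignment completes.

```python
# Toy sketch of barrier-aligned snapshotting (illustrative, not Flink code).
# An operator snapshots its state only after the barrier has arrived on
# *all* input channels; records on an already-barriered channel are held
# back so the snapshot stays consistent across inputs.

BARRIER = object()  # stands in for the ~20-byte marker injected into the stream

class Operator:
    def __init__(self, num_inputs):
        self.num_inputs = num_inputs
        self.state = 0          # running sum, a stand-in for operator state
        self.blocked = set()    # channels whose barrier has already arrived
        self.buffered = []      # records held back during alignment
        self.snapshots = []     # consistent snapshots taken so far

    def process(self, channel, item):
        if item is BARRIER:
            self.blocked.add(channel)
            if len(self.blocked) == self.num_inputs:  # alignment complete
                self.snapshots.append(self.state)     # take consistent snapshot
                self.blocked.clear()
                for rec in self.buffered:             # release held-back records
                    self.state += rec
                self.buffered.clear()
        elif channel in self.blocked:
            self.buffered.append(item)  # channel is past the barrier: hold back
        else:
            self.state += item

op = Operator(num_inputs=2)
for channel, item in [(0, 1), (1, 2), (0, BARRIER), (0, 5), (1, 3), (1, BARRIER)]:
    op.process(channel, item)

print(op.snapshots)  # [6] -> the snapshot excludes the post-barrier record 5
print(op.state)      # 11
```

On recovery, every operator would restore the snapshot for one marker and replay anything processed after it, which is the consistency story described above.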

The upshot, Flink partisans believe, is to match the high throughput of Spark Streaming while also matching the low latency of Storm.

The Flink folks naturally have a rich set of opinions about streaming. Besides the points already noted, these include:

  • “Exactly once” semantics are best in almost all use cases, as opposed to “at least once”, or to turning off fault tolerance altogether. (Exceptions might arise in extreme performance scenarios, or because of legacy systems’ expectations.)
  • Repetitive, scheduled batch jobs are often “streaming processes in disguise”. Besides any latency benefits, reimplementing them using streaming technology might simplify certain issues that can occur around the boundaries of batch windows. (The phrase “continuous processing” could reasonably be used here.)

We discussed joins quite a bit, but this was before I realized that Flink didn’t have much SQL support. Let’s just say they sounded rather primitive even when I assumed they were done via SQL.

Our discussion of windowing was more upbeat. Flink supports windows based either on timestamps or data arrival time, and these can be combined as needed. Stephan thinks this flexibility is important.
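The timestamp-versus-arrival-time distinction is easy to see with a toy tumbling-window assignment. This is a hypothetical sketch, not Flink's API: the same events land in different windows depending on which notion of time is used.

```python
# Toy tumbling-window assignment (hypothetical sketch, not Flink's API).
# Each event carries an event timestamp and an arrival (processing) time;
# the window an event lands in depends on which time notion is used.
from collections import defaultdict

def tumbling_windows(events, size, time_key):
    windows = defaultdict(list)
    for ev in events:
        start = (ev[time_key] // size) * size  # start of this event's window
        windows[start].append(ev["value"])
    return dict(windows)

events = [
    {"value": "a", "event_time": 3,  "arrival_time": 4},
    {"value": "b", "event_time": 9,  "arrival_time": 21},  # arrived late
    {"value": "c", "event_time": 14, "arrival_time": 15},
]

print(tumbling_windows(events, 10, "event_time"))    # {0: ['a', 'b'], 10: ['c']}
print(tumbling_windows(events, 10, "arrival_time"))  # {0: ['a'], 20: ['b'], 10: ['c']}
```

The late event "b" is what makes the flexibility matter: by event time it belongs with "a", by arrival time it is in a window of its own.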

As for Flink use cases, they’re about what you’d expect:

  • Plenty of data transformation, because that’s how all these systems start out. Indeed, the earliest Flink adoption was for batch transformation.
  • Plenty of stream processing.

But Flink doesn’t have all the capabilities one would want for the kinds of investigative analytics commonly done on Spark.

Related links

Categories: Other

More about Databricks and Spark

DBMS2 - Sun, 2016-08-21 15:36

Databricks CEO Ali Ghodsi checked in because he disagreed with part of my recent post about Databricks. Ali’s take on Databricks’ position in the Spark world includes:

  • What I called Databricks’ “secondary business” of “licensing stuff to Spark distributors” was really about second/third tier support. Fair enough. But distributors of stacks including Spark, for whatever combination of on-premise and cloud as the case may be, may in many cases be viewed as competitors to Databricks’ cloud-only service. So why should Databricks help them?
  • Databricks’ investment in Spark Summit and similar evangelism is larger than I realized.
  • Ali suggests that the fraction of Databricks’ engineering devoted to open source Spark is greater than I understood during my recent visit.

Ali also walked me through customer use cases and adoption in wonderful detail. In general:

  • A large majority of Databricks customers have machine learning use cases.
  • Predicting and preventing user/customer churn is a huge issue across multiple market sectors.

The story on those sectors, per Ali, is: 

  • First, Databricks penetrated ad-tech, for use cases such as ad selection.
  • Databricks’ second market was “mass media”.
    • Disclosed examples include Viacom and NBC/Universal.
    • There are “many” specific use cases. Personalization is a big one.
    • Conviva-style video operations optimization is a use case for several customers, naturally including Conviva. (Reminder: Conviva was Ion Stoica’s previous company.)
  • Health care came third.
    • Use cases here seem to be concentrated on a variety of approaches to predict patient outcomes.
    • Analytic techniques often combine machine learning with traditional statistics.
    • Security is a major requirement in this sector; fortunately, Databricks believes it excels at that.
  • Next came what he calls “industrial IT”. This group includes cool examples such as:
    • Finding oil.
    • Predictive maintenance of wind turbines.
    • Predicting weather based on sensor data.
  • Finally (for now), there’s financial services. Of course, “financial services” comprises a variety of quite different business segments. Example use cases include:
    • Credit card marketing.
    • Investment analysis (based on expensive third-party data sets that are already in the cloud).
    • Anti-fraud.

At an unspecified place in the timeline is national security, for a use case very similar to anti-fraud — identifying communities of bad people. Graph analytics plays a big role here.

And finally, of course we discussed some technical stuff, in philosophy, futures and usage as the case may be. In particular, Ali stressed that Spark 2.0 is the first that “breaks”/changes the APIs; hence the release number. It is now the case that:

  • There’s a single API for batch and streaming alike, and for machine learning “too”. This is DataFrames/DataSets. In this API …
  • … everything is a table. That said:
    • Tables can be nested.
    • Tables can be infinitely large, in which case you’re doing streaming.
  • Based on this, Ali thinks Spark 2.0 is now really a streaming engine.

Other tidbits included:

  • Ali said that every Databricks customer uses SQL. No exceptions.
    • Indeed, a “number” of customers are using business intelligence tools. Therefore …
    • … Databricks is licensing connector technology from Simba.
  • They’re working on model serving, with a REST API, rather than just model building. This was demoed at the recent Spark Summit, but is still in the “nascent” stage.
  • Ali insists that every streaming system with good performance does some kind of micro-batching under the hood. But the Spark programmers no longer need to take that directly into account. (In earlier versions, programmatic window sizes needed to be integer multiples of the low-level system’s chosen interval.)
  • In the future, when Databricks runs on more than just the Amazon cloud, Databricks customers will of course have cloud-to-cloud portability.
Categories: Other

Journalismus, den die Welt braucht. Teil 2 | Pearls of journalism. Part 2

Dietrich Schroff - Sun, 2016-08-21 13:47
Two weeks ago I started a first experiment with translating strange articles via Google (see here).
This was the next snippet (from IT-BUSINESS), which was a little bit weird:
Aha. Fog Computing...

Let's see what Google Translate makes of this:

Fog Compuing

The Fog is all symbolize the cloud around. It is all about the primary processing of data close to the data encoder, such as in the car or in the refrigerator before it is to be forwarded to cloud services. The more devices are connected to the cloud, the more data must be downloaded from and to the cloud. Fog Computing to store data that are needed for a specific device right there too and process.
"Downloaded from and to the cloud"...
For this snippet it wasn't really helpful to translate it into English. But Fog Computing is a nice neologism ;-)

Adaptive Cursor Sharing Fail

Randolf Geist - Sun, 2016-08-21 10:34
Here is another example (besides the fact that Adaptive Cursor Sharing only gets evaluated during a PARSE call (still valid in 12c) and supports a maximum of 14 bind variables) I've recently come across at a client site where the default implementation of Adaptive Cursor Sharing fails to create a more suitable execution plan for different bind variable values.

Broken down to a bare minimum the query was sometimes executed using non-existing values for a particular bind variable, but other times these values were existing and very popular. There were two suitable candidate indexes and one of them appeared to the optimizer more attractive in case of the "non-existing" value case. Unfortunately this index was a very bad choice for the "existing and very popular" case.

The critical point of the case is that due to other, additional filters on the same table/index the final number of rows produced by the row source operation was (almost) the same for the "non-existing" and the "existing and very popular" case, but in the latter case a large proportion of the bad index had to be scanned due to the popularity of the bind value, which was the only predicate that could be used as ACCESS predicate and the additional ones could only be applied as FILTER on the index.

So although there was a suitable histogram available on the column in question, and the optimizer in principle was aware of the different popularities of values and marked the cursor as "bind sensitive", it looks like when a cursor is then monitored its executions simply get assigned to different buckets as exposed via V$SQL_CS_HISTOGRAM, based on the rows produced by the row source. Since the operation in question didn't differ much in the final number of rows produced between the two cases (either 0 rows or 1 row), the different executions were all assigned to the same bucket - although the amount of work required to produce this single row was huge, reflected in execution time and physical / logical I/Os performed per execution.
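The bucket assignment can be observed directly (my diagnostic addition, not part of the original post; `:sql_id` is a placeholder for the SQL_ID of the statement under test):

```sql
-- Inspect the execution-count buckets ACS maintains per child cursor:
-- bucket 0 = small row counts, bucket 1 = medium, bucket 2 = large.
select child_number, bucket_id, count
from   v$sql_cs_histogram
where  sql_id = :sql_id
order  by child_number, bucket_id;

-- Cross-check the cursor's ACS flags:
select child_number, is_bind_sensitive, is_bind_aware, is_shareable, executions
from   v$sql
where  sql_id = :sql_id;
```

In the case described here, all executions show up in the same bucket, which is why no second child cursor ever gets considered.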

Obviously this vastly different amount of work performed to produce a similar number of rows is not used as criteria for marking a cursor as "bind aware" and evaluate different execution plans based on ranges of bind values.

There is a view V$SQL_CS_STATISTICS that includes information about CPU_TIME and BUFFER_GETS, but it only reflects the initial execution when starting with the "non-existing" value case, not the slow executions afterwards; more recently the view doesn't seem to be populated at all, which suggests that this information is simply not used for deciding the "bind aware" status of a cursor.

Discussing the case with Mohamed Houri, who has spent significant time investigating the "Adaptive Cursor Sharing" feature, seemed to confirm this assumption.

Here is a simple test case that allows reproducing the issue:

-- FILTER1 is highly skewed (here one very popular value -1)
create table t
as
select
  rownum as id
, rownum as n
, case when rownum <= 100000 then rownum else -1 end as filter1
, rownum as filter2
, rpad('x', 200) as filler1
, rownum as filter3
, rpad('x', 200) as filler2
from dual
connect by level <= 1e6;

-- Histogram on FILTER1 should make the sample query "bind sensitive"
exec dbms_stats.gather_table_stats(null, 't', method_opt => 'for all columns size 1 for columns size 254 filter1')

-- The "good" index, ACCESS on all three columns
-- But higher BLEVEL (3) due to FILLER1 only one value and wide - needs to be part of every branch block
create index t_idx1 on t (filter1, filler1, filter2) compress 2 /*advanced low*/;

-- The "bad" index, ACCESS only on first column, FILTER on subsequent columns
-- But lower BLEVEL (2) due to FILTER3 before FILLER1, leads to much more compact branch blocks
create index t_idx2 on t (filter1, filter3, filler1, filler2, filter2) compress 1 /*advanced low*/;
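As a quick sanity check (my addition, not part of the original test case), the BLEVEL difference between the two indexes can be confirmed from the dictionary once they are created:

```sql
-- The "good" T_IDX1 should show the higher BLEVEL (3),
-- the "bad" T_IDX2 the lower one (2).
select index_name, blevel, leaf_blocks, clustering_factor
from   user_indexes
where  table_name = 'T'
order  by index_name;
```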
The critical point is that the good index has a higher BLEVEL than the bad index. So in case a non-existing value for FILTER1 gets used the optimizer will favor the index with the lower BLEVEL, which is for that case the key cost component:

set echo on

-- Non-existing value
-- T_IDX2 gets preferred
explain plan for
select sum(n) from t where filter1 = -999999 and filler1 = rpad('x', 200) and filter2 = 999999;

select * from table(dbms_xplan.display);

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 216 | 4 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 216 | | |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 1 | 216 | 4 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | T_IDX2 | 1 | | 3 (0)| 00:00:01 |

Predicate Information (identified by operation id):

3 - access("FILTER1"=(-999999) AND "FILLER1"='x

filter("FILTER2"=999999 AND "FILLER1"='x


-- Non-existing value
-- T_IDX1 has a higher cost
explain plan for
select /*+ index(t t_idx1) */ sum(n) from t where filter1 = -999999 and filler1 = rpad('x', 200) and filter2 = 999999;

select * from table(dbms_xplan.display);

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 216 | 5 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 216 | | |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 1 | 216 | 5 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | T_IDX1 | 1 | | 4 (0)| 00:00:01 |

Predicate Information (identified by operation id):

3 - access("FILTER1"=(-999999) AND "FILLER1"='x


-- Existing and very popular value
-- T_IDX1 gets preferred
explain plan for
select sum(n) from t where filter1 = -1 and filler1 = rpad('x', 200) and filter2 = 999999;

select * from table(dbms_xplan.display);

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 216 | 5 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 216 | | |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 1 | 216 | 5 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | T_IDX1 | 1 | | 4 (0)| 00:00:01 |

Predicate Information (identified by operation id):

3 - access("FILTER1"=(-1) AND "FILLER1"='x


-- Existing and very popular value
-- T_IDX2 is a bad idea, correctly reflected in the cost estimate
explain plan for
select /*+ index(t t_idx2) */ sum(n) from t where filter1 = -1 and filler1 = rpad('x', 200) and filter2 = 999999;

select * from table(dbms_xplan.display);

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 216 | 58269 (1)| 00:00:03 |
| 1 | SORT AGGREGATE | | 1 | 216 | | |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 1 | 216 | 58269 (1)| 00:00:03 |
|* 3 | INDEX RANGE SCAN | T_IDX2 | 1 | | 58268 (1)| 00:00:03 |

Predicate Information (identified by operation id):

3 - access("FILTER1"=(-1) AND "FILLER1"='x

filter("FILTER2"=999999 AND "FILLER1"='x

So when optimizing for the two different cases the optimizer does the right thing and understands that for the "existing and very popular" case T_IDX2 is a bad choice. It's also obvious from the "Predicate Information" section that the index T_IDX2 can only use FILTER1 as an ACCESS predicate.

But when using bind variables the different cases are not recognized and the bad index is used for both cases when the optimization is based on the "non-existing value" case:

-- Default behaviour, although being BIND_SENSITIVE thanks to the histogram, no ACS kicks in
-- The V$SQL_CS_HISTOGRAM shows all executions being in bucket 0, so according to BIND_SENSITIVE monitoring no need for action
set echo on timing on time on

alter session set statistics_level = all;

variable n number

exec :n := -999999

declare
  res number;
begin
  select sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
end;
/

exec :n := -1

declare
  res number;
begin
  for i in 1..1000 loop
    select sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
  end loop;
end;
/

SQL> alter session set statistics_level = all;

Session altered.

Elapsed: 00:00:00.03
SQL> variable n number
SQL> exec :n := -999999

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> declare
2 res number;
3 begin
4 select sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
5 end;
6 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.28
SQL> exec :n := -1

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL> declare
2 res number;
3 begin
4 for i in 1..1000 loop
5 select sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
6 end loop;
7 end;
8 /

PL/SQL procedure successfully completed.

Elapsed: 00:04:48.64

-- With BIND_AWARE a second child cursors gets generated and used on second parse, interesting
set echo on timing on time on

alter session set statistics_level = all;

variable n number

exec :n := -999999

declare
  res number;
begin
  select /*+ bind_aware */ sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
end;
/

exec :n := -1

declare
  res number;
begin
  for i in 1..1000 loop
    select /*+ bind_aware */ sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
  end loop;
end;
/

SQL> alter session set statistics_level = all;

Session altered.

Elapsed: 00:00:00.03
SQL> variable n number
SQL> exec :n := -999999

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> declare
2 res number;
3 begin
4 select /*+ bind_aware */ sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
5 end;
6 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
SQL> exec :n := -1

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> declare
2 res number;
3 begin
4 for i in 1..1000 loop
5 select /*+ bind_aware */ sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
6 end loop;
7 end;
8 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.20

-- Or hinting for a fixed plan helps, too
set echo on timing on time on

alter session set statistics_level = all;

variable n number

exec :n := -999999

declare
  res number;
begin
  select /*+ index(t t_idx1) */ sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
end;
/

exec :n := -1

declare
  res number;
begin
  for i in 1..1000 loop
    select /*+ index(t t_idx1) */ sum(n) into res from t where filter1 = :n and filler1 = rpad('x', 200) and filter2 = 999999;
  end loop;
end;
/
Executing the unhinted, default case makes the thousand executions using the popular value very inefficient (and the actual table / index at the client site was much larger than this example, and the query got executed very frequently) - compare that to the execution time of the other two variants, either using the BIND_AWARE hint or simply requesting the good index.

So when the cursor is marked BIND_AWARE immediately via the corresponding hint (only possible if the cursor qualifies as bind sensitive), Oracle happily generates a second child cursor at the second PARSE call, with a different, more efficient execution plan for the popular value. A bind-aware cursor is handled by a completely different algorithm, based on ranges of values as exposed via V$SQL_CS_SELECTIVITY. Since the two bind values fall into different selectivity ranges, a new plan gets generated that differs from the previous one and is used from then on for that range of values.
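Those selectivity ranges can be inspected per child cursor via V$SQL_CS_SELECTIVITY (again a diagnostic addition on my part, with `:sql_id` as a placeholder):

```sql
-- Each bind-aware child cursor records the selectivity range(s) of bind
-- values it covers; a bind value outside all existing ranges triggers a
-- fresh optimization and potentially a new child cursor with a new plan.
select child_number, predicate, low, high
from   v$sql_cs_selectivity
where  sql_id = :sql_id
order  by child_number;
```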

Sequence.nextval not working in sqlldr

Tom Kyte - Sun, 2016-08-21 09:06
sequence.nextval not working when i am using in sqlldr , sqlldr loading null value in DB Snapshot of my control file load data append into table Selectiotab WHEN (1:2) <> 'H|' and (1:2) <> 'T|' fields terminated by "|" trailing nullcols ( ...
Categories: DBA Blogs

As a DBA how to verify if private database link is functional

Tom Kyte - Sun, 2016-08-21 09:06
Hi, Being a DBA, how can we verify if private database link created in application schema is working fine? If application schema's password is not known to DBA. We can verify public database link using select * from dual@public_db_link; How pri...
Categories: DBA Blogs


Tom Kyte - Sun, 2016-08-21 09:06
HELLO SIR, my name is bhargav reddy from Bangalore, actually i am a student, i want to know how to take backup of .DBF file which is belongs to live application, so please tell me the process to do that and one more thing that is it safe to take ...
Categories: DBA Blogs

Notification mail from database server

Tom Kyte - Sun, 2016-08-21 09:06
Hi team, I have manually check the tablespace size daily basis now i want to configure the script which check the tablespace size and if the size is less than 10GB then it will send a mail to mai Id So, Which things we need to configure on ser...
Categories: DBA Blogs

Community and the “Hive Mind”

Tim Hall - Sun, 2016-08-21 04:56

A really great thing about being part of the Oracle community is the networking and access to information it gives you. In my current job I have to do a number of things that are not really my forte. I know how to learn new stuff, but it’s always nice to get some feedback from people who know more than you. Over the last few years there have been a number of specific cases that spring to mind.

  • When I started getting involved in WebLogic administration I reached out to Simon Haslam, Jacco Landlust, Chris Muir and Markus Eisele for some advice. I had RTFMed and I kind-of knew what I was doing, but I just wanted someone with more experience to validate my approach. Their feedback was, and still is, really valuable.
  • On the Nordic Tour a couple of years ago I spent some time speaking to Lonneke Dikmans about SOA. Our company were starting down the path of SOA and although I was not directly involved in the project, I wanted to know more about the issues we might encounter. Lonneke was the perfect person for me to speak to.
  • As I started to take on more MySQL work, including some upgrades from early releases of 4.x to 5.6, I reached out to Ronald Bradford and Sheeri Cabral for advice. In some cases they confirmed what I already knew. In other cases they completely altered what I had planned.
  • Our company have recently signed a deal to move to Oracle Cloud Apps. I know almost nothing about them, but fortunately the ball-and-chain does. I’m not directly involved in our Oracle Cloud Apps migration project, but as I said before I like to know what problems may come my way during the process, and Debra is a great source of information.
  • Even when I’m dealing with database stuff, which I think I’m pretty good at, it’s always nice to have access to other opinions. The list of people I’ve had help from in the database world is too long to include here. Just a few days ago I reached out to Richard Harrison about a problem I was having and in one email exchange he solved it for me. That probably saved me hours of messing about.

This is not about expecting help. This is not about trying to offload my responsibilities onto others. It’s mostly about getting a second opinion and the odd bit of advice. More importantly, the other people in the community need to know that you’ve got their back too. This has to be a two-way street!

Access to the community “Hive Mind” is a wonderful thing and will make your life so much easier! Get involved!



Community and the “Hive Mind” was first posted on August 21, 2016 at 10:56 am.

(Almost) Everything you ever wanted to know about SQLDeveloper Unit Testing but were afraid to ask

The Anti-Kyte - Sat, 2016-08-20 15:46

The Political fallout from Brexit continues unabated.
In the immediate aftermath of the vote, David Cameron resigned as Prime Minister, triggering a power-struggle to succeed him.
Secret pacts, double-crosses and political “assassination” followed in short order.
It was like watching an episode of Game of Thrones although, mercifully, without the full-frontal nudity.
As for the Labour Party, they have captured the Zeitgeist…by descending into the sort of internal conflict for which the word “internecine” was invented.

Following the current trend that promises are made for breaking, this post has arrived a little bit later than advertised.
I write most of this stuff on the train to and from work, and, as they have been unusually punctual of late, my Sprint 1 velocity is somewhat lower than anticipated.
So, with apologies for my tardiness…

A Quick Overview

When it comes to Unit Testing Tools/Frameworks for PL/SQL, SQLDeveloper Unit Testing can probably be considered the “official” Oracle offering.
After all, SQLDeveloper is shipped with the Oracle Database.

Therefore, this seems like a fitting place to continue my PL/SQL Unit Testing odyssey.

Useful Links and Resources

There is a very good tutorial covering setting up Tests on a Stored Procedure right there in the SQLDeveloper Help, as well as other useful info about SQLDeveloper Unit Testing :


This post from Jeff Smith is a useful introduction to SQLDeveloper Unit Testing, in which he takes a different approach to some of the tasks that I’ve covered here.

For a completely different take on things, this article by Wouter Groeneveld is so good that I will be plagiarising bits of it later on.

Before I go any further, SQLDeveloper Unit Testing is a bit of a mouthful. Additionally I’m never sure if I should be calling it a Tool or a Framework. Therefore, I’m going to refer to it as SUT for the remainder of this post.

Testing Approach

The approach I’ve taken to evaluating SUT (and the other frameworks that I will look at in later posts) is to use the application I introduced last time and to see how I can test some of the User Stories I have for it.
These Stories cover functionality that you would commonly find in a PL/SQL application ( DML operations, ETL data load and Ref Cursor population).
Additionally, I’ve taken the view that the Application Code should not be changed in any way to accommodate any vagaries of SUT.

As it’s the Testing Tool that’s my primary concern here, the Application Code I’ve included here is pretty much the minimum required to get the tests to pass.
It’s not what you’d call Production ready.

Also, as this is my first foray into SUT, some of the Tests may not have been written in the optimum fashion.
The examples below are more a case of me saying “this is how I managed to test this feature in SUT” rather than “this is the recommended way to test this feature”.

Test Environment

These tests have been put together and executed in the current Production version of SQLDeveloper ( 4.1.3).
The database used is Oracle 11g Express Edition.

Toolbars and Buttons

The SUT toolbar looks like this :


From left-to-right, the buttons are :

  • Freeze View
  • Refresh
  • Debug – provides feedback on the runnable bits of your test without actually running it
  • Run

Additionally, I’ll be mentioning the “Plus” button quite a bit. That would be this :


Right, I think we’re just about ready to go.
Before we start writing any tests, we need to do a bit of groundwork…

SQLDeveloper Unit Testing – Configuration

First things first, according to the SQLDeveloper Help :
“The Unit Test Repository is a set of tables, views, indexes and other schema objects…”

Look, don’t panic. Despite this apparently database-centric architecture, you do not have to deploy your test code along with your application code base to execute your tests. Remember, as with all the mainstream Oracle IDEs, SQLDeveloper allows you to be connected to different databases simultaneously.
Added to this, it would appear that a significant element of the SUT architecture involves XML configuration files. This would also explain the lack of any requirement to mess about with database links to get all this stuff to work.

Repository Pre-Setup Tasks

The first design decision I made was that I wanted to keep any testing infrastructure entirely separate from my Application Code.
Therefore, I’ve created a separate database schema – TEST_REPO – specifically to hold the Repository.

Whilst it’s perfectly possible to have SQLDeveloper guide you through all of the setup work, this does require that you connect as a highly privileged user (SQLDeveloper tends to ask for the SYS password).
That’s not a problem if, like me, you’re just messing about on your own copy of Oracle Express Edition. If you’re in a more formal environment however, you may well need to provide your DBA with a script.

This should do the job (runnable in SQL*Plus) …

-- Script to create the roles required for a SQLDeveloper Unit Testing Repository
-- and a schema to host the repository
set verify off
define repo_owner = test_repo
define repo_default_ts = users
define repo_temp_ts = temp
accept passwd prompt 'Enter a password for the new schema [] : ' hide

-- Setup Roles
create role ut_repo_administrator;
grant create public synonym,drop public synonym to ut_repo_administrator;
grant select on dba_role_privs to ut_repo_administrator;
grant select on dba_roles to ut_repo_administrator;
grant select on dba_tab_privs to ut_repo_administrator;
grant execute on dbms_lock to ut_repo_administrator;

create role ut_repo_user;
grant select on dba_role_privs to ut_repo_user;
grant select on dba_roles to ut_repo_user;
grant select on dba_tab_privs to ut_repo_user;
grant execute on dbms_lock to ut_repo_user;

-- Create Schema to host Repository
create user &repo_owner identified by &passwd;
alter user &repo_owner default tablespace &repo_default_ts;
alter user &repo_owner temporary tablespace &repo_temp_ts;
alter user &repo_owner quota unlimited on &repo_default_ts;

-- System Privs
grant create session, connect, resource, create view to &repo_owner;

-- Role Privs
grant ut_repo_administrator to &repo_owner with admin option;
grant ut_repo_user to &repo_owner with admin option;

-- Object Priv
grant select on dba_roles to &repo_owner;
grant select on dba_role_privs to &repo_owner;

Note that you may want to change the Repository Owner Schema and tablespace variables to values more suited to your environment.

Now we’ve created the new schema, we need to create a connection for it in SQLDeveloper.

In the Connections Navigator, click the Plus button …


Input the appropriate details in the New/Select Database Connection Window…


If you hit the Test button, you should see the Status set to “Success” in the bottom left-hand corner of the window.

Once it’s all working, hit the Save button, to retain the connection, then hit Connect to logon as TEST_REPO.

Now that’s all done we’re ready for the next step…

Creating the Repository

From the Main Menu, select View/Unit Test

A new, but currently quite boring navigator window will open up under the Connections Navigator :


From the Main Menu, select Tools/Unit Test/Select Current Repository…

In the Repository Connection window, select the test_repo connection we’ve just created…


…and hit OK. This should give you :


Just say Yes.

After a brief interlude, during which SQLDeveloper does an impression of a Cylon from the original Battlestar Galactica…


By your command !

you will get..


If you now have a look in the Object Browser for the test_repo user, you’ll see that SQLDeveloper has been quite busy…


Next up…

Granting Access to the Repository

From the Tools menu select Unit Tests then Manage Users…

When prompted for the connection to manage users, I’ve chosen TEST_REPO as it’s the only one that currently has admin rights on the repository.

The Manage Users Window that appears has two tabs, one for Users and one for Administrators :


I’ve added the owner of the application code that I’ll be testing to keep things simple.

The result is that, if I now connect as FOOTIE ( the application owner), I should be able to start adding some tests.

Naming Conventions

The SQLDeveloper Help has something to say on naming tests, which you may want to consider.
I’ve simply gone down the route of using the format package_name.procedure_name_n.
I want to group the unit tests at the same level as the programs for which they are written, so the Test Suites I’ll be creating are grouped by (and named for) the packages that the tests run against.

One aspect of object naming that I haven’t formalised in this post is that of items that I’ve added to the Library. This is an area to which you may well want to give some consideration.

Testing Insert Statements

The first set of tests I need to write centre around inserting data into tables. To cover some of the scenarios we might commonly encounter when performing this sort of operation, I’ll be looking at adding records to Tables related to each other by means of Referential Integrity constraints.

My First Test – Inserting a Record into a table with a Primary Key

I’ve got a User Story about adding Competitions to the application. The Story has two Acceptance Criteria :

  1. A new competition can be added
  2. A Competition cannot be added more than once

Here’s a quick reminder of what the COMPETITIONS table looks like :

create table competitions(
    comp_code varchar2(5) constraint comp_pk primary key,
    comp_name varchar2(50) not null,
    description varchar2(4000)
)

As we’re trying to follow the approach of Test Driven Development and write our tests first, we just have a stub PL/SQL procedure to run the initial tests against :

create or replace package body manage_competitions
as
    procedure add_competition(
        i_code competitions.comp_code%type,
        i_name competitions.comp_name%type,
        i_desc competitions.description%type default null)
    is
    begin
        null;
    end add_competition;
end manage_competitions;

Time to meet…

The Create Unit Test Wizard

As this is my first test, I’m going to go through it step-by-step here. After that, the novelty will quickly wear off and I’ll just reference the steps or the Test Process Stages as appropriate.

In the Unit Test Navigator, right-click the Tests node and select Create Test…


This will bring up the Create Unit Test Wizard


Select the Connection for the Application Owner (FOOTIE, in this case) from the Connections drop-down…


…and navigate the tree to the procedure we want to test – i.e. manage_competitions.add_competition…


And now click Next

This brings us to step 2 of the Wizard, specifying the Test Name. In line with my standard, I’m calling this one MANAGE_COMPETITIONS.ADD_COMPETITION_1.
Leave the Radio Group selection as Create with single Dummy implementation


Click Next

I’m not going to create a Setup Process in this test for the moment. If you did want to, then you’d click the Plus button and…well, we’ll return to that later.
Anyway, step 3 looks like this :


Click Next

Now we need to specify the input parameter values that we’ll be using in the test.
The values I’m going to use are :

  • I_CODE – UT1
  • I_NAME – Test1
  • I_DESC – Test competition

Leave the Expected Result as Success


Click Next

…which takes us to…


Click the Plus to add a Process Validation and a drop-down will appear.
Scroll through this and select Query returning row(s) :


…and you’ll be presented with…


We now replace the default SQL statement with one that checks that we’ve inserted our record successfully.
Rather than hard-coding the COMP_CODE value we’ve input into the procedure, we can use SUT’s substitution syntax, thus inheriting the value of the I_CODE input parameter we specified back in Step 4. The code we’re going to add is :

select null
from competitions
where comp_code = '{I_CODE}'

Note that the parameter names appear to be case sensitive. If a parameter name is entered in lowercase, SUT will complain at runtime.
The end result looks like this :


Click OK to dismiss the Process Validation Window.

Back on the Wizard page, click Next

Finally, we need to specify the Teardown Process – i.e. code to return the application to the state it was prior to the test being run.


Hit the Plus button again and you will get a slightly different drop-down. This time, we want User PL/Sql Code


You’ll now get the Teardown Process Window. As we’ve only inserted a single row as part of this test, we can simply rollback the transaction to put things back as they were. This being a PL/SQL block, the code is :



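In full, the Teardown block is just :

begin
    rollback;
end;
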
Click OK to dismiss the Window.
Now click Next to get to the Summary Page, which should look something like…


Once you’re happy with the Summary, click Finish.

You should now see the new test in the Unit Test Navigator :


Click on the new test and you will see :


We can now run the test by clicking the Run Button. This will cause the Test Pane to switch to the details tab where the results of our first execution will be displayed :


As you’d expect, because the program we’re calling doesn’t do anything, the test validation fails.

Before we go any further, there are a couple of things we probably want to do with this test.
First of all, we really should have code that ensures that we won’t be trying to insert a value that already exists in this table.
Secondly, it’s quite likely that we’ll want to reuse some of the test code we’ve written in future tests.

OK, first of all then …

Dynamic Value Queries

In my previous attempt to write Unit Tests in anonymous PL/SQL blocks, it was relatively simple to set an appropriate value for the Primary Key of the record we wanted to insert programmatically.
Obviously, this is not the case with the Unit Test we’ve just written.

Fortunately, SUT allows you to populate the call parameters with values from what it calls a Dynamic Value Query.

In the SQLDeveloper help, the method demonstrated for setting up a Dynamic Value Query involves creating a table to hold the values and then querying that at runtime.
In this instance, we don’t need a table to determine the parameter values we need to use.

In the Details tab of the test, we need to click the Dynamic Value Query pencil icon :


This will bring up the Dynamic Value Query window.

This code should do the job for now…

-- Make (reasonably) certain that the comp_code value we're trying to create
-- does not already exist in the table.
with suffix as (
    select max( to_number( substr( comp_code, regexp_instr( comp_code, '[[:digit:]]')))) + 1 as numeral
    from competitions
    where comp_code like 'UT%'
    and regexp_instr( substr( comp_code, -1, 1), '[[:digit:]]') = 1 -- only want codes with a numeric suffix
    union -- required if there are no records in the table...
    select 1 from dual
)
select 'UT'||max(numeral) as I_CODE,
    'Test1' as I_NAME,
    'A test' as I_DESC
from suffix
where numeral is not null

…so I’ve dropped it into the Dynamic Value Query…


Click OK and we’re ready to re-run the test to check that our changes have worked.

Press the Run button and SQLDeveloper will ask if you want to save the changes you’ve just made.


We do.

The test will still fail ( we’re still running against a stub, remember), however, we can see from the test results that the Dynamic Values Query has generated the input values we were expecting :


NOTE – whilst the Dynamic Values Query we have written here will (eventually) work as expected, there is rather more to these queries than meets the eye. This is mainly because they are executed before any other step in a test. This can be problematic when a Dynamic Values Query needs to rely on values that are set in a Test’s Startup Process. There is a solution to this however, which I’ll cover later on.

Adding Code to the Library

We’ve written three distinct bits of code for our first test which we’ll probably want to use again.

SUT provides the Library as a means of storing such code for re-use in other tests.

Looking at our test in SQLDeveloper, the first code we see is the Teardown Process.

To add this to the library, click the Pencil icon to bring up the Teardown Process code window.

In the field at the bottom, we can now give this code a name and then hit the Publish button to send it to the library :


You should now see that the label we’ve given this code (rollback_transaction, in this case) appears in the Library drop-down at the top of the Window :


Now click the OK button to dismiss the window.

We can repeat this process for our Dynamic Values Query ( labelled add_new_competition) and our Validation ( competition_row_added).

If we now expand the Library node in the Unit Test pane, we can see that our code appears in the Library :


This should mean that creating our second test requires a little less typing…

Testing for an Exception

Our second test is to check that the Application does not allow us to add a duplicate COMPETITIONS record.

Opening the Wizard again, we select the same package and procedure as for our first test.

Following my naming convention, this test is called MANAGE_COMPETITIONS.ADD_COMPETITION_2.
This time, we do need a Startup Process to make sure that the record we attempt to insert already exists.
So this time, hit the Plus button on the Startup Process step and select User PL/Sql Code from the drop-down.

In the Startup Process Code Window, we’re going to add the following PL/SQL code :

    begin
        merge into competitions
        using dual
        on (comp_code = 'UT1')
        when not matched then
            insert ( comp_code, comp_name, description)
            values( 'UT1', 'Test', null);
    end;

Essentially, we want to begin by making sure that a competition with a COMP_CODE value of ‘UT1’ exists in the table.

We’ll probably need to do this again somewhere down the line, so I’ve added it to the library as setup_competition_UT1.

For this test, in the Specify Parameters step, we can type in the parameters manually. Our Setup Process ensures that a record exists and our test execution is to try to insert this record again. The parameter values we enter are :

  • I_CODE = UT1
  • I_NAME = Test
  • I_DESC – leave as null

This time, the expected result should be changed from Success to Exception.
We’re expecting this call to fail with “ORA-00001: unique constraint violated”.
Therefore, we need to set the Expected Error number to 1.


Our test will fail unless it encounters the exception we’ve specified when we call the procedure.
Given this, we don’t need to add a separate validation step.
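Though it’s not part of the test itself, you can provoke the expected error by hand – a quick sketch, run as FOOTIE against the COMP_PK constraint we created earlier (the UT1 values are just illustrative) :

insert into competitions( comp_code, comp_name)
values( 'UT1', 'Test');

insert into competitions( comp_code, comp_name)
values( 'UT1', 'Test');
-- ORA-00001: unique constraint (FOOTIE.COMP_PK) violated

rollback;

The error code – 1 – is what goes in the Expected Error field.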

We will be needing a Teardown Process however.
As before, we’re going to choose User PL/Sql Code from the drop-down.
This time however, we won’t have to type anything in to the window.
Instead, click on the library drop-down at the top of the code window :


… select rollback_transaction from the drop-down, and then check Subscribe


By subscribing to this code snippet, we “inherit” it from the library. Therefore we cannot edit it directly in the test.
If we did want to “tweak” it for the purposes of this specific test, we could hit the Copy button instead of subscribing.
This would copy the code block into the current test where it would then be editable for that test.

The test Summary looks like this :


As expected, when we execute this, it fails because the expected exception was not raised :


Now we have both of our tests written, we need to write some application code to get them to pass.

create or replace package body manage_competitions
as
	procedure add_competition(
		i_code competitions.comp_code%type,
		i_name competitions.comp_name%type,
		i_desc competitions.description%type default null)
	is
	begin
		insert into competitions( comp_code, comp_name, description)
		values( i_code, i_name, i_desc);
	end add_competition;
end manage_competitions;

Now when we re-execute our tests, we can see that they pass :




Note that the latest test runs appear at the top of the Results listing.

Removing Old Test Results

If you want to remove an older result you can do this by right-clicking on it in the Results Tab and selecting Delete Result….

You can also clear down all results for a test by right-clicking in the Unit Test pane and selecting Purge Test Results…


This will bring up the Purge Test Results dialogue which gives you the option to remove all results for the test, or just those from before a given time and date.


NOTE – if you have a Test or a Suite with its Results tab showing when you purge, then the results may not disappear immediately.
If this is the case, just click the Refresh button on the Test toolbar.

Creating a Test Suite

The development process around an application such as ours will revolve around the PL/SQL package as the atomic unit of code.
Even though packages are containers for specific procedures, functions etc, it’s at the package level that the code is deployed to the database and therefore, it’s at that level it’s stored in Source Control.
So, when a developer needs to make a change to a package, we want them to be able to checkout and run all of the tests for the package.
SUT allows us to group our tests by means of a Test Suite.

So, we’re going to create a Suite for the MANAGE_COMPETITIONS package so that we can group the tests we’ve just created, and add more tests to it later on.

In the Unit Test tree, right-click on Suites and select Add Suite…


In the Add Suite dialogue that appears, enter a name for the Suite.


The new suite now appears in the Unit Test tree.
Now we need to add our tests to it.

Expand the tree under the new test suite, right-click on Tests and select Add Test…


Now select the tests you want to add from the list that appears in the Add to Suite Window :


…and click OK.

Notice that, although the tests have been added to the Suite, they still appear under the Test Node in the tree.
This node is going to get fairly crowded as more tests are added. This is one reason that a sensible naming convention is quite useful.

If we now run our new suite, we can see that all tests in the suite will run :


There is more to Test Suites than simply grouping tests together, but more on that later.

Thus far, we’ve covered similar ground to the Tutorial exercise in the SQLDeveloper Help, although it’s fair to say we’ve taken a slightly different route.

We’re now going back to a time when Scotland ruled the world in order to test…

Adding data to a “child” table

The Acceptance Criteria for the Add a Tournament Story are :

  1. A tournament can be added
  2. A tournament cannot be added for a non-existent competition
  3. The competition that this tournament is for must be specified
  4. The same tournament cannot be added for a competition more than once
  5. If specified, the year the tournament begins cannot be greater than the year that it ends

The stub of the code we’ll be testing is…

create or replace package manage_tournaments
as
	procedure add_tournament(
		i_code tournaments.comp_code%type,
		i_year_end tournaments.year_completed%type,
		i_teams tournaments.number_of_teams%type,
		i_host tournaments.host_nation%type default null,
		i_year_start tournaments.year_started%type default null);
end manage_tournaments;

create or replace package body manage_tournaments
as
	procedure add_tournament(
		i_code tournaments.comp_code%type,
		i_year_end tournaments.year_completed%type,
		i_teams tournaments.number_of_teams%type,
		i_host tournaments.host_nation%type default null,
		i_year_start tournaments.year_started%type default null)
	is
	begin
		null;
	end add_tournament;
end manage_tournaments;

…and the table DDL is…

create table tournaments(
    id number constraint tourn_pk primary key,
    comp_code varchar2(5),
    year_completed number(4) not null,
    host_nation varchar2(100),
    year_started number(4),
    number_of_teams number(3) not null,

    constraint tourn_uk unique( comp_code, year_completed, host_nation)
)

alter table tournaments
    add constraint tourn_comp_fk foreign key
        (comp_code) references competitions(comp_code)

Hmmm, does something there look odd to you ? We may come back to it in a while.
First though, let’s write the tests for…

Variable Substitutions using binds

Our first test is wittily and originally entitled MANAGE_TOURNAMENTS.ADD_TOURNAMENT_1.
It’s simply testing that we can legitimately add a TOURNAMENT record to our application.

For the Test Startup, we need to make sure that we have a COMPETITION record to assign the tournament to so I’ve subscribed to the setup_competition_ut1 that we added to the Library earlier.

As for the call to the package, the parameters I’m using are from the very first International Football Tournament – the British Home Championships ( won by Scotland ) :

  • I_CODE = UT1
  • I_YEAR_END = 1884
  • I_TEAMS = 4
  • I_HOST = null
  • I_YEAR_START = null

The Validation Process is a Boolean function. Now, I had a few issues with the I_HOST replacement as a string (possibly because I passed in a parameter value of null in the test).
Fortunately, you can reference parameter values as bind variables…

declare
    l_count pls_integer;
    l_host tournaments.host_nation%type := :I_HOST;
begin
    select count(*)
    into l_count
    from tournaments
    where comp_code = '{I_CODE}'
    and year_completed = {I_YEAR_END}
    and nvl(host_nation, 'MIKE') = nvl(l_host, 'MIKE');

    return l_count = 1;
end;

I know I’ll need to use this again to verify the test for adding a duplicate tournament so I’ve saved it to the Library as single_tournament_record_exists.

The teardown process is also taken from the library (a simple rollback once again).

We’ve got a database and we’re not afraid to use it !

The test to ensure that you can’t add a TOURNAMENT for a non-existent COMPETITION is more about the Data Model than the code. What we’re actually testing is that the Foreign Key from TOURNAMENTS to COMPETITIONS is in place and working as expected.

It follows a similar pattern to the very first test we created for adding a competition.
Indeed, the Dynamic Values Query looks rather familiar :

with suffix as (
    select max( to_number( substr( comp_code, regexp_instr( comp_code, '[[:digit:]]')))) + 1 as numeral
    from competitions
    where comp_code like 'UT%'
    and regexp_instr( substr( comp_code, -1, 1), '[[:digit:]]') = 1 -- only want codes with a numeric suffix
    union -- required if there are no records in the table...
    select 1 from dual
)
select 'UT'||max(numeral) as I_CODE,
    1884 as I_YEAR_END,
    4 as I_TEAMS,
    null as I_HOST,
    null as I_YEAR_START
from suffix
where numeral is not null

Whilst the hard-coded names and parameter values reflect the fact that we’re calling a different procedure, the code to derive the I_CODE parameter value is identical.

Whilst we could add this query to the Library then copy it where we needed to and make necessary changes in each test, there is an alternative method of reuse we might consider. We could create a database function in the Repository to return the desired value for I_CODE.

NOTE – there are several considerations when determining whether or not this is a route that you wish to go down in terms of your own projects. However, in this instance, my Test Repository is being used for a single application, and by doing this, I’m ensuring that this code only needs to be written once.

We’re going to create this function in the TEST_REPO schema.
Before we do that though, we need to grant access to the COMPETITIONS table to TEST_REPO. So, connected as FOOTIE :

grant select on competitions to test_repo

Then, as TEST_REPO :

create function get_new_comp_code
    return varchar2
as
    l_comp_code varchar2(5);
begin
    with suffix as (
        select max( to_number( substr( comp_code, regexp_instr( comp_code, '[[:digit:]]')))) + 1 as numeral
        from footie.competitions
        where comp_code like 'UT%'
        and regexp_instr( substr( comp_code, -1, 1), '[[:digit:]]') = 1 -- only want codes with a numeric suffix
        union -- required if there are no records in the table...
        select 1 from dual
    )
    select 'UT'||max(numeral) into l_comp_code
    from suffix
    where numeral is not null;

    return l_comp_code;
end get_new_comp_code;

Yes, in the Real World you may very well do this as part of a package rather than as a stand-alone function.

Next, we need to grant privileges on the function to the Repository Role :

grant execute on get_new_comp_code to ut_repo_user
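A quick sanity check from the FOOTIE connection (assuming FOOTIE has the ut_repo_user role from the repository user setup earlier) :

select test_repo.get_new_comp_code as new_code
from dual
-- returns the next free UTn code, e.g. UT2 if UT1 already exists
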

The Dynamic Values Query can now be written to use the new function :

select test_repo.get_new_comp_code as I_CODE,
    1884 as I_YEAR_END,
    4 as I_TEAMS,
    null as I_HOST,
    null as I_YEAR_START
from dual

We expect this test to fail with ORA-2291 – Integrity Constraint violated.
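As with the unique key test, you can reproduce the error by hand – a sketch assuming the TOURN_COMP_FK constraint and TOURN_ID_SEQ sequence from above, with 'NOSUCH' being an illustrative non-existent competition code :

insert into tournaments( id, comp_code, year_completed, number_of_teams)
values( tourn_id_seq.nextval, 'NOSUCH', 1884, 4);
-- ORA-02291: integrity constraint (FOOTIE.TOURN_COMP_FK) violated - parent key not found
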

Multiple Processes for a single Testing Stage

To test that the application will not allow the addition of a duplicate TOURNAMENT we need to make sure that records exist in two tables rather than one – i.e. a “parent” COMPETITIONS record and a “child” TOURNAMENTS record, which we’ll be attempting to duplicate.

Now we could do this in a single Startup Process. However, if we do it as two separate steps then we can save both to the Library and get a greater degree of re-use out of them. So, resisting the distant memory of Forms 2.3 Step Triggers and the feeling that you’re coding like it’s 1989…

On the Specify Startup Step of the Wizard, click the Plus button, add the first Startup Process ( i.e. ensure that the parent COMPETITIONS record exists).
Once you’re done with that, hit the Plus button again :


Select User PL/Sql code from the drop-down and then enter the following :

    begin
        merge into tournaments
        using dual
        on ( comp_code = 'UT1'
            and year_completed = 1916
            and host_nation = 'ARGENTINA')
        when not matched then
            insert(
                id, comp_code, year_completed,
                host_nation, year_started, number_of_teams)
            values(
                tourn_id_seq.nextval, 'UT1', 1916,
                'ARGENTINA', null, 4);
    end;

Incidentally we’ve moved on a bit in terms of test data and are now using details from the first Continental Tournament – the Copa America.
I’ve added this to the library as setup_tournament_for_UT1.

After all that, you should end up with something like …


The remaining tests don’t present anything new so we come to the point where we need to get the code to pass.
At this point you might be confident that this will do the job…

create or replace package body manage_tournaments
as
	procedure add_tournament(
		i_code tournaments.comp_code%type,
		i_year_end tournaments.year_completed%type,
		i_teams tournaments.number_of_teams%type,
		i_host tournaments.host_nation%type default null,
		i_year_start tournaments.year_started%type default null)
	is
	begin
		if i_year_start is not null then
			if nvl(i_year_end, i_year_start) < i_year_start then
				raise_application_error( -20000, q'[A tournament cannot end before it has begun...unless you're England !]');
			end if;
		end if;
		insert into tournaments(
			id, comp_code, year_completed,
			host_nation, year_started, number_of_teams)
		values(
			tourn_id_seq.nextval, i_code, i_year_end,
			i_host, i_year_start, i_teams);
	end add_tournament;
end manage_tournaments;

Right, let’s test it shall we ?

When Data Models go bad…

I’ve added all five of the tests I’ve written to the MANAGE_TOURNAMENTS Test Suite so, I’m going to use it to execute my tests…

In the Unit Test Pane, expand the Suites node and click on the Suite we want to run. This will bring up the details of the Suite :


Now press the Run button and…not everything went as expected.

By collapsing the result tree, we can see which Test (or Tests) failed :


I think you noticed the problem a bit earlier. Someone has been a bit remiss when doing the data modelling. The COMP_CODE column is missing a NOT NULL constraint.

The Data Modeller claims that this was a deliberate oversight so we can see the value of testing the Data Model and not simply the code. Yeah, right.

Once we address this :

alter table tournaments modify comp_code not null

… and re-run the offending test on its own to check that it passes, we can then re-run the entire suite as a regression test.
Looks like the change hasn’t broken anything else ( the failure at the bottom is just the previous run result):


Summary of the Insert Tests

So far, we’ve found out how SUT is put together as a Testing Framework.

The tests follow the expected four stage pattern – Setup, Execute, Validate, Teardown, but of these only the Execute stage is mandatory.
Multiple Processes are permissible in any stage apart from Execute.
The facility to group tests into Test Suites is present, as you would expect.

We’ve also started to explore some of the specifics of SUT.

Pleasingly, it handles expected Exceptions with the minimum of input down to the level of the expected error code.

The Dynamic Values Query provides a means of generating conditional test input criteria.
The Variable Substitution syntax makes validation of tests executed with such generated values somewhat more straightforward.

If and when limitations of SUT are encountered then you always have the option of extending its capabilities using custom PL/SQL Stored Program Units.
I can see that you’re still a bit dubious about that last point. OK, I’m sure we’ll come back to that later (mainly because I’ve already written that bit !)

In the meantime, let’s see what we can learn from the tests for the second DML activity we’re looking at…

Testing Deletes

As with any DML action, how well an application handles Deletion of records is crucial in terms of how well it ensures data integrity.
With that statement of the obvious out of the way, let’s take a look at how SUT can help us with testing this aspect of our Application’s functionality.

Some of the tests we need to write require records to be present in the TOURNAMENT_TEAMS table, which looks like this :

create table tournament_teams(
    tourn_id number,
    team_name varchar2(100),
    group_stage varchar2(2),

    constraint team_pk primary key (tourn_id, team_name)
)

alter table tournament_teams
    add constraint team_tourn_fk foreign key
        (tourn_id) references tournaments(id)

The thing about Dynamic Value Queries

The Acceptance Criteria we want to test for in the User Story for deleting a tournament are :

  1. A Tournament can be deleted
  2. A Tournament cannot be deleted if there are Teams assigned to the Tournament

I’ve created a stub procedure in the MANAGE_TOURNAMENTS package.
The appropriate signature has been added to the package header and the procedure in the body currently looks like this :

	procedure remove_tournament( i_id tournaments.id%type)
	is
	begin
		null;
	end remove_tournament;

Yes, using the Natural Key for a TOURNAMENT record as input into this procedure would have made life easier in terms of writing the test. However, I feel quite strongly that we should not be making compromises in the Application Code to accommodate any Testing Tools we may need to use.

To ensure that our test is completely standalone, we may need to create both a COMPETITIONS and a TOURNAMENTS record in the Test Startup.
We then need to find out what the ID value is of the TOURNAMENT record we want to delete.
No problem there. We can re-use the routines we already have in the Library to Create the UT1 Competition and associated tournament.
Then we just need to use a Dynamic Values Query to retrieve the ID – something like :

select id as I_ID
from tournaments
where comp_code = 'UT1'
and year_completed = 1916
and host_nation = 'ARGENTINA'

Nothing to see here, right ? Well, if you had taken a look at the SQLDeveloper Help on Dynamic Value Queries you would have noticed that…

“A dynamic value query is executed before the execution of all implementations in a test, including any startup action for the test. If you must populate a table before a dynamic value query is evaluated, you can do this in the startup action for a suite that includes the test.”

Fortunately, if you’re not entirely sure about how your test is going to work, you can click the Debug button and get details of the execution steps of the test or, in this case…


So, it would appear that the solution to this particular problem is to wrap this test in its own Test Suite.
This way, we can run the Startup Processes at the Suite level instead of the Test level to ensure that they execute before the Dynamic Values Query.

So, I’m going to create a suite called manage_tournaments.remove_tournament_ws – for Wrapper Suite.
As well as the Startup Processes, I’ll also move the Teardown Process from the test to this Suite.
Then, I’ll allocate the test to the Suite.

Creating the Wrapper Suite

This is the same process for creating any other Test Suite – i.e. Go to the Unit Test Navigator, right-click on the Suites node and select Add Suite… from the menu.


If we now bring up our new Suite, we can add the Startup and Teardown processes in the same way as we do for a test (i.e. hit the Plus button and away you go).
Once that’s done, we need to assign a test to the Suite.
Once again Hit the Plus button, this time in the Test or Suite Section to bring up the Add to Suite dialogue :


Select the test we want to add, in our case, MANAGE_TOURNAMENTS.REMOVE_TOURNAMENT_1 and make sure that the Run Test Startups and Run Test Teardowns are unchecked :

Click OK and…


Even though we’ve specified that the Startup and Teardown Processes should not be executed when the test is run within the Suite, it’s probably a good idea to go back and remove them, if only to save much confusion later on.

Anyhow, when we now execute the suite we can see that the results are what we’re expecting and that, reassuringly, the generated synthetic key value ( I_ID) is being passed in :


Running Startup Processes in the Suite and the Test

To make sure that the Foreign Key from TOURNAMENT_TEAMS to TOURNAMENTS is working, we need to insert a TOURNAMENT_TEAMS record for the appropriate TOURNAMENT.ID as part of the Setup Process.

As with the previous tests, we’re going to need to include this in the Wrapper Suite we’ve just created so that the Dynamic Values Query to get the ID value works.

Hang on, let’s consider that decision for a moment.

It is true that the second test will require the same Startup Processes that we have in our existing Wrapper Suite for the first test. It will also need these Startup Processes to be executed in a Wrapper Suite as it needs to have access to the TOURNAMENTS.ID value in a Dynamic Values Query.

To a programmer, it’s logical therefore that the second test should be allocated to the same Test Suite as the first as the code has already been written there and there’s absolutely no need to go duplicating effort ( even if it is mainly just importing stuff from the Library).

Of course, we will need to “move things about a bit” to make sure that both tests can run properly within the same suite. For example, we need to perform the “successful” delete test last as the test for an Exception is relying on the record to be deleted…er…not being deleted when it runs.

To a tester, things may appear a little different. One of the principles of Unit Testing is to make sure that, as far as possible, tests can be run independently of each other.

It is for this reason that you should give serious consideration to creating a separate Wrapper Suite for our second test.

The alternative, as I’m about to demonstrate, gets a bit messy…

So for our new test, as well as the Setup Processes in the Suite, we’ve also included one in the test for the addition of the TOURNAMENT_TEAMS record.
The creation of the TOURNAMENT_TEAMS record needs to remain in the Test Startup Process rather than in the Suite as it’s only relevant to this test and not to all tests in the Suite. However, as the TOURNAMENT record we’re looking for will definitely have been created by the Wrapper Suite Startup Processes before the Test Startup Process fires, this should not be a problem.

So, the main differences between this test – MANAGE_TOURNAMENTS.REMOVE_TOURNAMENT_2 – and its predecessor are simply that we are expecting this test to error with ORA-2292 – “Integrity constraint violated” – and that we now include the following Startup Process code to create the TOURNAMENT_TEAMS record :

declare
    l_id tournaments.id%type;
begin
    select id
    into l_id
    from tournaments
    where comp_code = 'UT1'
    and year_completed = 1916
    and host_nation = 'ARGENTINA';

    merge into tournament_teams
    using dual
    on ( tourn_id = l_id
        and team_name = 'URUGUAY')
    when not matched then
        insert( tourn_id, team_name, group_stage)
        values(l_id, 'URUGUAY', null);
end;

Now things start to get a bit complicated. In order to make sure that the test for a legitimate delete does not fail, we need to “teardown” the child record in TOURNAMENT_TEAMS that we created in our Startup Process. Well, no problem…except that SUT does not appear to allow Variable Substitution syntax to be used in a Teardown Process.
Therefore, we need to indulge in a little light hacking and put the following code in a Validation Action in our test :

declare
    l_id tournament_teams.tourn_id%type := :I_ID;
begin
    delete from tournament_teams
    where tourn_id = l_id
    and team_name = 'URUGUAY';
end;

This time, when we add the test to the Wrapper Suite, we make sure that the Test Startups are run :


Finally, we need to make sure that our new test runs first in the suite. In the Test Suite listing, click on the Test name then click the blue up arrow…


…until our test is at the top of the pile…


Once we’re happy with our tests, we can then fix the application code :

procedure remove_tournament( i_id tournaments.id%type)
is
begin
    delete from tournaments
    where id = i_id;
end remove_tournament;

…and run our wrapper suite to make sure everything passes…


Finally, we need to add the manage_tournaments.remove_tournaments_ws suite to the Suite we have for all the tests for the MANAGE_TOURNAMENTS package.
To do this, go to the Unit Test Navigator and expand the MANAGE_TOURNAMENTS suite.
Then, right-click the Suites node and select Add Suite…


Now select the manage_tournaments.remove_tournaments_ws suite from the list …


…and click OK.

Near enough is good enough ? Finding a key value generated during a Startup Process

The other record deletion story we have concerns COMPETITIONS records.

The procedure we’ll be testing is in the MANAGE_COMPETITIONS package :

procedure remove_competition( i_code competitions.comp_code%type) is
begin
    delete from competitions
    where comp_code = i_code;
end remove_competition;

The acceptance criteria, and indeed, the functionality, that we’re testing here is very similar to our previous User Story.
The Acceptance Criteria are :

  • A competition can be deleted
  • A competition cannot be deleted if a tournament exists for it

To make sure that the test to Delete a COMPETITION is self-contained, we need to make sure that the record we are deleting has no child records.
The easiest way to do this is to create the record as part of the Startup Process.
Obviously, this will need to be referenced by a Dynamic Values Query and therefore this code will need to run in another Wrapper Suite.

Once again, I’m using the GET_NEW_COMP_CODE function I created earlier. Yes, that one that you weren’t sure about. The one that you’re probably still not sure about. The Startup Process in my Wrapper Suite will be User PL/Sql Code :

    declare
        l_code competitions.comp_code%type;
    begin
        l_code := test_repo.get_new_comp_code;
        insert into competitions(comp_code, comp_name, description)
        values( l_code, 'Test', null);
    end;

The next step may well be a bit tricky – in the Dynamic Values Query we use to determine the parameters to pass to the procedure, we need to find the COMP_CODE created in the Startup Process.
Now, we can do something like this…

select 'UT'
    || to_char(substr(test_repo.get_new_comp_code, regexp_instr( test_repo.get_new_comp_code, '[[:digit:]]')) -1) as I_CODE
from dual;

…but if the table has a change made to it in another session in the interval between our Startup Process and our Dynamic Values Query executing then we may well end up using an incorrect COMP_CODE value.

Let’s stop and think for a moment.
What we are writing here is not application code that may be executed concurrently by multiple users. We are writing Unit Tests.
Therefore, whilst this potential inexactitude would be problematic within the core Application Codebase, it’s not so much of an issue for a Unit Test.
Remember, the Unit tests will probably be run on a Development Environment with few users ( i.e. Developers) connected. They may also be run on a Continuous Integration Environment, in which case they are likely to be the only thing running on the database.
OK, so I could do something clever with the COMP_CODE value used in the Startup Process being assigned to a package variable/temporary table/whatever for reference by later testing steps in the same session, but I’m really not sure I need to go to all that effort right now.
You may well disagree with this approach, but as I’m the one at the keyboard right now, we’re pressing on…
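For the record, that "something clever" might look like the sketch below. Note that this is purely illustrative – the TEST_STATE package and its G_COMP_CODE variable are my own invention, not part of SUT – and it only works because the Startup Process and the Dynamic Values Query run in the same session :

```sql
-- Hypothetical helper package to remember the code generated during Startup
create or replace package test_state as
    g_comp_code competitions.comp_code%type;
end test_state;
/

-- The Startup Process stores the value it uses...
declare
    l_code competitions.comp_code%type;
begin
    l_code := test_repo.get_new_comp_code;
    test_state.g_comp_code := l_code;
    insert into competitions(comp_code, comp_name, description)
    values( l_code, 'Test', null);
end;
/

-- ...and the Dynamic Values Query simply reads it back
select test_state.g_comp_code as I_CODE from dual;
```

Because package variables are session-scoped, this sidesteps the race condition entirely without touching any tables.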

The validation code for this test will be a Query returning no row(s) :

select null
from competitions
where comp_code = '{I_CODE}'

The final Test (inside its wrapper suite), looks like this :


By contrast, making sure that we can’t delete a COMPETITIONS record which has TOURNAMENTS records associated with it is pretty straightforward.
We simply use the MERGE statements we’ve already added to the library to make sure we have a COMP_CODE UT1 and test against that.
As we know the value that we want to pass in ahead of time, we don’t even need a Dynamic Values Query. Therefore, we don’t need another Wrapper Suite.
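For reference, the library MERGE that guarantees the existence of the UT1 COMPETITIONS record might look something like this (the COMP_NAME and DESCRIPTION values here are illustrative – only the COMP_CODE matters for the test) :

```sql
merge into competitions
using dual
on ( comp_code = 'UT1')
when not matched then
    insert( comp_code, comp_name, description)
    values( 'UT1', 'Unit Test Competition', null);
```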

The test ends up looking like this :

Having checked that they work as expected, I’ve added the second test and the wrapper suite for the first test to the MANAGE_COMPETITIONS suite.
Together with our earlier tests for this package, the Suite as a whole now looks like this :


Deletion Tests Summary

By using the ability to define Startup (and Teardown) Processes at Suite level, we can workaround some of the limitations of Dynamic Values Queries.
Additionally, this property of Suites offers some additional flexibility within SUT.
This does mean that some test code may end up in parts of the test structure where you would not normally expect to find them.


There doesn’t seem to be anything too different about the way SUT lets you test Update operations, except for the facility to have multiple implementations of a single test. There’s an example of this in the Testing Tutorial in the SQLDeveloper Help. Alternatively…

Multiple Implementations of the same test

We’ve got a Story about updating TOURNAMENTS records.

The Acceptance Criteria are :

  • Change the number of teams taking part in a tournament
  • Change the year a tournament started
  • The year a tournament started cannot be after the year a tournament finished

The procedure that we’re testing is in the MANAGE_TOURNAMENTS package :

procedure edit_tournament(
    i_id tournaments.id%type,
    i_teams tournaments.number_of_teams%type default null,
    i_year_start tournaments.year_started%type default null)
is
begin
    update tournaments
    set number_of_teams = nvl(i_teams, number_of_teams),
        year_started = nvl(i_year_start, year_started)
    where id = i_id;
end edit_tournament;

We’ve also added a check constraint to the table :

alter table tournaments add constraint chk_end_after_start
    check(nvl(year_started, year_completed) <= year_completed)

Once again, as the procedure we’re testing requires an ID value that we may or may not be creating at runtime, we’ll be needing a Wrapper Suite to feed a Dynamic Values Query to generate the TOURNAMENTS.ID value we pass into the procedure we’re testing.

Once we’ve got our Suite – which I’ve called manage_tournaments.edit_tournament_ws, we can start looking at the Test.

The Dynamic Value Query for the first test is :

select id as I_ID,
    5 as I_TEAMS,
    null as I_YEAR_START
from tournaments
where comp_code = 'UT1'
and year_completed = 1916
and host_nation = 'ARGENTINA'

I’ve published this to the Library as edit_tourn_params_ut1 as we’re going to need variations of it shortly.
The Expected Result is SUCCESS.

The Process Validation is a Boolean Function, which I’m adding to the library as verify_edit_tournament :

    declare
        l_id tournaments.id%type := :I_ID;
        l_teams tournaments.number_of_teams%type := :I_TEAMS;
        l_year_start tournaments.year_started%type := :I_YEAR_START;
        l_count pls_integer;
    begin
        select 1
        into l_count
        from tournaments
        where id = l_id
        and number_of_teams = nvl(l_teams, number_of_teams)
        and nvl(year_started, year_completed) = coalesce(l_year_start, year_started, year_completed);

        return l_count = 1;
    end;

If we expand our new test in the Unit Test Navigator we can see that we have something called Test Implementation 1 under it.


Each test can have multiple implementations, a feature we’re going to make the most of for the User Acceptance Criteria we’re dealing with now.
First thing to do then, is to rename Test Implementation 1 to something a bit more meaningful.

To do this, right-click on the Implementation and select Rename Implementation…


Then enter the new name and hit OK


Now we can create a second implementation by right-clicking the Test itself and selecting Add Implementation…


This time, I’ve called the implementation update_year_started.

We can now see that the new Implementation is in focus in the test, but that the Execution and Validation Processes have not been populated.

I’ve copied in the Dynamic Values Query from the Library and made the necessary changes for this implementation…


…and subscribed to the verify_edit_tournament Boolean Function we created in the first implementation.

The third Implementation is called update_start_after_end and is the same as the second, except I’m passing in a YEAR_STARTED value that is later than the YEAR_COMPLETED value for the TOURNAMENTS record.
The Expected Result is Exception with an ORA-02290 Check Constraint Violated error so there’s no need to include the validation function.

One point to note here is that the Implementations seem to execute alphabetically by name and I don’t see any way of changing this manually.
This is not an issue in this case, when each test is reasonably independent, but it’s worth bearing in mind.

Once all of the application code is in place, the Test Suite result looks like this :


Testing an In/Out SYS_REFCURSOR

Yes, we’re at the point that you may well have been dreading.
The Ref Cursor has been such a wonderfully useful addition to the PL/SQL language. It makes passing data between PL/SQL and programs written in other languages so much easier.
It is ironic therefore, that getting stuff out of a Ref Cursor is often quite painful when using a PL/SQL client.

Given this, one option we might consider when testing Ref Cursors could be to use whatever test framework is being employed to test the client code calling our PL/SQL API. However, that would be to pass up the opportunity to use Wouter’s clever little trick.

So, with nothing up my sleeves…

Is this a Ref Cursor that I see before me ?

The Acceptance Criterion for our User Story is that the application lists all tournaments in the system for the specified competition.

The procedure we need to test is in the MANAGE_TOURNAMENTS package and looks like this :

procedure list_tournaments(
    i_comp_code tournaments.comp_code%type,
    io_tourn_list in out SYS_REFCURSOR)
is
begin
    open io_tourn_list for
        select id, comp_code, year_completed, host_nation, year_started, number_of_teams
        from tournaments
        where comp_code = i_comp_code;
end list_tournaments;

The first clue to the fact that this test will be a bit different from normal comes when you select this procedure right at the start of the Create Test Wizard.
Immediately you will get :


In a way this is reassuring. SUT recognizes that we need to handle a REF CURSOR. However, the template Query it provides for the Dynamic Values Query we need to use appears to raise more questions than it answers…

select ? as I_COMP_CODE,
    ? as IO_TOURN_LIST,
    ? as IO_TOURN_LIST$
from ?
where ?

Now, there may well be a way of getting this to work as intended, but I’ve not been able to find out what it is.
What we can do instead is a bit of light cheating…

Wouter’s Method

To start with, we need to create a procedure in the TEST_REPO schema. This will act as a dummy Test Execution so we can do the real testing in the Validation Process.

Still don’t like me creating my own objects in the Repository ? Well, fair enough, but in this case, I can’t see any other option.
The procedure then is :

create or replace procedure this_is_not_a_cursor
as
-- Been listening to a bit of Public Image Ltd, hence the name of this proc...
begin
    null;
end this_is_not_a_cursor;
/

…look, you can put it in a package if that’ll make you feel any better about it.

Anyway, we need to grant execute permissions to the SUT roles :

grant execute on this_is_not_a_cursor to ut_repo_user;
grant execute on this_is_not_a_cursor to ut_repo_administrator;

Now, let’s try creating our test again. We’re using the FOOTIE connection as usual. However, this time, we’ll be selecting a program from the Other Users node…


The test name still follows our naming convention -i.e. MANAGE_TOURNAMENTS.LIST_TOURNAMENTS_1.

The Startup Process makes sure that we have records to query :

    declare
        procedure ins( i_year tournaments.year_completed%type,
            i_host tournaments.host_nation%type,
            i_teams tournaments.number_of_teams%type)
        is
        begin
            merge into tournaments
            using dual
            on ( comp_code = 'WC'
                and year_completed = i_year
                and host_nation = i_host)
            when not matched then
                insert ( id, comp_code, year_completed,
                    host_nation, year_started, number_of_teams)
                values( tourn_id_seq.nextval, 'WC', i_year,
                    i_host, null, i_teams);
        end ins;
    begin
        merge into competitions
        using dual
        on ( comp_code = 'WC')
        when not matched then
            insert( comp_code, comp_name, description)
            values('WC', 'World Cup', 'FIFA World Cup');

        ins(1930, 'URUGUAY', 13);
        ins(1934, 'ITALY', 16);
        ins(1938, 'FRANCE', 16);
        ins(1950, 'BRAZIL', 13);
    end;


We don’t need to specify any input parameters for the Execution Step of our test so we can skip straight on to Process Validation. Here we define some User PL/Sql Code, where all the fun happens…

    declare
        l_rc sys_refcursor;
        rec_tourn tournaments%rowtype;
        l_count pls_integer := 0;
        l_exp_count pls_integer;
    begin
        -- The "real" test...
        manage_tournaments.list_tournaments('WC', l_rc);
        loop
            fetch l_rc into rec_tourn;
            exit when l_rc%notfound;
            l_count := l_count + 1;
        end loop;
        close l_rc;
        -- confirm that the correct number of records have been retrieved
        select count(*) into l_exp_count
        from tournaments
        where comp_code = 'WC';

        if l_count != l_exp_count then
            raise_application_error(-20900, 'Number of records in ref cursor '||l_count||' does not match expected count of '||l_exp_count);
        end if;
    end;

So, as we can use a PL/SQL block in Process Validation, we can define our SYS_REFCURSOR variable and execute the call to our procedure here.

Having added our “standard” Teardown, we’re ready to test.

The result….


The main drawback with this approach is that you now have a “customised” repository and will have to cope with the extra work around administration and deployment of such objects. On the plus side, you can test Ref Cursor stuff.

Startup/Teardown using Table/Row Copy

Sooner or later you will encounter a testing scenario where a simple rollback just won’t do.
The next User Story is just such an example…

Bulk Upload Competitions – Using Table Copy for testing and rollback

This Story is intended to replicate the sort of ETL process that is quite common, especially in a Data Warehouse environment.
The scenario here is that you receive a delimited file containing data that needs to be loaded into your application.
The load needs to be permissive – i.e. you don’t want to fail the entire load if only a few records error.
The file format is validated as being what is expected by loading it into an external table.
The load process then uses LOG ERRORS to upload all the records it possibly can, whilst keeping track of those records that failed by dumping them into an Error table.
The thing about LOG ERRORS is that it runs an Autonomous Transaction in the background.
Therefore, even if you issue a rollback after the load, any records written to the error table will be persisted.
In light of this, we’re going to need to use something else for our Teardown process.
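To illustrate the point, here’s a quick sketch of the behaviour, assuming the COMPETITIONS_XT external table and ERR$_COMPETITIONS error table we’re using for this story :

```sql
insert into competitions( comp_code, comp_name, description)
    select comp_code, comp_name, description
    from competitions_xt
    log errors reject limit unlimited;

rollback;

-- The COMPETITIONS rows are gone, but the error rows remain -
-- DBMS_ERRLOG writes them in an autonomous transaction
select count(*) from err$_competitions;
```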

The Data Model

Just to quickly recap, we already have an external table :

create table competitions_xt(
    comp_code varchar2(5),
    comp_name varchar2(50),
    description varchar2(4000))
    organization external(
        type oracle_loader
        default directory my_files
        access parameters(
            records delimited by newline
            badfile 'competitions.bad'
            logfile 'competitions.log'
            skip 1
            fields terminated by ','(
                comp_code char(5),
                comp_name char(50),
                description char(4000)))
        location('competitions.csv'))
reject limit unlimited
We also have a csv file – competitions.csv with the data to load (including a duplicate record) :

HIC,Home International Championship, British Home International Championship
CA,Copa America,Copa America (South American Championship until 1975)
OLY,Olympic Football Tournament,The Olympics
WC,World Cup,The FIFA World Cup
CEIC,Central European International Cup,Central European International Cup - a forerunner to the European Championships
EURO,European Championship,UEFA European Championship
HIC,Home International Championship, British Home International Championship

We have an error table – ERR$_COMPETITIONS – that’s been created by :


…and we have a stub we’ll be using to test the load (in the MANAGE_COMPETITIONS package) :

    procedure upload_competitions
    is
    begin
        null;
    end upload_competitions;

When we create the test, the Startup Processes need to backup both the COMPETITIONS table and the ERR$_COMPETITIONS table.
Creating the first Startup Process, we select Table or Row Copy from the drop-down :


In the Window that pops up, the Source Table is the table we want to copy.
The Target Table is the temporary table that SQLDeveloper is going to create as a copy of the Source Table.
Note that the Target Table defaults to the same name irrespective of how many Startup Processes we specify.
For our first Startup Process, things look like this :


Notice that the Generated Query field updates as you enter the name of the source table. If you want to make sure that this Query is going to work at runtime, you can hit the Check button and (hopefully) be reassured with the message :


So, we’ve got a backup for the COMPETITIONS table, now we need one for the error table.
This is pretty similar to the first Startup Process except that this time we rename the Temporary Table to TMP$MANAGE_COMPETITIONS.UPLERR :


As with the startup, there are two validation processes required. Actually, you could probably do it all in one but that would be to pass up the opportunity to demonstrate both at work.

Both of these will be User PL/SQL Code blocks. First off, check that we’ve loaded the correct number of rows :

    declare
        l_count pls_integer;
        wrong_count exception;
    begin
        select count(comp.comp_code)
        into l_count
        from competitions comp
        inner join competitions_xt cxt
        on comp.comp_code = cxt.comp_code;

        if l_count != 7 then
            raise wrong_count;
        end if;
    end;

…and then make sure that we have the correct number of error records ….

    declare
        l_count pls_integer;
        wrong_count exception;
    begin
        select count(*)
        into l_count
        from err$_competitions;

        if l_count != 1 then
            raise wrong_count;
        end if;
    end;

Finally, for the Teardown, we need to restore our tables to the state prior to the test execution.

This time, the process type from the drop-down is Table or Row Restore

Note that the check-box to drop the temporary table comes pre-checked…


For the second Teardown process, to restore the Error table to its former state, we need to do a bit more typing.
This is because SQLDeveloper defaults to the same values for each Teardown Process.
So, we need to specify that our Target Table is ERR$_COMPETITIONS and our Source Table is “TMP$MANAGE_COMPETITIONS.UPLERR” :


After all that, we can see that we have two Startup Processes, two Validation Processes, and two Teardown processes in our new test :


After confirming that the test fails as expected…


…we update the application code…

procedure upload_competitions
is
begin
    insert into competitions( comp_code, comp_name, description)
        select comp_code, comp_name, description
        from competitions_xt
        log errors reject limit unlimited;
end upload_competitions;

…and re-run the test…


Sharing your Suites – version control and code promotion for Tests

If you plan to use your SUT tests in anything other than a single repository, chances are that you’ll want to be able to :

  • transfer them between environments
  • put them under some form of source control

Well, you’re in luck. Not only can you export Tests or Suites to an xml file on disk, but an export will also automatically include any Subscribed objects from the Library.
To demonstrate, right-click on the Suite we want to export :


…and select the file location…


…and SUT will anticipate your every need…


Obviously, you’ll need to make sure you deploy any custom Stored Program Units upon which the exported objects are dependent.

To import a file into a repository, you can use the Main Tools/Unit Test menu :

…which allows you to choose the file to import, as well as the option of whether or not to overwrite an object of the same name that already exists in the repository :


Overall, SQLDeveloper Unit Testing provides a number of helpful features to reduce the burden of writing and maintaining tests.
Notable plus points are :

  • being able to save code in a Library to promote re-use
  • table backup and restore functionality for test Startups and Teardowns
  • the seamless way that exceptions can be tested down to the level of the error code
  • the fact that test results are retained and collated in the tool

Being declarative in nature, SUT provides a common structure for Unit Tests. By the same token, it does have some limitations.
It is possible to overcome some of these limitations by adding custom database objects to the Repository. Some consideration needs to be given as to what extent you want to do this.

I will be comparing SUT with other PL/SQL testing frameworks in a future post. Before that, I need to evaluate some other frameworks.
The next one on my list is utPLSQL…

Filed under: PL/SQL, SQLDeveloper Tagged: Create Unit Test Wizard, Dynamic Value Queries, exceptions, Export, Import, Library, manage users, Purge Results, Repository, SQLDeveloper Unit Testing, Startup Process, sys_refcursor, Table or Row Copy, Teardown Process, Test Implementations, Test Suites, Testing in/out ref cursors, ut_repo_administrator role, ut_repo_user role, Variable Substitution, wrapper suites
