- The people here are almost desperate to provide their students with a 21st century experience. They feel the need to have their student systems match the excellence found in their academics and research.
- Many of the schools struggle to implement well. And it's not that they don't understand the technology. The common theme seems to be one of organizational change management.
- While total cost of ownership is obviously a concern, system quality seems to be an even greater concern with these folks...a positive in my book.
- Prior point notwithstanding, a huge issue for many international schools seems to be infrastructure reliability. Things many of us take for granted (dependable electricity, broadband internet connections, physical hardware security) are huge issues in some parts of the world.
When performance is measured, performance improves. When performance is measured and reported, the rate of performance accelerates.
(LDS Conference Report, October 1970, p107)
Like everything in life, a SOA Suite installation that is monitored and tracked has a much better chance of performing well than one that is not measured. With that in mind I came up with a tool to allow the measurement of the impact of configuration changes on database usage in SOA Suite. This tool can be used to assess the impact of different configurations on both database growth and database performance, helping to decide which optimizations offer real benefit to the composite under test.

Basic Approach
The basic approach of the tool is to take a snapshot of the number of rows in the SOA tables before executing a composite. The composite is then executed. After the composite has completed another snapshot is taken of the SOA tables. This is illustrated in the diagram below:
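Assuming the snapshot procedure described later in the post (soa_snapshot.take_soa_snapshot, with phase 1 taken before execution and phase 2 after), the before/execute/after pattern might be driven like this; the test name 'AsyncTest1' is just an example:

```sql
BEGIN
  -- phase 1: record the row count of every SOA table before the test
  soa_snapshot.take_soa_snapshot('AsyncTest1', 1);
END;
/

-- ... invoke the composite under test here (e.g. from SoapUI) ...

BEGIN
  -- phase 2: record the row counts again after the composite completes
  soa_snapshot.take_soa_snapshot('AsyncTest1', 2);
END;
/
```

The delta views described below then report the difference between the two phases for the named test.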
An example of the data collected by the tool is shown below:

Test Name    Total Tables Changed  Total Rows Added  Notes
AsyncTest1   13                    15                Async interaction with simple SOA composite, one retry to send response.
AsyncTest2   12                    13                Async interaction with simple SOA composite, no retries on sending response.
AsyncTest3   12                    13                Async interaction with simple SOA composite, no callback address provided.
OneWayTest1  12                    13                One-Way interaction with simple SOA composite.
SyncTest1    7                     7                 Sync interaction with simple SOA composite.
Note that the first three columns are provided by the tool; the fourth column is just an aide-mémoire to identify what the test actually did. The tool also allows us to drill into the data to get a better look at what is actually changing, as shown in the table below:

Test Name   Table Name            Rows Added
AsyncTest1  AUDIT_COUNTER         1
AsyncTest1  AUDIT_DETAILS         1
AsyncTest1  AUDIT_TRAIL           2
AsyncTest1  COMPOSITE_INSTANCE    1
AsyncTest1  CUBE_INSTANCE         1
AsyncTest1  CUBE_SCOPE            1
AsyncTest1  DLV_MESSAGE           1
AsyncTest1  DOCUMENT_CI_REF       1
AsyncTest1  DOCUMENT_DLV_MSG_REF  1
AsyncTest1  HEADERS_PROPERTIES    1
AsyncTest1  INSTANCE_PAYLOAD      1
AsyncTest1  WORK_ITEM             1
AsyncTest1  XML_DOCUMENT          2
Here we have drilled into the test case with the retry of the callback to see what tables are actually being written to.
Finally, we can compare two tests to see the difference in the number of rows written and the tables updated, as shown below:

Test Name   Base Test Name  Table Name   Row Difference
AsyncTest1  AsyncTest2      AUDIT_TRAIL  1
Here are the additional tables referenced by this test:

Test Name   Base Test Name  Additional Table Name  Rows Added
AsyncTest1  AsyncTest2      WORK_ITEM              1

How it Works
I created a database stored procedure, soa_snapshot.take_soa_snapshot(test_name, phase), that queries all the SOA tables and records the number of rows in each one. By running the stored procedure before and after the execution of a composite, we can capture the number of rows in the SOA database before and after the composite executes. I then created a view that shows the difference in the number of rows before and after composite execution. This view has a number of sub-views that allow us to query specific items. The schema is shown below:
The different tables and views are:
- Used to track the number of rows in the SOA schema; each test case has two or more phases. Usually phase 1 is before execution and phase 2 is after execution.
- This is only used by the stored procedure and the views.
- Used to track changes in the number of rows in the SOA database between phases of a test case. This is a view on CHANGE_TABLE. All other views are based on this view.
- Provides the number of rows changed in each table.
- Provides a summary of total rows and tables changed.
- Provides a summary of differences in rows updated between test cases.
- Provides a summary of the extra tables and rows used by a test case.
- This view makes use of a session context, soa_ctx, which holds the test case name and the baseline test case name. This context is initialized by calling the stored procedure soa_ctx_pkg.set(testCase, baseTestCase).
I created a web service wrapper to the take_soa_snapshot procedure so that I could use SoapUI to perform the tests.

Sample Output

How many rows and tables did a particular test use?
Here we can see how many rows in how many tables changed as a result of running a test:
-- Display the total number of rows and tables changed for each test
select * from summary_delta_view
order by test_name;
TEST_NAME TOTALDELTAROWS TOTALDELTASIZE TOTALTABLES
-------------------- -------------- -------------- -----------
AsyncTest1 15 0 13
AsyncTest1noCCIS 15 0 13
AsyncTest1off 8 0 8
AsyncTest1prod 13 0 12
AsyncTest2 13 0 12
AsyncTest2noCCIS 13 0 12
AsyncTest2off 7 0 7
AsyncTest2prod 11 0 11
AsyncTest3 13 0 12
AsyncTest3noCCIS 13 65536 12
AsyncTest3off 7 0 7
AsyncTest3prod 11 0 11
OneWayTest1 13 0 12
OneWayTest1noCCI 13 65536 12
OneWayTest1off 7 0 7
OneWayTest1prod 11 0 11
SyncTest1 7 0 7
SyncTest1noCCIS 7 0 7
SyncTest1off 2 0 2
SyncTest1prod 5 0 5
20 rows selected

Which tables grew during a test?
Here for a given test we can see which tables had rows inserted.
-- Display the tables which grew and show the number of rows they grew by
select * from simple_delta_view
order by table_name;
TEST_NAME TABLE_NAME DELTAROWS DELTASIZE
-------------------- ------------------------------ ---------- ----------
AsyncTest1 AUDIT_COUNTER 1 0
AsyncTest1 AUDIT_DETAILS 1 0
AsyncTest1 AUDIT_TRAIL 2 0
AsyncTest1 COMPOSITE_INSTANCE 1 0
AsyncTest1 CUBE_INSTANCE 1 0
AsyncTest1 CUBE_SCOPE 1 0
AsyncTest1 DLV_MESSAGE 1 0
AsyncTest1 DOCUMENT_CI_REF 1 0
AsyncTest1 DOCUMENT_DLV_MSG_REF 1 0
AsyncTest1 HEADERS_PROPERTIES 1 0
AsyncTest1 INSTANCE_PAYLOAD 1 0
AsyncTest1 WORK_ITEM 1 0
AsyncTest1 XML_DOCUMENT 2 0
13 rows selected
Here we can see the differences in rows for two tests.
-- Return difference in rows updated (test1)
select * from different_rows_view
where test1='AsyncTest1' and test2='AsyncTest2';
TEST1 TEST2 TABLE_NAME DELTA
-------------------- -------------------- ------------------------------ ----------
AsyncTest1 AsyncTest2 AUDIT_TRAIL 1
Here we can see tables that were used by one test but not by the other test.
-- Register base test case for use in extra_tables_view
-- First parameter (test1) is test we expect to have extra rows/tables
begin soa_ctx_pkg.set('AsyncTest1', 'AsyncTest2'); end;
anonymous block completed
-- Return additional tables used by test1
column TEST2 FORMAT A20
select * from extra_tables_view;
TEST1 TEST2 TABLE_NAME DELTAROWS
-------------------- -------------------- ------------------------------ ----------
AsyncTest1 AsyncTest2 WORK_ITEM 1
I used the tool to find out the following. All tests were run using SOA Suite 18.104.22.168.
The following is based on a very simple composite as shown below:
Each BPEL process is basically the same as the one shown below:
Impact of Fault Policy Retry Being Executed Once

Setting    Total Rows Written  Total Tables Updated
No Retry   13                  12
One Retry  15                  13
When a fault policy causes a retry then the following additional database rows are written:

Table Name   Number of Rows
AUDIT_TRAIL  1
WORK_ITEM    1

Impact of Setting Audit Level = Development Instead of Production

Setting      Total Rows Written  Total Tables Updated
Development  13                  12
Production   11                  11
When the audit level is set at development instead of production then the following additional database rows are written:

Table Name   Number of Rows
AUDIT_TRAIL  1
WORK_ITEM    1

Impact of Setting Audit Level = Production Instead of Off

Setting     Total Rows Written  Total Tables Updated
Production  11                  11
Off         7                   7
When the audit level is set at production rather than off then the following additional database rows are written:

Table Name          Number of Rows
AUDIT_COUNTER       1
AUDIT_DETAILS       1
AUDIT_TRAIL         1
COMPOSITE_INSTANCE  1

Impact of Setting Capture Composite Instance State

Setting  Total Rows Written  Total Tables Updated
On       13                  12
Off      13                  12
When capture composite instance state is on rather than off then no additional database rows are written. Note, however, that other activities occur when composite instance state is captured.

Impact of Setting oneWayDeliveryPolicy = async.cache or sync

Setting        Total Rows Written  Total Tables Updated
async.persist  13                  12
async.cache    7                   7
sync           7                   7
When choosing async.persist (the default) instead of sync or async.cache then the following additional database rows are written:

Table Name            Number of Rows
AUDIT_DETAILS         1
DLV_MESSAGE           1
DOCUMENT_CI_REF       1
DOCUMENT_DLV_MSG_REF  1
HEADERS_PROPERTIES    1
XML_DOCUMENT          1
As you would expect, the sync mode behaves just like a regular synchronous (request/reply) interaction and creates the same number of rows in the database. The async.cache mode also creates the same number of rows as a sync interaction because it stores state in memory and provides no restart guarantee.

Caveats & Warnings
The results above are based on a trivial test case. The numbers will be different for bigger and more complex composites. However, by taking snapshots of different configurations you can produce the numbers that apply to your own composites.
The capture procedure supports multiple steps in a test case, but the views only support two snapshots per test case.

Code Download
The sample project I used is available here.
The scripts used to create the user (createUser.sql), create the schema (createSchema.sql) and sample queries (TableCardinality.sql) are available here.
The Web Service wrapper to the capture state stored procedure is available here.
The sample SoapUI project that I used to take a snapshot, perform the test and take a second snapshot is available here.
This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Just a quick reminder that the second Oracle Midlands event will be taking place on 25th March 2014.
At this event you will get a chance to see Graham Wood (my Dad) speaking about “Advanced ASH Architecture and Usage” and Nikolay Manchev speaking about “Using Clusterware 11g to protect single-instance databases”.
These events live or die based on your support. Please spread the word and come along. See you there!
Tim…

Oracle Midlands : Event #2 was first posted on March 9, 2014 at 4:34 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
Once upon a time, working on an IT project in a large organisation meant reams of documentation, tangles of red-tape, and then burning the candle at both ends to get everything finished on time.
Then, someone discovered this Agile thing that dispensed with all of that.
The upshot? Well, the documentation has been reduced (although the red-tape is still problematic).
Many large organisations now adopt an approach that is almost – but not completely – unlike SCRUM.
It is at the business end of such a project that I now find myself…burning the candle at both ends.
To pile on the excitement, Milton Keynes’ own Welsh enclave has become increasingly vociferous in recent days.
The only SCRUM Deb is interested in is that of the Welsh Rugby Team grinding remorselessly over English bodies come Sunday.
She keeps giving me oh-so-subtle reminders of the result of last year’s game, such as when picking lottery numbers :
“Hmmm, three is supposed to be lucky…but not if you’re English. How about THIRTY !”
“But you’re married to an Englishman”, I pointed out during one of her more nationalistic moments.
“Which makes him half Welsh !”, came the retort.
At this point, I decided that discretion was the better part of logic and let the matter drop.
As a result of all this frenzied activity and feverish atmosphere, sometimes I’ve not been quite at the top of my game.
One particularly embarrassing mishap occurred late one evening and involved PL/SQL Tables – or Associative Arrays as they’re called these days – and the dreaded ORA-06531: Reference to uninitialized collection.
This particular post, therefore, is mainly a reminder to myself of how to initialize and (just as importantly) clear down a Collection, to prevent mysteriously missing or, just as problematic, additional records (as well as less mysterious runtime errors).

The Insanity Check
At some point, it would be nice to go away somewhere for a weekend of culture (which may or may not include extensive sampling of local alcoholic beverages).
I’ve got a table containing a list of possible destinations…and whether or not they are located in a Rugby playing nation :
CREATE TABLE getaway_plans(
    city_name VARCHAR2(50),
    country_name VARCHAR2(50),
    play_rugby VARCHAR2(1)
)
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('VIENNA', 'AUSTRIA', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('ROME', 'ITALY', 'Y')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('AUCKLAND', 'NEW ZEALAND', 'Y')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('CARDIFF', 'WALES', 'Y')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('BERLIN', 'GERMANY', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('WARSAW', 'POLAND', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('HAVANA', 'CUBA', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('COPENHAGEN', 'DENMARK', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('BRUGES', 'BELGIUM', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('AMSTERDAM', 'NETHERLANDS', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('BASEL', 'SWITZERLAND', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('PERTH', 'AUSTRALIA', 'N')
/
INSERT INTO getaway_plans( city_name, country_name, play_rugby)
VALUES('NEW YORK', 'USA', 'Y')
/
COMMIT;
OK – so a “weekend away” would probably require me finally ironing out the bugs in my lottery number picker program ( it keeps selecting the wrong six numbers).
Using those whizzy associative arrays, I shall now attempt to come up with a shortlist…of places where Rugby is not widely popular.
I will concede that this could be done with far less hassle using a simple select statement. That would, however, defeat the object of the exercise…
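For the record, the set-based version (using the tables created above) really would be a single statement; the whole point of what follows is to avoid it:

```sql
-- the no-collections alternative: one insert-as-select does the whole job
INSERT INTO shortlist_cities (city_name, country_name)
SELECT city_name, country_name
FROM   getaway_plans
WHERE  play_rugby = 'N';
```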
CREATE TABLE shortlist_cities (
    city_name VARCHAR2(50),
    country_name VARCHAR2(50)
)
/
This should do the trick…
DECLARE
    CURSOR c_getaway IS
        SELECT city_name, country_name, play_rugby
        FROM getaway_plans;

    -- associative array based on the cursor
    TYPE typ_all_cities IS TABLE OF c_getaway%ROWTYPE;
    -- and another one based on the target table.
    TYPE typ_shortlist IS TABLE OF shortlist_cities%ROWTYPE;

    tbl_all_cities typ_all_cities;
    tbl_short typ_shortlist;
    l_idx PLS_INTEGER := 0;
BEGIN
    OPEN c_getaway;
    LOOP
        FETCH c_getaway BULK COLLECT INTO tbl_all_cities LIMIT 4;
        EXIT WHEN tbl_all_cities.COUNT = 0;
        DBMS_OUTPUT.PUT_LINE('Bulk Collect Loop Iteration');
        FOR i IN 1..tbl_all_cities.COUNT LOOP
            IF tbl_all_cities(i).play_rugby = 'N' THEN
                l_idx := l_idx + 1;
                tbl_short(l_idx).city_name := tbl_all_cities(i).city_name;
                tbl_short(l_idx).country_name := tbl_all_cities(i).country_name;
            END IF;
        END LOOP;
        FORALL j IN 1..tbl_short.COUNT
            INSERT INTO shortlist_cities VALUES tbl_short(j);
        l_idx := 0;
    END LOOP;
    CLOSE c_getaway;
END;
/
…Kick it off and…
DECLARE
*
ERROR at line 1:
ORA-06531: Reference to uninitialized collection
ORA-06512: at line 23

SQL>
Yep, our first stumbling block. The TBL_SHORT collection has not been initialized.
Now at this point you may be a bit stumped as there is no obvious way to initialize this collection.
The other collection in the program – TBL_ALL_CITIES seems to have been automatically initialised by the FETCH.
We’ll come back to this in a bit.
In the meantime though, just how do you initialize an Associative Array without pointing a cursor at it ?
The refreshingly simple answer is : add an index clause to the type declaration …
set serveroutput on

DECLARE
    TYPE typ_funny_shaped_balls IS TABLE OF shortlist_cities%ROWTYPE
        INDEX BY PLS_INTEGER;
    tbl_rugby typ_funny_shaped_balls;
BEGIN
    DBMS_OUTPUT.PUT_LINE('There are '||tbl_rugby.COUNT||' elements in the collection.');
END;
/
Run this and we get…
There are 0 elements in the collection.

PL/SQL procedure successfully completed.

SQL>
No nasty error when referencing the collection.
So, applying this to our code, we should be good to go, right ?
DECLARE
    CURSOR c_getaway IS
        SELECT city_name, country_name, play_rugby
        FROM getaway_plans;

    TYPE typ_all_cities IS TABLE OF c_getaway%ROWTYPE;
    TYPE typ_shortlist IS TABLE OF shortlist_cities%ROWTYPE
        INDEX BY PLS_INTEGER;

    tbl_all_cities typ_all_cities;
    tbl_short typ_shortlist;
    l_idx PLS_INTEGER := 0;
BEGIN
    OPEN c_getaway;
    LOOP
        FETCH c_getaway BULK COLLECT INTO tbl_all_cities LIMIT 4;
        EXIT WHEN tbl_all_cities.COUNT = 0;
        DBMS_OUTPUT.PUT_LINE('Bulk Collect Loop Iteration');
        FOR i IN 1..tbl_all_cities.COUNT LOOP
            IF tbl_all_cities(i).play_rugby = 'N' THEN
                l_idx := l_idx + 1;
                tbl_short(l_idx).city_name := tbl_all_cities(i).city_name;
                tbl_short(l_idx).country_name := tbl_all_cities(i).country_name;
            END IF;
        END LOOP;
        FORALL j IN 1..tbl_short.COUNT
            INSERT INTO shortlist_cities VALUES tbl_short(j);
        l_idx := 0;
    END LOOP;
    CLOSE c_getaway;
END;
/
Run this and all seems well…
SQL> @madness2
Bulk Collect Loop Iteration
Bulk Collect Loop Iteration
Bulk Collect Loop Iteration
Bulk Collect Loop Iteration

PL/SQL procedure successfully completed.

SQL>
…until you look at the resulting data…
SQL> select city_name from shortlist_cities order by 1;

CITY_NAME
--------------------------------------------------
AMSTERDAM
AMSTERDAM
BASEL
BASEL
BERLIN
BRUGES
BRUGES
COPENHAGEN
HAVANA
PERTH
PERTH

CITY_NAME
--------------------------------------------------
VIENNA
WARSAW

13 rows selected.
Hmmm…not quite what we had in mind.
The problem here is that neither array is being cleared down between loop iterations.
The results are…interesting.
Let’s try and address that…
DECLARE
    CURSOR c_getaway IS
        SELECT city_name, country_name, play_rugby
        FROM getaway_plans;

    TYPE typ_all_cities IS TABLE OF c_getaway%ROWTYPE;
    TYPE typ_shortlist IS TABLE OF shortlist_cities%ROWTYPE
        INDEX BY PLS_INTEGER;

    tbl_all_cities typ_all_cities;
    tbl_short typ_shortlist;
    tbl_ac_empty typ_all_cities;
    tbl_short_empty typ_shortlist;
    l_idx PLS_INTEGER := 0;
BEGIN
    OPEN c_getaway;
    LOOP
        FETCH c_getaway BULK COLLECT INTO tbl_all_cities LIMIT 4;
        EXIT WHEN tbl_all_cities.COUNT = 0;
        DBMS_OUTPUT.PUT_LINE('Bulk Collect Loop Iteration');
        FOR i IN 1..tbl_all_cities.COUNT LOOP
            IF tbl_all_cities(i).play_rugby = 'N' THEN
                l_idx := l_idx + 1;
                tbl_short(l_idx).city_name := tbl_all_cities(i).city_name;
                tbl_short(l_idx).country_name := tbl_all_cities(i).country_name;
            END IF;
        END LOOP;
        FORALL j IN 1..tbl_short.COUNT
            INSERT INTO shortlist_cities VALUES tbl_short(j);
        l_idx := 0;
        -- clear down the arrays between each iteration.
        tbl_all_cities := tbl_ac_empty;
        tbl_short := tbl_short_empty;
    END LOOP;
    CLOSE c_getaway;
END;
/
Here, we’ve simply assigned the arrays to an empty array of the same type at the bottom of the loop.
This seems to solve the problem…
SQL> select city_name from shortlist_cities order by 1;

CITY_NAME
--------------------------------------------------
AMSTERDAM
BASEL
BERLIN
BRUGES
COPENHAGEN
HAVANA
PERTH
VIENNA
WARSAW

9 rows selected.

SQL>
At this point, you might think that there’s still a potential issue lurking in this code. OK, we’ve initialised the tbl_short array by including the INDEX BY clause in the type definition.
We haven’t done this for the tbl_all_cities array. OK, at the moment this array is initialised by the fetch from the cursor. What would happen if the cursor fetch didn’t return any rows…
DECLARE
    CURSOR c_getaway IS
        SELECT city_name, country_name, play_rugby
        FROM getaway_plans
        WHERE play_rugby = 'X';

    TYPE typ_all_cities IS TABLE OF c_getaway%ROWTYPE;
    TYPE typ_shortlist IS TABLE OF shortlist_cities%ROWTYPE
        INDEX BY PLS_INTEGER;

    tbl_all_cities typ_all_cities;
    tbl_short typ_shortlist;
    tbl_ac_empty typ_all_cities;
    tbl_short_empty typ_shortlist;
    l_idx PLS_INTEGER := 0;
BEGIN
    OPEN c_getaway;
    LOOP
        FETCH c_getaway BULK COLLECT INTO tbl_all_cities LIMIT 4;
        EXIT WHEN tbl_all_cities.COUNT = 0;
        DBMS_OUTPUT.PUT_LINE('Bulk Collect Loop Iteration');
        FOR i IN 1..tbl_all_cities.COUNT LOOP
            IF tbl_all_cities(i).play_rugby = 'N' THEN
                l_idx := l_idx + 1;
                tbl_short(l_idx).city_name := tbl_all_cities(i).city_name;
                tbl_short(l_idx).country_name := tbl_all_cities(i).country_name;
            END IF;
        END LOOP;
        FORALL j IN 1..tbl_short.COUNT
            INSERT INTO shortlist_cities VALUES tbl_short(j);
        l_idx := 0;
        tbl_all_cities := tbl_ac_empty;
        tbl_short := tbl_short_empty;
    END LOOP;
    CLOSE c_getaway;
END;
/
…er, it still works. Yep, as long as the Associative Array is the target of a fetch before you reference it, it will be initialised.
One final point to note. Whilst it’s common practice to cleardown your Associative Arrays by simply assigning them the value of an empty array of the same type, there is another way…
set serveroutput on

DECLARE
    CURSOR c_getaway IS
        SELECT city_name, country_name, play_rugby
        FROM getaway_plans;

    TYPE typ_all_cities IS TABLE OF c_getaway%ROWTYPE;
    TYPE typ_shortlist IS TABLE OF shortlist_cities%ROWTYPE
        INDEX BY PLS_INTEGER;

    tbl_all_cities typ_all_cities;
    tbl_short typ_shortlist;
    l_idx PLS_INTEGER := 0;
BEGIN
    OPEN c_getaway;
    LOOP
        FETCH c_getaway BULK COLLECT INTO tbl_all_cities LIMIT 4;
        EXIT WHEN tbl_all_cities.COUNT = 0;
        DBMS_OUTPUT.PUT_LINE('Bulk Collect Loop Iteration');
        FOR i IN 1..tbl_all_cities.COUNT LOOP
            IF tbl_all_cities(i).play_rugby = 'N' THEN
                l_idx := l_idx + 1;
                tbl_short(l_idx).city_name := tbl_all_cities(i).city_name;
                tbl_short(l_idx).country_name := tbl_all_cities(i).country_name;
            END IF;
        END LOOP;
        FORALL j IN 1..tbl_short.COUNT
            INSERT INTO shortlist_cities VALUES tbl_short(j);
        l_idx := 0;
        -- clear down the arrays between each iteration.
        DBMS_OUTPUT.PUT_LINE('Currently '||tbl_all_cities.COUNT||' records in tbl_all_cities.');
        DBMS_OUTPUT.PUT_LINE('Currently '||tbl_short.COUNT||' records in tbl_short.');
        tbl_all_cities.DELETE;
        tbl_short.DELETE;
        DBMS_OUTPUT.PUT_LINE('Now tbl_all_cities contains '||tbl_all_cities.COUNT||' records.');
        DBMS_OUTPUT.PUT_LINE('...and tbl_short contains '||tbl_short.COUNT||' records.');
    END LOOP;
    CLOSE c_getaway;
END;
/
Simply using the DELETE method will cleardown our arrays…
Bulk Collect Loop Iteration
Currently 4 records in tbl_all_cities.
Currently 1 records in tbl_short.
Now tbl_all_cities contains 0 records.
...and tbl_short contains 0 records.
Bulk Collect Loop Iteration
Currently 4 records in tbl_all_cities.
Currently 4 records in tbl_short.
Now tbl_all_cities contains 0 records.
...and tbl_short contains 0 records.
Bulk Collect Loop Iteration
Currently 4 records in tbl_all_cities.
Currently 4 records in tbl_short.
Now tbl_all_cities contains 0 records.
...and tbl_short contains 0 records.
Bulk Collect Loop Iteration
Currently 1 records in tbl_all_cities.
Currently 0 records in tbl_short.
Now tbl_all_cities contains 0 records.
...and tbl_short contains 0 records.

PL/SQL procedure successfully completed.

SQL>
Right, six-pack of Brain’s bitter. Check. Inflatable daffodil. Check. Deb humming Land of My Fathers. Check. The game must be about to start…
Filed under: Oracle, PL/SQL Tagged: associative array DELETE method, bulk collect, forall, index by, initializing an associative array, ora-06531, pl/sql associative arrays, PL/SQL Table
The fun thing about doing the hard yards to code up the algorithm is that you get a deeper level of understanding about what's going on. Take these lines:
v_in_raw := utl_raw.cast_to_raw(i_username||':'||i_realm||':'||i_password);
v_ha1 := lower(DBMS_OBFUSCATION_TOOLKIT.md5(input => v_in_raw));
Every time we build the "who we are" component for this site, we start with exactly the same hash made up of the username, realm (site) and password. This is a batch routine, which means somewhere we would store the username and password for the site - whether that is a parameter in a scheduling tool, coded into a shell script or OS file, or somewhere in the database. If you've got the security option for Oracle, you can use the Wallet, with its own security layers.
But digest authentication gives us another option. Since we actually use the hashed value of the user/site/password, we can store that instead. The receiving site has no idea the code doesn't actually know the REAL password.
Now turn that over in your head. We can call the web service as this user WITHOUT knowing the password, just by knowing the hash. I don't know about you, but it makes me a little bit more worried when I hear of user details being leaked or hacked from sites. It's all very well reassuring us the passwords are hashed and can't be reverse engineered (assuming your own password can't be brute-forced). But depending on the security mechanism, a leak of those hashes can be dangerous. If a hacked provider advises people to change their passwords, take their advice.
'Basic' authentication doesn't have the same weakness. In that environment the provider can store the password hash after applying their own 'secret sauce' mechanism (mostly a salt). When you authenticate, you send the password, they apply the secret sauce and compare the result. You can't get away without knowing the password, because all the work is done at their end.
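To make the contrast concrete, here is a sketch of what that provider-side check might look like. The app_users table, its salt and password_hash columns, and the function name are all hypothetical, and DBMS_OBFUSCATION_TOOLKIT.md5 merely stands in for whatever one-way hash plus 'secret sauce' the provider actually uses:

```sql
-- Hypothetical Basic-auth check: the client must send the real password,
-- because only the server knows the salt and can redo the hash.
CREATE OR REPLACE FUNCTION password_ok (
    p_username IN VARCHAR2,
    p_password IN VARCHAR2  -- sent by the client (over SSL) on every request
) RETURN BOOLEAN IS
    v_salt   app_users.salt%TYPE;
    v_stored app_users.password_hash%TYPE;
BEGIN
    SELECT salt, password_hash
    INTO   v_salt, v_stored
    FROM   app_users
    WHERE  username = p_username;

    -- apply the secret sauce and compare; a leaked v_stored is useless
    -- to an attacker without the matching password
    RETURN v_stored = lower(DBMS_OBFUSCATION_TOOLKIT.md5(
                          input => utl_raw.cast_to_raw(v_salt||':'||p_password)));
END;
/
```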
There's no secret sauce for digest authentication, and there can't be. Even if the provider had the password in the clear, there's no way they can be sure the client has the password since all the client needs is the result of the hash. The provider must store, or be able to work out, the result of that hash because they need to replicate the final hash result using both the client and server nonces. They can store either that same user/realm/password hash as is, or they can encrypt it in a reversible manner, but a one-way hash wouldn't be usable.
In short, digest authentication means that our batch routine doesn't need to 'know' the actual password, just a hash. But it also makes those hashes a lot more dangerous.
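To illustrate, here is a sketch of the client-side calculation using only the stored HA1. The variable values are placeholders, the URI is invented, and the response formula is the simplified RFC 2617 form without qop; a real implementation must parse the nonce out of the WWW-Authenticate challenge:

```sql
DECLARE
    -- pre-computed md5(username||':'||realm||':'||password), read from
    -- wherever the batch routine stores it; the password itself is never needed
    v_ha1      VARCHAR2(32) := '<stored user:realm:password hash>';
    v_nonce    VARCHAR2(64) := '<nonce from the WWW-Authenticate header>';
    v_ha2      VARCHAR2(32);
    v_response VARCHAR2(32);
BEGIN
    -- HA2 = md5(method:uri)
    v_ha2 := lower(DBMS_OBFUSCATION_TOOLKIT.md5(
                 input => utl_raw.cast_to_raw('POST:/service/endpoint')));
    -- response = md5(HA1:nonce:HA2) -- this goes in the Authorization header
    v_response := lower(DBMS_OBFUSCATION_TOOLKIT.md5(
                      input => utl_raw.cast_to_raw(v_ha1||':'||v_nonce||':'||v_ha2)));
END;
/
```

Nothing in the calculation touches the password, which is exactly why a leaked HA1 is as good as the password for this realm.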
I'm an amateur in this field. I checked around and it does seem this is a recognized limitation of digest authentication, e.g. this Q&A and this comparison of Digest and Basic.
Hoping to do it at Open World 2014 so ... I guess this is the early version of that presentation eh?
You can find it here: Three approaches to shared pool monitoring Hotsos 2014
My guess is that there were about 100 people in the room and it looked like automatic "won" but not by much. Maybe 55 percent automatic and 45 manual ... so there are still a whole bunch of people nervous about using that functionality.
My presentation was updated (after delivering it) to add some additional doc IDs and warnings (both pro and con) about manual memory settings.
Specifically worth noting is the non-obvious fact that EVEN IF you have disabled Oracle from doing SGA resizing operations automatically... well, it may do them anyhow. Makes me a little grumpy eh?
So you can set SGA_TARGET to zero (and/or MEMORY_TARGET to zero) and still have Oracle make decisions for you? It turns out that at some point you have to set yet another hidden underscore parameter to tell the Oracle software "yeah thanks but I really really mean it" ...
Oracle doc ID 1269139.1 has a pretty good writeup on how to really turn off AMM/ASMM.
We are very excited to announce the release of Formspider Desktop 1.7. FS Desktop 1.7 includes one of the most requested features by our customers.

Built-In Security Repository

FS Desktop 1.7 comes with an integrated authentication and authorization repository. Formspider developers are now able to secure both data and UI elements and grant access rights to users directly from the Formspider IDE.

Formspider Security Highlights:
- Security both at data and UI layer.
Grant read and write access at the query, column and row level. Control editable, enable, visible attributes of every UI component.
- Security aware UI components.
Formspider automatically detects the read & write privileges defined at the data layer and restricts access to data-bound UI objects.
- Incredibly easy to manage.
Build and maintain your security model quickly with the point and click graphical user interface.
- Create reports on your security model easily with SQL.
No LDAP, no XML, no files. The Formspider security model is implemented in relational database tables.
Yesterday I was informed about someone stealing my content again. I take a pretty hard line on this these days. I used to be a little unsure about how to approach it, but now I just hit them with a DMCA take-down notice straight away. I’ve not got time to explain to everyone and their dog about copyright law…
So the current thief has nicked 35+ of my articles. I went to the WordPress.com DMCA Notice page and they say the usual stuff, except that you have to file a separate notice per blog post. All other services I’ve encountered allow you to post a single notice, listing all the offending posts. Not WordPress.com! (see update below)
So now I’m faced with posting 35+ notices, or not bothering. I’m posting the notices and a shitty email to WordPress.com. By making this process significantly more clumsy than other service providers, they are playing into the hands of the thieves. I think this is an almighty fail on their part!
Update: WordPress.com have got back to me saying they have removed the offending content. They also said I should have posted a single DMCA notice for all the URLs on this blog. Easier said than done when the form prevents this! I think someone at WordPress.com needs to do some proper testing of their form.

Copyright Theft: WordPress.com not making my life easy! was first posted on March 8, 2014 at 10:55 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
Focusthread offers Oracle Access Manager 11g Administrator Training starts on 15 March 2014 @Lowest price ever!!!
Here are a few things to consider. Versatility in a summer garden means that you aren't committed to one prevailing purpose or theme. This means that it is not at all inappropriate to plant a few flowering plants or herbs as pest control along with your favorite summertime vegetable offerings. At the same time it is completely acceptable to add a few vegetables that you just can't seem to live without in your flower garden or amid the plants in a garden of green.
When it comes to colors, the possibilities of the average summer garden are almost limitless. There are plenty of brightly colored flowers that hallmark the summer season and bring a little splash of color to a world that is often drained of color by the sometimes blinding summer sunshine. By bringing a few well placed colors into your garden as a highlight or border you are adding dramatic appeal. If you choose to make your garden a sea of brightly colored flowers, plants, leaves, and vines you can also achieve great effect that may be breathtaking in beauty. Be prepared to spend a great deal of time tending a garden of this nature, however, as many of the brightly colored flowering plants are attention hogs to some degree.
If you live in a drier climate you also have the option of using low-water landscaping or creative conservation techniques in order to create a garden that is lovely in its own right. A garden doesn't have to be excessively green or full of bright, vividly colored flowers in order to be beautiful. By using landscaping techniques that are conservative in nature you are making your own conservation efforts, and the rest of the planet should appreciate not only the effort but also the beauty of the creation that was born of that effort.
If you prefer nice and lush greenery and live in a climate that will accommodate this particular style of a summer garden there is no reason to avoid this either. Be sure to use some conservation efforts though because you never know when drought conditions may present themselves in climates that are typically quite moderate as we are learning in the Midwest this summer.
Vegetable gardens are yet another option when creating your summer garden and another fine example of just how versatile these gardens may be. Keep in mind that many people are discovering the beauty of vertical gardening and hanging plants for a few favorite vegetables and this may be an option if you wish to use limited space wisely and to greater effect.
Another option when it comes to showing the incredible versatility of the summer garden is raised gardening. This is quite literally planting your garden above the ground. Some people create elaborate 'sand boxes' of sorts in which the garden is planted. This allows great versatility for those with smaller spaces in which to plant their gardens as well as those who have landscaping issues that are inhospitable for gardening.
These are but a few examples of just how versatile a summer garden may be but I think they provide excellent insight and food for thought for those who are looking for a little something new and different for their summer gardens.
However, I had a recent requirement to use a web service that makes use of Digest authentication. If you look at the UTL_HTTP SET_AUTHENTICATION subprogram, it only addresses Basic authentication (and, apparently, Amazon S3, which looks intriguing).
In Basic authentication, the username and password get sent across as part of the request. Over SSL that doesn't seem too bad, as the exchange is encrypted in transit and the certificates should ensure you are talking to the legitimate destination. However, if that destination has been compromised, you've handed over your username and password. In an ideal world, the server shouldn't need to know your password, which is why databases should store only hashed versions of passwords.
Outside of SSL, you might as well just print the username and password on the back of a postcard.
In Digest authentication, you get a more complex interaction that keeps the password secret. You ask for a page, the server responds with an "Authentication Required" plus some bits of information including a nonce. You come up with a hashed value based on the server nonce, your own nonce and a hash of your username and password and send it back with the next request. The server has its own record of your username/password hash and can duplicate the calculations. If everyone is happy, the server can fulfill your request and nobody ever actually needs to know the password.
Our server used SSL, and thanks to Tim's article on SSL and UTL_HTTP, it was a simple set up. I've done it before, but that was in the days when it seemed a lot harder to get certificates OUT of a browser to put them in your Oracle Wallet.
The Interwebs were a lot less forthcoming on a PL/SQL implementation of Digest authentication though. The closest I got was this discussion, which can be summed up as "This may be complex, but I do not see these offhand as being impossible to do in PL/SQL....No Digest configured web server nearby or I would definitely have had a bash at this"
After a read through the Wikipedia article, I came up with the code below:
Firstly, after the initial request, go through the header to get the 'WWW-Authenticate' item. Take the value associated with that header, and pass it to the "auth_digest" procedure.
l_max := UTL_HTTP.GET_HEADER_COUNT(l_http_response);
l_ind := 1;
l_name := '-';
WHILE l_ind <= l_max AND l_name != 'WWW-Authenticate' LOOP
  UTL_HTTP.GET_HEADER(l_http_response, l_ind, l_name, l_value);
  IF l_name = 'WWW-Authenticate'
  AND l_http_response.status_code = UTL_HTTP.HTTP_UNAUTHORIZED THEN
    -- Unauthorized. Using the WWW-Authenticate response header, we can come up with the
    -- required values to allow a re-request with the authentication/authorisation details
    l_http_request := UTL_HTTP.BEGIN_REQUEST(l_server||l_method);
    auth_digest (io_http_request => l_http_request, i_auth_value => l_value,
                 i_username => nvl(i_username,'xxxx'), i_password => nvl(i_password,'xxxx'),
                 i_req_path => l_method, i_client_nonce => null);
    dbms_output.put_line($$PLSQL_LINE||': Get response from authenticated request');
    l_http_response := UTL_HTTP.GET_RESPONSE(l_http_request);
  END IF;
  l_ind := l_ind + 1;
END LOOP;
The auth_digest starts with an extraction of the 'valuables' from that value string. I've used regular expressions here. I spent time working with grep, awk and perl, and regexes are habit forming.
(i_text in varchar2,
o_realm out varchar2, o_qop out varchar2, o_nonce out varchar2, o_opaque out varchar2) is
o_realm := substr(regexp_substr(i_text, 'realm="[^"]+' ),8);
o_qop := substr(regexp_substr(i_text, 'qop="[^"]+' ),6);
o_nonce := substr(regexp_substr(i_text, 'nonce="[^"]+' ),8);
o_opaque := substr(regexp_substr(i_text, 'opaque="[^"]+'),9);
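The same extraction can be sanity-checked outside the database. Here is a minimal Python sketch of equivalent parsing; the challenge string reuses the values from the Wikipedia worked example, so it is illustrative rather than from a real server:

```python
import re

def parse_www_authenticate(value: str) -> dict:
    """Pull realm, qop, nonce and opaque out of a Digest challenge string."""
    out = {}
    for key in ("realm", "qop", "nonce", "opaque"):
        # \b stops 'nonce' from matching inside a 'cnonce' field
        m = re.search(r'\b' + key + r'="([^"]+)"', value)
        if m:
            out[key] = m.group(1)
    return out

challenge = ('Digest realm="testrealm@host.com", qop="auth,auth-int", '
             'nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", '
             'opaque="5ccc069c403ebaf9f0171e9517f40e41"')
parts = parse_www_authenticate(challenge)
print(parts["realm"])  # testrealm@host.com
```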
Next is the 'meat', where the values are combined in the various hashes. Yes, there's a hard-coded default client nonce in there that, by a strange coincidence, matches the one in the Wikipedia article. That's how this stuff gets developed, by following through a worked example. Just like school.
function digest_auth_md5_calcs
  (i_username     in varchar2, i_password in varchar2, i_req_path in varchar2,
   i_realm        in varchar2, i_server_nonce in varchar2,
   i_qop          in varchar2 default 'auth',
   i_client_nonce in varchar2 default '0a4f113b',
   i_req_type     in varchar2 default 'GET',
   i_req_cnt      in number   default 1)
return varchar2 is
  v_in_str   varchar2(2000);
  v_in_raw   raw(2000);
  v_out      varchar2(60);
  v_ha1      varchar2(40);
  v_ha2      varchar2(40);
  v_response varchar2(40);
begin
  -- HA1 = MD5(username:realm:password)
  v_in_str := i_username||':'||i_realm||':'||i_password;
  v_in_raw := utl_raw.cast_to_raw(v_in_str);
  v_out    := DBMS_OBFUSCATION_TOOLKIT.md5(input => v_in_raw);
  v_ha1    := lower(v_out);
  -- HA2 = MD5(method:uri)
  v_in_str := i_req_type||':'||i_req_path;
  v_in_raw := utl_raw.cast_to_raw(v_in_str);
  v_out    := DBMS_OBFUSCATION_TOOLKIT.md5(input => v_in_raw);
  v_ha2    := lower(v_out);
  -- response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
  v_in_str := v_ha1||':'||i_server_nonce||':'||lpad(i_req_cnt,8,0)||':'||
              i_client_nonce||':'||i_qop||':'||v_ha2;
  v_in_raw := utl_raw.cast_to_raw(v_in_str);
  v_out    := DBMS_OBFUSCATION_TOOLKIT.md5(input => v_in_raw);
  v_response := lower(v_out);
  return v_response;
end digest_auth_md5_calcs;
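Because the default client nonce follows the worked example in the Wikipedia article, the function can be cross-checked with a few lines of Python. Here hashlib stands in for DBMS_OBFUSCATION_TOOLKIT, and the inputs are the example's, not real credentials:

```python
from hashlib import md5

def digest_response(username, password, realm, nonce, uri,
                    qop="auth", cnonce="0a4f113b", method="GET", nc=1):
    """MD5(HA1:nonce:nc:cnonce:qop:HA2), mirroring digest_auth_md5_calcs."""
    h = lambda s: md5(s.encode()).hexdigest()
    ha1 = h(f"{username}:{realm}:{password}")   # MD5(username:realm:password)
    ha2 = h(f"{method}:{uri}")                  # MD5(method:uri)
    return h(f"{ha1}:{nonce}:{nc:08d}:{cnonce}:{qop}:{ha2}")

# Worked example from the Wikipedia "Digest access authentication" article.
print(digest_response("Mufasa", "Circle Of Life", "testrealm@host.com",
                      "dcd98b7102dd2f0e8b11d0f600bfb0c093", "/dir/index.html"))
# 6629fae49393a05397450978507c4ef1
```

If the PL/SQL version produces the same response hash for these inputs, the string assembly and hashing are wired up correctly.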
And this is the full auth_digest bit
(io_http_request in out UTL_HTTP.REQ, i_auth_value in varchar2,
 i_username in varchar2, i_password in varchar2,
 i_req_path in varchar2, i_qop in varchar2 default 'auth',
 i_req_cnt in number default 1, i_client_nonce in varchar2 default null)
-- Apply the username / password for Digest authentication.
-- First extract realm, qop, server nonce and opaque from the challenge value:
    l_realm, l_qop, l_server_nonce, l_opaque);
IF i_client_nonce is not null THEN
  l_client_nonce := i_client_nonce;
ELSE
  l_client_nonce := lower(utl_raw.cast_to_raw(DBMS_OBFUSCATION_TOOLKIT.md5(
END IF;
l_response := digest_auth_md5_calcs
  (i_username => i_username, i_password => i_password, i_req_path => i_req_path,
   i_realm => l_realm, i_server_nonce => l_server_nonce,
   i_client_nonce => l_client_nonce);
-- i_qop defaults to 'auth', i_req_type to 'GET' and i_req_cnt to 1
l_value := 'Digest username="' ||i_username     ||'",'||
           ' realm="'          ||l_realm        ||'",'||
           ' nonce="'          ||l_server_nonce ||'",'||
           ' uri="'            ||i_req_path     ||'",'||
           ' response="'       ||l_response     ||'",'||
           ' qop='             ||i_qop          ||','  ||
           ' nc='              ||lpad(i_req_cnt,8,0)||','||
           ' cnonce="'         ||i_client_nonce ||'"';
IF l_opaque is not null THEN
  l_value := l_value||',opaque="'||l_opaque||'"';
END IF;
UTL_HTTP.SET_HEADER(io_http_request, 'Authorization', l_value);
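For reference, the same Authorization value can be assembled in a few lines of Python; this mirrors the l_value string built above, with the illustrative worked-example field values rather than anything from a real server:

```python
def authorization_header(username, realm, nonce, uri, response,
                         qop="auth", nc=1, cnonce="0a4f113b", opaque=None):
    """Build the Authorization header value the way auth_digest does."""
    value = (f'Digest username="{username}", realm="{realm}", nonce="{nonce}", '
             f'uri="{uri}", response="{response}", qop={qop}, '
             f'nc={nc:08d}, cnonce="{cnonce}"')
    if opaque:
        # opaque must be echoed back verbatim when the server supplied one
        value += f', opaque="{opaque}"'
    return value

hdr = authorization_header("Mufasa", "testrealm@host.com",
                           "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                           "/dir/index.html", "6629fae49393a05397450978507c4ef1")
print(hdr)
```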
A package with the code is available from my CodeSpace page, or directly here. There's a lot of debug 'stuff' in there. The code I'm using is still tailored to my single specific need, and I've stripped specific values from this published variant. You'll need to hard-code or parameterize it for any real use. I may be able to do a 'cleaned-up' version in the future, but don't hold your breath.
After years of hounding from me, Noel (@noelportugal) made the jump to Android usage and development about a year ago. He started with a Nexus 7, the first generation one, but it wasn’t until he got a phone, the Moto X, that the transformation was complete.
So now, Noel is mixed ecosystem guy.
We’ve had Chromecasts since they were announced last year (they’re awesome), and with the recent release of the Google Cast SDK, Noel has been kicking the tires and experimenting with the $35 streaming dongle.
All his tinkering led to his very first app in the Play Store, Newscaster. Newscaster uses voice search to find and read news headlines on your TV via Chromecast. That's it.
It’s not very functional, more just a proof of concept, but it’s noteworthy, given how long Noel has been an iOS guy. Plus, now that he has some experience with the Google Cast SDK, Noel’s creative juices will start to flow. I hope to see some interesting Chromecast features soon. Stay tuned.
Anthony’s (@anthonyslai) Moovy has been in the Play Store for nearly four years, since it was called the Android Market. Originally called Happy Feet, this app won the Move Your App! Developer Challenge at Health 2.0 in 2010.
Every time I see him, he manages to work that feature into casual conversation at least once. Unfortunate naming aside, Tony knows his users; his app has a rating of 4.9 stars from 463 reviews, not too shabby.
So, there you have it, the collected Android apps of our humble team, at least the ones I can talk about on the intertubes. Stay tuned and maybe someday you’ll read about the others.
Find the comments.
Update: Turns out I missed our most prolific app developer, Raymond. My bad. Check out his three apps.
This one has a story. Raymond’s daughter was asking Santa for a Magic 8 Ball for Christmas, and she got one. Raymond and his son decided to make an app for her.
His son created the 3D ball with its center carved out, applied the light and shade effects, and created the background graphics. Raymond created the database to hold standard answers and user-entered answers, and added the animation to roll the ball.
Then Raymond’s son did the I18N to include Chinese and Japanese, knowing that Chinese and Japanese speakers do not download apps from the English version of the Play Store.
Raymond liked building Android apps so he went ahead and built two more, because, why not?
He and a group of Taleo guys went out for lunch every day, and they got tired of picking a place. So, he made an app for that, i.e. for making choices.
A colleague at Taleo complained to Raymond about always losing receipts when traveling. So he built an expense and receipts app to help people record their expenses and receipts.
So yeah, Raymond is the guy who builds apps to make you stop complaining about first world problems. Helpful dude.
Possibly Related Posts:
- We’ve Grown
- A Few New Concept Demos
- Four Months with Chromecast
- Google Wave: The Killer Enterprise Apps Platform?
- New Robot Arm Hotness
Here is the sqlplus scripting I use to check whether row chaining is degrading a query’s performance:
VARIABLE monitored_sid number;

begin
  SELECT sid into :monitored_sid
  FROM v$session
  WHERE audsid = USERENV('SESSIONID');
end;
/

select b.value "table fetch continued rows"
from   V$SESSTAT b
where  b.SID = :monitored_sid
and    b.STATISTIC# = (SELECT STATISTIC# FROM V$STATNAME
                       WHERE NAME = 'table fetch continued row');
I create a test script with the problem query and put this code after it to see how much chaining affected it.
Here is an example of a fast query:
Elapsed: 00:00:15.87

table fetch continued rows
--------------------------
                     19723
Here is an example of a slow query:
Elapsed: 00:03:17.46

table fetch continued rows
--------------------------
                  23775056
This was the same query for two different date ranges. The second range had a lot of row chaining and the query was much slower.
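The difference between the two runs is easier to appreciate as a ratio; a quick sketch using the numbers above:

```python
# (elapsed seconds, 'table fetch continued rows') for the two runs above
runs = {
    "fast": (15.87, 19_723),
    "slow": (197.46, 23_775_056),
}

fetch_ratio = runs["slow"][1] / runs["fast"][1]
time_ratio = runs["slow"][0] / runs["fast"][0]
print(f"continued-row fetches: {fetch_ratio:,.0f}x more; "
      f"elapsed time: {time_ratio:.1f}x longer")
```

Roughly 1,200 times the continued-row fetches for about 12 times the elapsed time, which is what points the finger at chaining rather than, say, a different plan.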
This week has been very busy in the Java Community. Just in case you missed it... the IoT Developer Challenge has just launched, and the Java 8 keynote address plus more than 35 sessions are coming up on March 25th. Now on to the Best of...
SOA's People Problem
Sometimes the biggest obstacle to successful SOA has little to do with technology. Community leaders share insight in this article from the latest issue of Oracle Magazine.
Dynamic endpoint binding in Oracle SOA Suite | Alexander Daeubler
"Sometimes a BPEL process instance has to determine at run-time which implementation of a web service interface is to be called," explains Alexander Daeubler. His post shows you how to achieve that using dynamic endpoint binding.
Programmatic Identity Assertion with Oracle Platform Security Services | Shailesh K. Mishra
This short article from a member of the Oracle Identity Manager team shows how programmatic identity assertion can be performed using OPSS APIs in cases where code must run under the security context of a runtime identity.
Friday Funny from OTN Architect Community Manager Bob Rhubart:
Stores That Tell You Exactly What They Are
A music video
Free Webcast: Upgrade and migrate to Oracle 12c - Almost 90 minutes about Upgrade, Migrate and Consolidate to Oracle Database 12c with or without the Multitenant Option. Available now on demand. Register here and watch it whenever it is convenient for you.
Playing Nice Together - Arup Nanda explains how to use the Consolidated Database Replay feature of Oracle Real Application Testing in Oracle Database 12c to gauge the impact of consolidating workloads.
We’ve all got problems. More to the point, every IT department or team has problems of some kind. It’s why we hire consultants, buy products, start long and arduous journeys into the great unknown depths of root cause analysis, and so on.
What fascinates me is the level at which we come to identify with our problems. When I’ve gone into an environment to deliver recommendations, the conversation usually goes something like this:
The reality is of course that there are going to be issues…perhaps budgets are tight and new servers are rarely if ever an option. Or a QA refresh takes days so we all know we’ll never get one into a project timeline. Or we never had the chance to set up security properly on a new application and we know that the developers all have DBA access and can’t do anything about it. The list goes on. What’s interesting (and slightly amusing) is these problems are announced with almost a sense of pride, as though the manager or IT administrator is showcasing their new big screen TV or manicured lawn and not a debilitating architectural deficiency.
In short, through constant business growth and changing requirements, while languishing under the limitations of budgets, infrastructure, time, and staff, many IT professionals have begun to accept that the answer will always be “no”, the system will never be perfect, the problems will never be fixed, and there is absolutely nothing they can do about it. After a time of this torment, the IT professional not only begins accepting these limitations but starts embracing them, parroting them, defending them, and reveling in them during meetings and water cooler sessions.
Why? Perhaps it is like Stockholm Syndrome, where Wikipedia explains “One commonly used hypothesis to explain the effect of Stockholm syndrome is based on Freudian theory. It suggests that the bonding is the individual’s response to trauma in becoming a victim. Identifying with the aggressor is one way that the ego defends itself. When a victim believes the same values as the aggressor, they cease to be a threat.” Or perhaps Romans 5:3 has the right of it: “Not only so, but we also glory in our sufferings, because we know that suffering produces perseverance.”
Whatever the case, it is a breath of fresh air when I work with companies that are more focused on finding solutions than reveling in problems. Everyone says they want to find solutions of course, but it’s rare that folks are actually willing to put in the time and thought process to do so rather than come up with reasons why it can’t be done.
So what can we do? Deploy outside the box. Figure out the best path to solve performance problems. Stop taking things like security concerns for granted. Keep learning, because sometimes the “fix” might seem like an erroneous approach until you understand it better. And always, always, always look for solutions instead of focusing on problems.
Now with that being said, enjoy your weekend and forget about those problems for a bit! Unless you’re on call of course.
In our latest episode, the penultimate in the pilot series, we explore the topics that are likely to be moving up the curve of the hype cycle this year—adaptive learning and learning analytics. Like many of the topics in the pilot series, we could have made an entire series about this one. (And maybe we will at some point.) But since the first adaptive learning product faculty will run into is most likely to be from a textbook publisher, we interviewed McGraw Hill’s Al Essa and Pearson’s Jason Jordan about their respective takes on what this trend is all about.
The post New e-Literate TV Episode: Adaptive Learning and Learning Analytics appeared first on e-Literate.
Big kudos to the leadership at EiS for stepping up through this time period and making it all work well. EiS is a special place with a special team...I miss every bit of it.
I started a new adventure with Io Consulting in February. I like the social value of the company's mission: 100% Oracle, 100% Higher Education, 100% Customer Satisfaction. Each and every one of those elements is important to me. My role will be building up Fusion Middleware and Oracle Cloud Applications Services capabilities within the company. So I'll continue to be deeply involved in User Experience, ADF, and the overall Fusion/Cloud Applications architecture. I'll likely get into some PeopleSoft & Campus Solutions areas as well...it's been a while since I've been intimately involved in either, so it'll be nice to get back there again. So I'm very excited to join the Io Consulting team and look forward to what lies ahead.
In some ways, it will be weird: no more Collaborate or KScope, a little less emphasis on EBS, a more narrowly focused customer set. But I'll get to attend Alliance on a regular basis (rather than the sporadic attendance of the last five years) and I'll still be a regular at OOW. And the new management team uses the word "cool" on a regular basis, so that's a good indicator. And I get to work with Ted Simpson - it gets no better than that!
OK, I'm done being self-centered now. The next post will take us back to our regularly scheduled programming.