
Feed aggregator

A new index on a small table makes a big difference

Bobby Durrett's DBA Blog - Mon, 2015-02-16 16:12

A few weeks back, on the weekend just before I went on call, we got a complaint about slowness in an important set of reports.  I worried that the slowness of these reports would continue during my support week, so I tried to figure out why they were slow.  I reviewed an AWR report for the 24 hours when the reports were running and found a simple query against a tiny table at the top of the “SQL ordered by Elapsed Time” report:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      307,516.33 3.416388E+09

I edited the AWR report to show just elapsed seconds and number of executions.  3.4 billion executions totaling 307,000 seconds of elapsed time.  This was about 90 microseconds per execution.

The previous weekend the same query looked like this:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      133,143.65 3.496291E+09

So, about the same number of executions but less than half of the elapsed time.  This was about 38 microseconds per execution.  I never fully explained the change from week to week, but I found a way to improve the query performance by adding a new index.

The plan was the same both weekends so the increase in average execution time was not due to a plan change.  Here is the plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 1430621991

------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  |
------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| DIV_RPT_GEN_CTL    |     1 |
|   2 |   INDEX RANGE SCAN          | DIV_RPT_GEN_CTL_U1 |     1 |
------------------------------------------------------------------

I found that the table only had 369 rows and 65 blocks so it was tiny.

The table’s only index was on the columns RPT_NM and RPT_ID, but only RPT_NM appeared in the query.  For the given value of RPT_NM the index range scan would read every row in the table with that value, filtering down to those with GEN_STAT=1.  I suspect that on the weekend of the slowdown the number of rows being scanned for a given RPT_NM value had increased, but I cannot prove it.
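For reference, a dictionary query along these lines shows which columns an index covers (just a sketch, using the table name from this post):

select index_name, column_name, column_position
from dba_ind_columns
where table_name = 'DIV_RPT_GEN_CTL'
order by index_name, column_position;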

I did a count grouping by the column GEN_STAT and found that only 1 of the 369 rows had GEN_STAT=1.

SELECT GEN_STAT,count(*)
FROM DIV_RPT_GEN_CTL
group by GEN_STAT;

  GEN_STAT   COUNT(*)
---------- ----------
         1          1
         2        339
         0         29

So, even though this table is tiny it made sense to add an index which included the selective column GEN_STAT.  Also, since the reports execute the query billions of times per day it made sense to include the one column in the select clause as well, DIV_NBR.  By including DIV_NBR in the index the query could get DIV_NBR from the index and not touch the table.  The new index was on the columns RPT_NM, GEN_STAT, and DIV_NBR in that order.
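The DDL for the new index would be along these lines (a sketch only; the actual statement and any storage options are not shown in this post, and the index name is taken from the plan below):

create index DIV_RPT_GEN_CTL_U2
on DIV_RPT_GEN_CTL (RPT_NM, GEN_STAT, DIV_NBR);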

Here is the new plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 2395994055

-------------------------------------------------------
| Id  | Operation        | Name               | Rows  |
-------------------------------------------------------
|   0 | SELECT STATEMENT |                    |       |
|   1 |  INDEX RANGE SCAN| DIV_RPT_GEN_CTL_U2 |     1 |
-------------------------------------------------------

Note that it uses the new index and does not access the table.  Here is the part of the AWR report for the problem query for last weekend:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6       84,303.02 4.837909E+09

4.8 billion executions and only 84,000 seconds elapsed.  That is about 17.4 microseconds per execution.  That is less than half of what the average execution time was the weekend before the problem started.

The first Monday after we put the index in we found that one of the slow reports had its run time reduced from 70 minutes to 50 minutes.  It was great that we could improve the run time so much with such a simple fix.

It was a simple query to tune.  Add an index using the columns in the where clause and the one column in the select clause.  It was a tiny table that normally would not even need an index.  But, any query that an application executes billions of times in a day needs to execute in the most efficient way possible so it made sense to add the best possible index.

– Bobby

Categories: DBA Blogs

Is CDB stable after one patchset and two PSU?

Yann Neuhaus - Mon, 2015-02-16 15:23

There has been the announcement that non-CDB is deprecated, and the reaction that CDB is not yet stable.

Well. Let's talk about the major issue I've encountered. Multitenant is there for consolidation. What is the major requirement of consolidation? It's availability. If you put all your databases onto one server, managed by one instance, then you don't expect a single failure to bring everything down.

When 12c - 12.1.0.1 - was out (and even earlier, as we are beta testers), David Hueber encountered an important issue: when a SYSTEM datafile was lost, we could not recover it without stopping the whole CDB. That's bad, of course.

When Patchset 1 was out (and we were beta testers again) I tried to check whether that had been solved. I saw that they had introduced the undocumented "_enable_pdb_close_abort" parameter in order to allow a shutdown abort of a PDB. But that was worse: when I dropped a SYSTEM datafile, the whole CDB instance crashed immediately. I opened an SR and Bug 19001390 'PDB system tablespace media failure causes the whole CDB to crash' was created for it. All of this is documented in that blog post.

Now the bug status is: fixed in 12.1.0.2.1 (Oct 2014) Database Patch Set Update

Good. I've installed the latest PSU, which is 12.1.0.2.2 (Jan 2015), and I tested the most basic recovery situation: loss of a non-system tablespace in one PDB.

Here it is:


RMAN> report schema;
Report of database schema for database with db_unique_name CDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB/system01.dbf
3 770 SYSAUX NO /u02/oradata/CDB/sysaux01.dbf
4 270 UNDOTBS1 YES /u02/oradata/CDB/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB/pdbseed/system01.dbf
6 5 USERS NO /u02/oradata/CDB/users01.dbf
7 490 PDB$SEED:SYSAUX NO /u02/oradata/CDB/pdbseed/sysaux01.dbf
11 260 PDB2:SYSTEM NO /u02/oradata/CDB/PDB2/system01.dbf
12 520 PDB2:SYSAUX NO /u02/oradata/CDB/PDB2/sysaux01.dbf
13 5 PDB2:USERS NO /u02/oradata/CDB/PDB2/PDB2_users01.dbf
14 250 PDB1:SYSTEM NO /u02/oradata/CDB/PDB1/system01.dbf
15 520 PDB1:SYSAUX NO /u02/oradata/CDB/PDB1/sysaux01.dbf
16 5 PDB1:USERS NO /u02/oradata/CDB/PDB1/PDB1_users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 60 TEMP 32767 /u02/oradata/CDB/temp01.dbf
2 20 PDB$SEED:TEMP 32767 /u02/oradata/CDB/pdbseed/pdbseed_temp012015-02-06_07-04-28-AM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB/PDB1/temp012015-02-06_07-04-28-AM.dbf
4 20 PDB2:TEMP 32767 /u02/oradata/CDB/PDB2/temp012015-02-06_07-04-28-AM.dbf


RMAN> host "rm -f /u02/oradata/CDB/PDB1/PDB1_users01.dbf";
host command complete


RMAN> alter system checkpoint;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
RMAN-03002: failure of sql statement command at 02/19/2015 22:51:55
ORA-03113: end-of-file on communication channel
Process ID: 19135
Session ID: 357 Serial number: 41977
ORACLE error from target database:
ORA-03114: not connected to ORACLE


Ok, but I have the PSU:


$ /u01/app/oracle/product/12102EE/OPatch/opatch lspatches
19769480;Database Patch Set Update : 12.1.0.2.2 (19769480)


Here is the alert.log:


Completed: alter database open
2015-02-19 22:51:46.460000 +01:00
Shared IO Pool defaulting to 20MB. Trying to get it from Buffer Cache for process 19116.
===========================================================
Dumping current patch information
===========================================================
Patch Id: 19769480
Patch Description: Database Patch Set Update : 12.1.0.2.2 (19769480)
Patch Apply Time: 2015-02-19 22:14:05 GMT+01:00
Bugs Fixed: 14643995,16359751,16870214,17835294,18250893,18288842,18354830,
18436647,18456643,18610915,18618122,18674024,18674047,18791688,18845653,
18849537,18885870,18921743,18948177,18952989,18964939,18964978,18967382,
18988834,18990693,19001359,19001390,19016730,19018206,19022470,19024808,
19028800,19044962,19048007,19050649,19052488,19054077,19058490,19065556,
19067244,19068610,19068970,19074147,19075256,19076343,19077215,19124589,
19134173,19143550,19149990,19154375,19155797,19157754,19174430,19174521,
19174942,19176223,19176326,19178851,19180770,19185876,19189317,19189525,
19195895,19197175,19248799,19279273,19280225,19289642,19303936,19304354,
19309466,19329654,19371175,19382851,19390567,19409212,19430401,19434529,
19439759,19440586,19468347,19501299,19518079,19520602,19532017,19561643,
19577410,19597439,19676905,19706965,19708632,19723336,19769480,20074391,
20284155
===========================================================
2015-02-19 22:51:51.113000 +01:00
db_recovery_file_dest_size of 4560 MB is 18.72% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Setting Resource Manager plan SCHEDULER[0x4446]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager CDB plan DEFAULT_MAINTENANCE_PLAN via parameter
2015-02-19 22:51:54.892000 +01:00
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_19102.trc:
ORA-63999: data file suffered media failure
ORA-01116: error in opening database file 16
ORA-01110: data file 16: '/u02/oradata/CDB/PDB1/PDB1_users01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_19102.trc:
ORA-63999: data file suffered media failure
ORA-01116: error in opening database file 16
ORA-01110: data file 16: '/u02/oradata/CDB/PDB1/PDB1_users01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
USER (ospid: 19102): terminating the instance due to error 63999
System state dump requested by (instance=1, osid=19102 (CKPT)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_diag_19090_20150219225154.trc
ORA-1092 : opitsk aborting process
2015-02-19 22:52:00.067000 +01:00
Instance terminated by USER, pid = 19102


You can see the bug number in the 'Bugs Fixed' list, and yet the instance still terminates after a media failure on a PDB datafile. That's bad news.


I've lost one datafile and at the first checkpoint the whole CDB crashed. I'll have to open an SR again. But for sure, consolidation through the multitenant architecture is not yet ready for sensitive production.
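For reference, this is the kind of PDB-local repair I expected to be able to run for that lost file while the rest of the CDB stays up. It is only a minimal RMAN sketch, assuming datafile 16 from the report schema output above and a usable backup (depending on the container you are connected to, the ALTER DATABASE statements may have to be run from within the PDB):

RMAN> alter database datafile 16 offline;
RMAN> restore datafile 16;
RMAN> recover datafile 16;
RMAN> alter database datafile 16 online;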

Webcast Q&A: Delivering Next-Gen Digital Experiences

WebCenter Team - Mon, 2015-02-16 14:45



In case you missed our webcast "Delivering Next-Gen Digital Experiences" last week, we had a great turnout and wanted to provide a Q&A summary for those questions that were asked.

Q. How do I download the slide presentation?
The slides are available by clicking on the folder icon at the bottom of the console.

Q. This covers a lot of different systems and processes; how can you help me understand where it all lives in my organization?
Great question. We actually have an Assessment, which we cover later in the presentation.
Q. Does Oracle offer an integrated tool, or does one need to purchase each separate tool (i.e. Eloqua, RightNow, etc.)?
The answer is "it depends" -- on your starting point and end-goals. Elements of the DX strategy are embodied in specific products, but the larger story is more of a "solution" which takes specific elements of the products mentioned. The Assessment will help determine the components.
Q. Can you please provide some examples wherever possible?
This white paper outlines some specific customers referenced with some of their results.
Q. Does Eloqua provide an integrated solution? Or would we need to purchase multiple tools to build this (i.e. Eloqua, RightNow, etc.)?
The complete Digital Experience will include other elements beyond Eloqua depending on your specific needs, but the Marketing Cloud is a key element.
Q. What does the "assessment" cost?
There is no charge from Oracle, but there is time that has to be invested.
Please be sure to view the on demand version of the webcast in case you missed it, and check out the corresponding white paper "Delivering Next-Generation Digital Experiences!"

Next Generation Outline Extractor - New Version Available

Tim Tow - Mon, 2015-02-16 14:42
Today we released a new version of the Next Generation Outline Extractor, version 2.0.3.769.  Here are the release notes from this new version:

Version 2.0.3.769 supports the following Essbase versions:

9.3.1
9.3.1.1
9.3.1.2
9.3.3
11.1.1
11.1.1.1
11.1.1.2
11.1.1.3
11.1.1.4
11.1.2
11.1.2.1
11.1.2.1.102
11.1.2.1.103
11.1.2.1.104
11.1.2.1.105
11.1.2.1.106
11.1.2.2
11.1.2.2.102
11.1.2.2.103
11.1.2.2.104
11.1.2.3
11.1.2.3.001
11.1.2.3.002
11.1.2.3.003
11.1.2.3.500
11.1.2.3.501
11.1.2.3.502
11.1.2.3.505
11.1.2.4

Issues resolved in version 2.0.3.769:

2015.02.15 - Issue 1355 - All Writers - Add functionality to replace all line feeds, carriage returns, tabs, and extraneous spaces in formulas

2015.02.13 - Issue 1354 - RelationalWriter - Changed the default database name from dodeca to extractor

2015.02.13 - Issue 1353 - RelationalWriter - Added CONSOLIDATION_TYPE_SYMBOL, SHARE_FLAG_SYMBOL, TIME_BALANCE, TIME_BALANCE_SYMBOL, TIME_BALANCE_SKIP, TIME_BALANCE_SKIP_SYMBOL, EXPENSE_FLAG, EXPENSE_FLAG_SYMBOL, TWO_PASS_FLAG, and TWO_PASS_FLAG_SYMBOL columns to the CACHED_OUTLINE_MEMBERS table

2015.02.13 - Issue 1352 - RelationalWriter - Added Server, Application, and Cube columns to the CACHED_OUTLINE_VERSIONS table

2015.02.13 - Issue 1351 - Fixed issue with LoadFileWriter where UDA column headers were incorrectly written in the form UDAS0,DimName instead of UDA0,DimName

In addition, a number of fixes were put into 2.0.2 and earlier releases that went unannounced.  Those updates included the following items:

  1. There is no longer a default .properties file for the Extractor.  This will force a user to specify a .properties file.  (2.0.2.601)
  2. Removed the "/" character as a switch for command line arguments as it causes problems in Linux. (2.0.2.605)
  3. Fixed an issue when combining MaxL input with relational output where a "not supported" error message would appear because certain properties were not being read correctly from the XML file. (2.0.2.601)
  4. Command line operations resulted in an error due to an improper attempt to interact with the GUI progress bar. (2.0.2.601)
  5. Shared member attributes were not being properly written, resulting in a delimiter/column count mismatch. (2.0.2.625)
  6. Added encoding options where a user can choose between UTF-8 and ANSI encodings.  The Extractor will attempt to detect the encoding of the selected outline and, if the detected outline encoding is different from the user-selected encoding, a warning message appears.
Categories: BI & Warehousing

12c Parallel Execution New Features: Hybrid Hash Distribution - Part 1

Randolf Geist - Mon, 2015-02-16 14:21
In this blog post I want to cover some aspects of the new HYBRID HASH adaptive distribution method that I haven't covered yet in my other posts.

As far as I know it serves two purposes for parallel HASH and MERGE JOINs: adaptive broadcast distribution and hybrid distribution for skewed join expressions. In the first part of this post I want to focus on the former (go to part 2 for the latter).

1. Adaptive Broadcast Distribution For Small Left Row Sources
It allows the PX SEND / RECEIVE operation for the left (smaller estimated row source) of the hash join to decide dynamically at runtime, actually at each execution, if it should use either a BROADCAST or HASH distribution, and correspondingly for the other row source to use then either a ROUND-ROBIN or a HASH distribution, too. This is described for example in the corresponding white paper by Maria Colgan here.

It's important to emphasize that this decision is really done at each execution of the same cursor, so the same cursor can do a BROADCAST distribution for the left row source at one execution and HASH distribution at another execution depending on whether the number of rows detected by the STATISTICS COLLECTOR operator exceeds the threshold or not. This is different from the behaviour of "adaptive joins" where the final plan will be resolved at first execution and from then on will be re-used, and therefore a STATISTICS COLLECTOR operator as part of an adaptive plan no longer will be evaluated after the first execution.

Here is a simple script demonstrating that the distribution method is evaluated at each execution:

define dop = 4

create table t_1
compress
as
select
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(&dop*2) */ * from dual
connect by
level <= &dop*2) a
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

create table t_2
compress
as
select
rownum as id
, mod(rownum, &dop) + 1 as fk_id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a
;

exec dbms_stats.gather_table_stats(null, 't_2', method_opt=>'for all columns size 1')

alter table t_1 parallel &dop cache;

alter table t_2 parallel &dop cache;

select /*+ leading(t_1) no_swap_join_inputs(t_2) pq_distribute(t_2 hash hash) */ max(t_2.id) from t_1, t_2 where t_1.id = t_2.fk_id;

@pqstat

delete from t_1 where rownum <= 1;

select count(*) from t_1;

select /*+ leading(t_1) no_swap_join_inputs(t_2) pq_distribute(t_2 hash hash) */ max(t_2.id) from t_1, t_2 where t_1.id = t_2.fk_id;

@pqstat

rollback;
For table queue 0 (the distribution of T_1) the distribution for the first execution of the above script looks like this:

TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS % GRAPH
---------- ---------- ---------- -------- ---------- ---------- ----------
0 Producer 1 P004 8 100 ##########
P005 0 0
P006 0 0
P007 0 0
********** ********** ----------
Total 8

Consumer 1 P000 3 38 ##########
P001 1 13 ###
P002 2 25 #######
P003 2 25 #######
********** ********** ----------
Total 8
So the eight rows are presumably distributed by hash. But for the second execution, with only seven rows in T_1, I get this output:

TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS % GRAPH
---------- ---------- ---------- -------- ---------- ---------- ----------
0 Producer 1 P004 28 100 ##########
P005 0 0
P006 0 0
P007 0 0
********** ********** ----------
Total 28

Consumer 1 P000 7 25 ##########
P001 7 25 ##########
P002 7 25 ##########
P003 7 25 ##########
********** ********** ----------
Total 28
So this time the seven rows were broadcast, each of the four consumers receiving all seven (28 rows in total).

The "pqstat" script is simply a query on V$PQ_TQSTAT, which I've mentioned for example here.
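For reference, a minimal version of such a query could look like this (the percentage and GRAPH columns in the output above are derived from NUM_ROWS, so this sketch only lists the raw columns):

select dfo_number, tq_id, server_type, instance, process, num_rows, bytes
from v$pq_tqstat
order by dfo_number, tq_id, server_type desc, instance, process;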

So I ran the same query twice: the first time the threshold was exceeded and a HASH distribution took place. After deleting one row, the second execution of the same cursor turned into a BROADCAST / ROUND-ROBIN distribution. You can verify that this is the same parent / child cursor via DBMS_XPLAN.DISPLAY_CURSOR / V$SQL. Real-Time SQL Monitoring can also provide more details about the distribution methods used (click on the "binoculars" icon in the "Other" column of the active report for the PX SEND HYBRID HASH operations).
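For example, something along these lines can be used for that check (the SQL_ID is a placeholder to fill in):

select sql_id, child_number, executions, plan_hash_value
from v$sql
where sql_id = '&sql_id';

select * from table(dbms_xplan.display_cursor('&sql_id', null));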

Note that the dynamic switch from HASH to BROADCAST unfortunately isn't the same as a decision by the optimizer at parse time to use a BROADCAST distribution, because in the latter case the other row source won't be distributed at all, which comes with some important side effects:

Not only can the redistribution of larger row sources take significant time and resources (CPU and, in the case of RAC, network), but, due to the limitation of Parallel Execution (still existing in 12c) that only a single redistribution is allowed to be active concurrently, reducing the number of redistributions in the plan can as a side effect also reduce the number of BUFFERED operations (mostly HASH JOIN BUFFERED, but there could be additional BUFFER SORTs, too), which are a threat to Parallel Execution performance in general.

Here is a very simple example showing the difference:


-- HYBRID HASH with possible BROADCAST distribution of T_1
----------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | Q1,02 | P->S | QC (RAND) |
|* 3 | HASH JOIN BUFFERED | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | Q1,02 | PCWP | |
| 5 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 6 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 7 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 8 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
| 9 | PX RECEIVE | | Q1,02 | PCWP | |
| 10 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 11 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 12 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
----------------------------------------------------------------------------

-- TRUE BROADCAST of T_1
-------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | Q1,01 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | Q1,01 | PCWP | |
| 4 | PX RECEIVE | | Q1,01 | PCWP | |
| 5 | PX SEND BROADCAST | :TQ10000 | Q1,00 | P->P | BROADCAST |
| 6 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
| 8 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 9 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
-------------------------------------------------------------------------
So even if in the first plan the T_1 row source really has less than 2*DOP rows and the HYBRID HASH distribution turns into a BROADCAST distribution, this doesn't change the overall plan shape generated by the optimizer. The second HYBRID HASH distribution won't be skipped and will turn into a ROUND-ROBIN distribution instead, which can be confirmed by looking at the output from V$PQ_TQSTAT for example. So the data of the second row source still needs to be distributed, and hence the HASH JOIN will be operating as BUFFERED join due to the plan shape and the limitation that only a single PX SEND / RECEIVE pair can be active at the same time.

In the second plan the BROADCAST distribution of T_1 means that T_2 will not be re-distributed, hence there is no need to operate the HASH JOIN in buffered mode.

So the only purpose of this particular adaptive HYBRID HASH distribution is obviously to avoid skew if there are only a couple of rows (and hence possible join key values) in the left row source, because a HASH distribution based on such a low number of distinct values doesn't work well. Oracle's algorithm needs a certain number of distinct values otherwise it can end up with a bad distribution. This probably also explains why the threshold of 2*DOP was chosen so low.

What Does Unizin Mean for Digital Learning?

Michael Feldstein - Mon, 2015-02-16 13:41

By Michael Feldstein

Speaking of underpants gnomes sales pitches, Phil and I spent a fair amount of time hearing about Unizin at the ELI conference. Much of that time was spent hearing friends that I know, trust, and respect talk about the project. At length, in some cases. On the one hand, it is remarkable that, after these long conversations, I am not much clearer on the purpose of Unizin than I was the week before. On the other hand, being reminded that some of my friends really believe in this thing helped me refill my reservoir of patience for the project, which had frankly run dry.

Alas, that reservoir was largely drained away again during a Unizin presentation with the same title as this blog post. I went there expecting the presenters to answer that question for the audience.

Alack.

The main presentation, given by Anastasia Morrone of IUPUI, was probably the most straightforward and least hype-filled presentation about Unizin that I have heard so far. It was also short. Just when I was warming to it and figuring we’d get to the real meat, her last slide came up:

Split into groups of 5-7 people and discuss the following:

How can faculty, teaching center consultants, and learning technologists contribute to best practices with the evolving Unizin services?

Wait. What?

That’s right. They wanted us to tell them what Unizin means for digital learning. That might have been a good question to ask before they committed to spend a million dollars each on the initiative.

I joined one of the groups, resolving to try as hard as I could to keep my tongue in check and be constructive (or, at least, silent) for as long as I could. The very first comment in my group—not by me, I swear—was, “Before I can contribute, can somebody please explain to me what Unizin is?” It didn’t get any better from there. At the end of the breakout session, our group’s official answer was essentially, “Yeah, we don’t have any suggestions to contribute, so we’re hoping the other groups come up with something.” None of them did, really. The closest they came were a couple of vague comments on inclusive governance. I understand from a participant in one of the other groups that they simply refused to even try to answer the question. It was brutal.


Still, in the spirit of the good intentions behind their request for collaborative input, I will list here some possible ways in which Unizin could provide value, in descending order of credibility.

I’ll start with the moderately credible:

  • Provide a layer of support services on top of and around the LMS: This barely even gets mentioned by Unizin advocates but it is the one that makes the most sense to me. Increasingly, in addition to your LMS, you have a bunch of connected tools and services. It might be something basic like help desk support for the LMS itself. It might be figuring out how an external application like Voicethread works best with your LMS. As the LMS evolves into the hub of a larger ecosystem, it is putting increasing strain on IT department in everything from procurement to integration to ongoing support. Unizin could be a way of pooling resources across institutions to address those needs. If I were a CIO in a big university with lots of demands for LMS plug-in services, I would want this.
  • Provide a university-controlled environment for open courses: Back when Instructure announced Canvas Network, I commented that the company had cannily targeted the issue that MOOC providers seemed to be taking over the branding, not to mention substantial design and delivery decisions, from their university “partners.” Canvas Network is marketed as “open courses for the rest of us.” By adopting Canvas as their LMS, Unizin gets this for free. Again, if I were a CIO or Provost at a school that was either MOOCing or MOOC-curious, I would want this.
  • Providing buying power: What vendor would not want to sew up a sales deal with ten large universities or university systems (and counting) through one sales process? So far it is unclear how much Unizin has gained in reality through group negotiations, but it’s credible that they could be saving significant money through group contracting.
  • Provide a technology-assisted vehicle for sharing course materials and possibly even course cross-registrations: The institutions involved are large, and most or all probably have specialty strengths in some curricula area or other. I could see them wanting to trade, say, an Arabic degree program for a financial technology degree program. You don’t need a common learning technology infrastructure to make this work, but having one would make it easier.
  • Provide a home for a community researching topics like learning design and learning analytics: Again, you don’t need a common infrastructure for this, but it would help, as would having courses that are shared between institutions.

Would all of this amount to a significant contribution to digital learning, as the title of the ELI presentation seems to ask? Maybe! It depends on what happens in those last two bullet points. But the rollout of the program so far does not inspire confidence that the Unizin leadership knows how to facilitate the necessary kind of community-building. Quite the opposite, in fact. Furthermore, the software has only ancillary value in those areas, and yet it seems to be what Unizin leaders want to talk about 90%+ of the time.

Would these benefits justify a million-dollar price tag? That’s a different question. I’m skeptical, but a lot depends on specific inter-institutional intentions that are not public. A degree program has a monetary value to a university, and some universities can monetize the value better than others depending on which market they can access with significant degrees of penetration. Throw in the dollar savings on group contracting, and you can have a relatively hard number for the value of the coalition to a member. I know that a lot of university folk hate to think like that, but it seems to be the most credible way to add the value of these benefits up and get to a million dollars.

Let’s see if we can sweeten the pot by adding in the unclear or somewhat dubious but not entirely absurd benefits that some Unizin folk have claimed:

  • Unizin will enable universities to “own” the ecosystem: This claim is often immediately followed by the statement that their first step in building that ecosystem was to license Canvas. The Unizin folks seem to have at least some sense that it seems contradictory to claim you are owning the ecosystem by licensing a commercial product, so they immediately start talking about how Canvas is open source and Unizin could take it their own way if they wanted to. Yet this flies in the face of Unizin’s general stated direction of mostly licensing products and building connectors and such when they have to. Will all products they license be open source? Do they seriously commit to forking Canvas should particular circumstances arise? If not, what does “ownership” really mean? I buy it in relation to the MOOC providers, because there they are talking about owning brand and process. But beyond that, the message is pretty garbled. There could be something here, but I don’t know what it is yet.
  • Unizin could pressure vendors and standards groups to build better products: In the abstract, this sounds credible and similar to the buying power argument. The trouble is that it’s not clear either that pressure on these groups will solve our most significant problems or that Unizin will ask for the right things. I have argued that the biggest reason LMSs are…what they are is not vendor incompetence or recalcitrance but that faculty always ask for the same things. Would Unizin change this? Indiana University used what I would characterize as a relatively progressive evaluation framework when they chose Canvas, but there is no sign that they were using the framework to push their faculty to fundamentally rethink what they want to do with a virtual learning environment and therefore what it needs to be. I don’t doubt the intellectual capacity of the stakeholders in these institutions to ask the right questions. I doubt the will of the institutions themselves to push for better answers from their own constituents. As for the standards, as I have argued previously, the IMS is doing quite well at the moment. They could always move faster, and they could always use more university members who are willing to come to the table with concrete use cases and a commitment to put in the time necessary to work through a standards development process (including implementation). Unizin could do that, and it would be a good thing if they did. But it’s still pretty unclear to me how much their collective muscle would be useful to solve the hard problems.

Don’t get me wrong; I believe that both of the goals articulated above are laudable and potentially credible. But Unizin hasn’t really made the case yet.

Instead, at least some of the Unizin leaders have made claims that are either nonsensical (in that they don’t seem to actually mean anything in the real world) or absurd:

  • “We are building common gauge rails:” I love a good analogy, but it can only take you so far. What rides on those rails? And please don’t just say “content.” Are we talking about courses? Test banks? Individual test questions? Individual content pages? Each of these have very different reuse characteristics. Content isn’t just a set of widgets that can be loaded up in rail cars and used interchangeably wherever they are needed. If it were, then reuse would have been a solved problem ten years ago. What problem are you really trying to solve here, and why do you think that what you’re building will solve it (and is worth the price tag)?
  • “Unizin will make migrating to our next LMS easier because moving the content will be easy.” No. No, no, no, no, no, no, no. This is the perfect illustration of why the “common gauge rails” statement is meaningless. All major LMSs today can import IMS Common Cartridge format, and most can export in that format. You could modestly enhance this capability by building some automation that takes the export from one system and imports it into the other. But that is not the hard part of migration. The hard part is that LMSs work differently, so you have to redesign your content to make best use of the design and features of the new platform. Furthermore, these differences are generally not one that you want to stamp out—at least, not if you care about these platforms evolving and innovating. Content migration in education is inherently hard because context makes a huge difference. (And content reuse is exponentially harder for the same reason.) There are no widgets that can be neatly stacked in train cars. Your rails will not help here.
  • “Unizin will be like educational moneyball.” Again with the analogies. What does this mean? Give me an example of a concrete goal, and I will probably be able to evaluate the probability that you can achieve it, its value to students and the university, and therefore whether it is worth a million-dollar institutional investment. Unizin doesn’t give us that. Instead, it gives us statements like, “Nobody ever said that your data is too big.” Seriously? The case for Unizin comes down to “my data is bigger than yours”? Is this a well-considered institutional investment or a midlife crisis? The MOOC providers have gobs and gobs of data, but as HarvardX researcher Justin Reich has pointed out, “Big data sets do not, by virtue of their size, inherently possess answers to interesting questions….We have terabytes of data about what students clicked and very little understanding of what changed in their heads.” Tell us what kinds of research questions you intend to ask and how your investment will make it possible to answer them. Please. And also, don’t just wave your hands at PAR and steal some terms from their slides. I like PAR. It’s a Good Thing. But what new thing are you going to do with it that justifies a million bucks per institution?

I want to believe that my friends, who I respect, believe in Unizin because they see a clear justification for it. I want to believe that these schools are going to collectively invest $10 million or more doing something that makes sense and will improve education. But I need more than what I’m getting to be convinced. It can’t be the case that the people not in the inner circle have to convince themselves of the benefit of Unizin. One of my friends inside the Unizin coalition said to me, “You know, a lot of big institutions are signing on. More and more.” I replied, “That means that either something very good is happening or something very bad is happening.” Given the utter disaster that was the ELI session, I’m afraid that I continue to lean in the direction of badness.

 

The post What Does Unizin Mean for Digital Learning? appeared first on e-Literate.

node-oracledb 0.3.1 is on GitHub (Node.js driver for Oracle Database)

Christopher Jones - Mon, 2015-02-16 11:46

On behalf of the development team, I have merged some new features and fixes to node-oracledb.

Updates for node-oracledb 0.3.1

  • Added Windows build configuration. See Node-oracledb Installation on Windows. Thanks to Rinie Kervel for submitting a pull request, and thanks to all those that commented and tested.
  • Added Database Resident Connection Pooling (DRCP) support. See API Documentation for the Oracle Database Node.js Driver

    "Database Resident Connection Pooling enables database resource sharing for applications that run in multiple client processes or run on multiple middle-tier application servers. DRCP reduces the overall number of connections that a database must handle. DRCP is distinct from node-oracledb's local connection pool. The two pools can be used separately, or together."
  • Made an explicit connection release() do a rollback, to be consistent with the implicit release behavior.

  • Made install on Linux look for Oracle libraries in a search order:

    • Using install-time environment variables $OCI_LIB_DIR and $OCI_INC_DIR
    • In the highest version Instant Client RPMs installed
    • In $ORACLE_HOME
    • In /opt/oracle/instantclient
  • Added RPATH support on Linux, so LD_LIBRARY_PATH doesn't always need to be set. See Advanced installation on Linux

  • The directory name used by the installer for the final attempt at locating an Instant Client directory is now /opt/oracle/instantclient or C:\oracle\instantclient. This path may be used if OCI_LIB_DIR and OCI_INC_DIR are not set and the installer has to guess where the libraries are.

  • Added a compile error message "Oracle 11.2 or later client libraries are required for building" if attempting to build with older Oracle client libraries. This helps developers self-diagnose this class of build problem.

  • Fixed setting the isAutoCommit property.

  • Fixed a crash using pooled connections on Windows.

  • Fixed a crash querying object types.

  • Fixed a crash doing a release after a failed terminate. (The Pool is still unusable - this will be fixed later)

  • Clarified documentation that terminate() doesn't release connections. Doing an explicit release() of each pooled connection that is no longer needed is recommended to avoid resource leaks and maximize pool usage.

  • Updated version to 0.3.1 (surprise!)

Log Buffer #409, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-02-16 10:29

This Log Buffer Edition sheds light on some of the nifty blog posts of the week from Oracle, SQL Server and MySQL.

Oracle:

Patch Set Update: Hyperion Data Relationship Management 11.1.2.3.504

The Hitchhiker’s Guide to the EXPLAIN PLAN Part 33: The mother of all SQL antipatterns?

MongoDB as a Glassfish Security Realm

E-Business Suite customers must ensure that their database remains on a level that is covered by Error Correction Support (ECS)

EM12c: How to Retrieve Passwords from the Named Credentials

SQL Server:

How does a View work on a Table with a Clustered Columnstore Index ?

How do you develop and deploy your database?

Microsoft Azure Storage Queues Part 3: Security and Performance Tips

Stairway to SQL Server Security Level 6: Execution Context and Code Signing

Centralize Your Database Monitoring Process

MySQL:

New Galera Cluster version is now released! It includes patched MySQL server 5.6.21 and Galera replication provider 3.9

Shinguz: Nagios and Icinga plug-ins for MySQL 1.0.0 have been released

The next release of MongoDB includes the ability to select a storage engine, the goal being that different storage engines will have different capabilities/advantages, and users can select the one most beneficial to their particular use-case. Storage engines are cool.

The MySQL grant syntax allows you to specify dynamic database names using the wildcard characters.

Oracle‘s 10 commitments to MySQL – a 5 year review

Categories: DBA Blogs

Log Buffer #410, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-02-16 10:28

This Log Buffer Edition spreads the love of databases just before Valentine’s Day. Lovely blog posts from Oracle, SQL Server and MySQL are here for you to love.

Oracle:

Creating a Mobile-Optimized REST API Using Oracle Service Bus by Steven Davelaar.

GROUP BY – wrong results in 12.1.0.2

Using Edition-Based Redefinition to Bypass Those Pesky Triggers

It’s easy to make mistakes, or overlook defects, when constructing parallel queries – especially if you’re a developer who hasn’t been given the right tools to make it easy to test your code.

If you have a sorted collection of elements, how would you find index of specific value?

SQL Server:

How to use the IsNothing Inspection Function in SSRS

What better way to learn how to construct complex CHECK CONSTRAINTs, use the SQL 2012 window frame capability of the OVER clause and LEAD analytic function, as well as how to pivot rows into columns using a crosstab query?

SQL Server’s GROUP BY clause provides you a way to aggregate your SQL Server data and to group data on a single column, multiple columns, or even expressions. Greg Larsen discusses how to use the GROUP by clause to summarize your data.

Surely, we all know how T-SQL Control-of-flow language works? In fact it is surprisingly easy to get caught out.

Resilient T-SQL code is code that is designed to last, and to be safely reused by others.

MySQL:

The NoSQL databases are gaining increasing popularity. MongoDB, being one of the most established among them, uses a JSON data model and offers great scalability and ease of use due to the dynamic data schemas.

Is upgrading RDS like a shit-storm that will not end?

Over the last few MySQL releases the size of the MySQL package has increased and it looks like the trend is continuing.

This article details the proper method of load balancing either a Percona XTRADB Cluster (PXC) or MariaDB Cluster.

One common job for a DBA is working with a Development Team member on a batch loading process that is taking more time than expected.

Categories: DBA Blogs

Using rsync to clone Goldengate installation

Michael Dinh - Mon, 2015-02-16 09:15

You may be thinking: why clone Goldengate and why not just download it?
The exact version and patch level might not be available.
Too lazy to search for it and many other reasons you can come up with.

Why use rsync and not tar and scp? I wanted to refresh my memory of using rsync.

Commands used:

local source /u01/app/ggs01/ and remote target arrow:/u01/app/ggs03/

rsync -avh --delete --dry-run --exclude 'dirdatold' /u01/app/ggs01/ arrow:/u01/app/ggs03/
rsync -avh --delete --exclude 'dirdatold' /u01/app/ggs01/ arrow:/u01/app/ggs03/

Note:
/u01/app/ggs01/ (with the trailing slash) means sync the contents of the directory to the target
/u01/app/ggs01 (without the trailing slash) means create the ggs01 directory on the target and sync the contents into it

Demo:

Source: /u01/app/ggs01 and dirdata is symbolic link

oracle@arrow:las:/u01/app/ggs01
$ ls -ld dir*
drwxr-x---. 2 oracle oinstall 4096 Jan 13 13:12 dirchk
lrwxrwxrwx. 1 oracle oinstall   15 Feb 15 06:20 dirdat -> /oradata/backup
drwxr-x---. 2 oracle oinstall 4096 Jul 22  2014 dirdatold
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 dirdef
drwxr-x---. 2 oracle oinstall 4096 Apr  4  2014 dirjar
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 dirout
drwxr-x---. 2 oracle oinstall 4096 Feb 12 15:35 dirpcs
drwxr-x---. 2 oracle oinstall 4096 Jan 13 12:55 dirprm
drwxr-x---. 2 oracle oinstall 4096 Feb 12 15:36 dirrpt
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 dirsql
drwxr-x---. 2 oracle oinstall 4096 Sep 25 08:56 dirtmp

Target: arrow:/u01/app/ggs03/

oracle@arrow:las:/u01/app/ggs01
$ ls -l /u01/app/ggs03/
total 0

Let’s do a dry run first.

oracle@arrow:las:/u01/app/ggs01

$ rsync -avh --delete --dry-run --exclude 'dirdatold' /u01/app/ggs01/ arrow:/u01/app/ggs03/
oracle@arrow's password:

sending incremental file list
./

.....

output omitted for brevity

dirout/
dirpcs/
dirprm/
dirprm/esan.prm
dirprm/jagent.prm
dirprm/mgr.prm
dirrpt/
dirrpt/ESAN.rpt
dirrpt/ESAN0.rpt
dirrpt/ESAN1.rpt
dirrpt/ESAN2.rpt
dirrpt/ESAN3.rpt
dirrpt/ESAN4.rpt
dirrpt/ESAN5.rpt
dirrpt/ESAN6.rpt
dirrpt/ESAN7.rpt
dirrpt/ESAN8.rpt
dirrpt/ESAN9.rpt
dirrpt/MGR.rpt
dirrpt/MGR0.rpt
dirrpt/MGR1.rpt
dirrpt/MGR2.rpt
dirrpt/MGR3.rpt
dirrpt/MGR4.rpt
dirrpt/MGR5.rpt
dirrpt/MGR6.rpt
dirrpt/MGR7.rpt
dirrpt/MGR8.rpt
dirrpt/MGR9.rpt
dirsql/
dirtmp/


sent 6.96K bytes  received 767 bytes  15.44K bytes/sec
total size is 237.10M  speedup is 30704.36 (DRY RUN)

oracle@arrow:las:/u01/app/ggs01
$ ls -l /u01/app/ggs03/
total 0

Perform actual rsync

oracle@arrow:las:/u01/app/ggs01

$ rsync -avh --delete --exclude 'dirdatold' /u01/app/ggs01/ arrow:/u01/app/ggs03/
oracle@arrow's password:

sending incremental file list
./

.....

output omitted for brevity

dirout/
dirpcs/
dirprm/
dirprm/esan.prm
dirprm/jagent.prm
dirprm/mgr.prm
dirrpt/
dirrpt/ESAN.rpt
dirrpt/ESAN0.rpt
dirrpt/ESAN1.rpt
dirrpt/ESAN2.rpt
dirrpt/ESAN3.rpt
dirrpt/ESAN4.rpt
dirrpt/ESAN5.rpt
dirrpt/ESAN6.rpt
dirrpt/ESAN7.rpt
dirrpt/ESAN8.rpt
dirrpt/ESAN9.rpt
dirrpt/MGR.rpt
dirrpt/MGR0.rpt
dirrpt/MGR1.rpt
dirrpt/MGR2.rpt
dirrpt/MGR3.rpt
dirrpt/MGR4.rpt
dirrpt/MGR5.rpt
dirrpt/MGR6.rpt
dirrpt/MGR7.rpt
dirrpt/MGR8.rpt
dirrpt/MGR9.rpt
dirsql/
dirtmp/

sent 237.14M bytes  received 4.40K bytes  31.62M bytes/sec
total size is 237.10M  speedup is 1.00

oracle@arrow:las:/u01/app/ggs01
$ ls -ld /u01/app/ggs03/dir*
drwxr-x---. 2 oracle oinstall 4096 Jan 13 13:12 /u01/app/ggs03/dirchk

lrwxrwxrwx. 1 oracle oinstall   15 Feb 15 06:20 /u01/app/ggs03/dirdat -> /oradata/backup

drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 /u01/app/ggs03/dirdef
drwxr-x---. 2 oracle oinstall 4096 Apr  4  2014 /u01/app/ggs03/dirjar
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 /u01/app/ggs03/dirout
drwxr-x---. 2 oracle oinstall 4096 Feb 12 15:35 /u01/app/ggs03/dirpcs
drwxr-x---. 2 oracle oinstall 4096 Jan 13 12:55 /u01/app/ggs03/dirprm
drwxr-x---. 2 oracle oinstall 4096 Feb 12 15:36 /u01/app/ggs03/dirrpt
drwxr-x---. 2 oracle oinstall 4096 Apr 26  2014 /u01/app/ggs03/dirsql
drwxr-x---. 2 oracle oinstall 4096 Sep 25 08:56 /u01/app/ggs03/dirtmp
oracle@arrow:las:/u01/app/ggs01
$

Did it work?

oracle@arrow:las:/u01/app/ggs01
$ cd /u01/app/ggs03/
oracle@arrow:las:/u01/app/ggs03
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.21 18343248 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_140404.1029_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr  4 2014 15:18:36

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     STOPPED     ESAN        00:01:02      788:32:31
REPLICAT    STOPPED     RLAS_SAN    00:00:00      4989:14:29


GGSCI (arrow.localdomain) 2> exit

The above was from a neglected test environment.
Delete the checkpoint files for the extract/replicat in dirchk:

oracle@arrow:las:/u01/app/ggs03
$ cd dirchk/
oracle@arrow:las:/u01/app/ggs03/dirchk
$ ll
total 8
-rw-r-----. 1 oracle oinstall 2048 Jan 13 13:12 ESAN.cpe
-rw-r-----. 1 oracle oinstall 2048 Jul 22  2014 RLAS_SAN.cpr
oracle@arrow:las:/u01/app/ggs03/dirchk
$ rm *
oracle@arrow:las:/u01/app/ggs03/dirchk
$ cd ../dirpcs/
oracle@arrow:las:/u01/app/ggs03/dirpcs
$ ll
total 0
oracle@arrow:las:/u01/app/ggs03/dirpcs
$ cd ..
oracle@arrow:las:/u01/app/ggs03
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.21 18343248 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_140404.1029_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr  4 2014 15:18:36

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (arrow.localdomain) 2> start mgr

Manager started.


GGSCI (arrow.localdomain) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (arrow.localdomain) 4> exit

Now, what’s wrong?

oracle@arrow:las:/u01/app/ggs03
$ tail ggserr.log
2015-02-12 15:36:02  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2015-02-12 15:36:02  ERROR   OGG-00664  Oracle GoldenGate Manager for Oracle, mgr.prm:  OCI Error during OCIServerAttach (status = 12541-ORA-12541: TNS:no listener).
2015-02-12 15:36:02  ERROR   OGG-01668  Oracle GoldenGate Manager for Oracle, mgr.prm:  PROCESS ABENDING.
2015-02-12 15:36:04  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-02-15 09:44:51  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-02-15 09:45:31  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-02-15 09:45:41  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): start mgr.
2015-02-15 09:45:43  ERROR   OGG-00664  Oracle GoldenGate Manager for Oracle, mgr.prm:  OCI Error during OCIServerAttach (status = 12541-ORA-12541: TNS:no listener).
2015-02-15 09:45:43  ERROR   OGG-01668  Oracle GoldenGate Manager for Oracle, mgr.prm:  PROCESS ABENDING.
2015-02-15 09:45:44  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.

oracle@arrow:las:/u01/app/ggs03
$ cat dirprm/mgr.prm
PORT 7901
DYNAMICPORTLIST 15100-15120

USERID ggs@san, PASSWORD *****

PURGEOLDEXTRACTS /u01/app/ggs01/dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
PURGEMARKERHISTORY MINKEEPDAYS 5, MAXKEEPDAYS 7, FREQUENCYHOURS 24
PURGEDDLHISTORY MINKEEPDAYS 5, MAXKEEPDAYS 7, FREQUENCYHOURS 24

AUTOSTART ER *
AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60

CHECKMINUTES 1
LAGINFOMINUTES 0
LAGCRITICALMINUTES 1

oracle@arrow:las:/u01/app/ggs03
$ vi dirprm/mgr.prm

oracle@arrow:las:/u01/app/ggs03
$ cat dirprm/mgr.prm
PORT 7901
DYNAMICPORTLIST 15100-15120

-- USERID ggs@san, PASSWORD 888

PURGEOLDEXTRACTS /u01/app/ggs01/dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
PURGEMARKERHISTORY MINKEEPDAYS 5, MAXKEEPDAYS 7, FREQUENCYHOURS 24
PURGEDDLHISTORY MINKEEPDAYS 5, MAXKEEPDAYS 7, FREQUENCYHOURS 24

AUTOSTART ER *
AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60

CHECKMINUTES 1
LAGINFOMINUTES 0
LAGCRITICALMINUTES 1
oracle@arrow:las:/u01/app/ggs03
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.21 18343248 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_140404.1029_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr  4 2014 15:18:36

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (arrow.localdomain) 2> start mgr

Manager started.


GGSCI (arrow.localdomain) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING


GGSCI (arrow.localdomain) 4> exit
oracle@arrow:las:/u01/app/ggs03
$

Don’t forget that Oracle libraries are required to run Goldengate:

oracle@arrow:las:/u01/app/ggs03
$ ldd ggsci
        linux-vdso.so.1 =>  (0x00007fff95ffa000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00000039e6000000)
        libgglog.so => /u01/app/ggs03/./libgglog.so (0x00007f862ca8d000)
        libggrepo.so => /u01/app/ggs03/./libggrepo.so (0x00007f862c923000)
        libdb-5.2.so => /u01/app/ggs03/./libdb-5.2.so (0x00007f862c688000)
        libicui18n.so.38 => /u01/app/ggs03/./libicui18n.so.38 (0x00007f862c327000)
        libicuuc.so.38 => /u01/app/ggs03/./libicuuc.so.38 (0x00007f862bfee000)
        libicudata.so.38 => /u01/app/ggs03/./libicudata.so.38 (0x00007f862b012000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00000039e6800000)
        libxerces-c.so.28 => /u01/app/ggs03/./libxerces-c.so.28 (0x00007f862aafa000)
        libantlr3c.so => /u01/app/ggs03/./libantlr3c.so (0x00007f862a9e4000)

        libnnz11.so => /u01/app/oracle/product/11.2.0/dbhome_1/lib/libnnz11.so (0x00007f862a616000)
        libclntsh.so.11.1 => /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.11.1 (0x00007f8627ba0000)

        libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00000039f1c00000)
        libm.so.6 => /lib64/libm.so.6 (0x00000039e7400000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00000039f1800000)
        libc.so.6 => /lib64/libc.so.6 (0x00000039e6400000)
        /lib64/ld-linux-x86-64.so.2 (0x00000039e5c00000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00000039f3400000)
        libaio.so.1 => /lib64/libaio.so.1 (0x00007f862799d000)

oracle@arrow:las:/u01/app/ggs03
$ env |egrep 'HOME|LD'
OLDPWD=/home/oracle
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib:/lib:/usr/lib
HOME=/home/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
oracle@arrow:las:/u01/app/ggs03
$ unset ORACLE_HOME
oracle@arrow:las:/u01/app/ggs03
$ export LD_LIBRARY_PATH=/lib:/usr/lib
oracle@arrow:las:/u01/app/ggs03
$ ./ggsci
./ggsci: error while loading shared libraries: libnnz11.so: cannot open shared object file: No such file or directory
oracle@arrow:las:/u01/app/ggs03
$

Proactive Analysis Center (PAC)

Joshua Solomin - Mon, 2015-02-16 08:38

Proactive Analysis Center (PAC) is a comprehensive system health reporting solution for proactive and reactive services accessible through the My Oracle Support portal.
  • Manage risk through tracking/improving the Operational Risk Index (ORI).
  • Empower the customer to efficiently manage downtime.
Watch the video below for more information.


The Operational Risk Index (ORI) rank-orders systems with vulnerabilities from high to low, so that downtime can be planned efficiently. The subject matter experts behind PAC have detailed knowledge of the systems, because they deploy these systems and support these systems.

For more information, review the Document 1634073.2 How to use the Proactive Analysis Center (PAC).

Bookmark this page to view currently available PAC Advisor Webcasts and access to previously recorded sessions: Advisor Webcasts for Oracle Solaris and Systems Hardware (Doc ID 1282218.1).

DBMS_STATS.PURGE_STATS

Dominic Brooks - Mon, 2015-02-16 08:05

Prior to 11.2.0.4, the optimizer history tables are unpartitioned and DBMS_STATS.PURGE_STATS has little choice but to do a slow delete of stats before the parameterised input timestamp.

Why might you be purging? Here’s one such illustration:

https://jhdba.wordpress.com/tag/dbms_stats-purge_stats/

This delete can be slow if these tables are large and there are a number of reasons why they might be so, notably if MMON cannot complete the purge within its permitted timeframe.
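For comparison, the standard parameterised call looks something like this (the 10-day retention below is only an illustrative value):

exec DBMS_STATS.PURGE_STATS(SYSTIMESTAMP - INTERVAL '10' DAY);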

But note that if you’re happy to purge all history, there is a special TRUNCATE option if you make the call with a magic timestamp:

exec DBMS_STATS.PURGE_STATS(DBMS_STATS.PURGE_ALL);

but Oracle Support emphasises that:

This option is planned to be used as a workaround on urgent cases and under the advice of Support…

Ah… the old magic value pattern / antipattern!

PURGE_ALL CONSTANT TIMESTAMP WITH TIME ZONE :=
 TO_TIMESTAMP_TZ('1001-01-0101:00:00-00:00','YYYY-MM-DDHH:MI:SSTZH:TZM');

As part of the upgrade to 11.2.0.4, one gotcha is that these history tables become partitioned.

I don’t have a copy of 11.2.0.4 to hand but I do have 12.1.0.2 and the tables here are daily interval partitioned so I presume this is the same.
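If you want to check this on your own version, a quick look at the dictionary will confirm how (and whether) the history tables are partitioned. This query is my own sketch rather than something from the post:

select table_name, partitioning_type, interval
from   dba_part_tables
where  owner = 'SYS'
and    table_name like 'WRI$_OPTSTAT%HISTORY';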

One plus side of the newly partitioned tables is that PURGE_STATS can now drop old partitions, which is quicker than a delete. A minor downside is that the tables have global indexes, so the recursive/internal operations have to be done with UPDATE GLOBAL INDEXES.

One curiosity in the trace file from this operation was this statement:

delete /*+ dynamic_sampling(4) */ 
from   sys.wri$_optstat_histhead_history
where  savtime_date < to_date('01-01-1900', 'dd-mm-yyyy') 
and    savtime not in (select timestamp '0000-01-01 00:00:00 -0:0' + sid + serial#/86400
                       from   gv$session 
                       where  status = 'ACTIVE' 
                       and    con_id in (0, sys_context('userenv', 'con_id')))       
and    rownum <= NVL(:1, rownum)

This is deleting from the P_PERMANENT default partition, but why is this necessary and what is that subquery all about, particularly the timestamp '0000-01-01 00:00:00 -0:0' + sid + serial#/86400 bit?


I’m Alex Lightstone and this is how I work

Duncan Davies - Mon, 2015-02-16 08:00

First up in the ‘How I work‘ series for 2015 is Alex Lightstone. For those in the UK PeopleSoft marketplace Alex should need no introduction, for everyone else though here’s a brief bio:

Alex spent the 2000s at Oracle, initially in support, and then as Product Support Manager for PeopleSoft Global Payroll. When he left Oracle in 2010 there was a battle for his services, and we’re very grateful that he selected Cedar, where he forms part of our ‘trinity of UK-based GP gurus’ – alongside Bill and Gary. Alex can frequently be found sharing his knowledge at Cedar events, UKOUG conferences and SIGs. His skills aren’t confined to GP, however, and he’s already fixed a few bits of code that I’ve left half-completed (including the Field Watermarks).

Alex

Name: Alex Lightstone

Occupation: Lead Consultant at Cedar Consulting
Location: When not on client site, either home or the Cedar Office (Kings Cross, London)
Current computer: Dell Latitude E6430
Current mobile devices: HTC One (M7), Alcatel 4G USB Dongle
I work: best when I understand exactly what everyone else is doing so I can see the whole picture

What apps/software/tools can’t you live without?
As long as I have an internet connection I’m happy. I regularly change my browser but am reliant on Microsoft Office applications (yes, even Outlook). I’ve recently discovered Microsoft One Note and now wonder how I ever managed without it. Of course, App Designer, Data Mover and SQL Developer are a big part of my day to day work, as are Citrix, Remote Desktop and various flavours of VPN software when working remotely as needed to connect to client sites. Smartphones are useful but when space and battery power permit I’d rather be hooked up to my laptop with my 4G mobile broadband. Notepad and Paint are useful pieces of software in my opinion – I know there are alternatives but I like to keep it simple. And last but not least … Spotify.

Besides your phone and computer, what gadget can’t you live without?
Sat Nav is an absolute must, but it’s included with most smartphones these days. I’m happy with my phone and laptop. A smart-watch would be nice but I haven’t bought one yet.

What’s your workspace like?
They say “small is beautiful”, which is just as well in my case. Since the birth of my second child, my workspace has been relegated to a corner of my bedroom.

(Photo of Alex’s workspace: IMAG0260)

What do you listen to while you work?
I have somewhat eclectic tastes when it comes to music. I often listen to music via Google Music or Spotify. Anything goes … rock, classical, dance – it’s all on my playlists.

What PeopleSoft-related productivity apps do you use?
I use App Designer, SQL Developer and Notepad (for SQR and COBOL). I occasionally use Firebug or the Chrome Developer Tools to resolve HTML issues. PeopleBooks are useful for reference.

Do you have a 2-line tip that some others might not know?
Global Payroll is just a programming language where you don’t write code. Approach it as a programmer and it will make a lot more sense.

What SQL/Code do you find yourself writing most often?
Working in the world of payroll, I spend a lot of time querying the payroll results tables (GP_RSLT_XXXX).

SELECT PIN_NUM FROM PS_GP_PIN WHERE PIN_NM=<Element Name> is SQL that I use a lot.

What would be the one item you’d add to PeopleSoft if you could?
Better error handling – there are too many generic error messages where your only option is to delve into the code to determine the reason for the error.

What everyday thing are you better at than anyone else?
Teaching myself – it’s how I learnt most of what I know.

What’s the best advice you’ve ever received?
“The PeopleSoft world is a small place, don’t upset people, you will have to work with them again” – Anonymous


PeopleTools 8.54: Descending Indexes are not supported

David Kurtz - Mon, 2015-02-16 07:57
This is the first in a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

"With PeopleTools 8.54, PeopleTools will no longer support descending indexes on the Oracle database platform" - PeopleTools 8.54 Release Notes

They have gone again!  Forgive the trip down memory lane, but I think it is worth reviewing their history.
  • Prior to PeopleTools 8.14, if you specified a key or alternate search field in a record as descending, then wherever it appeared in the automatically generated key and alternate search key indexes, that column would be descending.  PeopleTools would add the DESC keyword after the column in the CREATE INDEX DDL.  Similarly, columns can be specified as descending in user indexes (that is to say, ones created by the developer with index IDs A through Z).
  • In PeopleTools 8.14 to 8.47, descending indexes were not built by Application Designer because of a problem with the descending key indexes in some versions of Oracle 8i.  
    • PeopleSoft had previously recommended setting an initialisation parameter on 8i to prevent descending indexes from being created even if the DESC keyword was specified:
_IGNORE_DESC_IN_INDEX=TRUE
  • From PeopleTools 8.48 the descending keyword came back, because this was the first version of PeopleTools that was certified only from Oracle 9i onwards, in which the descending index bug never occurred (see the blog posting 'Descending Indexes are Back!', October 2007).
  • In PeopleTools 8.54, there are again no descending indexes because the descending keyword has been omitted from the column list in the CREATE INDEX DDL. You can still specify descending keys in Application Designer because that controls the order in which rows are queried into scrolls in the PIA.  You can also still specify descending order on user indexes, but it has no effect upon either the application or the index DDL.
I haven’t found any documentation that explains why this change has been made.  This time there is no suggestion of a database bug.  However, I think that there are good database design reasons behind it.

Normally creation of a primary key automatically creates a unique index to police the constraint.  It is possible to create a primary key constraint using a pre-existing index.  The index does not have to be unique, but it may as well.  However, there are some limitations.
  • You cannot create primary key on nullable columns - that is a fundamental part of the relational model.  This is rarely a problem in PeopleSoft where only dates that are not marked 'required' in the Application Designer are created nullable in the database.
    • You can create a unique index on nullable columns, which is probably why PeopleSoft has always used unique indexes.  
  • You cannot use a descending index in a primary key constraint because it is implemented as a function-based index.
CREATE TABLE t (a NUMBER NOT NULL)
/
CREATE UNIQUE INDEX t1 ON t(A DESC)
/
ALTER TABLE t ADD PRIMARY KEY (a) USING INDEX t1
/
ORA-14196: Specified index cannot be used to enforce the constraint.

    • We can see that the descending key column is actually a function of a column and not a column, and so cannot be used in a primary key.
SELECT index_name, index_type, uniqueness FROM user_indexes WHERE table_name = 'T'
/

INDEX_NAME INDEX_TYPE                  UNIQUENES
---------- --------------------------- ---------
T1         FUNCTION-BASED NORMAL       UNIQUE

SELECT index_name, column_name, column_position, descend FROM user_ind_columns WHERE table_name = 'T'
/

INDEX_NAME COLUMN_NAME  COLUMN_POSITION DESC
---------- ------------ --------------- ----
T1         SYS_NC00002$               1 DESC

SELECT * FROM user_ind_expressions WHERE table_name = 'T'
/

INDEX_NAME TABLE_NAME COLUMN_EXPRESSION    COLUMN_POSITION
---------- ---------- -------------------- ---------------
T1         T          "A"                                1
    • Non-PeopleSoft digression: you can create a primary key on a virtual column.  An index on the virtual column is not function-based.  So you can achieve the same effect if you move the function from the index into a virtual column, and you can have a primary key on the function.  However, PeopleTools Application Designer doesn't support virtual columns.
I think that descending keys have been removed from PeopleTools because:
  • It permits the creation of primary key constraints using the unique indexes.
  • It does not pose any performance threat.  In Oracle, index leaf blocks are chained in both directions, so it is possible to use an ascending index for a descending scan and vice versa (a small sketch follows below).
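As a hedged illustration of that point (my own sketch, not taken from the release notes or from PeopleSoft DDL), an ordinary ascending index can be read backwards to satisfy a descending scan:
CREATE TABLE t_asc (a NUMBER NOT NULL)
/
CREATE INDEX t_asc_i ON t_asc(a)
/
-- the INDEX_DESC hint asks for the ascending index to be scanned in descending order
SELECT /*+ INDEX_DESC(t_asc t_asc_i) */ a
FROM   t_asc
WHERE  a > 0
ORDER  BY a DESC
/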
What are the advantages of having a primary key rather than just a unique constraint?
  • The optimizer can only consider certain SQL transformations if there is a primary key.  That mainly affects star transformation.
  • It allows the optimizer to rewrite a query to use a materialized view. 
  • It allows a materialized view refresh to be based on the primary key rather than the rowid (the physical address of the row in a table).  This can save you from performing a full refresh of the materialized view if you rebuild the table (I will come back to materialized views in another posting).
  • If you are using logical standby, you need to be able to uniquely identify a row of data, otherwise Oracle will perform supplemental logging.  Oracle will additionally log all bounded-size columns (in PeopleSoft, that means everything except LOBs).  Oracle can use a non-null unique constraint, but it cannot use a unique function-based index.
    • Logical standby can be a useful way to minimise downtime during a migration.  For example, when migrating the database from a proprietary Unix to Linux where there is an Endian change.  Minimising supplemental logging would definitely be of interest in this case.
The descending index change is not something that can be configured by the developer or administrator.  It is simply hard-coded in Application Designer and Data Mover, and it changes the DDL that they generate.

There are some considerations on migration to PeopleTools 8.54:
  • It will be necessary to rebuild all indexes with descending keys to remove the descending keys.  
  • There are a lot of descending indexes in a typical PeopleSoft application (I counted the number of function-based indexes in a typical system: HR ~11,000, Financials ~8,500).  If you choose to do this during the migration it may take considerable time.
  • In Application Designer, if your build script settings are set to only recreate an index if modified, Application Designer will not detect that the index has a descending key and rebuild it.  So you will have to work out for yourself which indexes have descending key columns and handle the rebuild manually (see the query sketched after this list).
  • With migration to PeopleTools 8.54 in mind, you might choose to prevent Oracle from building descending indexes by setting _IGNORE_DESC_IN_INDEX=TRUE.  Then you can handle the rebuild in stages in advance.
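A hedged sketch of such a query (my own, so adjust the owner filter to suit your environment):
-- list every index column created with the descending keyword
SELECT index_owner, index_name, table_name, column_name, column_position
FROM   dba_ind_columns
WHERE  descend = 'DESC'
ORDER  BY index_owner, index_name, column_position
/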
Conclusion

I think the removal of descending indexes from PeopleSoft is a sensible change that enables a number of Oracle database features, while doing no harm. ©David Kurtz, Go-Faster Consultancy Ltd.

PeopleTools 8.54 for the Oracle DBA

David Kurtz - Mon, 2015-02-16 06:36
The UKOUG PeopleSoft Roadshow 2015 comes to London on 31st March 2015.  In a moment of enthusiasm, I offered to talk about new and interesting features of PeopleTools 8.54 from the perspective of an Oracle DBA.

I have been doing some research, and have even read the release notes!  As a result, I have picked out some topics that I want to talk about.  I will discuss how the feature has been implemented, and what I think are the benefits and drawbacks of the feature:
This post is not about a new feature in PeopleTools 8.54, but it is something that I discovered while investigating the new version.
    Links have been added to the above list as I have also blogged about each.  I hope it might produce some feedback and discussion.  After the Roadshow I will also add a link to the presentation.

    PeopleTools 8.54 is still quite new, and we are all still learning.  So please leave comments, disagree with what I have written, correct things that I have got wrong, and ask questions. ©David Kurtz, Go-Faster Consultancy Ltd.

    Spring Boot JPA Thymeleaf application deployed to IBM Bluemix

    Pas Apicella - Mon, 2015-02-16 03:27
    IBM Bluemix is an open-standards, cloud-based platform for building, managing, and running apps of all types, such as web, mobile, big data, and smart devices. Capabilities include Java, mobile back-end development, and application monitoring, as well as features from ecosystem partners and open source—all provided as-a-service in the cloud.

    The example below shows how to deploy a Spring Boot JPA application to IBM Bluemix. The example is based on the code below.

    https://github.com/papicella/BluemixSpringBootJPA

    1. Target the IBM Bluemix

    cf api https://api.ng.bluemix.net

    2. Log in as follows

    cf login -u pasapi@au1.ibm.com -p ****** -o pasapi@au1.ibm.com -s dev

    3. Create a MYSQL service as shown below.

    pas.apicella@IBM-XD082415H ~/bluemix-apps/spring-data-jpa-thymeleaf/mysql
    $ cf create-service mysql 100 dev-mysql
    Creating service dev-mysql in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
    OK

    4. At this point let's run the application using the embedded Tomcat server as shown below.

    pas.apicella@IBM-XD082415H ~/bluemix-apps/spring-data-jpa-thymeleaf/mysql
    $ java -jar BluemixSpringBootJPA-0.0.1-SNAPSHOT.jar

      .   ____          _            __ _ _
     /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
    ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
     \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
      '  |____| .__|_| |_|_| |_\__, | / / / /
     =========|_|==============|___/=/_/_/_/
     :: Spring Boot ::        (v1.2.0.RELEASE)

    2015-02-16 20:12:18.289  INFO 15824 --- [           main] p.cloud.webapp.ApplesCfDemoApplication   : Starting ApplesCfDemoApplication on IBM-XD082415H with PID 15824 (C:\ibm\bluemix\apps\spring-data-jpa-thymeleaf\mysql\BluemixSpringBootJPA-0.0.1-SNAPSHOT.jar started by pas.apicella in C:\ibm\bluemix\apps\spring-data-jpa-thymeleaf\mysql)
    2015-02-16 20:12:18.356  INFO 15824 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@6b51dbf6: startup date [Mon Feb 16 20:12:18 AEDT 2015]; root of context hierarchy
    2015-02-16 20:12:20.679  INFO 15824 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.boot.autoconfigure.dao.PersistenceExceptionTranslationAutoConfiguration' of type [class org.springframework.boot.autoconfigure.dao.PersistenceExceptionTranslationAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2015-02-16 20:12:20.779  INFO 15824 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [class org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$15fe846f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2015-02-16 20:12:20.831  INFO 15824 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'transactionAttributeSource' of type [class org.springframework.transaction.annotation.AnnotationTransactionAttributeSource] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    2015-02-16 20:12:20.853  INFO 15824 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'transactionInterceptor' of type [class org.springframework.transaction.interceptor.TransactionInterceptor] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
    .....

    class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
    2015-02-16 20:12:27.710  INFO 15824 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
    2015-02-16 20:12:28.385  INFO 15824 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
    2015-02-16 20:12:28.510  INFO 15824 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080/http
    2015-02-16 20:12:28.513  INFO 15824 --- [           main] p.cloud.webapp.ApplesCfDemoApplication   : Started ApplesCfDemoApplication in 10.626 seconds (JVM running for 11.481)

    5. Now, using the manifest.yml below, deploy the application to Bluemix as shown.

    manifest.yml

    applications:
    - name: pas-mj-albums
      memory: 512M
      instances: 1
      host: pas-albums
      domain: mybluemix.net
      path: ./BluemixSpringBootJPA-0.0.1-SNAPSHOT.jar
      buildpack: https://github.com/cloudfoundry/java-buildpack.git
      services:
        - dev-mysql

    Deployment output

    pas.apicella@IBM-XD082415H ~/bluemix-apps/spring-data-jpa-thymeleaf/mysql
    $ cf push -f manifest.yml
    Using manifest file manifest.yml

    Creating app pas-mj-albums in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
    OK

    Creating route pas-mj-albums.mybluemix.net...
    OK

    Binding pas-mj-albums.mybluemix.net to pas-mj-albums...
    OK

    Uploading pas-mj-albums...
    Uploading app files from: BluemixSpringBootJPA-0.0.1-SNAPSHOT.jar
    Uploading 837.2K, 135 files
    Done uploading
    OK
    Binding service dev-mysql to app pas-mj-albums in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
    OK

    Starting app pas-mj-albums in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
    -----> Java Buildpack Version: 303bda3 | https://github.com/cloudfoundry/java-buildpack.git#303bda3
    -----> Uploading droplet (64M)

    0 of 1 instances running, 1 starting
    0 of 1 instances running, 1 starting
    0 of 1 instances running, 1 starting
    1 of 1 instances running

    App started


    OK

    App pas-mj-albums was started using this command `SERVER_PORT=$PORT $PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.7.0_RELEASE.jar -Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh -Xmx382293K -Xms382293K -XX:MaxMetaspaceSize=64M -XX:MetaspaceSize=64M -Xss995K org.springframework.boot.loader.JarLauncher`

    Showing health and status for app pas-mj-albums in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
    OK

    requested state: started
    instances: ?/1
    usage: 512M x 1 instances
    urls: pas-mj-albums.mybluemix.net
    last uploaded: Mon Feb 16 09:18:28 +0000 2015

         state     since                    cpu    memory           disk
    #0   running   2015-02-16 08:20:04 PM   0.0%   393.4M of 512M   129.3M of 1G

    6. Once deployed, the Bluemix console page shows the application as follows.


    7. Finally, invoke the application using its unique route as follows:

    http://pas-mj-albums.mybluemix.net/albums





    More Information

    https://www.ng.bluemix.net/docs/#

    http://feeds.feedburner.com/TheBlasFromPas
    Categories: Fusion Middleware

    Oracle SOASuite UMS: Deregister obsolete Messaging Client applications

    Darwin IT - Mon, 2015-02-16 02:38
    There are already several blogs on how to receive and send email using the UMS email adapter. A few good starting points that use GMail as a provider are the ones written by our respected colleagues at AMIS:
    So I won't bother to do a how-to on that on my own account, although I managed to get it working with a local Exchange setup.

    What I did manage to do is read the e-mail and then process it, uploading the attachments and body to Adaptive Case Management using BPEL and Java in a Spring context. If you want to do something similar, make sure you install patch 18511990 for fetching the attachment properties and content, since without the patch the properties for inline attachments are not written properly to the soa-infra database. See my earlier blog posts here and here.

    Having it all set up and playing around with it, you might end up in the situation where the service won't listen to the actual email address you reserved for it, as I did. This might be the case when you change the email address in the receiving adapter component of the composite, or after deploying several versions of the composite, especially with different addresses.

    It turns out that there is a panel to deregister Messaging Client Applications to clean up the mess.
    First go to the Enterprise Manager and, under the WebLogic Domain, navigate to the 'usermessagingserver' (there are also entries for the different usermessagingdrivers, but in this case you need the server itself):
     Right click on it and in the pop-up choose 'Messaging Client Applications':
     Here the Messaging Client Applications are listed, registered with their particular endpoints.
    In our case all the activation agents of the polling inbound email adapters are registered here, and the situation might occur that the composites are listening to the wrong addresses. So select those you don't want anymore, and click the De-register button.
    You might need to re-deploy your composite and edit the inbound email component so that it listens to the proper address set up for the particular environment (development, test, acceptance, production). Then it should work. Maybe a server restart is necessary to have the proper activation agent(s) started (and the unwanted ones shut down).

      OracleText: deletes and garbage

      Yann Neuhaus - Sun, 2015-02-15 15:21

      In the previous post we have seen how the OracleText index tables are maintained when new documents arrive: at sync, the new documents are read, up to the available memory, and the words are inserted in the $I table with their mapping information. Now we will see how removed documents are processed. We will not cover updates, as they are just a delete + insert.

      Previous state

      Here is the state from the previous post where I had those 3 documents:

      • 'Hello World'
      which was synced alone, and then the two following ones were synced together:
      • 'Hello Moon, hello, hello'
      • 'Hello Mars'
      The $K is an IOT which maps the OracleText table ROWID to the DOCID (the fact that the primary key TEXTKEY is not at the start is a bit misleading):
      SQL> select * from DR$DEMO_CTX_INDEX$K;
      
           DOCID TEXTKEY
      ---------- ------------------
               1 AAAXUtAAKAAABWlAAA
               2 AAAXUtAAKAAABWlAAB
               3 AAAXUtAAKAAABWlAAC
      
      The $R is a table for the opposite navigation (docid to rowid), storing a fixed-length array of ROWIDs indexed by docid and split into several lines:
      SQL> select * from DR$DEMO_CTX_INDEX$R;
      
          ROW_NO DATA
      ---------- ------------------------------------------------------------
               0 00001752D000280000056940414100001752D00028000005694041420000
      
      The $I table stores the tokens, the first 5 columns being indexed ($X) and the TOKEN_INFO blob stores detailed location of the token:
      SQL> select * from DR$DEMO_CTX_INDEX$I;
      
      TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
      ---------- ---------- ----------- ---------- ----------- ----------
      HELLO               0           1          1           1 008801
      WORLD               0           1          1           1 008802
      HELLO               0           2          3           2 0098010201
      MARS                0           3          3           1 008802
      MOON                0           2          2           1 008802
      
      We have seen that the $I table can be fragmented for 3 reasons:
      • Each sync inserts its own tokens (instead of merging them with existing ones)
      • TOKEN_INFO size is limited so that it fits in-row (we will look at the 12c new features later)
      • Only tokens that fit in the allocated memory can be merged
      And the $N is empty for the moment:
      SQL> select * from DR$DEMO_CTX_INDEX$N;
      
      no rows selected
      

      Delete

      Do you remember how inserts go to the CTXSYS.DR$PENDING table? Deletes go to the CTXSYS.DR$DELETE table:

      SQL> delete from DEMO_CTX_FRAG where num=0002;
      
      1 row deleted.
      
      SQL> select * from CTXSYS.DR$DELETE;
      
      DEL_IDX_ID DEL_IXP_ID  DEL_DOCID
      ---------- ---------- ----------
            1400          0          2
      
      I've deleted docid=2 but the tokens are still there:
      SQL> select * from DR$DEMO_CTX_INDEX$I;
      
      TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
      ---------- ---------- ----------- ---------- ----------- ----------
      HELLO               0           1          1           1 008801
      WORLD               0           1          1           1 008802
      HELLO               0           2          3           2 0098010201
      MARS                0           3          3           1 008802
      MOON                0           2          2           1 008802
      
      as well as their mapping to the ROWID:
      SQL> -- $R is for rowid - docid mapping (IOT)
      SQL> select * from DR$DEMO_CTX_INDEX$R;
      
          ROW_NO DATA
      ---------- ------------------------------------------------------------
               0 00001752D000280000056940414100001752D00028000005694041420000
      
      However, the $N has been maintained to know that docid=2 has been removed:
      SQL> select * from DR$DEMO_CTX_INDEX$N;
      
       NLT_DOCID N
      ---------- -
               2 U
      
      This is the goal of the $N (Negative) table, which records the docids that should not be there and that must be deleted at the next optimization (garbage collection).
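      As a hedged aside (my addition, not part of the original demo): that garbage collection is what CTX_DDL.OPTIMIZE_INDEX performs, and for this index a minimal call would look like:

      -- FULL optimization defragments $I and also removes rows for deleted docids
      exec ctx_ddl.optimize_index('DEMO_CTX_INDEX','FULL');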

      From there, a search by words ('normal lookup') will give docids and rowids, and CTXSYS.DR$DELETE must be read in order to know that the document is not there anymore. It's an IOT and the docid can be found with an index unique scan.

      However, for the opposite way, having a ROWID and checking whether it contains some words ('functional lookup'), we need to know that there is no document. In my case I deleted the row, but you may have updated the document, so the ROWID is still there. There is no pending table for that; it is maintained immediately in the $K table:

      SQL> select * from DR$DEMO_CTX_INDEX$K;
      
           DOCID TEXTKEY
      ---------- ------------------
               1 AAAXUtAAKAAABWlAAA
               3 AAAXUtAAKAAABWlAAC
      
      the entry that addressed docid=2 has been deleted.

      Commit

      All those changes were done within the same transaction. Other sessions still see the old values, so they have no need to read CTXSYS.DR$DELETE. What I described above applies only to my session: the normal lookup reads the queuing table, and the functional lookup stops at $K. We don't have to wait for a sync to process CTXSYS.DR$DELETE. It's done at commit:

      SQL> commit;
      
      Commit complete.
      
      SQL> select * from CTXSYS.DR$DELETE;
      
      no rows selected
      
      SQL> select * from DR$DEMO_CTX_INDEX$R;
      
          ROW_NO DATA
      ---------- ------------------------------------------------------------
               0 00001752D000280000056940414100000000000000000000000000000000
      
      Of course we can't read it directly, but we can see that part of it has been zeroed. That $R table is definitely special: it's not stored in a relational way, and its maintenance is deferred to commit time.

      But nothing has changed in $I, which still contains garbage (and a sync would not change anything there):

      SQL> select * from DR$DEMO_CTX_INDEX$I;
      
      TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
      ---------- ---------- ----------- ---------- ----------- ----------
      HELLO               0           1          1           1 008801
      WORLD               0           1          1           1 008802
      HELLO               0           2          3           2 0098010201
      MARS                0           3          3           1 008802
      MOON                0           2          2           1 008802
      
      And of course $N row is still there to record the deleted docid:
      SQL> select * from DR$DEMO_CTX_INDEX$N;
      
       NLT_DOCID N
      ---------- -
               2 U
      
      Sync

      I've not reproduced it here, but a sync does not change anything. Sync is for new documents - not for deleted ones.

      Conclusion

      What you need to remember here is:
      • New documents are made visible through the OracleText index at sync
      • Removed documents are immediately made invisible at commit
      Of course, you can sync at commit, but the second thing to remember is that
      • New documents bring fragmentation
      • Removed documents bring garbage
      and both of them increase the size of the $I table and its $X index, making range scans less efficient. We will see more about that, but the next post will be about queries. I've talked about normal and functional lookups and we will see how they are done. Let's detail that.

      OracleText: inserts and fragmentation

      Yann Neuhaus - Sun, 2015-02-15 14:37

      I plan to write several posts about OracleText indexes, a feature that is not used enough in my opinion. It's available in all editions and can index small text or large documents to search by words. When you create an OracleText index, a few tables are created to store the words and the association between those words and the table rows that contain the documents. I'll start by showing how document inserts are processed.

      Create the table and index

      I'm creating a simple table with a CLOB

      SQL> create table DEMO_CTX_FRAG
           (num number constraint DEMO_CTX_FRAG_PK primary key,txt clob);
      
      Table created.
      
      and a simple OracleText index on that column
      SQL> create index DEMO_CTX_INDEX on DEMO_CTX_FRAG(txt)
           indextype is ctxsys.context;
      
      Index created.
      
      That creates the following tables:
      • DR$DEMO_CTX_INDEX$I which stores the tokens (e.g. words)
      • DR$DEMO_CTX_INDEX$K which indexes the documents (docid) and links them to the table ROWID
      • DR$DEMO_CTX_INDEX$R which stores the opposite navigation (get the ROWID from a docid)
      • DR$DEMO_CTX_INDEX$N which stores docids for deferred maintenance cleanup.

      Inserts

      I'm inserting a row with some text in the clob column

      SQL> insert into DEMO_CTX_FRAG values (0001,'Hello World');
      
      1 row created.
      
      I commit
      SQL> commit;
      
      Commit complete.
      
      And here is what we have in the OracleText tables:
      SQL> select * from DR$DEMO_CTX_INDEX$K;
      no rows selected
      
      SQL> select * from DR$DEMO_CTX_INDEX$R;
      no rows selected
      
      SQL> select * from DR$DEMO_CTX_INDEX$I;
      no rows selected
      
      SQL> select * from DR$DEMO_CTX_INDEX$N;
      no rows selected
      
      Nothing is stored here yet, which means that we cannot find our newly inserted row with an OracleText search.

      By default, all inserts maintain the OracleText tables asynchronously.
      The inserted row is referenced in a CTXSYS queuing table that stores the pending inserts:

      SQL> select * from CTXSYS.DR$PENDING;
      
         PND_CID    PND_PID PND_ROWID          PND_TIMES P
      ---------- ---------- ------------------ --------- -
            1400          0 AAAXUtAAKAAABWlAAA 13-FEB-15 N
      
      and we have a view over it:
      SQL> select pnd_index_name,pnd_rowid,pnd_timestamp from ctx_user_pending;
      
      PND_INDEX_NAME                 PND_ROWID          PND_TIMES
      ------------------------------ ------------------ ---------
      DEMO_CTX_INDEX                 AAAXUtAAKAAABWlAAA 13-FEB-15
      

      Synchronization

      let's synchronize:

      SQL> exec ctx_ddl.sync_index('DEMO_CTX_INDEX');
      
      PL/SQL procedure successfully completed.
      
      The queuing table has been processed:
      SQL> select pnd_index_name,pnd_rowid,pnd_timestamp from ctx_user_pending;
      
      no rows selected
      
      and here is how that document is stored in our OracleText tables.

      $K records one document (docid=1) and the table rowid that contains it:

      SQL> select * from DR$DEMO_CTX_INDEX$K;
      
           DOCID TEXTKEY
      ---------- ------------------
               1 AAAXUtAAKAAABWlAAA
      
      The $R table stores the docid -> rowid mapping in a non-relational way:
      SQL> select * from DR$DEMO_CTX_INDEX$R;
      
          ROW_NO DATA
      ---------- ------------------------------------------------------------
               0 00001752D0002800000569404141
      
      How is it stored? It's an array of fixed-length ROWIDs, so from the docid we can go directly to the offset and get the rowid. Because DATA is limited to 4000 bytes, there can be several rows, but a docid determines the ROW_NO as well as the offset within DATA.

      $I stores the tokens (which are the words here, as we have TEXT tokens - type 0) as well as their location information:

      SQL> select * from DR$DEMO_CTX_INDEX$I;
      
      TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
      ---------- ---------- ----------- ---------- ----------- ----------
      HELLO               0           1          1           1 008801
      WORLD               0           1          1           1 008802
      
      For each word it stores the range of docids that contain the word (token_first and token_last are those docids), and token_info stores, in a binary way, the occurrences of the word within the documents (it stores pairs of docid and offset within the document). It's a BLOB but it is limited to 4000 bytes so that it is stored inline. This means that if a token is present in a lot of documents, several lines in $I will be needed, each covering a different range of docids. This has changed in 12c and we will see that in future blog posts.

      Thus, we can have several rows for one token. This is the first cause of fragmentation: searching for documents that contain such a word will have to read several lines of the $I table. The $N table has nothing here because we have synchronized only inserts and there is nothing to clean up.

      SQL> select * from DR$DEMO_CTX_INDEX$N;
      
      no rows selected
      

      Several inserts

      I will insert two lines, which also contain the 'hello' word.

      SQL> insert into DEMO_CTX_FRAG values (0002,'Hello Moon, hello, hello');
      
      1 row created.
      
      SQL> insert into DEMO_CTX_FRAG values (0003,'Hello Mars');
      
      1 row created.
      
      SQL> commit;
      
      Commit complete.
      
      And I synchronize:
      SQL> exec ctx_ddl.sync_index('DEMO_CTX_INDEX');
      
      PL/SQL procedure successfully completed.
      
      So, I now have 3 documents:
      SQL> select * from DR$DEMO_CTX_INDEX$K;
      
           DOCID TEXTKEY
      ---------- ------------------
               1 AAAXUtAAKAAABWlAAA
               2 AAAXUtAAKAAABWlAAB
               3 AAAXUtAAKAAABWlAAC
      
      The reverse mapping array has increased:
      SQL> select * from DR$DEMO_CTX_INDEX$R;
      
          ROW_NO DATA
      ---------- ------------------------------------------------------------
               0 00001752D000280000056940414100001752D00028000005694041420000
      
      And now the tokens:
      SQL> select * from DR$DEMO_CTX_INDEX$I;
      
      TOKEN_TEXT TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT TOKEN_INFO
      ---------- ---------- ----------- ---------- ----------- ----------
      HELLO               0           1          1           1 008801
      WORLD               0           1          1           1 008802
      HELLO               0           2          3           2 0098010201
      MARS                0           3          3           1 008802
      MOON                0           2          2           1 008802
      
      What is interesting here is that the previous lines (docid 1) have not been updated and new lines have been inserted for docid 2 and 3.
      • 'moon' is only in docid 2
      • 'mars' is only in docid 3
      • 'hello' is in 2 (token_count) documents, from docid 2 to docid 3 (token_first and token_last)

      This is the other cause of fragmentation, which comes from frequent syncs. Each sync will add new rows. However, when multiple documents are processed in the same sync, only one $I entry per token is needed (a quick check is sketched below).
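      A quick way to see that fragmentation for yourself (my own sketch, not part of the original demo) is to count the $I rows per token:

      select token_text, token_type, count(*) fragments
      from   DR$DEMO_CTX_INDEX$I
      group  by token_text, token_type
      having count(*) > 1
      order  by fragments desc;

      At this point of the demo it should report only HELLO, with 2 rows.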

      There is a third cause of fragmentation. We see here that the token_info is larger for the HELLO entry covering docid 2 to 3, because there are several occurrences of the token. All of that must fit in memory when we synchronize. So it's good to synchronize when we have several documents (so that the common tokens are not too fragmented), but we also need to have enough memory. The default is 12M and is usually too small. It can be increased with the 'index memory' parameter of the index, and there is also a maximum, set by ctx_adm.set_parameter, for which the default (50M) is also probably too low (a sketch of both follows below).
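      Both settings can be changed; the values below are purely illustrative assumptions on my part, not recommendations from this post:

      -- raise the system-wide ceiling (run as CTXSYS or a suitably privileged user)
      exec ctx_adm.set_parameter('max_index_memory', '500M');
      -- pass a larger working memory to an individual sync
      exec ctx_ddl.sync_index('DEMO_CTX_INDEX', memory => '64M');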

      Nothing yet in the $N table that we will see in the next post:

      SQL> select * from DR$DEMO_CTX_INDEX$N;
      
      no rows selected
      

      Summary

      The important point here is that inserted documents are visible only after synchronization, and synchronizing too frequently will cause fragmentation. If you need to synchronize in real time (on commit) and you commit for each document inserted, then you will probably have to plan frequent index optimization. If, on the other hand, we are able to synchronize only when we have inserted a lot of documents, then fragmentation is reduced, provided that we had enough memory to process all the documents in one pass.

      The next post will be about deletes and updates.

      Wanted – A Theory of Change

      Michael Feldstein - Sun, 2015-02-15 14:25

      By Michael FeldsteinMore Posts (1014)

      Phil and I went to the ELI conference this week. It was my first time attending, which is odd given that it is one of the best conferences that I’ve attended in quite a while. How did I not know this?

      We went, in part, to do a session on our upcoming e-Literate TV series, which was filmed for use in the series. (Very meta.) Malcolm Brown and Veronica Diaz did a fantastic job of both facilitating and participating in the conversation. I can’t wait to see what we have on film. Phil and I also found that an unusually high percentage of sessions were ones that we actually wanted to go to and, once there, didn’t feel the urge to leave. But the most important aspect of any conference is who shows up, and ELI did not disappoint there either. The crowd was diverse, but with a high percentage of super-interesting people. On the one hand, I felt like this was the first time that there were significant numbers of people talking about learning analytics who actually made sense. John Whitmer from Blackboard (but formerly from CSU), Mike Sharkey from Blue Canary (but formerly from University of Phoenix), Rob Robinson from Civitas (but formerly from the University of Texas), Eric Frank of Acrobatiq (formerly of Flat World Knowledge)—these people (among others) were all speaking a common language, and it turns out that language was English. I feel like that conversation is finally beginning to come down to earth. At the same time, I got to meet Gardner Campbell for the first time and ran into Jim Groom. One of the reasons that I admire both of these guys is that they challenge me. They unsettle me. They get under my skin, in a good way (although it doesn’t always feel that way in the moment).

      And so it is that I find myself reflecting disproportionately on the brief conversations that I had with both of them, and about the nature of change in education.

      I talked to Jim for maybe a grand total of 10 minutes, but one of the topics that came up was my post on why we haven’t seen the LMS get dramatically better in the last decade and why I’m pessimistic that we’ll see dramatic changes in the next decade. Jim said,

      Your post made me angry. I’m not saying it was wrong. It was right. But it made me angry.

      Hearing this pleased me inordinately, but I didn’t really think about why it pleased me until I was on the plane ride home. The truth is that the post was intended to make Jim (and others) angry. First of all, I was angry when I wrote it. We should be frustrated at how hard and slow change has been. It’s not like anybody out there is arguing that the LMS is the best thing since sliced bread. Even the vendors know better than to be too boastful these days. (Most of them, anyway.) At best, conversations about the LMS tend to go like the joke about the old Jewish man complaining about a restaurant: “The food here is terrible! And the portions are so small!” After a decade of this, the joke gets pretty old. Somehow, what seemed like Jack Benny has started to feel more like Franz Kafka.

      Second, it is an unattractive personal quirk of mine that I can’t resist poking at somebody who seems confident of a truth, no matter what that truth happens to be. Even if I agree with them. If you say to me, “Michael, you know, I have learned that I don’t really know anything,” I will almost inevitably reply, “Oh yeah? Are you sure about that?” The urge is irresistible. If you think I’m exaggerating, then ask Dave Cormier. He and I had exactly this fight once. This may make me unpopular at parties—I like to tell myself that’s the reason—but it turns out to be useful in thinking about educational reform because just about everybody shares some blame in why change is hard, and nobody likes to admit that they are complicit in a situation that they find repugnant. Faculty hate to admit that some of them reinforce the worst tendencies of LMS and textbook vendors alike by choosing products that make their teaching easier rather than better. Administrators hate to admit that some of them are easily seduced by vendor pitches, or that they reflexively do whatever their peer institutions do without a lot of thought or analysis. Vendors hate to admit that their organizations often do whatever they have to in order to close the sale, even if it’s bad for the students. And analysts and consultants…well…don’t get me started on those smug bastards. It would be a lot easier if there were one group, one cause that we could point to as the source of our troubles. But there isn’t. As a result, if we don’t acknowledge the many and complex causes of the problems we face, we risk having an underpants gnomes theory of change:

      (Embedded video: the “underpants gnomes” clip.)

      I don’t know what will work to bring real improvements to education, but here are a few things that won’t:

      • Just making better use of the LMS won’t transform education.
      • Just getting rid of the LMS won’t transform education.
      • Just bringing in the vendors won’t transform education.
      • Just getting rid of the vendors won’t transform education.
      • Just using big data won’t transform education.
      • Just busting the faculty unions won’t transform education.
      • Just listening to the faculty unions won’t transform education.

      Critiques of some aspect of education or other are pervasive, but I almost always feel like I am listening to an underpants gnomes sales presentation, no matter who is pitching it, no matter what end of the political spectrum they are on. I understand what the speaker wants to do, and I also understand the end state to which the speaker aspires, but I almost never understand how the two are connected. We are sorely lacking a theory of change.

      This brings me to my conversation with Gardner, which was also brief. He asked me whether I thought ELI was the community that could…. I put in an ellipse there both because I don’t remember Gardner’s exact wording and because a certain amount of what he was getting at was implied. I took him to mean that he was looking for the community that was super-progressive that could drive real change (although it is entirely possible that I was and am projecting some hope that he didn’t intend). It took me a while to wrap my head around this encounter too. On the one hand, I am a huge believer in the power of communities as networks for identifying and propagating positive change. On the other hand, I have grown to be deeply skeptical of them as having lasting power in broad educational reform. Every time I have found a community that I got excited about, one of two things inevitably happened: either so many people piled into it that it lost its focus and sense of mission, or it became so sure of its own righteousness that the epistemic closure became suffocating. There may be some sour grapes in that assessment—as Groucho Marx said, I don’t want to belong to any club that would have me as a member—but it’s not entirely so. I think communities are essential. And redeeming. And soul-nourishing. But I think it’s a rare community indeed—particularly in transient, professional, largely online communities, where members aren’t forced to work out their differences because they have to live with each other—that really provides transformative change. Most professional communities feel like havens, when I think we need to feel a certain amount of discomfort for real change to happen. The two are not mutually exclusive in principle—it is important to feel like you are in a safe environment in order to be open to being challenged—but in practice, I don’t get the sense that most of the professional communities I have been in have regularly encouraged  creative abrasion. At least, not for long, and not to the point where people get seriously unsettled.

      Getting back to my reaction to Jim’s comment, I guess what pleased me so much is that I was proud to have provided a measure of hopefully productive and thought-provoking discomfort to somebody who has so often done me the same favor. This is a trait I admire in both Jim and Gardner. They won’t f**king leave me alone. Another thing that I admire about them is that they don’t just talk, and they don’t just play in their own little sandboxes. Both of them build experiments and invite others to play. If there is a way forward, that is it. We need to try things together and see how they work. We need to apply our theories and find out what breaks (and what works better than we could have possibly imagined). We need to see if what works for us will also work for others. Anyone who does that in education is a hero of mine.

      So, yeah. Good conference.

       

      The post Wanted – A Theory of Change appeared first on e-Literate.