Feed aggregator

Exceptions, Business as Usual?

Jan Kettenis - Mon, 2013-09-02 14:47
In this article I describe some aspects of using BPMN exceptions to handle business exceptions. The conclusion is that you should carefully consider whether doing so is appropriate, or whether a combination of a gateway and an End event would be the better option.

Bruce Silver writes in his book BPMN Method & Style that he used to use BPMN Exception end events only for technical exceptions. For business exceptions he used a combination of a gateway and an end-state test.

When I first read that I found it a peculiar remark, as - coming from an Oracle BPM 10g background - I used them for business exceptions all the time! Moreover, as most technical exceptions could be caught and managed in BPL (the Java-like scripting language of 10g), I even advised people to use Exceptions for technical exceptions only when unavoidable, the argument being that the business audience is not interested in technical exceptions, and that therefore they should be left out of the model if possible. After all, wouldn't you agree that the first model (in which exceptions are used for business exceptions) looks simpler than the second one (which is more the gateway/end-state type of solution)?


Using BPMN Error end event
Using gateways
The model presented is inspired by an actual business process model of similar complexity, only bigger. From a functional point of view both models do the same. Examples like this may explain why Bruce Silver later changed his mind, based on feedback from his students.

With Oracle BPM 10g it was easy to write a generic (technical) exception handling process with a retry option (using the BACK action). With this retry you could simply return to the happy path even after an Exception end event had occurred.

The first time I started to have second thoughts about using Exceptions for business exceptions was when I found that with 11g this back functionality is no longer possible (although it is supposed to come back in 12c in one way or another).

Further second thoughts came when I realized that throwing an Exception and catching it in an Event Sub-process also has some other peculiarities, as I will explain using the model below:



In this sample model there is an Event Sub-process with a non-interrupting Get Status Message start event and a Return Status Message end event. This exposes a getStatus service operation that can be called by some third party to find out where the process is; for that it returns a status that is set by the Set Rejected Status and Set Reconsidering Status Script activities. When the Rejected Error event is thrown, it is caught by the Event Sub-process with the Rejection Error start event. An Event Sub-process that starts with an Error start event is interrupting by definition.

Patterns like this (where some operation or service is exposed to interact with a running instance) are very common in my practice. As a matter of fact, as far as I recall more than half of the models I created have a similar Event Sub-process, either to get or to set some process data on the run.

The issue I found is that when the order is rejected without the option to reconsider - meaning that the Rejected Error event is thrown - the normal flow of the process is aborted (because of the interrupting nature of Error events). As a result, when the process is in the Confirm activity and the getStatus operation is called, it won't react, because that operation is tied to the normal flow, which is no longer active. In contrast, when the process is in the Reconsider activity this is not a problem.

It is unclear to me what the BPMN specification says about this. I can imagine that this behavior is in line with the specification, or that the specification is not explicit about what the behavior should be. In any case, this is how it works with 11g, which made me realize that throwing Error end events in case of business exceptions has some drawbacks that may make modeling with a gateway plus end state not so peculiar after all.

Oracle Linux 6 UEK3 beta

Wim Coekaerts - Mon, 2013-09-02 13:14
Last week we published UEK3 beta on http://public-yum.oracle.com.

It is very easy to get started with this and play around with the new features. It just takes a few steps:

  • Install Oracle Linux 6 (preferably the latest update) on a system or in a VM
  • Add the beta repository file in /etc/yum.repos.d
  • Enable the beta channel
  • Reboot into the new kernel
  • Add updated packages like lxc tools and dtrace

    Oracle Linux is freely downloadable from http://edelivery.oracle.com/linux. Oracle Linux is free to use on as many systems as you want, is freely re-distributable without changing the CD/ISO content (so including our cute penguin), and provides free security and bugfix errata updates. You only need to pay for a support subscription for those systems that you want/need support for, not for other systems. This allows our customers/users to run the exact same software on test and dev systems as well as production systems without having to maintain potentially two kinds of repositories. All systems can run the exact same software all the time.

    The free yum repository for security and bugfix errata is at http://public-yum.oracle.com. This site also contains a few other repositories:

  • Playground channel: a yum repository where we publish the latest kernels as released on kernel.org. We take the mainline tree and build it into RPMs that can easily be installed on Oracle Linux (Oracle Linux 6 on x86_64, specifically).
  • Beta channel: a yum repository where we publish early versions of UEK along with the corresponding packages that need to be updated with it.

    Now, back to UEK3 beta. Just a few steps are needed to get started.

    I will assume you have already installed Oracle Linux 6 (update 4) on a system and it is configured to use public-yum as the repository.

    First download and enable the beta repository.

    # cd /etc/yum.repos.d/
    
    # wget http://public-yum.oracle.com/beta/public-yum-ol6-beta.repo
    
    # sed -i s/enabled=0/enabled=1/g public-yum-ol6-beta.repo 
    

    You don't have to use sed; you can just edit the repo file (vi/emacs) and manually set enabled to 1. Now you can just run yum update:

    # yum update
    

    This will install UEK3 (3.8.13-13) and it will update any relevant packages that are required to be on a later version as well. At this point you should reboot into UEK3.
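
    After rebooting, you can check that the new kernel is active. A quick sanity check; the exact version string suffix shown here is an assumption and may differ on your system:

    # uname -r
    3.8.13-13.el6uek.x86_64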

    New features introduced in UEK3 are listed in our release notes. There are tons of detailed improvements in the kernel since UEK2 (which was 3.0 based). Kernelnewbies is an awesome site that keeps a nice list of changes for each version. We will add more detail to our release notes over time, but for those that want to browse through all the changes, check it out:

  • http://kernelnewbies.org/Linux_3.1
  • http://kernelnewbies.org/Linux_3.2
  • http://kernelnewbies.org/Linux_3.3
  • http://kernelnewbies.org/Linux_3.4
  • http://kernelnewbies.org/Linux_3.5
  • http://kernelnewbies.org/Linux_3.6
  • http://kernelnewbies.org/Linux_3.7
  • http://kernelnewbies.org/Linux_3.8
    To try out dtrace, you need to install the dtrace packages. We introduced USDT in UEK3's version of dtrace; there is some information in the release notes about the changes.

    # yum install dtrace-utils
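
    Once the packages are installed, a first smoke test is to list the probes dtrace can see. This is a sketch that assumes the dtrace kernel modules are loaded on your system:

    # dtrace -l | head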
    

    To try out lxc, you need to install the lxc packages. lxc is capable of using Oracle VM Oracle Linux templates as a base image to create a container.

    # yum install lxc
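
    As a sketch of what you can then do (the container name and release argument here are illustrative), create a container from the Oracle template:

    # lxc-create -n ol6cont1 -t oracle -- --release 6.latest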
    

    Enjoy.

    Three impossibilities with partitioned indexes

    OraFAQ Articles - Sun, 2013-09-01 11:22

    There are three restrictions on indexing and partitioning: a unique index cannot be local non-prefixed; a global non-prefixed index is not possible; a bitmap index cannot be global. Why these limitations? I suspect that they are there to prevent us from doing something idiotic.

    This is the table used for all examples that follow:

    CREATE TABLE EMP
          (EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
           ENAME VARCHAR2(10),
           JOB VARCHAR2(9),
           MGR NUMBER(4),
           HIREDATE DATE,
           SAL NUMBER(7,2),
           COMM NUMBER(7,2),
           DEPTNO NUMBER(2) )
    PARTITION BY HASH (EMPNO) PARTITIONS 4;

    This is the usual EMP table, with a partitioning clause appended. It is of course a contrived example. Perhaps I am recruiting so many employees concurrently that a non-partitioned table has problems with buffer contention that can be solved only with hash partitioning.

    Why can't I have a local non-prefixed unique index?
    A local non-unique index is no problem, but unique is not possible:

    orclz> create index enamei on emp(ename) local;
    
    Index created.
    
    orclz> drop index enamei;
    
    Index dropped.
    
    orclz> create unique index enamei on emp(ename) local;
    create unique index enamei on emp(ename) local
                                  *
    ERROR at line 1:
    ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE index

    You cannot get around the problem by separating the index from the constraint (which is always good practice):

    orclz> create index enamei on emp(ename) local;
    
    Index created.
    
    orclz> alter table emp add constraint euk unique (ename);
    alter table emp add constraint euk unique (ename)
    *
    ERROR at line 1:
    ORA-01408: such column list already indexed
    
    
    orclz>

    So what is the issue? Clearly it is not a technical limitation. But if it were possible, consider the implications for performance. When inserting a row, a unique index (or a non-unique index enforcing a unique constraint) must be searched to see if the key value already exists. For my little four-partition table, that would mean four index searches: one for each local index partition. Well, OK. But what if the table were range partitioned into a thousand partitions? Then every insert would have to make a thousand index lookups. This would be unbelievably slow. By restricting unique indexes to global or local prefixed, Uncle Oracle is ensuring that we cannot create such an awful situation.
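
    By contrast, a local unique index raises no objection once the partitioning column is part of the index key. A minimal sketch against the same table (the index name is mine):

    orclz> create unique index empno_ename_i on emp(empno, ename) local;

    Index created.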

    Why can't I have a global non-prefixed index?
    Well, why would you want one? In my example, perhaps you want a global index on deptno, partitioned by mgr. But you can't do it:

    orclz> create index deptnoi on emp(deptno) global partition by hash(mgr) partitions 4;
    create index deptnoi on emp(deptno) global partition by hash(mgr) partitions 4
                                                                    *
    ERROR at line 1:
    ORA-14038: GLOBAL partitioned index must be prefixed
    
    
    orclz>
    This index, if it were possible, might assist a query with an equality predicate on mgr and a range predicate on deptno: prune off all the non-relevant mgr partitions, then do a range scan. But exactly the same effect would be achieved by using a global nonpartitioned concatenated index on mgr and deptno. If the query had only deptno in the predicate, it would have to search each partition of the putative global partitioned index, a process which would be just about identical to a skip scan of the nonpartitioned index. And of course the concatenated index could be globally partitioned - on mgr. So there you have it: a global non-prefixed index would give you nothing that is not available in other ways.
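
    That workable alternative looks like this - a concatenated index on (mgr, deptno), globally partitioned on its leading column (the index name is mine):

    orclz> create index mgr_deptno_i on emp(mgr, deptno) global partition by hash(mgr) partitions 4;

    Index created.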

    Why can't I have a global partitioned bitmap index?
    This came up on the Oracle forums recently, https://forums.oracle.com/thread/2575623
    Global indexes must be prefixed. Bearing that in mind, the question needs to be re-phrased: why would anyone ever want a prefixed partitioned bitmap index? Something like this:

    orclz>
    orclz> create bitmap index bmi on emp(deptno) global partition by hash(deptno) partitions 4;
    create bitmap index bmi on emp(deptno) global partition by hash(deptno) partitions 4
                                           *
    ERROR at line 1:
    ORA-25113: GLOBAL may not be used with a bitmap index
    
    orclz>

    If this were possible, what would it give you? Nothing. You would not get the usual benefit of reducing contention for concurrent inserts, because of the need to lock entire blocks of a bitmap index (and therefore ranges of rows) when doing DML. Range partitioning a bitmap index would be ludicrous, because of the need to use equality predicates to get real value from bitmaps. Even with hash partitions, you would not get any benefit from partition pruning, because using equality predicates on a bitmap index in effect prunes the index already: that is what a bitmap index is for. So it seems to me that a globally partitioned bitmap index would deliver no benefit, while adding complexity and problems of index maintenance. So I suspect that, once again, Uncle Oracle is protecting us from ourselves.
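
    For completeness: bitmap indexes on a partitioned table are allowed, but they must be local, as in this sketch:

    orclz> create bitmap index bmi on emp(deptno) local;

    Index created.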

    Is there a technology limitation?
    I am of course open to correction, but I cannot see a technology limitation that enforces any of these three impossibilities. I'm sure they are all technically possible. But Oracle has decided that, for our own good, they will never be implemented.
    --
    John Watson
    Oracle Certified Master DBA
    http://skillbuilders.com

    Redundancies should come with a pay rise

    Rob Baillie - Sat, 2013-08-31 10:46

    As far as I can see, there is only one reason why a company should ever make redundancies.

    Due to some unforeseen circumstances the business has become larger than the market conditions can support and it needs to shrink in order to bring it back in line.

    Every other reason is simply a minor variation or a consequence of that underlying reason.

    Therefore, if the motivation is clear, and the matter dealt with successfully, then once the redundancies are over the business should be "right sized" (we've all heard that term before), and it should be able to carry on operating with the same values, practices and approach that it did prior to the redundancies.

    If the business can't, then I would suggest that it is not the right size for the market conditions and therefore the job isn't complete.

    OK, there may be some caveats to that, but to my mind this reasoning is sound.

    In detail:

    When you reduce the headcount of the business you look for the essential positions in the company, keep those, and get rid of the rest.

    Once the redundancies are finished you should be left with only the positions you need to keep in order to operate successfully.

    It's tempting to think that you should have a recruitment freeze and not back-fill positions when people leave, but if someone leaves and you don't need to replace them, then that means you didn't need that position, in which case you should have made it redundant.

    Not back-filling positions is effectively the same as allowing your employees to choose who goes based on their personal motives, rather than forcing the business heads to choose based on the business motives.  This doesn't make business sense.

    So, you need to be decisive and cut as far as you can go without limiting your ability to operate within the current market conditions.

    To add to that, recruitment is expensive.  If you're in a highly skilled market then you'll likely use an agency. They can easily charge 20% of a salary for a perm head.  On top of that you have the cost of bringing someone up to speed, at a time when you're running at the minimum size your market will allow.  Plus there's the cost of inefficiency during the onboarding period as well as the increased chance of the remaining overstretched employees leaving as well.

    The upshot is that you really can't afford to have people leave; it's so expensive that it jeopardises the extremely hard work you did when you made the redundancies.

    There's a theory I often hear that you can't have contractors working when the perm heads are being marched out.  That's a perfectly valid argument if the perm head would be of long term value to you and can do the job that the contract head can do.  But if you need the contractor to do a job that only lasts another 3 months and that person is by far the best or only person you have for the job, then the argument just doesn't stand up.  Get rid of the perm position now and use the contractor, it'll be cheaper and more beneficial to the business in the long run.

    OK, that's maybe not the most sentimental of arguments, but why would you worry about hurting the feelings of people who no longer work for you, at the expense of those that still do?

    It may even be worse than that - you could be jeopardising the jobs of others that remain by not operating in the most efficient and effective way possible.

    Another prime example is maternity cover.  If you need the person on maternity to come back to work then you almost certainly need the person covering them. If it's early in the maternity leave then you'll have a long period with limited staff, if it's late in the leave then you only need the temporary cover for a short period more. Either way you're overstretching the perm staff left to cover them and risking having them leave.

    Finally, there's the motivation to ensure that the business that remains is running as lean as possible. That costs are as low as they could be. The temptation is to cut the training and entertainments budget to minimum and pull back on the benefits package.

    As soon as you do this you fundamentally change the character of the business.  If you always prided yourself on being at the forefront of training then you attracted and kept staff who valued that. If you always had an open tab on a Friday night at the local bar, then you attracted people who valued that.  Whatever it is that you are cutting back on, you are saying to people who valued it that "we no longer want to be as attractive to you as we once were; we do not value you quite as much as we did". This might not be your intention, but it is the message your staff will hear.

    I put it to you that the cheapest way to reduce costs after redundancies is to be completely honest to the staff you keep. Say it was difficult, say that you're running at minimum and that a lot will be expected of whoever's left. But tell them that they're still here because they're the best of the company and they are vital to the company's success.  Let them know that the contractors you've kept are there because they're the best people for those positions to ensure that the company succeeds.  Tell them that the contractors will be gone the moment they're not generating value or when a perm head would be more appropriate.  Make it clear that the company is now at the right size and the last thing you want is for people to leave, because you value them and that if they left it would damage your ability to do business.

    Then give them a pay rise and a party to prove it.

    OWB - 11.2.0.4 standalone client released

    Antonio Romero - Fri, 2013-08-30 16:29

    The 11.2.0.4 release of OWB, containing the 32-bit and 64-bit clients, was released today. Big thanks to Anil for spearheading that - another milestone on the Data Integration roadmap.

    Below are the patch numbers:

    • 17389934 - OWB 11.2.0.4 STANDALONE CLIENT FOR LINUX X86 64 BIT
    • 17389949 - OWB 11.2.0.4 STANDALONE CLIENT FOR LINUX X86 32 BIT

    The Windows releases will come in due course. This is the terminal release of OWB, and customer bugs will be resolved on top of this release.

    Sure and Stedfast has been a steady motto through my life; it came from way back in my old Boys Brigade days back in Scotland. Working in Oracle I have always reflected back on that over the years, and can still hear 'Will your anchor hold in the storms of life' ringing in my ear. The ride through different development organizations - from Oracle Tools, through Oracle Database and Oracle Middleware groups, from buildings 200, to 400 to 100, 7th floor, 9th floor, 5th floor, countless acquisitions and integrations. Constant change in some aspects, but zeroes and ones remain zeroes and ones, and for our group the direction and goals were well understood. Whilst it's been quiet on the OWB blogging front, the data integration development organization has been busy, very busy, releasing versions of OWB and ODI over the past few years and building the 12c release.

    So to 12c... our data integration product roadmap has been a strong focal point in our development over the past few years, and that's what we have been using to focus our energy and our direction. As in personal life, we need a goal, a vision and a roadmap for getting there. There have been plenty of challenges along the way - technical, political and personal. It's been a tough and challenging few years on all of those fronts, and it's when you are faced with momentous personal challenges that the technical ones look trivial. The most gratifying aspect is when you see light at the end of the tunnel. It's that light at the end of the tunnel that gives you added strength to finish the job at hand. Onwards and upwards!

    I'm not available

    Catherine Devlin - Thu, 2013-08-29 13:23

    I'm happy to say that I'll shortly be starting a new position as a PostgreSQL DBA and Python developer for Zoro Tools!

    We software types seem to have hardware envy sometimes. We have "builds" and "engines" and "forges" and "factory functions". But as it turns out, the "Tools" in "Zoro Tools" isn't a metaphor for cleverly arranged bytes. That's right - they're talking about the physical objects in your garage! Imagine! Lucky for me the interviewers didn't ask to review my junior high shop project.

    So disregard my earlier post about being available. Thanks for all your well-wishes!

    Depending on how you reckon it, my job search arguably only took forty minutes, though it took a while for gears to grind and finalize everything. Years of building relationships at PyCon made this the best job search ever; the only unpleasant part was having to choose from among the opportunities to work with my favorite technologies and people. I'm very glad I made the investment in PyCon over the years... and if you're thinking "that's easy for you to say, I can't afford it", don't forget PyCon's financial aid program.

    And speaking of conferences, I'll be at Postgres Open next month (my first one!) - hope to see some of you there!

    The Difference Between Access Manager 10g and 11g Webgates

    Mark Wilcox - Thu, 2013-08-29 11:00

    A common question we get is what is the difference between Access Manager 10g and Access Manager 11g webgates.

    My colleague Yagnesh who covers webgates put together a simple list:

    Here are the 11g features:

    • Oracle Universal Installer; generic for all platforms
    • Host-based cookie
    • Individual WebGate OAMAuthnCookie_, making it more secure
    • A per-agent key and a server key are used. The agent key is stored in a wallet file and the server key is stored in the credential store
    • One per-agent secret key shared between the 11g WebGate and the OAM Server; one OAM Server key
    • OAM 11g supports cross-network-domain single sign-on out of the box. Oracle recommends you use Oracle Identity Federation for this situation.
    • Capability to act as a detached credential collector
    • Webgate Authorization Caching
    • Diagnostic page to tune parameters
    • Separate install and configuration options; hence, a single install with multiple instance configurations is supported

    And 10g:

    • InstallShield, with one installer per platform
    • Domain-based cookie
    • ObSSOCookie (one for all 10g Webgates)
    • Global shared secret stored in the directory server only (not accessible to WebGate)
    • There is just one global shared secret key per OAM deployment which is used by all the WebGates
    • OAM 10g provides a proprietary multiple network domain SSO capability that predates Oracle Identity Federation. Complex configuration is required.
    • One Web server configuration supported per WebGate; you need multiple WebGates for multiple instances

    Fresh, Informative and Fun - Join Us For Your Opening Presentation at Open World 2013

    Mark Wilcox - Thu, 2013-08-29 09:25

    Join us on Monday September 23, 2013 for Senior Vice President Amit Jasuja's presentation.

    It's called "CON8808 - Oracle Identity Management: Enabling Business Growth in the New Economy".

    The title is boring but the presentation will be fresh, informative and fun.

    This is our annual presentation where we share our thoughts on where the world is going in terms of identity management and let customers who are leading the way tell you how they are getting there.

    And we will deliver this to you in a way that promises to be as entertaining as it is informative.

    Click here and schedule yourself for Amit's session before we run out of room.

    Agile and UX can mix

    Rob Baillie - Thu, 2013-08-29 05:19
    User experience design is an agile developer's worst nightmare. You want to make a change to a system, so you research. You collect usage stats, you analyse hotspots, you review, you examine user journeys, you review, you look at drop off rates, you review. Once you've got enough data you start to design. You paper prototype, run through with users, create wireframes, run through with users, build prototypes, run through with users, do spoken journey and video analysis, iterate, iterate, iterate, until finally you have a design.

    Then you get the developers to build it, exactly as you designed it.

    Agile development, on the other hand, is a user experience expert's worst nightmare. You want to make a change to a system, so you decide what's the most important bit, and you design and build that - don't worry how it fits into the bigger picture, show it to the users, move on to the next bit, iterate, iterate, iterate, until finally you have a system.

    Then you get the user experience expert to fix all the clumsy workflows.

    The two approaches are fundamentally opposed.

    Aren't they?

    Well, of course, I'm exaggerating for comic effect, but these impressions are only exaggerations - they're not complete fabrications.

    If you look at what's going on, both approaches have the same underlying principle - your users don't know what they want until they see something. Only then do they have something to test their ideas against.  Both sides agree: the earlier you get something tangible in front of users, the more appropriate and successful the solution will be.

    The only real difference in the two approaches as described is the balance between scope of design and fullness of implementation. On the UX side the favour is for maximum scope of design and minimal implementation; the agile side favours minimal scope of design and maximum implementation.

    The trick is to acknowledge this difference and bring the two closer together, or mitigate the risks those differences bring.

    Or, to put it another way, the main problem you have with combining these two approaches is the lead-up time before development starts.

    In the agile world some people would like to think that developing based on a whim is a great way to work, but the reality is different. Every story that is developed will have gone through some phase of analysis even in the lightest of light touch processes. Not least someone has decided that a problem needs fixing.  Even in the most agile of teams there needs to be some due diligence and prioritisation.

    This happens not just at the small scale, but also when deciding which overarching areas of functionality to change. In some organisations there will be a project (not a dirty word), in some a phase, in others a sprint. Whatever it's called, it'll be a consistent set of stories that build up to a fairly large scale change in the system. This will have gone through some kind of appraisal process, and rightly so.

    Whilst I don't particularly believe in business cases, I do believe in due diligence.

    It is in this phase, the research, appraisal and problem definition stage, that UX research can start without having a significant impact on the start-up time. Statistics can be gathered and evidence amassed to describe the problem that needs to be addressed. This can form a critical part of the argument to start work.

    In fact, this research can become part of the business-as-usual activities of the team and can be used to discover issues that need to be addressed. This can be as "big process" as you want it to be, just as long as you are willing, and have the resources, to pick up the problems that you find, and as long as you have the agility to react to clear findings as quickly as possible. Basically, you need to avoid being in the situation where you know there's a problem but you can't start to fix it because your process states you need to finish your 2 month research phase.

    When you are in this discovery phase there's nothing wrong with starting to feel out some possible solutions. Ideas that can be used to illustrate the problem and the potential benefits of addressing it. Just as long as the techniques you use do not result in high cost and (to reiterate) a lack of ability to react quickly.

    Whilst I think it's OK to use whatever techniques work for you, for me the key to keeping the reaction time down is to keep it lightweight.  That is, make sure you're always doing enough to find out what you need to know, but not so much that it takes you a long time to reach conclusions and start to address them. User surveys, spoken narrative and video recordings, all of which can be done remotely, can be done at any time, and once you're in the routine of doing them they needn't be expensive.   Be aware that large sample sets might improve the accuracy of your research, but they also slow you down.  Keep the groups small and focused - applicable to the size of team you have to analyse and react to the data. Done right, these groups can be used to continually scrutinise your system and uncover problems.

    Once those problems are found, the same evidence can be used to guide potential solutions. Produce some quick lo-fi designs, present them to another (or the same, if you are so inclined) small group and wireframe the best ones to include in your argument to proceed.  I honestly believe that once you're in the habit, this whole process can be implemented in two or three weeks.

    Having got the go-ahead, you have a coherent picture of the problem and a solid starting point from which to commence the full-blown design work.  You can then move into a short, sharp and probably seriously intense design phase.

    At all points, the design that you're coming up with is, of course, important. However, it's vital that you don't underestimate the value of the thinking process that goes into the design. Keep earlier iterations of the design, keep notes on why the design changed. This forms a reference document that you can use to remind yourself of the reasoning behind your design. This needn't be a huge formal tome; it could be as simple as comments in your wireframes, but an aide mémoire for the rationale behind where you are today is important.

    In this short sharp design phase you need to make sure that you get to an initial conclusion quickly and that you bear in mind that this will almost certainly not be the design that you actually end up with.  This initial design is primarily used to illustrate the problem and the current thinking on the solution to the developers. It is absolutely not a final reference document.

    As soon as you become wedded to a design, you lose the ability to be agile. Almost by definition, an agile project will not deliver exactly the functionality it set out deliver. Recognise this and ensure that you do the level of design appropriate to bring the project to life and no more.

    When the development starts, the UX design work doesn't stop. This is where the ultimate design work begins - the point at which the two approaches start to meld.

    As the developers start to produce work, the UX expert starts to have the richest material he could have - a real system. It is quite amazing how quickly an agile project can produce a working system that you are able to put in front of users, and there's nothing quite like a real system for investigating system design.

    It's not that the wireframes are no longer of use. In fact, early on the wireframes remain a vital, and probably the only coherent, view of the system, and these should evolve as the project develops.  As elements in the system get built and more rigidly set, the wireframes are updated to reflect them. As new problems and opportunities are discovered, the wireframes are used to explore them.

    This process moves along in parallel to the BA work that's taking place on the project. As the customer team splits and prioritises the work, the UX expert turns their attention to the detail of their immediate problems, hand in hand with the BAs. The design that's produced is then used to explain the proposed solutions to the development team and act as a useful piece of reference material.

    At this point the developers will often have strong opinions on the design of the solution, and these should obviously be heard. The advantage the design team now have is that they have a body of research and previous design directions to draw on, and a coherent complete picture against which these ideas (and often criticisms) can be scrutinised.  It's not that the design is complete, or final, it's that a valuable body of work has just been done, which can be drawn upon in order to produce the solution.

    As you get towards the end of the project, more and more of the wireframe represents the final product.  At this point functionality can be removed from the wireframe in line with what's expected to be built.  In fact, this is true all the way through the project, it's just that people become more acutely aware of it towards the end.

    This is a useful means of testing the minimum viable product. It allows you to check with the customer team how much can be taken away before you have a system that could not be released: a crucial tool in a truly agile project.  If you don't have the wireframes to show people, the description of functionality that's going to be in or out can be open to interpretation - which means it's open to misunderstanding.

    Conclusion
    It takes work to bring a UX expert into an agile project, and it takes awareness and honesty to ensure that you're not introducing a big-up-front design process that reduces your ability to react.

    However, by keeping in mind some core principles - that you need to be able and willing to throw work away, that you should not become wedded to a design early on, that you listen to feedback and react, and that you keep your level of work and techniques fit for the just-in-time problem you need to solve right now - you can add four huge advantages to your project.

    • A coherent view and design that bind the disparate elements together into a complete system.
    • Expert techniques and knowledge that allow you to discover the right problems to fix with greater accuracy.
    • Design practices and investigative processes that allow you to test potential solutions earlier in the project (i.e. with less cost) than would otherwise be possible, helping ensure you do the right things at the right time.
    • Extremely expressive communication tools that allow you to describe the system you're going to deliver as that understanding changes through the project.

    Do it right and you can do all this and still be agile.

    Find Your Brilliance

    Tim Tow - Wed, 2013-08-28 16:33
    I’d like to interrupt our regularly scheduled programming to tell you about one of my personal highlights of Kscope13, which was held back in June in New Orleans. Every year ODTUG announces who the keynote speaker will be well in advance of the conference. Most of the time, the speaker is a person of note; interesting, relevant, sometimes even inspiring. And then there are those times when, frankly, I’m not particularly interested in them or what they have to say. But this year was different. At the end of January, ODTUG announced that the Kscope13 keynote speaker would be Doc Hendley.

    Who is Doc Hendley, you ask? Well, from my perspective, Doc Hendley is one of the most inspiring and truly extraordinary individuals I’ve ever come across. And after meeting him and having the privilege of spending time with him in New Orleans, I’m proud and truly humbled to be able to call this man a friend. He is truly extraordinary, which is ironic when you consider that Doc thinks of himself as “just an ordinary, regular, everyday guy.”

    Let me tell you, Doc is anything but. This is the story of how a boy who grew up in Greensboro, North Carolina saved thousands of lives all the way across the globe and in the process, proved to himself and everyone else that one person – even an ordinary regular everyday person - can do something extraordinary.

    Doc was “just a bartender” and musician who worked and played in nightclubs in Raleigh, NC. In fact, bartending was the only job he’d ever had. But in his own words, he was “dying to make a difference in this world.” In 2003, standing behind the bar, he heard that polluted water kills more children globally than HIV/AIDS, malaria, and tuberculosis combined, yet at that time no one was aware of this crisis.

    So what did Doc do? In his words, "He got angry, he got pissed off, he took action." And he did it the only way he knew how. He tapped into the "marginalized people in his community, the bar crowd, the regulars"  – the people that everyone else said were too ordinary - to create Wine to Water, an organization that would take him to the site of the greatest humanitarian disaster in the world – Darfur, Sudan, and eventually to 9 other countries. Doc lived in Darfur for a year, and taught the locals how to clean their water and utilize their own resources to keep it clean.

    Ordinary guy? I don’t think so.

    I watched his TEDx talk on YouTube before going to Kscope13. I was so moved by what he’d done, so overwhelmed, and so energized, that I made everyone in my company watch it before the conference. I wanted every person who worked for me to hear what Doc had to say, and to understand how we all can change the world if we try.  I love Doc's commitment to his cause and I hope we remain friends for a long time to come.


    I know my commitments don't allow me to travel the world helping others like Doc does on a regular basis, but that doesn't mean I can't help.  We decided that Applied OLAP could help support the efforts of Wine to Water and so I presented Doc with a $5,000 check as our small contribution. During his keynote speech, Doc demonstrated, again, how one person, one donation, can change the world. I’m pledging to find a way to make a difference in the world too.

    After all, I’m a regular ordinary every day kind of guy too.

    Categories: BI & Warehousing

    Database 11.2.0.4 Patchset Released

    Antonio Romero - Wed, 2013-08-28 14:06

    The 11.2.0.4 database patchset was released today; checking twitterland, you can see the news is already out. Tom Kyte tweeted 'look what slipped into 11.2.0.4' amongst others. There will be a standalone OWB 11.2.0.4 image also based on the database 11.2.0.4 components coming soon, so I am told.

    How to Configure The SSL Certificate For Oracle Warehouse Builder Repository Browser

    Antonio Romero - Tue, 2013-08-27 22:09

    The Repository Browser is a browser-based tool that generates reports from data stored in Oracle Warehouse Builder (OWB) repositories. It uses OC4J as the web server. Users need to use HTTPS (HTTP on top of the SSL/TLS protocol) to access the web interface.

    If the Repository Browser Listener is running on a computer named owb_server, then typing the following address will start the Repository Browser:

       https://owb_server:8999/owbb/RABLogin.uix?mode=design

       or

       https://owb_server:8999/owbb/RABLogin.uix?mode=runtime


    On the server side, an SSL certificate for the browser is required. Users can create one themselves.

    First, users can use the JRE's keytool utility to generate a keystore, named keystore.jks. For example:

    keytool -genkey -keyalg RSA -alias mykey -keystore keystore.jks -validity 2000 -storepass welcome1

    Pay attention to the password of the store: it needs to be the same as the credentials of the keystoreadmin entry in the file called "system-jazn-data.xml".


    If the passwords are not the same, an error message like "Keystore was tampered with, or password was incorrect" will be generated.
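
    To double-check the password before wiring it into the configuration, you can list the keystore contents; a quick check, assuming the keystore was created as above:

    keytool -list -keystore keystore.jks -storepass welcome1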


    In order to change the credentials, there are two files you can edit.


    • http-web-site.xml: found in %OWB_HOME%/j2ee/config. The password is stored as clear text in http-web-site.xml, and users can change it to match the password they used to generate the keystore. For security reasons, if users don't want to store clear text, they can use the pointer (->keystoreadmin) to point to another file, named system-jazn-data.xml.


    • system-jazn-data.xml: users can find "system-jazn-data.xml" in %OWB_HOME%/j2ee/config. There is an entry in it called "keystoreadmin"; the password stored in this file is encrypted. The pointer mentioned above points to this place. To change the password, edit "system-jazn-data.xml" and change the "<credentials>" value of the "keystoreadmin" entry. Add "!" in front of your password. For example, if you want to change the password to welcome, change it to <credentials>!welcome</credentials>

    The next time OC4J reads "system-jazn-data.xml", it will rewrite the file with all passwords obfuscated and unreadable (so your clear text "!welcome" will become an encrypted password, something like '{903}dnHlnv/Mp892K8ySQan+zGTlvUDeFYyW').

    Data Pump 12c – Pumping Data with the LOGTIME Parameter

    alt.oracle - Tue, 2013-08-27 09:38
    Since its release, Oracle Data Pump has been a worthy successor to the traditional exp/imp tools.  However, one area lacking with Data Pump has been something as simple as the ability to identify how long each step of a Data Pump job actually takes.  The log will show start time at the top of the log and end time at the bottom, but the time of execution for each step is a mystery.  Oracle 12c solves this problem with the LOGTIME parameter, which adds a timestamp to the execution of each step of the Data Pump job.  Here’s what it looks like without the parameter.

    /home/oracle:test1:expdp altdotoracle/altdotoracle \
    > directory=data_pump_dir dumpfile=expdp.dmp \
    > tables=employee

    Export: Release 12.1.0.1.0 - Production on Tue Aug 13 09:32:38 2013

    Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

    Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    Starting "ALTDOTORACLE"."SYS_EXPORT_TABLE_01":  altdotoracle/******** directory=data_pump_dir dumpfile=expdp.dmp tables=employee
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 64 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
    . . exported "ALTDOTORACLE"."EMPLOYEE"                   10.93 KB      16 rows
    Master table "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for ALTDOTORACLE.SYS_EXPORT_TABLE_01 is:
      /oracle/base/admin/test1/dpdump/expdp.dmp
    Job "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully completed at Tue Aug 13 09:32:51 2013 elapsed 0 00:00:11

    With the LOGTIME parameter, each step is prefixed with a timestamp, indicating the start time for each event that is processed.

    /home/oracle:test1:expdp altdotoracle/altdotoracle \
    > directory=data_pump_dir dumpfile=expdp.dmp \
    > tables=employee LOGTIME=ALL

    Export: Release 12.1.0.1.0 - Production on Tue Aug 13 09:34:54 2013

    Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

    Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    13-AUG-13 09:34:56.757: Starting "ALTDOTORACLE"."SYS_EXPORT_TABLE_01":  altdotoracle/******** directory=data_pump_dir dumpfile=expdp.dmp tables=employee LOGTIME=ALL
    13-AUG-13 09:34:57.019: Estimate in progress using BLOCKS method...
    13-AUG-13 09:34:57.364: Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    13-AUG-13 09:34:57.396: Total estimation using BLOCKS method: 64 KB
    13-AUG-13 09:34:57.742: Processing object type TABLE_EXPORT/TABLE/TABLE
    13-AUG-13 09:34:57.894: Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    13-AUG-13 09:34:57.964: Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
    13-AUG-13 09:35:04.853: . . exported "ALTDOTORACLE"."EMPLOYEE"   10.93 KB      16 rows
    13-AUG-13 09:35:05.123: Master table "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    13-AUG-13 09:35:05.127: ******************************************************************************
    13-AUG-13 09:35:05.128: Dump file set for ALTDOTORACLE.SYS_EXPORT_TABLE_01 is:
    13-AUG-13 09:35:05.131:   /oracle/base/admin/test1/dpdump/expdp.dmp
    13-AUG-13 09:35:05.134: Job "ALTDOTORACLE"."SYS_EXPORT_TABLE_01" successfully completed at Tue Aug 13 09:35:05 2013 elapsed 0 00:00:09

    The parameter works similarly with Data Pump Import.  Note that, although it is documented, the LOGTIME parameter is not described when you do an expdp help=y or impdp help=y command.
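
    For the import side, the invocation is along the same lines; a sketch, reusing the dump file created by the export above:

    /home/oracle:test1:impdp altdotoracle/altdotoracle \
    > directory=data_pump_dir dumpfile=expdp.dmp \
    > tables=employee LOGTIME=ALL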

    Categories: DBA Blogs
