Rob Baillie

More than 2 decades of writing software, and still loving it...

Lightning Web Components - the dawn of (another) new era

Fri, 2018-12-14 08:04

Salesforce have a new technology. Lightning Components look like they’re on the way out, and are being replaced with a new technology ‘Lightning Web Components’.

The reasons behind that, and the main principles behind its design are covered very nicely in this article on

From that we can then get to a series of examples here.

(Note: some of the code snippets used below, to illustrate points, are taken from the recipes linked above)

Now I’m a big supporter of evolution, and I love to see new tools being given to developers on the Salesforce platform, so, with a couple of hours to play with it - what’s the immediate impression?

This is an article on early impressions, based on reviewing and playing with the examples - I fully expect there to be misunderstandings, bad terminology, and mistakes in here - If you're OK with that, I'm OK with that. I admit, I got excited and wanted to post something as quickly as possible before my cynical side took over. So here it is - mistakes and all.

WOW. Salesforce UI development has grown up.

Salesforce aren’t lying when they say that they’re trying to bring the development toolset up to modern standards.

We get imports, what look like annotations and decorators, and there’s even mention of Promises. Maybe there’s some legs in this…

It’s easy to dismiss this as ‘Oh no, yet another change’, but the thing is - the rest of the industry develops and improves its toolset - why shouldn’t Salesforce?

The only way to keep the product on point IS to develop the frameworks, replace the technology, upgrade, move on. If you don’t do that then the whole Salesforce Ecosystem starts to stagnate.

Or to put it another way - in every other part of the developer community, learning from what was built yesterday and evolving is seen as a necessity. It’s good to see Salesforce trying to keep up.

So what are the big things that I’ve spotted immediately?

import is supported, and that makes things clearer

Import is a massive addition to Javascript that natively allows us to define the relationships between javascript files within javascript, rather than at the HTML level.

Essentially, this replaces the use of most ‘script’ tags in traditional Javascript development.

For Lightning Web Components, we use this to bring in capabilities from the framework, as well as static resources.

E.g. Importing modules from the Lightning Web Components framework:

import { LightningElement, track } from 'lwc';

Importing from Static Resources:

import { loadScript } from 'lightning/platformResourceLoader';
import chartjs from '@salesforce/resourceUrl/chart';

What this has allowed Salesforce to do is to split up the framework into smaller components. If you don’t need to access Apex from your web component, then you don’t need to import the part of the framework that enables that capability.

This *should* make individual components much more lightweight and targeted - only including the capabilities that are required, when they are required.

Getting data on screen is simpler

Any javascript property is visible to the HTML template.


export default class WebAppComponentByMe extends LightningElement {
    contacts;
}

We can then render this property in the HTML with {contacts} (none of those attributes to define and none of those pesky v dot things to forget).

Much neater, much more concise.

We track properties

Looking at the examples, my assumption was that if we want to perform actions when a property is changed, we mark the property trackable using the @track decorator.

For example:

export default class WebAppComponentByMe extends LightningElement {
    @track contacts;
}

I was thinking that, at this point, anything that references this property (on page, or in Javascript) will be notified whenever that property changes.

However, at this point I can't really tell what the difference is between tracked and non-tracked properties - a mystery for another day.

Wiring up to Apex is much simpler

One of the big criticisms of Lightning Components that I always had was the amount of code you need to write in order to call an Apex method. OK, so you have force:recordData for a lot of situations, but there are many times when only an Apex method will do.

In Web Components, this is much simpler.

In order to connect to Apex, we import the ‘wire’ module, and then import functions into our javascript:

import { LightningElement, wire } from 'lwc';
import getContactList from '@salesforce/apex/ContactController.getContactList';

The first line imports the wire capabilities from the framework, the second then imports the Apex method as a javascript method, therefore making it available to the component.

We can then connect a javascript property up to the method using the wire decorator:

@wire(getContactList) contacts;

Or wire up a javascript method:

@wire(getContactList)
wiredContacts({ error, data }) {
    if (data) {
        this.contacts = data;
    } else if (error) {
        this.error = error;
    }
}
When the component is initialised, the getContactList method will be executed.

If the method has parameters, that’s also very simple (E.g. wiring to a property):

@wire(getContactList, { searchKey: '$searchKey' })

Changing the value of a property causes Apex to re-execute

Having wired up a property as a parameter to an Apex-bound Javascript function, any changes to that property will cause the function to be re-executed.

For example, if we:

searchKey = '';

@wire(findContacts, { searchKey: '$searchKey' })

Whenever the searchKey property changes, the Apex method imported as ‘findContacts’ will be executed and the contacts property is updated.

Thankfully, we can control when that property changes, as it looks like changing the value in the UI does not automatically change the property on the Javascript object. In order to do that, we need to set the property directly.

E.g. Let’s say we extend the previous example: there’s an input that is bound to the property, with an onchange event defined, and the handler does the following:

handleKeyChange(event) {
    this.searchKey = event.target.value;
}

This will cause the findContacts method to fire whenever the value in the input is changed.

Note that it is the assignment to this.searchKey that causes the event to fire - it looks like the binding from the HTML is 1-way. I admit that I need to investigate this further.

Events do not require configuration to be implemented

Events work in a completely different way - but then that’s not a problem - Application and Component events were different enough to cause headaches previously. The model is actually much simpler.

The example in the above referenced repository to look at is ‘PubSub’.

It’s much too involved to go into detail here, but the result is that you need to:

  • Implement a Component that acts as the messenger (implementing registerListener, unregisterListener and fireEvent)
  • Any component that wants to fire an event, or listen for an event will import that component to do so, firing events or registering listeners.
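As a sketch of the idea, here is a minimal plain-Javascript stand-in for such a messenger module. The function names mirror the recipe (registerListener, unregisterListener, fireEvent), but this is an illustration of the pattern, not Salesforce's implementation:

```javascript
// Module-level state: every component that imports this module
// shares the same 'listeners' object.
const listeners = {};

const registerListener = ( eventName, callback ) => {
    ( listeners[ eventName ] = listeners[ eventName ] || [] ).push( callback );
};

const unregisterListener = ( eventName, callback ) => {
    listeners[ eventName ] = ( listeners[ eventName ] || [] ).filter( cb => cb !== callback );
};

const fireEvent = ( eventName, payload ) => {
    // Call every callback registered against this event name.
    ( listeners[ eventName ] || [] ).forEach( cb => cb( payload ) );
};
```

A component that wants to listen calls registerListener with a callback; a component that wants to publish calls fireEvent - neither needs to know the other exists.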

This would seem to imply that (at least a certain amount of) state within components is shared - it looks like it is the state defined with 'const' that is shared.

Whatever the precise nature of the implementation, a pure Javascript solution is surely one that anyone involved in OO development will welcome.

I suspect that, in a later release, this will become a standard component.


Some people will be thinking “Man, glad I didn’t migrate from Classic / Visualforce to Lightning Experience / Components - maybe I’ll just wait a little longer for it all to settle down”.

You’re wrong - it won’t settle, it’ll continue to evolve and the technologies will be continually replaced by new ones. Eventually, the jump from what you have to where you need to get to will be so huge that you’ll find it incredibly hard. There’s a reason why Salesforce pushes out 3 releases a year, whether you want it or not, these technology jumps are just the same. The more you put it off, the more painful it’ll be.

The change from Lightning Components to Lightning Web Components is vast - a lot more than a single 3 letter word would have you suspect. The only real similarities between the two frameworks that I’ve seen up to now are:

  • Curlies are used to bind things
  • The Base Lightning Components are the same
  • You need to know Javascript

Other than that, they’re a world apart.

Also, I couldn’t find any real documentation - only examples - although those examples are a pretty comprehensive starting point.

Now, obviously it's early days - we're in pre-release right now - but what I've seen gives me great hope for the framework; it's a significant step forward and I can't wait to see what happens next. I wonder if a Unit Testing framework might follow (I can but hope).

You could wait, but hey, really, what are you waiting for? Come on, jump in. The change is exciting...

LinkedIn, and the GDPR age

Wed, 2018-11-28 13:34
I should start this post by saying I’m neither a lawyer, nor a GDPR expert.  Possibly both of those facts will become massively apparent in the text that follows.

Also, I’m not a LinkedIn Premium user - so it’s possible I’m missing something obvious by not having access to it.

But anyway, I’ve been thinking about how LinkedIn fits into a GDPR world, and something doesn’t seem quite right to me at the moment.

LinkedIn are in the data business, and they’re very good at protecting that asset.  They tend to be (quite rightly) pro-active in stopping people from extracting data from their systems and pushing it into their own systems.

As such, businesses (recruiters particularly) are encouraged to contact people directly within LinkedIn, and they are offered tools to discover people and commence that communication.

Unfortunately, this lack of syncing between LinkedIn and in-house systems can cause a big problem with GDPR.

That is:
What happens if someone says to a recruitment organisation - “Please forget me, and do not contact me again”?

In this situation, the organisation is obliged to 'remove' them from their systems.

At some point in the future another recruiter from the same organisation then finds the person on LinkedIn, without reference to their own systems and messages them using LinkedIn.

What happens next?

By the letter of the law, the organisation may not have done anything wrong.
  • The person is no longer in the organisation’s system, they were found on LinkedIn.
  • The person was not sent an e-mail, or phoned, they were messaged within LinkedIn.
  • The person has consented to have their data held by LinkedIn for the expressed purpose of being contacted by potential recruiters via the platform.

With all this in mind, it may be interpreted that it’s fair game to contact anyone on LinkedIn, regardless of their expressed desire not to be contacted by a particular company.

However, whilst this may be within the definition of the law, it’s pretty clear it’s not in the spirit of the law.

Note - Again I’m not a GDPR expert, nor a lawyer, so can't say for certain that it IS within the definition of the law - nor am I asserting that it is - just that I can imagine that it might be interpreted that way by some people.

And this is where things get complicated for LinkedIn.  I can see a few outcomes of this, but two of them could be extremely worrying for the future of LinkedIn.

Scenario - LinkedIn Premium is seen as an extension of a subscribing organisation’s IT systems.

It could be argued that, whilst LinkedIn is an independent entity, when they provide services to another organisation, their systems then become part of the remit of that subscribing organisation.

I.E. within LinkedIn, any action by a user and the storage of data of that action falls solely within the responsibility of the employer of the user that performs that action.  LinkedIn are not responsible for the use of the data in any way.

On first glance, this looks ideal to LinkedIn - no responsibility!

However, that’s not true - if there’s ever a test case that proves this point, then suddenly LinkedIn becomes a big risk to any organisation that uses it.

Over the course of the last 2 years or so, every data holding organisation in the EU has looked carefully at their data retention and use policies and systems and done what they can to protect themselves - in many cases I’m sure they have changed suppliers and systems since the existing systems have not proven up to scratch in the light of GDPR legislation.

Up to now, I’m not sure that many people have scrutinised LinkedIn in the same way.

At the moment it might be argued that LinkedIn is not supplying the tools to subscribers to allow them to comply with the GDPR legislation.  For example, I’m not aware of any functionality that allows an organisation to state "I wish to completely forget this person, and ensure that I cannot connect, view data on or contact them without their expressed consent”.  If that’s a minimum requirement of any internal system, why would it not be a minimum requirement for LinkedIn?

It could be that once that test case comes, a lot of organisations will take a look at LinkedIn and decide it doesn’t stand up, and it’s no longer worth the risk.

Scenario - LinkedIn, as the data controller, is responsible for the contact made by any users within the system.

This is potentially even worse for LinkedIn.  Since LinkedIn hold the data about people, provide the tools for discovering those people, provide the tools for contacting people, and for relaying those messages, it may be argued that it is up to LinkedIn to provide the mechanism to allow Users to state that they do not wish to be visible to or contacted by a given organisation.

That is, whilst it is another user who is sending the message, it may be that a future test case could state that LinkedIn are responsible for keeping track of who has ‘forgotten’ who.

By not providing that mechanism, and allowing users on the system to make contact when the contact is not welcome and against the target’s wishes, it’s possible that LinkedIn could be argued as being responsible for the unwelcome contact and therefore misuse of data.


Today, it seems that LinkedIn is in a bit of limbo.

There may be a recognised way to use LinkedIn in the GDPR era - find someone, check in my system that I’m allowed to contact them, go back to LinkedIn and contact them - but in order for that to work it requires the due diligence of recruiters to ensure that the law isn’t broken.

Realistically, something will have to change, or that test case is coming; at some point, someone is going to get an email that is going to break the limbo.

When that happens, I wonder which way it will go..?

Things I still believe in

Fri, 2018-10-19 09:49
Over 10 years ago I wrote a blog post on things that I believe in - as a developer, and when I re-read it recently I was amazed at how little has changed.

I'm not sure if that's a good thing, or a bad thing - but it's certainly a thing.

Anyway - here's that list - slightly updated for 2018... if you've seen my talk on Unit Testing recently, you might recognise a few entries.

(opinions are my own, yada yada yada)
  • It's easier to re-build a system from its tests than to re-build the tests from their system.

  • You can measure code complexity, adherence to standards and test coverage; you can't measure quality of design.

  • Formal and flexible are not mutually exclusive.

  • The tests should pass, first time, every time (unless you're changing them or the code).

  • Test code is production code and it deserves the same level of care.

  • Prototypes should always be thrown away.

  • Documentation is good, self documenting code is better, code that doesn't need documentation is best.

  • If you're getting bogged down in the process then the process is wrong.

  • Agility without structure is just hacking.

  • Pair programming allows good practices to spread.

  • Pair programming allows bad practices to spread.

  • Team leaders should be inside the team, not outside it.

  • Project Managers are there to facilitate the practice of developing software, not to control it.

  • Your customers are not idiots; they always know their business far better than you ever will.

  • A long list of referrals for a piece of software does not increase the chances of it being right for you, and shouldn't be considered when evaluating it.

  • You can't solve a problem until you know what the problem is. You can't answer a question until the question's been asked.

  • Software development is not complex by accident, it's complex by essence.

  • Always is never right, and never is always wrong.

  • Interesting is not the same as useful.

  • Clever is not the same as right.

  • The simplest thing that will work is not always the same as the easiest thing that will work.

  • It's easier to make readable code correct than it is to make clever code readable.

  • If you can't read your tests, then you can't read your documentation.

  • There's no better specification document than the customer's voice.

  • You can't make your brain bigger, so make your code simpler.

  • Sometimes multiple exit points are OK. The same is not true of multiple entry points.

  • Collective responsibility means that everyone involved is individually responsible for everything.

  • Sometimes it's complex because it needs to be; but you should never be afraid to double check.

  • If every time you step forward you get shot down you're fighting for the wrong army.

  • If you're always learning you're never bored.

  • There are no such things as "Best Practices". Every practice can be improved upon.

  • Nothing is exempt from testing. Not even database upgrades or declarative tools.

  • It's not enough to collect data, you need to analyse, understand and act upon that data once you have it.

  • A long code freeze means a broken process.

  • A test hasn't passed until it has failed.

  • A test that can't fail isn't a test.

  • If you give someone a job, you can't guarantee they'll do it well; if you give someone two jobs you can guarantee they'll do both badly.

  • Every meeting should start with a statement on its purpose and context, even if everyone in the meeting already knows.

Promises and Lightning Components

Wed, 2018-10-03 07:56
In 2015, the ECMA specification included the introduction of Promises, and finally (pun intended) the Javascript world had a way of escaping from callback hell and moving towards a much richer syntax for asynchronous processes.

So, what are promises?
In short, they’re a syntax that allows you to specify callbacks that should execute when a function either ‘succeeds’ or ‘fails’ (is resolved, or rejected, in Promise terminology).

For many, they're a way of implementing callbacks in a way that makes a little more sense syntactically, but for others it's a new way of looking at how asynchronous code can be structured that reduces the dependencies between them and provides you with some pretty clever mechanisms.
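To make that concrete before getting to Lightning, here is a minimal plain-Javascript Promise (nothing Salesforce-specific, and the 'succeeded' flag is just a stand-in for the outcome of some asynchronous work):

```javascript
// The function passed to the Promise constructor receives 'resolve' and
// 'reject', and calls one of them when the work is done.
const getAnswer = new Promise( ( resolve, reject ) => {
    const succeeded = true; // stand-in for the result of some asynchronous work
    if ( succeeded ) {
        resolve( 42 );
    } else {
        reject( new Error( 'it failed' ) );
    }
});

// 'then' takes a callback for the resolved case, and optionally a second
// one for the rejected case.
getAnswer.then( answer  => console.log( 'resolved with ' + answer )
              , problem => console.log( 'rejected with ' + problem.message ) );
```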

However, this article isn’t about what promises are, but rather:

How can Promises be used in Lightning Components, and why would you want to?
As with any new feature of Javascript, make sure you double check the browser compatibility to make sure it covers your target browser before implementing anything.

If you want some in depth info on what they are, the best introduction I’ve found is this article on

In addition, Salesforce have provided some very limited documentation on how to use them in Lightning, here.

Whilst the documentation's inclusion can give us hope (Salesforce know what Promises are and expect them to be used), the documentation itself is pretty slim and doesn’t really go into any depth on when you would use them.

When to use Promises
Promises are the prime candidate for use when executing anything that is asynchronous, and there’s an argument to say that any asynchronous Javascript that you write should return a Promise.

For Lightning Components, the most common example is probably when calling Apex.

The standard pattern for Apex would be something along the lines of:

getData : function( component ) {
    let action = component.get( "c.getData" );

    action.setCallback( this, function( response ) {

        let state = response.getState();

        if ( state === "SUCCESS" ) {
            let result = response.getReturnValue();
            // do your success thing
        } else if ( state === "INCOMPLETE" ) {
            // do your incomplete thing
        } else if ( state === "ERROR" ) {
            // do your error thing
        }
    });

    $A.enqueueAction( action );
}
In order to utilise Promises in a such a function you would:
  1. Ensure the function returned a Promise object
  2. Call 'resolve' or 'reject' based on whether the function was successful

getData : function( component ) {
    return new Promise( $A.getCallback(
        ( resolve, reject ) => {

            let action = component.get( "c.getData" );

            action.setCallback( this, function( response ) {

                let state = response.getState();

                if ( state === "SUCCESS" ) {
                    let result = response.getReturnValue();
                    // do your success thing
                    resolve( result );
                } else if ( state === "INCOMPLETE" ) {
                    // do your incomplete thing
                } else if ( state === "ERROR" ) {
                    // do your error thing
                    reject( response.getError() );
                }
            });

            $A.enqueueAction( action );
        }
    ));
}
You would then call the helper method in the same way as usual:

doInit : function( component, event, helper ) {
    helper.getData( component );
}

So, what are we doing here?

We have updated the helper function so that it now returns a Promise that is constructed with a new function that has two parameters 'resolve' and 'reject'. When the function is called, the Promise is returned and the function that we passed in is immediately executed.

When our function reaches its notional 'success' state (inside the state === "SUCCESS" section), we call the 'resolve' function that is passed in.

Similarly, when we get to an error condition, we call 'reject'.

In this simple case, you'll find it hard to see where 'resolve' and 'reject' are defined - because they're not. In this case the Promise will create an empty function for you and the Promise will essentially operate as if it wasn't there at all. The functionality hasn't changed.
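That the function passed to the constructor runs immediately, whether or not anyone is listening, is easy to check in plain Javascript:

```javascript
// The executor function passed to a Promise runs immediately - no 'then'
// needs to be attached for the work to happen.
let ran = false;
new Promise( ( resolve ) => {
    ran = true;
    resolve();
});
console.log( ran ); // true - the executor has already run
```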

Aside - if you're unfamiliar with the 'Arrow Function' notation - E.g. () => { doThing() } - then look here or here. And don't forget to check the browser compatibility.

So the obvious question is.. Why?
What does a Promise give you in such a situation?

Well, if all you are doing is calling a single function that has no dependent children, then nothing. But let's say that you wanted to call "getConfiguration", which called some Apex, and then *only once that was complete* you called "getData".

Without Promises, you'd have 2 obvious solutions:
  1. Call "getData" from the 'Success' path of "getConfiguration".
  2. Pass "getData" in as a callback on "getConfiguration" and call the callback in the 'Success' path of "getConfiguration"
Neither of these solutions are ideal, though the second is far better than the first.

That is - in the first we introduce an explicit dependency between getConfiguration and getData. Ideally, this would not be expressed in getConfiguration, but rather in the doInit (or a helper function called by doInit). It is *that* function which decides that the dependency is important.

The second solution *looks* much better (and is), but it's still not quite right. We now have an extra parameter on getConfiguration for the callback. We *should* also have another callback for the failure path - otherwise we are expressing that only success has a further dependency, which is a partial leaking of knowledge.
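To illustrate where the dependency ends up living, here's a plain-Javascript sketch - getConfiguration and getData are simplified stand-ins for the Apex-calling helpers, with timers in place of server calls:

```javascript
// Each helper returns a Promise rather than accepting callbacks.
function getConfiguration() {
    return new Promise( resolve => setTimeout( () => resolve( 'the config' ), 10 ) );
}

function getData( config ) {
    return new Promise( resolve => setTimeout( () => resolve( 'data loaded using ' + config ), 10 ) );
}

// The dependency between the two is expressed here, in the caller -
// neither helper knows anything about the other.
getConfiguration()
    .then( config => getData( config ) )
    .then( data => console.log( data ) ); // 'data loaded using the config'
```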

Fulfilling your Promise - resolve and reject
When we introduce Promises, we introduce the notion of 'then'. That is, when we 'call' the Promise, we are able to state that something should happen on 'resolve' (success) or 'reject' (failure), and we do it from *outside* the called function.

Or, to put it another way, 'then' allows us to define the functions 'resolve' and 'reject' that will get passed into our Promise's function when it is constructed.


We can pass a single function into 'then', and this will be the 'resolve' function that gets called on success.

doInit : function( component, event, helper ) {
    helper.getConfiguration( component )
        .then( () => { helper.getData( component ) } );
}

Or, if we wanted a failure path that resulted in us calling 'helper.setError', we would pass a second function, which will become the 'reject' function.

doInit : function( component, event, helper ) {
    helper.getConfiguration( component )
        .then( () => { helper.getData( component ) }
             , () => { helper.setError( component ) } );
}

Aside - It's possible that the functions should be wrapped in a call to '$A.getCallback'. You will have seen this in the definition of the Promise above. This is to ensure that any callback is guaranteed to remain within the context of the Lightning Framework, as defined here. I've not witnessed any problem with not including it, although it's worth bearing in mind if you start to get issues on long running operations.

Now, this solution isn't vastly different to passing the two functions directly into the helper function. E.g. like this:

doInit : function( component, event, helper ) {
    helper.getConfiguration( component
                           , () => { helper.getData( component ) }
                           , () => { helper.setError( component ) } );
}

And whilst I might say that I personally don't like the act of passing in the two callbacks directly into the function, personal dislike is probably not a good enough reason to use a new language feature in a business critical system.

So is there a better reason for doing it?

Promising everything, or just something
Thankfully, Promises are more than just a mechanism for callbacks, they are a generic mechanism for *guaranteeing* that 'settled' (fulfilled or rejected) Promises result in a specified behaviour occurring once certain states occur.

When using a simple Promise, we are simply saying that the behaviour should be that the 'resolve' or 'reject' functions get called. But that's not the only option. For example, we also have:

  • Promise.all - will 'resolve' only when *all* the passed in Promises resolve, and will 'reject' if and when *any* of the Promises reject.
  • Promise.race - will 'resolve' or 'reject' when the first Promise to respond comes back with a 'resolve' or 'reject'.

Once we add that to the mix, we can do something a little clever...
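A quick plain-Javascript illustration of the difference (the timings are arbitrary, just enough to make one Promise settle first):

```javascript
const slow = new Promise( resolve => setTimeout( () => resolve( 'slow' ), 50 ) );
const fast = new Promise( resolve => setTimeout( () => resolve( 'fast' ), 5 ) );

// 'all' waits for every Promise, and presents the results in the order
// the Promises were passed in, regardless of which settled first.
Promise.all( [ slow, fast ] )
    .then( results => console.log( results ) ); // [ 'slow', 'fast' ]

// 'race' settles as soon as the first Promise settles.
Promise.race( [ slow, fast ] )
    .then( winner => console.log( winner ) ); // 'fast'
```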

How about having the component load with a 'loading spinner' that is only switched off when all three calls to Apex respond with success:

doInit : function( component, event, helper ) {
    Promise.all( [ helper.getDataOne( component )
                 , helper.getDataTwo( component )
                 , helper.getDataThree( component ) ] )
        .then( () => { helper.setIsLoaded( component ) } );
}

Or even better - how about we call getConfiguration, then once that’s done we call each of the getData functions, and only when all three of those are finished do we set the flag:

doInit : function( component, event, helper ) {
    helper.getConfiguration( component )
        .then( () => Promise.all( [ helper.getDataOne( component )
                                  , helper.getDataTwo( component )
                                  , helper.getDataThree( component ) ] ) )
        .then( () => { helper.setIsLoaded( component ) } );
}

Or how about - we normally call three functions to get data, unless a flag is set, at which point we want to call a fourth function, and only when all four are complete do we set the flag:

doInit : function( component, event, helper ) {

    let initialisations = [ helper.getDataOne( component )
                          , helper.getDataTwo( component )
                          , helper.getDataThree( component ) ];

    if ( component.get( 'v.runGetDataFour' ) ) {
        initialisations.push( helper.getDataFour( component ) );
    }

    helper.getConfiguration( component )
        .then( () => Promise.all( initialisations ) )
        .then( () => { helper.setIsLoaded( component ) } );
}

Now, just for a second, think about how you would do that without Promises...

Throw it away - Why you shouldn't keep your POC

Sat, 2014-12-13 04:26

"Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.

They are crucial for ensuring that facts are gathered before some particularly risky decisions are made.  Technical or functional, they can address many different concerns and each one can be different, but they all have one thing in common.  They serve to answer questions.

It can be tempting, whilst answering these questions to become attached to the code that you generate.

I would strongly argue that you should almost never keep the code that you build during a POC.  Certainly not to put into a production system.

I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating, more efficient, and makes proof of concepts more effective by focussing minds on the questions that require answers.

Why do we set out on a proof of concept?

The purpose of a proof of concept is to (by definition):

  * Prove:  Demonstrate the truth or existence of something by evidence or argument.
  * Concept: An idea, a plan or intention.

In most cases, the concept being proven is a technical one.  For example:
  * Will this language be suitable for building x?
  * Can I embed x inside y and get them talking to each other?
  * If I put product x on infrastructure y will it basically stand up?

They can also be functional, but the principles remain the same for both.

It's hard to imagine a proof of concept that cannot be phrased as one or more questions.  In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.

The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking.  If you *do* already know the answers, then the POC is of no value to you.

By extension, there is the implication that the questions posed require to be answered as soon as possible in order to support a decision.  If that's not the case then, again, the POC is probably not of value to you.

As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.

This is quite different to what we set out to do in our normal software development process. 

We normally know the answer to the main question we're asking (How do we functionally provide a solution to this problem / take advantage of this opportunity), and most of the time is spent focussed on building something that is solid, performs well and generally good enough to live in a production environment - in essence, not answering the question, but producing software.

What process do we follow when embarking on a proof of concept?

Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.

With the main question in mind, you often follow an almost scientific process.  You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or deny that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.

You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up.  It is an entirely exploratory process.

Often, you will find that you spend days following avenues that don't lead anywhere, backtrack and reassess, following a zig-zag path through a minefield of wrong answers until you reach the end point.  In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.

But that's OK.  The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.

To illustrate:

Will this language be suitable for building x?

You may need to check that you can build the right type of user interfaces, that APIs can be created, and that there are ways of organising code that make sense for the long-term maintenance of the system.

You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.

That said, if you were building a production system in the language, you wouldn't dream of having an incomplete UI, an API that doesn't handle errors properly, or code knocked together in an ad-hoc manner.

Can I embed x inside y and get them talking to each other?

You will probably need to define a communication method and prove that it basically works.  Get something up and running that is at least reasonably functional in the "through the middle" test case.

You probably don't need to develop a clean architecture with separation of concerns that makes the systems properly independent and backwards compatible with existing integrations, or to ensure that all interactions are properly captured and that exceptional circumstances are dealt with correctly.

That said, if you were building a production system, you'd need to ensure that you define the full layered architecture, understand the implications of lost messages, prove the level of chat that will occur between the systems.  On top of that you need to know that you don't impact pre-existing behaviour or APIs.

If I put product x on infrastructure y will it basically stand up?

You probably need to just get the software on there and run your automated tests.  Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.

You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.

That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.

Production development and Proof of Concept development are not the same

The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.

You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.

When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.

That is, you're not usually trying to ensure that this piece of code works in all use-cases, but rather that the concept works in the general case, with a degree of certainty that you can *extend* it to all cases.

Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.

That is intellectual currency, not software.  The important deliverable of a production build is the software that is built.

That is the fundamental difference, and why you should throw your code away.

The opportunity cost of delaying software releases

Thu, 2014-10-09 05:56
Let me paint a simple picture (but with lots of numbers).

Some software has been built.  It generates revenue (or reduces cost) associated with sales, but the effect is not immediate.  It could be the implementation of a process change that takes a little time to bed in, or the release of a new optional extra that not everyone will want immediately.

It is expected that when it is initially released there’ll be a small effect.  Over the next 6 months there will be an accelerating uptake until it reaches saturation point and levels off.

Nothing particularly unusual about that plan.  It probably describes a lot of small scale software projects.
Now let’s put some numbers against that.

At saturation point it’s expected to generate / save an amount equal to 2% of the total revenue of the business.  It might be an ambitious number, but it’s not unrealistic.

The business initially generates £250k a month, and experiences steady growth of around 10% a year.

What does the revenue generation of that software look like over the first 12 months?
It’s pretty easy to calculate, plugging in some percentages that reflect the uptake curve:

Period    Original Business Revenue    Software Revenue Generation    Additional Revenue
1         £250,000.00                  0.2%                           £500.00
2         £252,500.00                  0.5%                           £1,262.50
3         £255,025.00                  1.1%                           £2,805.28
4         £257,575.25                  1.6%                           £4,121.20
5         £260,151.00                  1.9%                           £4,942.87
6         £262,752.51                  2.0%                           £5,255.05
7         £265,380.04                  2.0%                           £5,307.60
8         £268,033.84                  2.0%                           £5,360.68
9         £270,714.18                  2.0%                           £5,414.28
10        £273,421.32                  2.0%                           £5,468.43
11        £276,155.53                  2.0%                           £5,523.11
12        £278,917.09                  2.0%                           £5,578.34
Total:                                                                £51,539.34
Or, shown on a graph:

So, here’s a question:

What is the opportunity cost of delaying the release by 2 months?
The initial thought might be that the effect isn’t that significant, as the software doesn’t generate a huge amount of cash in the first couple of months.

Modelling it, we end up with this:

Period    Original Business Revenue    Software Revenue Generation    Additional Revenue
1         £250,000.00                  -                              £-
2         £252,500.00                  -                              £-
3         £255,025.00                  0.2%                           £510.05
4         £257,575.25                  0.5%                           £1,287.88
5         £260,151.00                  1.1%                           £2,861.66
6         £262,752.51                  1.6%                           £4,204.04
7         £265,380.04                  1.9%                           £5,042.22
8         £268,033.84                  2.0%                           £5,360.68
9         £270,714.18                  2.0%                           £5,414.28
10        £273,421.32                  2.0%                           £5,468.43
11        £276,155.53                  2.0%                           £5,523.11
12        £278,917.09                  2.0%                           £5,578.34
Total:                                                                £41,250.69
Let’s show that on a comparative graph, showing monthly generated revenue:

Or, even more illustrative, the total generated revenue:

By releasing 2 months later, we do not lose the first 2 months' revenue – we lose the revenue roughly equivalent to P5 and P6.

When we release in P3, we don’t immediately get the P3 revenue we would have got.  Instead we get something roughly equivalent to P1 (it’s slightly higher because the business generates a little more revenue overall in P3 than it did in P1).

This trend continues in P3 through to P8, where the late release finally reaches saturation point (2 periods later than the early release – of course).

Throughout the whole of P1 to P7 the late release has an opportunity cost associated.  That opportunity cost is never recovered later in the software’s lifespan as the revenue / cost we could have generated the effect from is gone.

If the business was not growing, this would amount to a total equal to the last 2 periods of the year.

In our specific example, the total cost of delaying the release for 2 months amounts to 20% of the original expected revenue generation for the software project in the first year.
And this opportunity cost is solely related to the way in which the revenue will be generated; the rate at which the uptake comes in over the first 6 months.

Or to put it another way – in this example, if you were to increase or decrease the revenue of the business or the percentage generation at which you reach saturation point the cost will always be 20%.
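The model above is simple enough to sketch in a few lines of code. This is an illustrative reconstruction of the tables (the starting revenue, growth rate and uptake curve are taken from the figures above; the function and constant names are mine):

```python
# Monthly business growth of ~1% (roughly 10% a year), with software
# uptake ramping from 0.2% of revenue to a 2% saturation over 6 months.
MONTHLY_GROWTH = 1.01
UPTAKE = [0.002, 0.005, 0.011, 0.016, 0.019, 0.020]  # flat at 2% thereafter

def software_revenue(delay_months, periods=12, base_revenue=250_000):
    """Total additional revenue over `periods` months for a release
    delayed by `delay_months`."""
    total = 0.0
    for p in range(periods):
        business = base_revenue * MONTHLY_GROWTH ** p
        months_live = p - delay_months
        if months_live >= 0:
            rate = UPTAKE[min(months_live, len(UPTAKE) - 1)]
            total += business * rate
    return total

on_time = software_revenue(0)  # ~£51,539 - matches the first table
delayed = software_revenue(2)  # ~£41,251 - matches the second table
print(f"Opportunity cost of a 2 month delay: {1 - delayed / on_time:.0%}")
```

Because the uptake curve and the delay are both expressed as percentages of the same revenue stream, scaling the business revenue or the saturation percentage up or down cancels out, which is why the 20% figure holds regardless of those inputs.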

So, when you’re thinking of delaying the release of software it’s probably worth taking a look, modelling your expected uptake and revenue generation to calculate just how much that will cost you…

How do I type e acute (é) on Windows 8

Wed, 2014-10-08 09:27

I keep on forgetting how to type é on Windows 8 (I used to use CTRL+ALT+e, but that's now often reserved for the Euro symbol).

I then tend to run a search on Google, and end up being pointed towards 8 year old answers that point you to character map, options in old version of word, or the old way of typing the extended ASCII character code.

They all suck.

And then I remember - it's easy.

You start by pressing CTRL + a key that represents the accent, then type the letter you want accented.

For example, CTRL + ' followed by e gives you é.


The great thing about using this technique is that the characters you use (dead keys) are representative of the accents you want to type. This makes them much easier to remember than the seemingly random character codes.

Here are the ones I know about:

Keystrokes                     Accent type           Examples
CTRL + '                       acute                 é
CTRL + `                       grave                 è
CTRL + SHIFT + 6 / CTRL + ^    circumflex            ê
CTRL + ,                       cedilla               ç
CTRL + ~                       perispomene           õ
CTRL + SHIFT + 7 / CTRL + &    diphthongs / others   a = æ, o = œ, s = ß

It doesn't quite work with every app (Blogger on Chrome, for example), but it certainly covers Office 2013, including both Outlook and Word.
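As an aside, the accents themselves are standard Unicode combining marks, so the same mappings can be reproduced programmatically. This sketch has nothing to do with the Office shortcuts themselves - the dictionary and function names are mine - but it shows the same accent-plus-letter idea in code:

```python
import unicodedata

# Unicode combining marks corresponding to the accents in the table above.
ACCENTS = {
    "acute": "\u0301",       # é
    "grave": "\u0300",       # è
    "circumflex": "\u0302",  # ê
    "cedilla": "\u0327",     # ç
    "tilde": "\u0303",       # õ
}

def accent(letter, name):
    """Compose a letter with a named accent into a single character."""
    return unicodedata.normalize("NFC", letter + ACCENTS[name])

print(accent("e", "acute"))  # é
```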

Gamification to level 80

Fri, 2014-01-31 04:14
Since the end of July last year I've been test driving one of the latest online tools that hopes to change your life by giving you the ability to store your task lists.

Wow. What could be more underwhelming, and less worthy of a blog post?

Well, this one is different.  This one takes some of the huge amount of thinking on the behaviour of "millenials" and "Generation Y", adds a big dose of social context and ends up with something quite spectacular.

This is the gamification of task lists, this is experience points and levelling up, buying armour and using potions, this is World of Warcraft where the grinding is calling your mam, avoiding junk food or writing a blog post.

This is HabitRPG.
The concept is simple, you manage different styles of task lists.
  • If you complete entries on them you get experience points and coins.
  • If you fail to complete them, you lose hit points.

Depending on whether you're setting yourself realistic targets and completing them, you either level up, or die and start again.
Get enough coins and you can buy armour (reduce the effect of not hitting your targets), weapons (increase the effect of achieving things) or customised perks (real world treats that you give yourself).
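The core loop described above is easy to caricature in code. This toy model is purely illustrative - the class, thresholds and numbers are all invented for the sketch, not HabitRPG's actual mechanics:

```python
class Character:
    """A toy sketch of the HabitRPG loop: completed tasks earn XP and
    coins, misses cost hit points, and death resets you to level 1."""

    def __init__(self):
        self.level, self.xp, self.hp, self.coins = 1, 0, 50, 0

    def complete_task(self, xp=10, coins=5):
        self.xp += xp
        self.coins += coins
        while self.xp >= self.level * 25:  # invented level-up threshold
            self.xp -= self.level * 25
            self.level += 1

    def miss_daily(self, damage=10):
        self.hp -= damage
        if self.hp <= 0:  # you die and start again
            self.__init__()

hero = Character()
for _ in range(3):
    hero.complete_task()
print(hero.level, hero.coins)  # levelled up once, with coins in the bank
```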
There's a wealth of other treats in there too, but I don't want to spoil it for you, because as each of them appear you get a real jolt of surprise and delight (look out for the flying pigs)
So, what do I mean by "different styles of task lists"? Well, the lists are split into three - Habits, Dailies and Todos:
Habits

These are repeating things that you want to get into the habit of doing, or bad habits you want to break.

They have no schedule, or immediate urgency, they just hang around and you come back every now and again to say "yup, did that".  You can set things up as positive or negative, and so state if they are a good or bad habit.

Examples might be:
  • Phone mother (positive)
  • Get a takeaway (negative)
  • Empty the bins (both - positive if you do it, negative if your partner does it)

Dailies

Suffering from a bit of a misnomer, dailies are repetitive tasks with some form of weekly schedule. Things that you want to do regularly, and on particular days. You can set a task to be required every day, only every Tuesday, or anything in between.

Whilst un-actioned habits are benign, if you don't tick off a daily then you get hurt.  With habits you're gently encouraged to complete them as often as possible. Dailies come with a big stick.
Examples might be:
  • Go to the gym
  • Do an uninterrupted hour of productive work

Todos

The classic task. The one-off thing that you've got to do, and once it's done you can cross it off and move on to the next thing.

In terms of functionality, they're pretty much the same as dailies - if you don't do a task, it hurts.

Examples might be:
  • Write a blog post about HabitRPG
  • Book a holiday cottage in Wales

Other bits
They have a mobile app on both iOS and Android.  I use Android, and it does the job - nothing fancy, but it works.  Most of what you need to do is available to do on the move.

It's missing the ability to work offline, though it's not a huge problem.  I can imagine it being added soon, and I really hope it does.  Sometimes, sitting on the tube, I think of things that I need to do and it would be great to be able to add them to my task list without waiting until I get over-ground again.

Functionality is added regularly, and there is clearly a strong community spirit in the developers who are producing the site.  A Kickstarter campaign provided a boost to funds, but they seem to have worked out how to monetise the site, and it looks like it'll keep being developed for some time - which is obviously good news!

There are a few community plug-ins out there (they made the good choice of using the public API to hook their UI up, meaning any functionality in the site is available in the API), including one that works like "stayfocused", monitoring your internet browsing habits and rewarding or punishing your HabitRPG character appropriately.

The APIs also open up the idea of a sales system driven by some of the concepts in HabitRPG, if not HabitRPG itself (though maybe with Ferraris instead of golden swords).  I'd be amazed if this wasn't picked up by a Salesforce developer sometime soon...

I have to admit, I was excited about this idea the moment I heard about it, though I didn't want to blog about it straight away - I wanted to see if it had some legs first.

Sure there are other sites doing similar things, take a look at blah blsh and blah. But, excuse the pun,  this is another level.

When I first started using HabitRPG I had very short term goals. Your character is fragile, so naturally I did what I could to avoid getting hurt. I avoided unrealistic goals, or even goals that I might not get around to for a couple of days. Only todos I was likely to do that day got added.

As I've got further through, I've found that I'm more inclined to set longer-term todos. They hurt you less as you have armour, and the longer you leave them the more XP you get. It sounds like cheating, but it's not. It's simply that I've matured the way in which I use my task manager.

It's missing some things that I might expect from a really rich task manager - tags can be used to group items and tasks can be split with simple sub-tasks, but there's nothing more advanced than that - no dependent tasks, or chains of tasks for example.

But maybe the simplicity is key to its success. I rarely need more than a simple reminder, so why complicate things?

You have to be careful with the habits. It can be tempting to add a bad habit in there that you've already pretty much broken, but if Steven Levitt and Stephen J. Dubner are right then you'll end up replacing an intangible moral cost with a tangible HabitRPG cost, and wind up picking up that bad habit again.

It differs from sites like Strava, in that this is not primarily a competitive site - it needs to focus on the individual, as it is trivially easy to "cheat".  You can add arbitrary tasks and complete them immediately - though that really defeats the purpose.  It relies on you entering a contract with yourself to use the site productively.  For that reason, any fundamental introduction of competitiveness to the site is flawed.

However, there is the concept of "challenges" - the idea that you can set goals, assign a prize and then invite people to compete.  It works, but only on the principle that people entering the challenges can be trusted.

All in all this has proven to be a pretty successful experiment for me - since I've started using it I've hardly missed a day at the gym, my washing basket is empty, all my shirts are ironed, I've managed to make it to yoga and I even call my dad more often.

And with a character at level 32 I'm becoming a god!

BBC and web accelerators don't mix

Wed, 2013-09-18 02:52
Do you have a problem with the BBC website? Even though you're based in the UK it seems to have a lot of adverts for American companies on it? And it's not that you don't like American companies, it's just that you've already paid your licence fee and that means that you shouldn't have to see any adverts at all.
Even worse than that, because it thinks you're not in the UK, it won't let you access any of the iPlayer content!

At some point in the last few weeks that started to happen to me on my Android (Nexus 10, no less) tablet. Thankfully I managed to solve it.

A quick scan of the BBC help seemed to just direct me to my ISP (they may route you through foreign / non-UK servers - I knew this wouldn't be true as my desktop works OK, and it doesn't sound like BT). A quick mail to them seemed to confirm my fears - no problem there.

A mail to the BBC was never going to be much use. I don't blame them; they have a lot of people to deal with, and none of them pay an optional subscription fee. It makes no economic sense for them to have a good technical help line.

Any way, after a lot of Google searching for phrases like:
  • The BBC thinks I'm not in the UK when I am.
  • iPlayer thinks I'm not in the UK.
  • iPlayer won't serve me any content.
  • BBC has adverts on it.

And many other variations on the theme, I decided to go back to the BBC site and give it one last go.

On one of their help pages I spotted a pretty throwaway comment about web accelerators causing problems sometimes. Knowing that Google's a little too clever for its own good sometimes this seemed like a good avenue to check.

It turns out that this was the problem, and it's really easy to solve.

In Chrome (on Android 4.3 anyway) go to:
  • Settings
  • Advanced
  • Bandwidth management
  • Reduce data usage
  • Then in the top right corner, flick the switch to off.

That simple.

Basically, if this is switched on then a lot of content isn't served from its source. Instead, you get it from Google - it fetches it from the source, simplifies it, re-compresses it and then sends it to you, so that you get a modest 20% saving on your download sizes.

The result is pretty much as the BBC describes it: your internet content is routed through non-UK servers. It's just that it's not your ISP's doing, it's Google's.

So, hopefully, when you get that dreaded "BBC says I'm not in the UK" feeling, your Google search will bring you here and you'll solve the problem in a fraction the time it took me!

Redundancies should come with a pay rise

Sat, 2013-08-31 10:46

As far as I can see, there is only one reason why a company should ever make redundancies.

Due to some unforeseen circumstances, the business has become larger than the market conditions can support, and it needs to shrink in order to bring it back in line.

Every other reason is simply a minor variation or a consequence of that underlying reason.

Therefore, if the motivation is clear, and the matter dealt with successfully, then once the redundancies are over the business should be "right sized" (we've all heard that term before), and it should be able to carry on operating with the same values, practices and approach that it did prior to the redundancies.

If the business can't, then I would suggest that it is not the right size for the market conditions, and therefore the job isn't complete.

OK, there may be some caveats to that, but to my mind this reasoning is sound.

In detail:

When you reduce the headcount of the business you look for the essential positions in the company, keep those, and get rid of the rest.

Once the redundancies are finished you should be left with only the positions you need to keep in order to operate successfully.

It's tempting to think that you should have a recruitment freeze and not back-fill positions when people leave, but if someone leaves and you don't need to replace them, then that means you didn't need that position, in which case you should have made it redundant.

Not back-filling positions is effectively the same as allowing your employees to choose who goes based on their personal motives, rather than forcing the business heads to choose based on the business's motives.  This doesn't make business sense.

So, you need to be decisive and cut as far as you can go without limiting your ability to operate within the current market conditions.

To add to that, recruitment is expensive.  If you're in a highly skilled market then you'll likely use an agency. They can easily charge 20% of a salary for a perm head.  On top of that you have the cost of bringing someone up to speed, at a time when you're running at the minimum size your market will allow.  Plus there's the cost of inefficiency during the onboarding period as well as the increased chance of the remaining overstretched employees leaving as well.

The upshot is that you really can't afford to have people leave, it's so expensive that it jeopardises the extremely hard work you did when you made the redundancies.

There's a theory I often hear that you can't have contractors working when the perm heads are being marched out.  That's a perfectly valid argument if the perm head would be of long term value to you and can do the job that the contract head can do.  But if you need the contractor to do a job that only lasts another 3 months and that person is by far the best or only person you have for the job, then the argument just doesn't stand up.  Get rid of the perm position now and use the contractor, it'll be cheaper and more beneficial to the business in the long run.

OK, that's maybe not the most sentimental of arguments, but why would you worry about hurting the feelings of people who no longer work for you, at the expense of those that still do?

It may even be worse than that - you could be jeopardising the jobs of others that remain by not operating in the most efficient and effective way possible.

Another prime example is maternity cover.  If you need the person on maternity to come back to work then you almost certainly need the person covering them. If it's early in the maternity leave then you'll have a long period with limited staff, if it's late in the leave then you only need the temporary cover for a short period more. Either way you're overstretching the perm staff left to cover them and risking having them leave.

Finally, there's the motivation to ensure that the business that remains is running as lean as possible. That costs are as low as they could be. The temptation is to cut the training and entertainments budget to minimum and pull back on the benefits package.

As soon as you do this you fundamentally change the character of the business.  If you always prided yourself on being at the forefront of training then you attracted and kept staff who valued that. If you always had an open tab on a Friday night at the local bar, then you attracted people who valued that.  Whatever it is that you are cutting back on, you are saying to the people who valued it: "we no longer want to be as attractive to you as we once were; we do not value you quite as much as we did". This might not be your intention, but it is the message your staff will hear.

I put it to you that the cheapest way to reduce costs after redundancies is to be completely honest to the staff you keep. Say it was difficult, say that you're running at minimum and that a lot will be expected of whoever's left. But tell them that they're still here because they're the best of the company and they are vital to the company's success.  Let them know that the contractors you've kept are there because they're the best people for those positions to ensure that the company succeeds.  Tell them that the contractors will be gone the moment they're not generating value or when a perm head would be more appropriate.  Make it clear that the company is now at the right size and the last thing you want is for people to leave, because you value them and that if they left it would damage your ability to do business.

Then give them a pay rise and a party to prove it.

Agile and UX can mix

Thu, 2013-08-29 05:19
User experience design is an agile developer's worst nightmare. You want to make a change to a system, so you research. You collect usage stats, you analyse hotspots, you review, you examine user journeys, you review, you look at drop off rates, you review. Once you've got enough data you start to design. You paper prototype, run through with users, create wireframes, run through with users, build prototypes, run through with users, do spoken journey and video analysis, iterate, iterate, iterate, until finally you have a design.

Then you get the developers to build it, exactly as you designed it.

Agile development, on the other hand, is a user experience expert's worst nightmare. You want to make a change to a system, so you decide what's the most important bit, and you design and build that - don't worry how it fits into the bigger picture, show it to the users, move on to the next bit, iterate, iterate, iterate, until finally you have a system.

Then you get the user experience expert to fix all the clumsy workflows.

The two approaches are fundamentally opposed.

Aren't they?

Well, of course, I'm exaggerating for comic effect, but these impressions are only exaggerations - they're not complete fabrications.

If you look at what's going on, both approaches have the same underlying principle - your users don't know what they want until they see something. Only then do they have something to test their ideas against.  Both sides agree: the earlier you get something tangible in front of users, the more appropriate and successful the solution will be.

The only real difference in the two approaches as described is the balance between scope of design and fullness of implementation. On the UX side the favour is for maximum scope of design and minimal implementation; the agile side favours minimal scope of design and maximum implementation.

The trick is to acknowledge this difference and bring the two approaches closer together, or mitigate the risks those differences bring.

Or, to put it another way, the main problem you have with combining these two approaches is the lead-up time before development starts.

In the agile world some people would like to think that developing based on a whim is a great way to work, but the reality is different. Every story that is developed will have gone through some phase of analysis even in the lightest of light touch processes. Not least someone has decided that a problem needs fixing.  Even in the most agile of teams there needs to be some due diligence and prioritisation.

This happens not just at the small scale, but also when deciding which overarching areas of functionality to change. In some organisations there will be a project (not a dirty word), in some a phase, in others a sprint. Whatever it's called, it'll be a consistent set of stories that build up to a fairly large-scale change in the system. This will have gone through some kind of appraisal process, and rightly so.

Whilst I don't particularly believe in business cases, I do believe in due diligence.

It is in this phase, the research, appraisal and problem definition stage, that UX research can start without having a significant impact on the start-up time. Statistics can be gathered and evidence amassed to describe the problem that needs to be addressed. This can form a critical part of the argument to start work.

In fact, this research can become part the business-as-usual activities of the team and can be used to discover issues that need to be addressed. This can be as "big process" as you want it to be, just as long as you are willing, and have the resources to pick up the problems that you find, and that you have the agility to react to clear findings as quickly as possible. Basically, you need to avoid being in the situation where you know there's a problem but you can't start to fix it because your process states you need to finish your 2 month research phase.

When you are in this discovery phase there's nothing wrong with starting to feel out some possible solutions. Ideas that can be used to illustrate the problem and the potential benefits of addressing it. Just as long as the techniques you use do not result in high cost and (to reiterate) a lack of ability to react quickly.

Whilst I think it's OK to use whatever techniques work for you, for me the key to keeping the reaction time down is to keep it lightweight.  That is, make sure you're always doing enough to find out what you need to know, but not so much that it takes you a long time to reach conclusions and start to address them. User surveys, spoken narrative and video recordings - all of which can be done remotely - can be done at any time, and once you're in the routine of doing them they needn't be expensive.  Be aware that large sample sets might improve the accuracy of your research, but they also slow you down.  Keep the groups small and focused - appropriate to the size of team you have to analyse and react to the data. Done right, these groups can be used to continually scrutinise your system and uncover problems.

Once those problems are found, the same evidence can be used to guide potential solutions. Produce some quick lo-fi designs, present them to another (or the same, if you are so inclined) small group and wireframe the best ones to include in your argument to proceed.  I honestly believe that once you're in the habit, this whole process can be implemented in two or three weeks.

Having got the go-ahead, you have a coherent picture of the problem and a solid starting point from which to commence the full-blown design work.  You can then move into a short, sharp and probably seriously intense design phase.

At all points, the design that you're coming up with is, of course, important. However, it's vital that you don't underestimate the value of the thinking process that goes into the design. Keep earlier iterations of the design, keep notes on why the design changed. This forms a reference document that you can use to remind yourself of the reasoning behind your design. This needn't be a huge formal tome; it could be as simple as comments in your wireframes, but an aide mémoire for the rationale behind where you are today is important.

In this short, sharp design phase you need to make sure that you get to an initial conclusion quickly, and that you bear in mind that this will almost certainly not be the design you actually end up with.  This initial design is primarily used to illustrate the problem and the current thinking on the solution to the developers. It is absolutely not a final reference document.

As soon as you become wedded to a design, you lose the ability to be agile. Almost by definition, an agile project will not deliver exactly the functionality it set out deliver. Recognise this and ensure that you do the level of design appropriate to bring the project to life and no more.

When the development starts, the UX design work doesn't stop. This is where the ultimate design work begins - the point at which the two approaches start to meld.

As the developers start to produce work, the UX expert starts to have the richest material he could have - a real system. It is quite amazing how quickly an agile project can produce a working system that you are able to put in front of users, and there's nothing quite like a real system for investigating system design.

It's not that the wireframes are no longer of use. In fact, early on the wireframes remain a vital, and probably the only coherent, view of the system, and these should evolve as the project develops.  As elements in the system get built and more rigidly set, the wireframes are updated to reflect them. As new problems and opportunities are discovered, the wireframes are used to explore them.

This process moves along in parallel to the BA work that's taking place on the project. As the customer team splits and prioritises the work, the UX expert turns their attention to the detail of their immediate problems, hand in hand with the BAs. The design that's produced is then used to explain the proposed solutions to the development team and act as a useful piece of reference material.

At this point the developers will often have strong opinions on the design of the solution, and these should obviously be heard. The advantage the design team now have is that they have a body of research and previous design directions to draw on, and a coherent complete picture against which these ideas (and often criticisms) can be scrutinised.  It's not that the design is complete, or final, it's that a valuable body of work has just been done, which can be drawn upon in order to produce the solution.

As you get towards the end of the project, more and more of the wireframe represents the final product.  At this point functionality can be removed from the wireframe in line with what's expected to be built.  In fact, this is true all the way through the project, it's just that people become more acutely aware of it towards the end.

This is a useful means of testing the minimum viable product. It allows you to check with the customer team how much can be taken away before you have a system that could not be released: a crucial tool in a truly agile project.  If you don't have the wireframes to show people, the description of functionality that's going to be in or out can be open to interpretation - which means it's open to misunderstanding.
It takes work to bring a UX expert into an agile project, and it takes awareness and honesty to ensure that you're not introducing a big-up-front design process that reduces your ability to react.

However, by keeping in mind some core principles - that you need to be able and willing to throw work away, that you should not become wedded to a design early on, that you listen to feedback and react, and that you keep your level of work and techniques fit for the just-in-time problem that you need to solve right now - you can add four huge advantages to your project.

  • A coherent view and design that bind the disparate elements together into a complete system.
  • Expert techniques and knowledge that allow you to discover the right problems to fix with greater accuracy.
  • Design practices and investigative processes that allow you to test potential solutions earlier in the project (i.e. with less cost) than would otherwise be possible, helping ensure you do the right things at the right time.
  • Extremely expressive communication tools that allow you to describe the system you're going to deliver as that understanding changes through the project.

Do it right and you can do all this and still be agile.

Remote workforces and the 12 golden questions

Fri, 2013-08-02 07:53
I had an interesting conversation with a friend the other day about the difficulties in managing a remote team. That is, a team who aren't all located in the same office. Some may be home workers, some may work in different offices. The main crux of the discussion was around how you turn a group of people into a team, garner some emotional connection between them, and to you and your company, and then get the best out of them.

After a few days of gestation and rumination it came to me. The rules are the same as with a local team - you may do different things and the problems may be more difficult to overcome, but the techniques you use are fundamentally the same.

That thinking led me back to Marcus Buckingham's fantastic book "First, Break All the Rules". If you manage people and haven't read this book - shame on you. It is a must-read.

One of the main arguments in the book revolves around a set of questions you should ask of your staff, identified by years of Gallup research as the strongest signifiers of a team that is performing well.

If you get good responses to these questions then you probably have a good team.

Now I'm not going to explain the whys and wherefores of these questions; that has been done far better than I ever could in Marcus's book. Buy it and read it.

What I'd like to do is go over each of the questions and look at what you may need to do as a manager of a remote team in order to ensure that you get positive responses to these questions.
I know what is expected of me at work.
Much like you would with a locally grouped team this is as simple, and as difficult as it sounds: keeping in touch, setting targets and boundaries, being available and honest. All those things that a good manager instinctively does.

The only real difference is that it takes more effort to organise those face-to-face chats.

It starts with honesty at the interview: clearly defining the role that's on offer, what's involved and what's not involved. From there it moves to regular catch ups to get a feel for where they think they are, and for you to feed back where they actually are, then finally to ensuring that rewards and praise are given when the expectations are met and exceeded.  Put in the simplest of terms you're regularly telling them what you expect then reinforcing that with action.

For some people this will feel like constant badgering, and for others you'll never be able to do enough, but I don't think there's anything about remote working that makes this fundamentally different to managing local workers.
I have the materials and equipment I need to do my work right.
Every tool you would normally provide in an office you should expect to provide for a remote worker. OK, maybe not the pen and pad, but you could consider corporate branded versions of both. At least it's a reminder of who they work for!

Every bit of software you would normally provide on a desktop needs to be available in their home office. 

Every document that they may need to access on the move should be available on-line. Workers that are expected to spend most of their time on client sites should have access to software that is appropriate for onsite work from any device that has internet access. Ideally they should have offline versions too - that is, access to versions of their software that work when not connected to the internet, and that will automatically sync when the connection is made available. If you've ever used Gmail, Blogger or Evernote on a disconnected tablet you will know what I mean.

You need to do everything you can to limit the chances that they'll ever be in a situation where they are disconnected from their tools.
At work I have the opportunity to do what I do best every day.
You might hope that this should be easier to achieve with remote workers than it would with a team in a single office. Working on the move or at home gives people a chance to get on and do some work without all those pesky distractions like other people.

However, it's very easy to underestimate the impact remote working has on ease of communication, and in turn, the amount of time it takes to have those communications. If you're not careful, those informal 2 minute chats in the kitchen turn into 1000 word project update documents. You can see how there can be a death of a thousand cuts as layers of bureaucracy are added in order to keep everyone in the loop.

In addition, how can a manager see what a team member is best at when they don't physically witness them doing it? It's not always easy in the office to spot someone's talents (or areas of difficulty for that matter) and guide them towards utilising them. It's an order of magnitude harder when you don't spend that vital face to face time with them every day.

Ironically it can be tempting to have people fill in time-sheets and detailed updates in order to help spot the things that are done quickly and well, that are second nature, but then this simply distracts people from what they do best, and not everyone's talent is writing updates!

There's no simple answer to this. It takes a very special manager who can read their employees from a distance and a special kind of employee who is self aware enough to be honest about their strengths and weaknesses.  It starts with the culture of the management team and their all pervasive attitude towards spotting strengths.  They need to make sure that the workforce is constantly aware that this is the approach the management team is taking and that gives employees a strong incentive to be honest.

Part of that is then listening to your staff when they describe areas of difficulty. Sometimes this may highlight personal areas where the talents are lacking; in others it may be that the processes are getting in the way of providing real value. In either case you need to clearly assess the situation and act decisively and positively when needed.

It's vital that everybody is very clear about what they, and their team, do best and that people are allowed to focus on that as much as possible.
In the last 7 days I have received recognition or praise for doing good work.
This one should be simple.  All you have to do is follow the same rules that you normally would in the office: praise publicly or privately depending on the person you're dealing with.

Praise successes at the monthly get together, on the intranet, via mail, a conference call or a chat on the phone whichever is appropriate for the person and level of success.  However, whenever, just don't forget to do it.

Of course, you have to be much more diligent about this since the people you're praising aren't in front of you all the time. It's harder to spot their frustration and disenchantment when they're not getting the praise they feel they deserve - you can't see their face and their minute by minute attitude. For this reason I'd suggest that it's probably better to err on the side of too much praise than too little, and maybe even have a reminder in your calendar that pops up every couple of days so you don't forget.
My supervisor, or someone at work, seems to care about me as a person.
The main thing is honesty, and if you can fake that you've got it made...

In all seriousness though, you do actually need to care.  In order to care you need to connect with people. 

You'll spot a repeating theme here, and at the risk of sounding like a broken record, you can only connect with people if you communicate with them, and with a remote workforce that takes a lot of effort.

Whilst this point isn't just about the tough times, if you find someone's having a hard time then you need to break that remoteness, get yourself into their locale and meet up on neutral territory. Show that you care enough about them as a person that you'll take the time to go see them in their local café. Show that it's not all one way, that you'll make the effort.

It's about making sure that your team know that it's not all about the work they need to do today, but it's about them as a human-being having a valued place in a team that supports each other.

For some people it will be inappropriate to cross into the personal life; maybe they like working in a remote team precisely because it's remote. However, it can still be valuable for those people to know that you understand and respect that, rather than simply don't care about them.

Even people who don't want regular catch ups want to be reminded that you know that and you're trying your best to act in line with their desires.

You have to be extremely careful about crossing people's personal boundaries and invading into their personal space.  Be honest with yourself about that, and recognise that not everyone wants their boss to be their best friend and that for most people it would be extremely distressing if you turned up on their doorstep unannounced!
There is someone at work who encourages my development.
When you're working remotely it can sometimes seem like you have nothing other than unrealistic demands, one after the other, from a manager who then veers wildly into forgetting you exist. This is what you need to try to overcome.

There needs to be a tough combination of slack in the schedule, freedom to explore and encouragement to follow new paths.

If your team have no time to do anything other than the day's work then they have no opportunity to develop.
If they have plenty of time, but no contact then they'll feel you don't care about their development.

You need to bring conversations on development to the front and ensure that they're had out loud.
Ensure that you have a process in place to discuss the direction your staff want to move in and ensure that they have the support they need in order to take those steps. This may involve decent expenditure on training and on in-house resources and applications, or it may be as simple as just letting your staff have time to explore. It certainly includes letting them fail from time to time and not being judgemental about the outcome.

Not all this can be done remotely. It's tough to feel the support of someone that is not physically present, and  as with so many of these points you need to acknowledge that you're going to travel. You absolutely need some face to face time.

It may be that you need to put a central training team together and fly, train or bus people in to get their training.

You should!

It may end up being more expensive than it would have been to have a co-located office and training team, but that's the decision you took when you decided to employ a remote team.

Good quality learning and development software can help, as can access to third-party on-line training catalogues and I imagine that there is a greater return on investment on these tools than there would be in a local office.  However, making courses available to people is not the same as encouraging and supporting them in their development.

Consider mentoring programmes and ensure that you pay the expenses to get people together with their mentors. Don't just assume that the mentors know what they're doing; put a mentoring team together so that they can support each other, and ensure that you have a training budget for teaching people how to be a mentor. Don't forget, being a mentor can be a great way to develop the mentor!

If you want your team to think you're serious about their development, you need to get serious about their development.
At work my opinions seem to count.
I'd suggest that in order for a new team member to feel their opinions matter they first need to feel that their co-workers' opinions matter.

From that you can then gestate the idea that they are allowed to have opinions, leading to you following through on some of their thoughts and ideas so that they feel their ideas matter.

Simple eh?

At the core of it, as always is the need to communicate. Not just back to the team member with the big idea, or serious concern, but with the whole team.

Regularly asking for feedback and opinions and then acting upon them. Becoming known as the manager that doesn't always assume that they know better.

Technology can help with this.  Open forums with no moderation (unless it's absolutely necessary). Having everyone involved in it, from the CEO to the intern, and a culture of respect around the postings that means every question or idea is addressed with care and thought.

That's not to say that every post is publicly stated as the best idea or most insightful question there has ever been, but that common courtesy and time is given in the response.  Most sane people have no problem being told they're wrong as long as it is clear and respectful and comes with an invitation for more.

There is also the HR angle: that people need to be able to state when they think a co-worker is not up to scratch, behaving inappropriately or suchlike.

Accessibility, openness and a visible commitment to acting on information is the only way to get this feeling fostered.  And guess what, it comes back again to two way communication.
The mission / purpose of my company make me feel my job is important.
OK, so it can seem that there's very little you can do about this, either your company resonates with your employees or it doesn't. The reality is that you can affect this quite significantly.

It's all too easy to recruit without your company's values in mind. And when I say values, I don't mean those in your company brochure, I mean those true values that actually drive the business.

An estate agency is never going to be driven by anything other than selling or letting houses, and that's the way it should be. There are different ways in which a company may approach that, but the core value is that selling houses is a good thing, and that you'll make money out of it.

Put simply, if you're an estate agent and you hire someone who thinks that a buoyant housing market, the need for a 'property ladder', low interest rates, and easy access to credit is a bad thing then you've hired someone who will never feel their job is important.

Consider that in your recruitment process.

I'm not saying that you can't, or shouldn't, have a business with a mix of opinions, merely that you should honestly recognise the limitations of internal corporate marketing.

Having said that, you do need to market the business internally. You still need to remind people why they are here, and why the company is doing what it's doing. If you don't define the culture of the business then individuals will impose a culture upon it and it may not be the one you want. An outgoing but negative employee can very easily, and often quite unintentionally impose a negative culture on the whole of a department.

As with so many of these topics, communication is the key, more so with a remote workforce than at any other time.

Let the team know what the company feels is important, and make sure you don't stray too far from the credible truth or your employees will start to think you stand for lies.
My co-workers are committed to doing quality work.
There are three significant risks with a remote workforce that can put this into jeopardy.

First - it can be difficult to spot when you have a member of the team that's not committed to quality work.

Second - it can be difficult to spot someone who thinks their team-mates are not committed to quality work.

Third - it can be difficult to ensure that everyone knows what quality work their team-mates are doing.

With many of the other points the focus is on communication in order to feedback on progress both up and down the chain of command. This is much more focused on the sideways communication.

At the simplest level this is about regular cross team updates where you ensure that everyone knows what's going on in the whole team, particularly highlighting points of note.  This directly addresses the third risk, but doesn't deal with the other two.

You need to follow it up by fostering an environment where feedback on peers is taken seriously.  You need to ensure that your team feel comfortable asking about their team mates' progress, or pointing out areas of concern or difficulty.

This involves giving an honest and clear response.

If you feel the comments are unjustified you need to be able to clearly state why, but still then ensure you take the comments on board and react to them. Recognise that they may know more about the situation than you do.  You need to give that dual impression - you value feedback, and that you value your staff - you'll hear criticism and concern and act to rectify issues, but you'll defend and protect when it is unjustified.
I have a good friend at work.
Obviously a collection of remote workers have far fewer chances to socialise than those working together in an office. They'll never just decide to go to the pub on a Wednesday evening, and never naturally form those odd cross-department smoking cliques, nor football ones either - all simply because they're not at the office. This means they are far less likely to make the kinds of personal connections that they would otherwise.

The problem and potential solutions are fairly clear but easy to overlook.

You absolutely have to have a higher than usual entertainments budget. You have to meet up at least every month in order for those face to face relationships to blossom. But it's more than that. You have to foster an environment where building remote relationships is also the norm. You have to provide virtual replacements for the Wednesday evening pub and smoker's corner.

For example, your management team must have a relaxed attitude when communicating via mail. It has to be clear that the email system is more than just a business tool, that it can be a social one too. You have to make an effort to build an environment in which social networks will blossom.

Consider tools like Yammer (a corporate social networking site) and then push the management to actually use them, for a combination of business and social reasons.

Provide the mechanism to allow for the hosting of virtual book clubs, badminton ladders and a Modern Warfare 3 clan.

Recognise the kinds of people you have employed and ensure that they have a means of accessing people at work who are like minded and then make it feel normal that they will reach out and find each other.

In whatever offices you do have, don't be afraid to add a big chill-out area and kitchen so that when people are in the office they get that reinforcement: "this is a company where we actively encourage you to be friends".
In the last six months someone at work has talked to me about my progress.
There is no reason why this should be difficult. Organise regular meetings, on-line or otherwise, to discuss progress. Have a solid process in place that can flex for individual needs.  All the things you would normally do.  Every six months is a bare minimum, every two is OK, once a month is ideal - as a general rule.

I could labour the point, but I think most of what needs to be said has been said already!
This last year I have had opportunities at work to learn and grow.
It can be very tempting to feel that your home workers are sitting at home happy in the knowledge that they're doing a good job and have a great work home life balance. Maybe that's true. Maybe all they want is to get their job done and then play in the garden with their kids.


However, just because they're remote doesn't mean they're not ambitious.  I don't think there's any reason why a home worker will be any less likely than an office one to want to progress, either in their career, or personally.

Also, not every remote worker is a home worker.

Those team members that are sitting at a desk 50 miles away, out of sight, are more able to look for opportunities outside of your company than someone that's sat 5 metres away.  Take their progress as seriously as you would any other staff member's.

Catch up regularly to learn about their goals and then do what you can to help them reach the realistic ones, learn about their career concerns and do what you can to help them overcome them, or to placate them.
Tailor your roles to suit the talents and desires of your team members and make sure you give the ones who need, deserve and are up to it the opportunity to stretch themselves in new directions.

If you don't give your team members the encouragement and opportunity to develop then they'll find the opportunities through a new role in a new company, and just like your local workers, you'll have no idea it's going to happen until it's too late.

So, do more than you think you need to!
Good management is good management, regardless of how local or remote the team is, and good management takes effort.

The truth of the matter is that with a remote workforce that effort is increased. You need to be more astute, more available and more willing to put the effort in than if your team is sat next to you. You lose so many of the visual and social clues that a good manager uses every day to gauge the health of their team, so you need to compensate in many other areas. You also have to acknowledge that you're not likely to be as effective; it simply isn't possible.

You need to get imaginative about how you remain in contact, how you foster a team spirit and an emotional connection. Technology plays a part, of course it does. Good collaboration tools with social media aspects make it possible to create social groups within your company and allow those people to seek out like minded individuals in a way that simply wasn't possible, or necessary, 10 years ago. However, the technology isn't a panacea. You still need to create an environment in which people actually want to connect. Without the right cultural context, you'll simply have a dead application.

Still, the rules are simple and the techniques familiar. There's nothing fundamentally different about managing a remote team; you're still dealing with people, after all.

If you honestly care about your role as a manager, feel the need to create a team that performs, and are willing and able to put the time in, then you probably won't go far wrong.

Measuring the time left

Sun, 2013-06-09 08:30
Burn-down (and burn-up, for that matter) charts are great for those that are inclined to read them, but some people don't want to have to interpret a pretty graph, they just want a simple answer to the question "How much will it cost?"

That is, if, like me, you work in what might be termed a semi-agile*1 arena, then you also need some hard and fast numbers. What I am going to talk about is a method for working out the development time left on a project that I find to be pretty accurate. I'm sure that there are areas that can be finessed, but this is a simple calculation that we perform every few days that gives us a good idea of where we are.
The basis.
It starts with certain assumptions:
You are using stories.
OK, so they don't actually have to be called stories, but you need to have split the planned functionality into small chunks of manageable and reasonably like-sized work.
Having done that you need to have a practice of working on each chunk until it's finished before moving on to the next, and have a customer team test and accept or sign off that work soon after the developers have built it.
You need that so that you uncover your bugs, or unknown work as early as possible, so you can account for them in your numbers.
Your customer team is used to writing stories of the same size.
When your customer team add stories to the mix you can be confident that you won't always have to split them into smaller stories before you estimate and start working on them.
This is so you can use some simple rules for guessing the size of the work that your customer team has added but your developers have not yet estimated.
You estimate using a numeric value.
It doesn't matter if you use days work, story points or function points, as long as it is expressed as a number, and that something estimated to take 2 of your unit is expected to take the same as 2 things estimated at 1.
If you don't have this then you can't do any simple mathematics on the numbers you have and it'll make your life much harder.
Your developers quickly estimate the bulk of the work before anything is started.
This is not to say that the whole project has a Gandalf-like startup: "Until there is a detailed estimate, YOU SHALL NOT PASS"; rather that you T-shirt cost, or similar, most of your stories so that you have some idea of the overall cost of the work you're planning.
You need this early in the project so that you have a reasonable amount of data to work with.
Your developers produce consistent estimates.
Not that your developers produce accurate estimates, but that they tend to be consistent; if one story is underestimated, then the next one is likely to be.
This tends to be the case if the same group of developers estimate all the stories and they all involve making changes to the same system. If a project involves multiple teams or systems then you may want to split them into sub-projects for the means of this calculation.
You keep track of time spent on your project.
Seriously, you do this, right?
It doesn't need to be a detailed analysis of what time is spent doing what, but a simple total of how much time has been spent by the developers, split between the time spent on stories and that on fixing defects.
If you don't do this, even on the most agile of projects, then your bosses and customer team don't have the real data that they need to make the right decisions.
You, and they, are walking a fine line to negligence.

If you have all these bits then you've got something that you can work with...
The calculation.
The calculation is simple, and based on the following premises:

  • If your previous estimates were out, they will continue to be out by the same amount for the whole of the project.
  • The level of defects created by the developers and found by the customer team will remain constant through the whole project.
  • Defects need to be accounted for in the time remaining.
  • Un-estimated stories will be of a similar size to previously completed work. 
The initial variables:

totalTimeSpent = The total time spent on all development work (including defects).

totalTimeSpentOnDefects = The total time spent by developers investigating and fixing defects.

numberOfStoriesCompleted = The count of the number of stories that the development team have completed and released to the customer.

storiesCompletedEstimate = The sum of the original estimates against the stories that have been completed and released to the customer.

totalEstimatedWork = The sum of the developers' estimates against stories and defects that are yet to do.

numberOfUnEstimatedStories = The count of the number of stories that have been raised by the customer but not yet estimated by the development team.

numberOfUnEstimatedDefects = The count of the number of defects that have been found by the customer but not yet estimated by the development team.
Using these we can work out:
Time remaining on work that has been estimated by the development team.
For this we use a simple calculation based on the previous accuracy of the estimates.
This includes taking into account the defects that will be found, and will need to be fixed, against the new functionality that will be built.

estimateAccuracy = totalTimeSpent / storiesCompletedEstimate

predictedTimeRemainingOnEstimatedWork = ( totalEstimatedWork * estimateAccuracy )
Time remaining on work that has not been estimated by the development team.
In order to calculate this, we rely on the assumption that the customer team have got used to writing stories of about the same size every time.
You may need to get a couple of developers to help with this by splitting things up with the customer team as they are creating them. I'd be wary of getting them to estimate work though.

averageStoryCost = totalTimeSpent / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedStories = numberOfUnEstimatedStories * averageStoryCost

averageDefectCost = totalTimeSpentOnDefects / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedDefects = numberOfUnEstimatedDefects * averageDefectCost 
Total predicted time remaining
The remaining calculation is then simple: it's the sum of the above parts.
We've assessed the accuracy of previous estimates, put in an allocation for bugs not yet found, and assigned a best-guess estimate to the things the development team haven't yet put their own estimate against.

totalPredictedTimeRemaining = predictedTimeRemainingOnEstimatedWork + predictedTimeRemainingOnUnEstimatedStories + predictedTimeRemainingOnUnEstimatedDefects 
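Pulled together, the whole calculation is small enough to sketch in a few lines of Python. The function and variable names below simply mirror the ones above, and the figures in the usage example are invented purely for illustration:

```python
# A minimal sketch of the "time remaining" calculation described above.
def predicted_time_remaining(
    total_time_spent,               # all dev time so far, stories + defects
    total_time_spent_on_defects,    # dev time spent fixing defects
    number_of_stories_completed,    # stories built and released
    stories_completed_estimate,     # sum of original estimates for those stories
    total_estimated_work,           # estimates on stories/defects still to do
    number_of_unestimated_stories,  # customer stories with no dev estimate yet
    number_of_unestimated_defects,  # customer defects with no dev estimate yet
):
    # If past estimates were out by some factor, assume future ones will be too.
    estimate_accuracy = total_time_spent / stories_completed_estimate
    estimated_work = total_estimated_work * estimate_accuracy

    # Un-estimated work is assumed to match the average of what's been done.
    average_story_cost = total_time_spent / number_of_stories_completed
    average_defect_cost = total_time_spent_on_defects / number_of_stories_completed

    unestimated_stories = number_of_unestimated_stories * average_story_cost
    unestimated_defects = number_of_unestimated_defects * average_defect_cost

    return estimated_work + unestimated_stories + unestimated_defects

# Example: 120 days spent (20 of them on defects) delivering 30 stories that
# were originally estimated at 100 days; 50 days of estimated work remains,
# plus 6 un-estimated stories and 3 un-estimated defects.
remaining = predicted_time_remaining(120, 20, 30, 100, 50, 6, 3)
print(round(remaining, 1))  # prints 86.0
```

Run every few days with fresh numbers, this gives the single figure that the burn-down-averse are asking for.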
The limitations
I find this calculation works well, as long as you understand its limitations.
I hope to present some data in this blog very soon, as we already have some empirical evidence that it works.
Admittedly, for the first 20% or so of the project the numbers coming out of this will fluctuate quite a bit. This is because there isn't enough 'yesterday's weather' data to make the estimate accuracy calculation meaningful. The odd unexpectedly easy (or hard) story can have a big effect on the numbers.
Also, if your testing and accepting of stories lags far behind your development, or if you don't fix your bugs first, you will underestimate the number of bugs in the system. However, if you know these things you can react to them as you go along.
Further work
I am not particularly inclined to make changes to this calculation, as the assumptions and limitations are perfectly appropriate for the teams that I work with. For other teams this may not be the case, so I'll suggest some slight alterations that might work for you.
Estimating number of defects not yet found.
It seems reasonable for you to look at the average number of defects raised per story accepted and use this to work out the number of defects that have not yet been found.  These could then be included in your calculation based on the average cost of defects that you've already fixed.
This might be a good idea if you have a high level of defects being raised in your team. I'd define 'high' as anything over about 20% of your time being spent fixing defects.
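That extension can be sketched in the same style as the main calculation. This is only an illustration of the idea (the names are hypothetical, not part of any established method):

```python
def predicted_undiscovered_defect_cost(defects_raised,
                                       stories_accepted,
                                       stories_remaining,
                                       avg_defect_cost):
    """Predict the cost of defects not yet found, assuming defects
    continue to be raised at the historical per-story rate."""
    defects_per_story = defects_raised / stories_accepted
    predicted_defects = defects_per_story * stories_remaining
    return predicted_defects * avg_defect_cost
```

The result would simply be added to the total predicted time remaining from the main calculation.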
Using the estimate accuracy of previous projects at the start of the new.
As I pointed out earlier, a limitation of this method is that you have limited information at the start of the project, and so you can't rely on the numbers being generated for some time.  A way of mitigating this is to assume that this project will go much like the previous one.
You can then use the estimate accuracy (and defect rate, if you calculated one) from your previous project in order to mitigate the lack of information in this one.
If you're using the same development team and changing the same (or fundamentally similar) applications, then this seems entirely appropriate.
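One way to act on that is to blend last project's accuracy with this project's, trusting the current figure more as evidence accumulates. A minimal sketch, under the assumption that a fixed amount of completed estimated work (the `confidence_threshold`, a number I've picked purely for illustration) is "enough" to trust the new data fully:

```python
def blended_estimate_accuracy(prev_accuracy,
                              current_time_spent,
                              current_estimate,
                              confidence_threshold=10):
    """Fall back to the previous project's accuracy until this
    project has enough completed work to speak for itself."""
    if current_estimate == 0:
        # No completed estimated work yet: use last project's figure.
        return prev_accuracy
    current_accuracy = current_time_spent / current_estimate
    # Weight the current figure by how much evidence backs it up.
    weight = min(current_estimate / confidence_threshold, 1.0)
    return weight * current_accuracy + (1 - weight) * prev_accuracy
```

Early in the project the returned accuracy is dominated by the previous project's figure; once the threshold of completed work is passed, the previous project drops out of the calculation entirely.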

*1 Semi-agile: I'd define this as where the development of software is performed in a full agile manner, but the senior decision makers still rely on business case documentation, project managers and meeting once a month for updates.

Pleasing line

Mon, 2010-05-17 02:47
Gotta admit, I'm quite pleased with this line from my new ORM object based database connection library...

$oFilter = Filter::attribute('player_id')->isEqualTo('1')->andAttribute('fixture_id')->isEqualTo('2');
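For readers who don't write PHP, the same fluent-interface idea can be sketched in Python. The class and method names below just mirror the PHP line; they're illustrative, not the actual library's API:

```python
class Filter:
    """Minimal fluent filter builder in the style of the line above."""

    def __init__(self):
        self._clauses = []   # completed (attribute, operator, value) triples
        self._pending = None  # attribute awaiting its comparison

    @classmethod
    def attribute(cls, name):
        # Entry point: start a new filter on the given attribute.
        f = cls()
        f._pending = name
        return f

    def isEqualTo(self, value):
        # Complete the pending clause and return self for chaining.
        self._clauses.append((self._pending, '=', value))
        self._pending = None
        return self

    def andAttribute(self, name):
        # Start another clause, ANDed with the previous ones.
        self._pending = name
        return self

    def toSql(self):
        return ' AND '.join(f"{a} {op} '{v}'" for a, op, v in self._clauses)


oFilter = (Filter.attribute('player_id').isEqualTo('1')
                 .andAttribute('fixture_id').isEqualTo('2'))
```

The appeal is the same in either language: each method returns the builder itself, so the chain of calls reads like the condition it describes.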

The Happiness Meter

Mon, 2008-06-23 04:01
As part of any iteration review / planning meeting there should be a section where everybody involved talks about how they felt the last iteration went, what they thought stood in the way, what they thought went particularly well, and suchlike.

We find that as the project goes on, and the team gets more and more used to each other, this tends to pretty much always dissolve into everyone going "alright I suppose", "yeah fine".

Obviously, this isn't ideal and will tend to mean that you only uncover problems in the project when they've got pretty serious and nerves are pretty frayed.

This is where "The Happiness Meter" comes in.

Instead of asking the team if they think things are going OK and having most people respond non-committally, ask people to put a value against how happy they are with the last iteration's progress. Any range of values is fine, just as long as it has enough levels in it to track subtle movements. I'd go with 1-10.

You don't need strict definitions for each level, it's enough to say '1 is completely unacceptable, 5 is kinda OK, 10 is absolute perfection'.

At some point in the meeting, everyone in the team declares their level of happiness. When I say everyone, I mean everyone: developers, customers, XP coaches, infrastructure guys, project managers, technical authors, absolutely everyone who is valuable enough to have at the iteration review meeting should get a say.

In order to ensure that everyone gets to give their own view, each person writes down their number and everyone presents it at the same time. The numbers are then recorded and a graph is drawn.

From the graph we should be able to see:
  1. The overall level of happiness at the progress of the project.

  2. If there are any splits / factions in the interpretation of the progress.

If the level of happiness is low, this should be investigated; if there are any splits, this should be investigated; and just as importantly - if there are any highs, this should be investigated. It's good to know why things go well so you can duplicate it over the next iteration.

Factions tend to indicate that one part of the team has more power than the rest and the project is skewed into their interests rather than those of the team as a whole.
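The two things to read off the graph — overall happiness and splits — amount to the mean and the spread of the scores. A minimal sketch of that summary, with thresholds that are purely illustrative, not prescriptive:

```python
from statistics import mean, stdev


def summarise_happiness(scores):
    """Summarise a round of 1-10 happiness scores.

    A low mean suggests general unhappiness; a high spread suggests
    the team may be split into factions.
    """
    avg = mean(scores)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    flags = []
    if avg < 5:
        flags.append('low overall happiness - investigate')
    if spread > 2:
        flags.append('possible split in the team - investigate')
    return avg, spread, flags
```

For example, scores of 2, 3, 9 and 8 average out to a reasonable-looking 5.5, but the spread flags that half the room is unhappy — exactly the faction pattern the graph is there to expose.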

You may want to split the graph into different teams (customer / developer) if you felt that was important, but I like to think of us all as one team on the same side...

All said and done, the graph isn't the important bit - the discussion that comes after the ballot is the crucial aspect. This should be a mechanism for getting people to talk openly about the progress of the project.

UPDATE: Someone at work suggested a new name that I thought I should share: The Happy-O-Meter.

Ideas for improving innovation and creativity in an IS department

Sat, 2008-06-21 04:49
At our work we've set up a few 'action teams' to try to improve particular aspects of our working environment.

The team that I'm a member of is responsible for 'Innovation and Creativity'.

We're tasked with answering the question "How do we improve innovation and creativity in IS?" - How we can foster an environment that encourages innovation rather than stifles it.

As a bit of background, the company is a medium-sized one (2,500-plus employees) based mainly in the UK, but recently spreading through the world; the vast majority of its employees are not IS based. The IS department is about 100 strong and includes a development team of 25 people. It's an SME at the point where it's starting to break into the big time, and it recognises that it needs to refine its working practices a little in order to keep up with the pace of expansion.

We met early last week and have put together a proposal to be taken to the senior management tier. I get a feeling it will be implemented since our team included the IS Director (you don't get any more senior in our department), but you never know what'll happen.

I figured it might be interesting to record my understanding of the plan as it stands now, and then take another look in 6 months time to see what's happened to it...

We decided that in order to have an environment that fosters creativity and innovation you need:


Time for ideas to form, for you to explore them, and then to put them into practice.


Outside influences that can help to spark those ideas off - this may be from outside the organisation, or through cross-pollination within it.


The conviction to try things, to allow them to fail or succeed on their own merit - both on the part of the individual and the organisation as a whole.

Natural Selection:

The need to recognise success when it happens, to take it into the normal operation of the business and make it work in practice. Also, the need to recognise failure when it happens, and stop that from going into (or continuing to exist within) the team.


When we have a good idea, the people involved need to be celebrated. When we have a bad idea, the people involved DO NOT need to be ridiculed.


The initial ideas aren't always the ones that are successful, it's the 4th, 5th or 125th refinement of that idea that forms the breakthrough. We need to understand what we've tried, and recognise how and why each idea has failed or succeeded so we can learn from that.

We put together some concrete ideas on how we're going to help put these in place - and bear in mind that this isn't just for the development team, this is for the whole of the IS department - development, project management, infrastructure, operations, service-desk, even the technology procurement...


A position will be set up that is responsible for defining / tracking a curriculum for each job role in the department.

Obviously this will be fed by those people that currently fulfil the roles, and will involve things ranging from ensuring the process documentation is up to scratch, through specifying reading lists (and organising the purchasing of the books for the staff) and suggesting / collecting / booking conferences, training courses and the like that might be of use.

This takes the burden of responsibility away from the staff and managers - all you need is the idea and someone else will organise it and ensure it's on the curriculum for everyone else to follow up.

IdeaSpace (TM ;-) ):

A forum for the discussion of ideas, and collection of any documentation produced on those ideas and their investigation. This will (hopefully) form a library of past investigations as well as a stimulus for future ones. Everyone in the department will be subscribed to it.

Lab days:

Every employee is entitled to 2 days a month outside of their normal job to explore some idea they might have. That time can be sandbagged to a point, although you can't take more than 4 days in one stint. Managers have to approve the time in the lab (so that it can be planned into existing projects) and can defer the time to some extent, but if requests are forthcoming they have to allow at least 5 days each rolling quarter so that the time can't be deferred indefinitely.

Whilst the exact format of the lab is yet to be decided, we're aiming to provide space away from the normal desks so that there is a clear separation between the day job and lab time. People will be encouraged to take time in the lab as a team as well as individually. Also, if we go into the lab for 3 days to find that an idea doesn't work, that idea should still be documented and the lab time regarded as a success (we learnt something).
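The lab-day entitlement is really a small bookkeeping rule: time accrues monthly, can be banked, but no single stint may exceed four days. A hypothetical sketch of that tracking (the class and constants are mine, not part of any real policy tooling):

```python
class LabTime:
    """Illustrative tracker for the lab-day rules described above:
    2 days accrue per month, and no more than 4 days may be taken
    in a single stint."""

    ACCRUAL_PER_MONTH = 2
    MAX_STINT = 4

    def __init__(self):
        self.banked = 0

    def accrue_month(self):
        self.banked += self.ACCRUAL_PER_MONTH

    def request_stint(self, days):
        if days > self.MAX_STINT:
            raise ValueError('no more than 4 days in one stint')
        if days > self.banked:
            raise ValueError('not enough banked lab time')
        self.banked -= days
        return days
```

After two months an employee has four days banked and can take them all in one go, but a five-day stint would be refused however much time they had saved up.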

Dragon's Den:

Gotta admit, I'm not sure about some of the connotations of this - but the basic idea is sound. Coming out of time in the Lab should be a discussion with peers about the conclusion of the investigation in a Dragon's Den format. This allows the wider community to discuss the suitability of the idea for future investigations, or even immediate applicability. One output of this meeting may be the formalisation of conclusions in the IdeaSpace.

Press Releases:

The company is already pretty good at this, but when something changes for the better we will ensure that we celebrate those changes and, even for a day, put some people up on pedestals.

None of the above should be seen as a replacement for just trying things in our day to day job - but the idea is that these things should help stress to the department that change and progress are important aspects of what we do, and that we value it enough to provide a structure in which big ideas can be allowed to gestate. Cross-pollination and communication should just form part of our normal day job anyway, and we should ensure that our project teams are cohesive and communicate freely amongst and between themselves.

Also, an important factor in the success of the above has to be the format of the Dragon's Den - if it is in any way imposing or nerve-racking then the idea is doomed to failure. As soon as people feel under pressure to justify themselves then the freedom disappears.

I'm quite excited by the prospect of putting these ideas into practice, and I wonder exactly where we'll end up.

I'll keep you all posted.

Things I believe in

Sat, 2008-03-29 07:35
  • It's easier to re-build a system from its tests than to re-build the tests from their system.

  • You can measure code complexity, adherence to standards and test coverage; you can't measure quality of design.

  • Formal and flexible are not mutually exclusive.

  • The tests should pass, first time, every time (unless you're changing them or the code).

  • Flexing your Right BICEP is a sure-fire way to quality tests.

  • Test code is production code and it deserves the same level of care.

  • Prototypes should always be thrown away.

  • Documentation is good, self documenting code is better, code that doesn't need documentation is best.

  • If you're getting bogged down in the process then the process is wrong.

  • Agility without structure is just hacking.

  • Pairing allows good practices to spread.

  • Pairing allows bad practices to spread.

  • Cycling the pairs every day is hard work.

  • Team leaders should be inside the team, not outside it.

  • Project Managers are there to facilitate the practice of developing software, not to control it.

  • Your customers are not idiots; they always know their business far better than you ever will.

  • A long list of referrals for a piece of software does not increase the chances of it being right for you, and shouldn't be considered when evaluating it.

  • You can't solve a problem until you know what the problem is. You can't answer a question until the question's been asked.

  • Software development is not complex by accident, it's complex by essence.

  • Always is never right, and never is always wrong.

  • Interesting is not the same as useful.

  • Clever is not the same as right.

  • The simplest thing that will work is not always the same as the easiest thing that will work.

  • It's easier to make readable code correct than it is to make clever code readable.

  • If you can't read your tests, then you can't read your documentation.

  • There's no better specification document than the customer's voice.

  • You can't make your brain bigger, so make your code simpler.

  • Sometimes multiple exit points are OK. The same is not true of multiple entry points.

  • Collective responsibility means that everyone involved is individually responsible for everything.

  • Sometimes it's complex because it needs to be; but you should never be afraid to check.

  • If every time you step forward you get shot down you're fighting for the wrong army.

  • If you're always learning you're never bored.

  • There are no such things as "Best Practices". Every practice can be improved upon.

  • Nothing is exempt from testing. Not even database upgrades.

  • It's not enough to collect data, you need to analyse, understand and act upon that data once you have it.

  • A long code freeze means a broken process.

  • A test hasn't passed until it has failed.

  • If you give someone a job, you can't guarantee they'll do it well; if you give someone two jobs you can guarantee they'll do both badly.

  • Every meeting should start with a statement on its purpose and context, even if everyone in the meeting already knows.

A reading list for our developers

Tue, 2008-03-25 12:54
An idea I'm thinking of trying to get implemented at our place is a required reading list for all our developers. A collection of books that will improve the way that developers think about their code, and the ways in which they solve problems. The company would buy the books as gifts to the employees, maybe one or two every three months.

Some questions though:

  • Is it fair for a company to expect its employees to read educational material out of hours?

  • Is it fair for an employee to expect to be moved forward in their career without a little bit of personal development outside the office?

If anyone has any books out there that they'd recommend - please let me know. Otherwise, here's my initial ideas - the first three would be in your welcome pack:

Update: Gary Myers came up with a good point: any book should really be readable on public transport. That probably rules out Code Complete (although I read it on the tube, I can see that it's a little tricky), but Design Patterns and Refactoring to Patterns are small enough, I reckon.

Unfortunately, Code Complete is a really good book that gives a lot of great, simple, valuable advice. Does anyone out there have any other suggestions for similar books?

Update 2: Andy Beacock reminded me of Fowler's Refactoring, which really should also make the list.

Update 3: The development team have bought into the idea and the boss has been asked. In fact, I'm pretty pleased with the enthusiasm shown by the team for the idea. I can't see the boss turning it down. Interestingly though, someone suggested that Code Complete go onto the list...

In this order:

Ruled out because of their size:

Database Build Script "Greatest Hits"

Tue, 2007-09-04 09:54
I know it's been a quiet time on this blog for a while now, but I've noticed that I'm still getting visitors looking up old blog posts. It's especially true of the posts that relate to "The Patch Runner". Many of them come through a link from Wilfred van der Deijl, mainly his great post "Version control of Database Objects".

The patch runner is my grand idea for a version controlled database build script that you can use to give your developers sandbox databases to play with, as well as ensuring that your live database upgrades work first time, every time.

It's all still working perfectly here, and people still seem to be interested, so with that in mind I've decided to collate the posts a little: basically, provide an index of all the posts I've made over the years that directly relate to database build scripts, sandboxes and version control. So, Rob's database build script 'Greatest Hits':

All of the posts describe processes and patch runners that are very similar to those that I use in my work every day. I started playing with these theories over 3 years ago now and there is no way I'd go back to implementing database upgrades the way I did before. However, I'd LOVE to hear ideas on how things can be improved. I'd be amazed if my three year old thinking was still up to date!


Mon, 2007-08-20 10:52
And you think software patents are bad...

China Regulates Buddhist Reincarnation