Contribution Courts

A participatory approach to understanding contribution to results.

What is this all about?

‘Contribution Analysis’ is unlikely to be a concept that makes you sit up in your seat with excitement.

Instead, it may feel like a dark art performed over there somewhere by someone sitting in a tower, abstracted away from your work. Something hard to understand that happens at your programme rather than with it.

Well, that perception is entirely valid and justified, as historically this is often what has happened. Someone considered very nerdy sits alone in a room with documents, a bunch of numbers, and some interview transcripts, and figures out contribution to a result (yours or otherwise). They then come to some kind of decision about your programme and inform you of their judgement. As such, Contribution Analysis has become unnecessarily shrouded in jargon, and yet another aspect of Monitoring, Evaluation, and Learning (MEL) that feels inaccessible and sometimes threatening.

This is genuinely a shame, as development interventions in this era tend to reside in the domain of the complex: the end result isn’t entirely clear, and the pathway to it is uncertain. You have to learn as you go, change strategy, and contend with multiple unknown unknowns. There will be feedback loops and unpredictable environmental factors, and the same actions cannot be repeated to get the same result. Further to this, we are now operating in a global context: each programme is trying to inject expertise and change into what are often crowded spaces. The combination of these means it is crucial to understand whether and how your work contributed to an observed success or failure. It is important not just in feeding the LogFrame or Results Framework beast, but also for learning.

Understanding how your intervention contributed to something is also important for programmatic learning: we can see what was successful and why, so that we can replicate it elsewhere. It is equally important in times of failure, as it means we can understand how something didn’t contribute and why, so that we can try something different next time (or re-target efforts if a more successful programme is operating where you are). If you are collaborating with another actor, it helps you understand why that collaboration was important, and to what degree your work was pivotal to success. Understanding the nuances of what connects what you’re doing to an observed success is therefore not a punitive or success-scoring thing: it’s a learning thing.[1]

To get away from the jargon and tangle, and to do this in a way that is participatory, I have been developing and using Contribution Courts.


[1] For more details on complexity please see my complexity section.

Jargon plagues the evaluation world, and is detrimental to our ability to connect with teams.

So convincing a representation of evaluators is this image that, when I was building a training with it on my screen, a colleague thought I was on a video call with an evaluator. Honest truth.

What are Contribution Courts?

Contribution Courts are a participatory and interactive approach to understanding the degree to which a programme contributed to an observed success. This allows us to move away from traditional methods, which tend to be more linear and less engaging.

Contribution Courts are an attempt at exploring how other professions approach evidence and validation. At these ‘courts’, programme teams (in this case a member of the delivery team, a team leader, or similar) present a Story of Change or result to a ‘jury’, making their case for how the programme contributed to a particular result, supported by evidence.[1] A monitoring and evaluation (M&E) adviser acts as a ‘prosecutor’, gently questioning contribution and challenging the programme team to further support their claim.[2] The ‘jury’ observe, and give a verdict on the degree of contribution of which they have been persuaded.


[1] For example: let’s say you are delivering a programme funded by the FCDO in the climate resilience space. The ‘jury’ could be comprised of the FCDO SRO, a programme representative or two, and a critical friend/Troll facilitator who is familiar with the programme.

[2] For example: if you have staff from your organisation working on MEL for the programme, the ‘prosecutor’ could be the Head of MEL, due to their distance from the programme, or other members of the MEL team who are not associated with it. This permits better cross-examination, as they are separate from the programme, whilst avoiding the need to train someone new in the approach.

Presenting evidence to validate a claim

Why would I do this?

While it may feel like a gimmick, this approach has several benefits:

  1. This process draws out far more information and learning for contribution analysis than other, more isolated methods: by inviting those of you who have seen the changes first-hand into the process, we can build a better picture of what did or didn’t happen by better understanding who did what, where, and why;
  2. By replicating a process with a more intuitive use of evidence, like a court, this experience helps teams better understand why their MEL advisers ask them to use certain tools and collect evidence, and gain a sense of ownership of the process (‘MELnership’?);
  3. By utilising the commonly understood analogy of a court, teams can also climb inside the contribution analysis process and gain a better grasp of why it is important;
  4. Results are determined together, reducing the feeling of evaluation happening at, not with, your team, and acting as a collective learning exercise for both successful and unsuccessful cases (i.e. we all understand and agree why something went well or badly);
  5. It permits vertical learning and speaking truth to power, as the client or SRO can be included in the process in the jury: you get the opportunity for them to really understand the nuances of your stories.

How do I do this?

As stated, Contribution Courts are relatively straightforward. Simply identify a case study or story of change that requires a contribution assessment, and set up the ‘court’ according to the team structure. Find a results owner to present the qualitative narrative and identify their evidence, then source an unbiased member of your MEL team. Finally, assemble your jury and select a date to engage in the process. Ensure someone is present to write up what occurs and formally record the collective contribution decision for your results framework, learning plans, or any other methods you’re using.

The event can be more or less formal depending on the characters involved and your relationships. Personally, with programme teams I know well, I keep it more informal: we still have a scribe and the process is adhered to, but folk wear comfortable clothes and I set out snacks to keep energy levels up. Just because it’s called a court doesn’t mean it needs to instil the same level of fear. Having a result poked and prodded is intimidating enough, so setting a gentle tone is crucial to success and to ensuring everyone feels comfortable.

For more resources, I have some downloadable guidance using a programme example from a past life when I worked on this with an old colleague of mine. These are being written up formally as a method by Mark Cabaj in a forthcoming paper on contribution analysis.

Equally I have a recording of a live demonstration of the courts at the UK Evaluation Society in 2019 that you can watch here. The angle is terrible due to where the table was so I will replace it with a better recording that a member of my team and I did recently for our organisation. The speed of speech in this video is due to a 25 minute segment becoming a 15 minute one on the day… but I would be a hypocrite if I couldn’t adapt given my preaching on adaptive methods!

Want to talk MEL? Let’s connect.
