Putting the ‘me’ in ‘MEL’

A joint piece written with Kasia Kedzia.

Human-centred design (HCD) is increasingly appearing as a way to think about monitoring, evaluation, and learning (MEL). Yet, despite claims of adherence to HCD, these examples still regularly forget to put the ‘human’ in HCD. In fact, in contradiction to the spirit of HCD, the thinking is often deployed only at the data collection stage, and even then it is rarely applied across the board. Tweaking one or two data collection approaches to be sensitive to the population you are engaging is not HCD in MEL. It is simply not good enough. We need to use HCD end-to-end: from data collection to the evaluators to the report to the donor decisions (see image).

Only when we do this can we truly use HCD in a way that is responsible, empathic, and innovative. These three factors are crucial in our eyes. HCD and responsibility or empathy may feel like a natural match, but HCD and innovation are less frequently seen as cohabiting. We believe they belong together, and we hope to show you why.

Above all else, we hope to show you that evaluators must have empathy. Whether looking to the beneficiary or looking to the consumer of the evaluation, we have to have empathy and put the ‘me’ in ‘MEL’. Only then can we collect and present information in a way that works for everyone. 

Monitoring, Evaluation, and Learning (‘MEL’) rarely conjures an image of warmth and empathy. In fact, it likely conjures an image of someone practising a cold, clinical, scientific set of analyses. That image would be fair. It would be fair because this has frequently been the face of MEL. Count the things (you should all know my feelings on bean counting by now), analyse the things, write the things. It has been focused on the process and ends up feeling somewhat divorced from the lived reality on the ground.

This is not just that our intervention results frequently end up divorced from lived reality (creating ‘paper realities’, as the Dutch Ministry of Foreign Affairs put it in this paper, and as I ranted here), but that our very approaches and tools become unrealistic. We simply have to do better and design everything, from decision to tool to results, to ensure it actually works for the people we engage and represents their reality.

Let me show you what I mean.

I think it is uncontroversial to say that civil servants and farmers are likely to differ in how they interpret the world and in the kinds of information they work with on a daily basis. Regardless of where you are in the world, the gender you present or identify as, and your nuanced identities and backgrounds, if you are a civil servant you are likely to engage with and use specific types of information in a specific manner that is different from a farmer’s. This is not better or worse, just simply different and tailored to what you need to do in a day.

Then we get deeper. Let’s say you are a woman and also a civil servant. Let’s say you live and work in Tashkent, Uzbekistan. Suddenly this changes things a lot.

As a civil servant you will be accessing formal information, handling a lot of paperwork, and will be used to talking and writing to people. If you are a civil servant in Uzbekistan, you will work incredibly long hours and will be used to being a ‘yes’ person if you hold a lower level of authority. If you are also a woman, you will be expected to carry the mental load of the household, care for children, and care for elderly or sick relatives.

This changes the dimensions. Do you see?

If you are a farmer, you will (possibly) be drawing on a lot of memory-based knowledge about agriculture, as well as information such as upcoming weather predictions and market prices, most likely accessed informally through networks or through the Internet via your smartphone. You will likely interact with people sporadically, especially if you are a smallholder, and spend more time with your crops or livestock. If you are a farmer in Tajikistan, much like anywhere in the world, you will work back-breakingly long hours. However, in Tajikistan you may have additional considerations around limited access to quality health support for that toll, a higher likelihood of manual labour, and lower access to agritech. If you are a woman, then you will have even lower access to tech, few rights to your land, and will bear the mental load of the household.

The reason I am building up these profiles is that the identities present here really make a difference in how these groups live their lives. As evaluators we simply MUST take that into account when engaging populations… because asking a female Uzbek civil servant to engage in Most Significant Change would be a strange request, given how little time she has available and the way she engages with information. It would be uncomfortable. Similarly, asking a farmer to fill out a long form makes no sense given the time they have available, their literacy levels, or how they engage with information. You could, however, ask a business to fill in a form, as they are used to this method of sharing information. See what I mean?

These are perhaps exaggerated examples of collection methods not suiting the profile, but it is all grounded in the shocking assumption that the women, men, under-represented groups, and over-represented groups we engage with are… people. Our tools and methods should be designed to work for them, not for us. We are too frequently tempted to use the newest, shiniest method because it feels ‘innovative’, without thinking beyond it… but this is not all there is to being innovative, and it is not being kind. Innovation can be harnessing something old in a new way, and it is compatible with empathy.

What this would look like is not asking vulnerable community members to fill in survey after survey about how they feel about local security actors after an intervention. It would instead mean thinking about how they share information on a day-to-day basis. Consider using a participatory storytelling method, or even avoid extracting information at all and instead use something like a Reality Check Approach. Then we go one step further. Perhaps we want to pick up one subsection of voices; to capture the youth voice, we might use a social media scan instead of coaxing a sample to sit in a focus group where they may not feel comfortable sharing views as openly.

Taking the time to think about these things, to put the ‘me’ in ‘MEL’, has huge benefits beyond simply being the right thing to do. For example, I used a self-signification approach in an uptake survey for teachers that allowed them to engage in a way that suited them… and issuing it while they enjoyed a coffee and a cake before workshops avoided asking for additional time from people in a draining and hectic profession. Teachers spend their days talking, writing, and on their feet. Sitting quietly with a fika, using a tablet, was the perfect engagement method for them, and it yielded more thoughtful and useful information. I have used paper-based information collection approaches with officials in a Former Soviet Union (FSU) context, because digital methods were uncomfortable given the FSU legacy, even though it took me longer to translate, compile, and analyse. Kasia went back to basics with storytelling and flipcharts in Uganda when all other data collection methods failed… because it worked for those beneficiaries to engage verbally as a group. No one before had considered illiteracy or barriers in access to technology.

This should not be a revolutionary concept, and yet it seems to be. Time after time I see evaluators focus so much on the information they need that they forget about the informants. We must have empathy, and we must take a human-centred approach to how we engage people in MEL.

You have now seen examples of human-centred design applied at the activity and implementation level, where evaluators have tailored their approaches to the people they work with. But we can’t simply stop there. Applying human-centred design holistically allows the funder and the evaluator or contractor to work together and apply these same principles and practices at a higher level. Empathy doesn’t start or end at implementation. It can start at the top, with the funder, and with our response to their development problem.

I’ll never forget a training I once facilitated on collaborating, learning, and adapting (CLA), where an implementer complained that implementers cannot truly adapt until donors start to adapt. That implementer had recently put in a bid where they were the only ones working on countering violent extremism in a very complex geographic area. They knew the ins and outs of the issue and were the only ones who did (in their view). A donor released an RFP with what the implementer considered an inappropriate and possibly harmful approach, in the form of a statement of work (SOW), to which, as we know, bidders are expected to respond. The implementer responded with their own approach and was not awarded. A bidder who responded exactly to the donor’s approach was awarded, and failed.

After decades of similar stories, funders too are starting to realize that getting better answers requires the humility to recognize that they may not have all the answers. In procurement processes, FCDO has long used Early Market Engagement Meetings, and USAID uses co-creation, Broad Agency Announcements (BAAs), and Statements of Objectives (SOOs) instead of Statements of Work (SOWs) to get broader input at the earliest stages. It is important that we applaud these efforts and participate in them, rather than sit on the sidelines declaring that they take too much time.

Donors are also starting to actively ask us to do better and design everything, from decision to tool to results, to ensure it actually works for the people we engage and can be sustained or replicated. They too are recognizing that the arrogance of “we have all the answers” on our side does not always bring the best implementation. We are seeing a shift in demand for humility in the approach, which is an integral part of Weeyacom’s values. Humility asks: “how can we pull potential implementers and practitioners, along with beneficiaries, into the process of creating effective solutions?” Iteration is not weakness. It does not say we don’t know what we are doing; it says we have the self-awareness and humility to know that many minds are better than one.

We can use approaches such as crowdsourcing to create a better product and, ultimately, a better and more sustainable development solution. We crowdsource through the principles of curiosity, humility, and empathy to create a collaborative environment where the right partners are engaged to create synergy. We ask the difficult questions to uncover the underlying incentives of decision-making and use the data we gather to make more data-driven and adaptive decisions. Practically speaking, we get input more frequently, in shorter touch points along the way, instead of only at a milestone or once there is a more ‘finished’ product.

Human-centred design also creates space for innovation. It is this iterative process of designing and implementing development solutions through quick bursts or spurts (we love design sprints) that allows for more feedback loops. Innovation does not have to be something new and shiny. It doesn’t need to be a newly applied technology. It can be, for example, a process used elsewhere and applied to our field of M&E; we have borrowed from the Army, for instance, in our use of after action reviews. Innovation can be inspired by the mandate. Sometimes the most effective or “innovative” solution isn’t the most complex but the simplest to implement. Innovation is doing it in a simpler way. It’s taking something complex and simplifying it.

In the past, an implementer or contractor may have provided a first draft of a product or report, followed by a second and then a final version. However, we have all seen cases where a funder gets a first draft they are not fully satisfied with and then does not see the degree of improvement from that initial draft to the final product that they envisioned. This ultimately makes the product useless. So let’s say your evaluators did their part as described above and tailored their solutions for Most Significant Change; if you haven’t presented the information to the funder in a way that is most useful to them, they simply will not use it. This could be an evaluation report, a performance management plan, a learning agenda, a plan for adapting or scaling, and so on. By the time they receive a final product they don’t see its utility, and a significant amount of funds has been spent on another product that will sit on a shelf, unused.

Let them in, and involve them in iteration. Give them a voice and have the empathy to hear them. It’s not failure; it’s making sure the user is at the centre.

This shows what we mean when we say HCD should be an end-to-end approach. It also shows that HCD puts humility and empathy at the forefront, and that it embraces innovation in how we centre the human. It is something that actually puts the ‘me’ in ‘MEL’. Yes, MEL is a system or structure. However, we cannot forget that it is ultimately people who work within and use this structure. It’s the people we work with that put the ‘me’ in ‘MEL’. We therefore cannot implement processes and systems effectively without working from a place of empathy, humility, and humanity to come up with more effective development solutions together.

 

