Six practical resources for doing gender and MEL, better

As we launch the new issue of Gender & Development, which focuses on monitoring, evaluation and learning (MEL), Global Gender MEL Adviser Kimberly Bowman offers her top resources for gendered MEL.


It’s been a good few months for people who are interested in ‘gendered’ monitoring, evaluation and learning (MEL). That may sound like the smallest club in the world but, once you strip away the jargon and acronyms, many development workers and researchers are actually members of this club. Its members know that the way that we understand, measure and evaluate success is both important and political. And we know that in looking for ‘evidence’ and ‘truth’ we must often balance a demand for scientific rigour with other tightly-held values, like using evaluative questioning to address inequalities of power and voice.

So, why has it been so good recently? Well, some great, practical learning has emerged. Here, I’ve gathered together my favourite resources.

Practical, tangible ‘good practices’ (and don’t-dos)

1. As Caroline Sweetman has announced, the latest issue of Gender & Development focuses on gender and monitoring, evaluation and learning. The journal includes eight original articles, many of which provide a practical overview of monitoring and evaluation systems and practices in real-life development projects. For example, Helen Lindley of Womankind provides an information-rich overview of that organization’s approach – with tables, examples and concrete recommendations. Really good stuff. Marie-France Guimond and Katie Robinette of the International Rescue Committee outline how basic client data systems can support on-the-ground learning and the design of truly evidence-based programming.

2. The G&D issue comes hot on the heels of another big publication; early in March the ODI and DFID published a review of evaluation approaches and methods for women’s and girls’ economic empowerment programming. Yes, it’s long and meaty, but it’s worth the read – and if you’re really pressed, skip straight to the last half of the seven-page summary.

What makes this report and the new issue of G&D so useful for people who think about gender and MEL is that they pull together concrete lessons from around the world. We need to move beyond the basics (sex-disaggregated data etc.) and it is this kind of sharing – warts and all – that will help us to do that.

That said, many of the resources that I love best are slightly older, or not written up formally. 

3. The AWID Wiki on Monitoring and Evaluation is a brilliant starting point for those trying to navigate gender in/and MEL, with links to specific MEL tools and indicator guidelines. 

4. For those looking for a general overview of key ideas, Capturing Change in Women’s Realities is a useful and readable guide. 

5. This 2001 IDS report on Gender and Monitoring has a list of checklists (!) and resources by thematic area (see p23 onwards). 

6. And for women’s economic empowerment programming more specifically, I keep a list of great resources on our MEL for WEE group page in the Grow Sell Thrive online community – which anyone can join.

But wait, I’m still struggling with…

Resources and tools are great assets – but unfortunately they don’t answer complex questions all by themselves. Here at Oxfam, my colleague Simone Lombardini (Global Impact Evaluation Advisor) and I have identified two really specific challenges that we’re facing right now:

1) Who defines what matters?

‘Social accountability’ means giving those people who are expected to benefit from development a voice in that development process, and the decisions that guide programming. Responsible monitoring and evaluation means including less powerful voices in the ‘valuing’ process. That much is clear, but how to do that can be very difficult, particularly when we’re working with abstract and fuzzy concepts like ‘empowerment’.

Fortunately, there are a whole host of research and evaluation methods to draw on. Finding and applying them appropriately (while also managing demands for donor reporting, very technical impact assessments, project delivery and the need to occasionally get some sleep) is the tricky part! 

2) Context vs. comparability

For large NGOs like Oxfam – and for donors – one of the key tensions we manage is the demand for aggregation (pulling together) or comparison versus context-specificity. I’ve already outlined the “who determines value?” question – now how do you keep that in mind when you want to “draw up” and aggregate evaluative information across four or five very different contexts? There are often specific trade-offs between making evaluation very context-sensitive and being able to deliver the cross-site data that managers and donors demand. It’s important to know this, so you can explain and negotiate trade-offs from the beginning.

What’s next?

What does all this mean for what we do? How do we embed gender into our monitoring, evaluation and learning practice, in a simple but meaningful way?

I would love to hear from others thinking about these issues. What MEL/gender resources do you use? Do you have suggestions for the two challenges I’ve outlined? Are you struggling with anything similar? Please get in touch via the comments below.

Author: Kimberly Bowman
Archive blog. Originally posted on Oxfam Policy & Practice.