What’s the Story Behind the Data? Reframing MEL as a Learning Tool
Every year, billions of dollars flow into development projects worldwide, tackling critical issues such as youth employment, climate change, health crises, and more. The fundamental expectation is that these investments will lead to meaningful, long-term improvements in people’s lives. Yet, despite the significant resources committed to these initiatives, a question persists: Are these investments creating meaningful, real-world impact? And how should we monitor and evaluate it?
Monitoring and Evaluation (M&E) frameworks guide development efforts, enable accountability, and track progress. At their best, these frameworks are powerful tools for steering implementation and learning. But in practice—especially across small and medium-sized projects in emerging markets—their execution often falls short. This is not necessarily due to inherent flaws in the frameworks themselves. Most M&E systems are flexible by design, allowing organizations to define indicators and methodologies aligned to their goals. Instead, the challenge often lies in the capacity to execute these frameworks effectively, as well as the data systems and institutional cultures that support them.
While the development community has embraced the evolution from M&E to Monitoring, Evaluation, and Learning (MEL), the “L” often remains underdeveloped. MEL systems promise a more adaptive, learning-oriented approach—focused not only on measuring outputs and outcomes but also on drawing insights that can improve strategy and delivery. Yet embedding learning into daily operations remains an aspiration rather than a reality for many initiatives.
Small and mid-sized projects are particularly constrained. With limited resources, they often default to compliance-driven reporting, prioritizing donor-mandated inputs and outputs—number of workshops held, participants trained, materials distributed—over deeper insights into effectiveness. When evaluations are conducted, they are often structured to meet reporting requirements rather than to generate actionable insights that inform strategic decision-making. In many cases, learning is approached as a retrospective activity rather than being embedded as a continuous, iterative process throughout the program lifecycle.
This gap is not due to rigid frameworks but to underinvestment in the people, systems, and practices needed to operationalize them. Expertise in outcome-level measurement, learning facilitation, and participatory evaluation remains uneven across the sector. In some cases, organizations lack access to the evaluative tools and methodologies—such as contribution analysis, outcome harvesting, or developmental evaluation—that allow for nuanced, context-specific learning. In others, insights generated are not effectively translated into course corrections or shared beyond donor audiences.
The COVID-19 pandemic starkly revealed the limitations of conventional M&E practice—and the potential of more adaptive models. During the crisis, the World Health Organization Regional Office for Africa (WHO AFRO) worked with national governments to implement a participatory M&E (PM&E) system. This model emphasized real-time learning and responsiveness, enabling local stakeholders to co-generate data, analyze it, and adjust actions based on emerging realities. Unlike rigid indicator-focused models, WHO AFRO’s PM&E approach enabled shared ownership, continuous learning, and a more holistic understanding of impact. It serves as a powerful reminder that M&E is most effective when embedded in the realities of implementation—when it’s not just about proving success but improving outcomes.
Consider a digital literacy initiative aimed at improving technology adoption among smallholder farmers. A traditional M&E framework might focus on tracking outputs such as the number of workshops conducted, mobile devices distributed, or app downloads recorded. While these are important metrics, they do not necessarily indicate whether farmers are meaningfully using the technology to improve their productivity or market access. Without evaluating outcomes—such as whether farmers can interpret weather data, access price information, or make better-informed decisions—the evaluation risks missing the actual value or limitations of the intervention. Additionally, if the framework does not account for contextual barriers like low digital confidence, language limitations, or poor internet connectivity, it may fail to explain uneven uptake or impact. In such cases, the evaluation process becomes more about proving delivery than understanding what’s working, for whom, and why.
To counter this, development actors must embrace adaptive learning models that account for complexity, diversity, and shifting ground realities. These models support not only accountability but also iteration. However, their effectiveness hinges on something deeper: organizational willingness to reflect, share, and act on what is learned.
Too often, project weaknesses are underreported—whether due to reputational risk, donor expectations, or institutional inertia. MEL is still viewed in many organizations as a process running parallel to implementation, rather than as an integrated strategic function. Part of the problem lies in how evidence is communicated. As highlighted in a 2023 report by a UK-based philanthropic organization focused on governance and citizen engagement, “data doesn’t speak for itself”—it needs context, narrative, and framing to become influential. When MEL outputs are reduced to dry, technical reporting, they fail to engage decision-makers or build momentum for change. The result is a missed opportunity: learning remains internal, static, and under-leveraged. By reframing MEL as a storytelling function—one that makes complexity digestible and insights actionable—organizations can position learning as a strategic asset, not just a compliance requirement.
Encouragingly, some organizations are exploring new ways to share insights and enable organizational learning. For example, rather than relying solely on lengthy end-of-project reports, development teams can experiment with lighter, more frequent knowledge-sharing tools—such as internal newsletters, briefs, or fact sheets—that summarize emerging challenges, lessons learned, and mid-course adjustments. However, recent data from a review of 30 international NGOs operating in Ghana suggests there is significant room for improvement: only 4.5% of these organizations used such formats to disseminate insights, compared to 36.4% that relied predominantly on full-length reports. This suggests a missed opportunity to make M&E findings more accessible, digestible, and actionable.
Finally, donors have a pivotal role to play. They can either entrench a compliance culture or cultivate an adaptive one. By encouraging flexible indicators, funding reflection time, and rewarding evidence of learning—even when it reveals setbacks—funders can shift incentives in the right direction. A growing number of donors are incorporating evaluation criteria that focus on relevance, coherence, effectiveness, sustainability, and impact. These frameworks stress learning and attribution over box-ticking and represent an important evolution.
To truly rethink M&E, we must invest not just in frameworks, but in capabilities. We need better systems, stronger data fluency, more reflective practice, and an ecosystem-wide push to move from measuring activity to understanding change. Development is complex, and impact is rarely linear. But with the right MEL culture and capabilities, we can get closer to understanding what works, for whom, and why—and use that knowledge to do better. This is not a luxury; it is a strategic necessity for sustainable development, economic growth, and long-term climate resilience.
Naam Chakravorty is the Gulf Lead at Botho Emerging Markets Group