Learning from consortia and portfolios: From cacophony to symphony

Image: Kim Sobat, Cacophony

Authors: Florencia Guerzovich, Alix Wadeson, and Tom Aston

As we discussed in a blog last week, practitioners in the Transparency, Participation and Accountability (TPA) sector face an important question: how can portfolio-level Monitoring, Evaluation and Learning (MEL) help us learn from the body of evidence on TPA’s impacts? And in doing so, how might it help us move beyond supposed “existential threats” to the sector?

This second post in a three-part series looks in greater detail at how the sector’s narrative became stuck in a doom loop, despite evidence that our community is more effective and resilient than that narrative suggests. TPA practitioners are already doing work the community could learn from, and learning from it could help avoid tragic consequences for the funding of the work. We also start discussing some of the building blocks for this alternative narrative. Our answer starts with MEL – better, smartly targeted MEL, not necessarily more of it. And we’ll share some more hopeful, yet also more technical, proposals in the next blog, based on our collective experience working on MEL with different consortia and portfolio programs.

The minor fall and the major lift?

In the last blog, we drew attention to the Hewlett Foundation’s possible new directions. Some of these include prioritizing in-country work in a more focused manner (narrowing), with a hope to contribute to structural and systemic change (elevating ambition), but also going broader than public service delivery. These multiple directions operate within individual projects in specific countries. We can learn from these, but we also believe that part of the answer to the headline question lies in how we learn from portfolios.

In recent years, organizational partnerships, consortium initiatives and program approaches have very much been in fashion. Yet, relatively limited attention has been given to how we should carry out MEL beyond individual projects. In our experience, it often seems like an afterthought: the design of robust consortium- and portfolio-level MEL lags behind, becoming a priority only when implementing agencies are pressured to account for learning and evidence that demonstrates how the whole is greater than the sum of its parts.

As Anne Buffardi and Sam Sharp recently pointed out in relation to adaptive programming, “most measurement in adaptive management approaches were developed for and from individual projects.” This suggests we need to think harder about learning beyond individual projects. Similarly, in multi-site, multi-organisation structures, as Kimberly Bowman et al. explain:

In an ideal scenario, multi-actor efforts can be imagined as a symphony: an elaborate instrumental composition in multiple movements, written for a large ensemble which mixes instruments from different families (an orchestra). However, with so many components and actors there is also a risk of creating a cacophony: a harsh, discordant mixture of sounds.

A similar point can be made about funding portfolios. Funders often explicitly hedge their bets, investing in potentially competing approaches in the hope that at least one will pay off. Sometimes this can entail funding the actors building the foundations on the inside as well as those throwing stones at the building from outside. It can also entail funding researchers to carry out large Randomized Control Trials (RCTs) alongside their most ardent critics (and critiques). Logically, both run the risk of promoting dialogues of the deaf, strategically and methodologically.

How do we play music in harmony rather than in discord? 

Bowman et al. use the analogy of a school of music to refer to the actors and the diverse roles they play in portfolios. The school might include funders and their local “fixers,” national and international non-governmental organizations (NGOs and INGOs), national or international contractors, think tanks, and universities, among others.

Funders tend not to explicitly ask questions about whether they are creating harmony or discord. While theories of change are supposed to represent how different actors might collectively contribute to a particular goal, funders (and research institutions) often make themselves invisible. Most MEL efforts are delegated directly to partners and grantees who carry out the work. After all, this is where the action is, right? 

To be sure, there are exceptions to this pattern: the evaluators of Open Society Foundations’ former Fiscal Governance Program asked key informants, including one of us, about the Program’s role and contribution, as do the Global Partnership for Social Accountability’s (GPSA) Theory of Action, Results Framework and project evaluations (see e.g. here).

Yet, as Gill Westhorp put it in a recent evaluation of a social accountability project in Indonesia, donors or research institutions may not exercise overt power in an intervention, but they can still bring multiple types of power and “authorising” to the table. These include independence; access to higher levels of government; access to media; and the simple fact of being a donor organisation. All of these factors can support or hinder the work of local partners. Therefore, funders are also fundamentally part of any theory of change.

Asking about the role of the school of music creates challenges: who will collect the information to provide answers? As we read the strategy evaluation of Hewlett’s past portfolio that informs its ongoing review, we appreciated that Transparency and Accountability Initiative (TAI) members have tried aligning reporting and reducing reporting burdens on partners. Other efforts have moved reporting tasks up the chain. The State Accountability and Voice (SAVI) program in Nigeria, for example, explained how the program itself handled most reporting requirements for implementing partners, so that they could get on with implementing. We’ve been part of such efforts ourselves.

An alternative is for funders to take this MEL task upon themselves. Yet, we know first-hand that doing so creates a lot of work for funders and fund managers. When they don’t have adequate in-house funding, capacity and an authorising environment to answer the big questions themselves, they risk creating a learning gap. Failing to invest in and build this space internally (or at least with a MEL partner) runs the risk of creating a cacophony, adding ammunition to critiques around “mixed” results. This outcome is concerning because it further undermines the argument and potential for investing in TPA work.

MEL staff are often asked to produce something that generalizes across a whole portfolio, to aggregate data in potentially misleading ways, and to create laundry lists of indicators that cater to an almost infinite list of advocacy agendas. Beyond this, we need to find a compromise between many legitimate needs, including: (1) the bottom line: report what’s needed so people get paid; (2) utility: provide useful data for partners to help improve programming; and (3) coherence: organise data and lessons in a meaningful way so that we can show that efforts at different levels add up to something greater than the sum of the parts. The politics of MEL lies in reconciling these often discordant and under-resourced demands. As we mentioned in the previous blog, the prospect of fully generalizable results is a pernicious myth in the TPA sector. Instead, we need theories that travel across the decision-making levels of any given portfolio.

It is, of course, challenging to find the sweet spot between the detailed information needs of implementers at the frontline, where each case is different, and the elevator pitch necessary for higher management and boards. Yet, in the middle, program officers and portfolio managers might benefit from insights about patterns in particular contexts and sectors, and an appreciation of what could be transferable across subsets of investments (i.e., comparable parts of a portfolio). 

Complexity is doubtless another reason why it’s difficult to make sense of what’s going on. As one of us explained previously, stakeholders often disagree about what is right as well as what will work. And it is often hard to pin down whether we’re measuring the same things or not (as we’ll discuss in the next blog). Yet, as Megan Colnar rightly reminds us, complexity is unfortunately used by parts of the field as an excuse to consider TPA un-evaluable, drawing the self-defeating conclusion that we therefore shouldn’t really try.

A dialogue with the data?

Research also has a role to play in this sensemaking; yet how MEL and research complement one another in the sector is not always so straightforward. Although the UK Department for International Development (DFID) conducted an evaluation of its empowerment and accountability portfolio in 2016, a number of questions remained unanswered. It appeared logical that some of these gaps should be filled by commissioning research.

Colin Anderson, Jonathan Fox and John Gaventa looked at part of this portfolio, focusing on six case studies in Mozambique, Myanmar, and Pakistan through the DFID/FCDO-funded Action for Empowerment and Accountability Research (A4EA) program. Yet, Anderson et al. lamented that “a diversity of approaches to monitoring and evaluation prevents much rigorous comparison on the basis of the available evidence,” and pointed to “obstacles to clear-sighted evaluation of impact, and the limits on informing new programming decisions.”

DFID’s portfolio evaluation offered illustrative evidence on the promise of vertical integration and concluded that there was a “need to move beyond tactical approaches to achieve success at scale.” Yet, Anderson et al. argued that from the available data they weren’t able to test the proposition that “vertically integrated approaches offer more purchase than less integrated alternatives.” We might critique case selection (were these the right countries, projects, time series?). We might even critique the testability or validity of the theory (will assumptions about vertical integration built into other TPA portfolios be tested?). But, either way, this experience demonstrates that research is, unfortunately, no substitute for good MEL.

Research and MEL should converge around a dialogue with the data, but discursive agendas sometimes get in the way. This has been accentuated when researchers and advisory groups (of which we have, on occasion, been members) fail to critically interrogate prescriptive and causal assumptions. Failing to reflect on these has, lamentably, created echo chambers with the mere semblance of agreement. Assumptions need to be surfaced, discussed, and revisited if they don’t hold empirically.

How to inform new programming decisions through social learning

Portfolio MEL takes time and resources beyond the technical definition of indicators. As the Public Service Accountability Monitor (PSAM) and its partners found out, consortium, portfolio, and other collective MEL efforts require coordination and negotiation among stakeholders with different needs and capacities (see this related piece from Varja Lipovseck). It’s as much about relationships and trust, staff time and information management as it is about defining shared targets and metrics. Portfolio MEL isn’t quick; it’s generally about the medium and long term. Its systems are also very important sources of institutional memory. They are key when one wants to look back at 5-10 years of work and tell a collective story – especially considering the rate of staff rotation in the TPA sector. If donors really care about understanding progress, learning and weaving a different narrative than they did a decade ago, they must fund these collective efforts beyond supporting one or two major research projects and/or evaluations every few years.

A related challenge is for funders and fund managers to exercise a different type of social learning leadership, one that changes design choices and increases the use of MEL systems. Few have a better vantage point on the music school than funders (perhaps consultants with many hats can sometimes make a similar claim). Funders and fund managers are best placed to provide the resourcing and generate the incentives for portfolio stakeholders to see the collective value of MEL, beyond performance audits. Funders can help spot opportunities and offer spaces to bring different stakeholders together where they are facing common challenges in comparable contexts and asking similar questions.

This last point captures a methodological challenge associated with broader trends in the current strategy refresh season. Some TPA funders such as Hewlett are considering moving their attention from global advocacy – where minimum common denominators across a vast set of countries matter – to supporting in-country work in only a few countries. How might we learn from and beyond these countries and become more than the sum of our parts?

Not everything is fully comparable. Musicians in one country will share genres and notation (i.e., challenges and questions) with some countries in the portfolio but not others. Or they may have more in common with countries that fall outside the portfolio altogether, beyond the lucky few that receive funding.

Comparison is often an instinctive part of our learning activities. Convene a meeting among activists in different countries who work with parent-teacher associations and they’ll have lessons to share for days, despite jetlag. Seat them with another crowd and they’ll probably talk past each other or fall asleep. So, it’s important to consider which communities can meaningfully hold these conversations and contribute to them rather than detract from them. We also need to consider what is guiding those definitions (personal histories, anecdotes, organizational dynamics, evidence, etc.). There is a lot of methodological guidance on the comparative method for studying politics, policy and power, including sub-nationally, that can help us reflect more systematically about identifying groupings in a school of music’s MEL systems.

Defining meaningful groupings for bounded comparisons among subsets of musicians in a school of music is really hard work, but it’s doable and pays off. Until recently, the Partnership to Engage, Learn and Reform (PERL) program in Nigeria (following SAVI) worked in seven locations across district, state, regional, and federal levels and across various thematic sectors. Although the program had a single overall theory of change and all locations worked to promote more accountable, effective and evidence-informed governments, the challenges varied significantly. What might be learned about local governance reform in Kaduna, with its dominant settlement under Governor Mallam Nasir El-Rufai, might not perfectly translate to the political economy of Kano state, with its short-termist competitive clientelist settlement. Successful reform efforts in the economic hub, Lagos, might not travel well to conflict-affected Borno. While it’s possible to learn something about accountability in Nigeria overall, comparisons within more similar regions and within particular sectors (health, education, etc.) emerged as a more productive way to promote triple-loop learning in the program, and to shed light not simply on the desirable, but the possible.

VNG International’s Inclusive Decisions at Local Level (IDEAL) program worked over five years in seven quite diverse countries facing fragility or conflict at different stages. IDEAL has a nested theory of change (i.e., different levels of detail in the theory of change), but faced challenges in its MEL because its impact-level quantitative indicators were precise but relatively inaccurate, given that the program is adaptive. The team was searching for ideas about how to weave it all together with useful evidence for learning and reporting. After the mid-term review, two of us supported IDEAL staff to work on this challenge, with some recent success in integrating and mapping changes at the outcome level to the theory of change. The program still faces a challenge in aggregating and comparing its TPA work at the output level, because this is being done retroactively after several years, with country-level datasets of varying quality and access. The context and complexity in these seven countries, facing quite diverse manifestations of fragility and conflict, mean that local governance of environmental conservation in rural Mali seems too disparate to compare with local economic development planning in urban Palestine. There are, however, ways to design a MEL system that tracks outcomes and outputs across seemingly different activities and contexts into ‘buckets’ for relevant and aggregatable measurement, enabling comparison and learning to identify trends, outliers and gaps in the theory of change (we explore this and the concept of “functional equivalents” in our next blog; a toy sketch follows below).
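To make the ‘buckets’ idea a little more concrete, here is a minimal sketch in Python of how heterogeneous outputs from different country programs might be mapped to functional-equivalent buckets and then aggregated for comparison. The bucket names, output types, and figures are entirely hypothetical illustrations, not IDEAL’s actual categories or data:

```python
from collections import defaultdict

# Hypothetical mapping of diverse, context-specific output types to
# shared "buckets" (functional equivalents). Illustrative names only.
BUCKETS = {
    "community_conservation_plan": "participatory_planning",
    "local_economic_dev_plan": "participatory_planning",
    "budget_literacy_training": "citizen_capacity",
    "women_leadership_course": "citizen_capacity",
    "town_hall_meeting": "state_citizen_interface",
    "grievance_desk": "state_citizen_interface",
}

def aggregate_outputs(records):
    """Count outputs per (country, bucket), flagging unmapped items."""
    totals = defaultdict(int)
    unmapped = []
    for country, output_type, count in records:
        bucket = BUCKETS.get(output_type)
        if bucket is None:
            # Outputs that fit no bucket may signal outliers or
            # gaps in the theory of change worth discussing.
            unmapped.append((country, output_type))
        else:
            totals[(country, bucket)] += count
    return dict(totals), unmapped

# Illustrative records: seemingly disparate activities in rural Mali
# and urban Palestine become comparable at the bucket level.
records = [
    ("Mali", "community_conservation_plan", 4),
    ("Palestine", "local_economic_dev_plan", 6),
    ("Mali", "town_hall_meeting", 12),
    ("Palestine", "grievance_desk", 3),
]
totals, unmapped = aggregate_outputs(records)
print(totals)    # {('Mali', 'participatory_planning'): 4, ...}
print(unmapped)  # anything left over is a prompt for sensemaking
```

The design choice this sketch tries to capture is that the comparison happens at the bucket level, not the activity level, and that the “leftovers” that fit no bucket are themselves useful learning signals rather than noise.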

The COPSAM Community of Practice, convened by PSAM, includes dozens of organizations multiplying TPA work across Southern African Development Community (SADC) countries. The work encompasses hundreds of personal realities – all different from each other – but a common approach that is appropriate to each case. A few years ago, COPSAM identified weaknesses in the MEL system: there was a gap between practice and learning at the organizational as well as community levels. Knowledge that could support practice was tacit, held by a number of colleagues on the ground rather than in whatever MEL system each organization had. One of their partners had an evaluation go wrong, partly due to ill-suited methods. The “bad news” tested people, organizations, and the community’s resilience. The good news is that the group did not give up. They embarked on a learning journey with one of us that included partners gradually and selectively, focusing first on four partners (Policy Forum in Tanzania, United Purpose in Mozambique, Zambia Governance Foundation and SAPST in Zimbabwe) that had some common features among themselves and with the broader group. They then expanded the conversation to others, explicitly discussing commonalities and differences between those four pioneers and the broader community.

Many colleagues are already doing better, smarter portfolio MEL, challenging conventional wisdom, practices and boundaries, though often disconnected from each other. In the next post, we turn to some practical (and technical) lessons from portfolio MEL to help counter the supposed “existential threat” to the sector, and hopefully begin to transform discordant noises into something more harmonious.