Towards MEL symphony in the transparency, participation, and accountability sector

Authors: Tom Aston, Florencia Guerzovich and Alix Wadeson

Paul Klee, Ancient Harmony

As we illustrated in the previous blog in this series, funders, fund manager organisations and implementing organisations in the Transparency, Participation and Accountability (TPA) sector are wrestling with the challenge of moving beyond piecemeal project-level MEL to evidencing more cohesive programme- and portfolio-level results which are greater than the sum of their parts. This is the holy grail, as Sam Waldock of the UK Foreign, Commonwealth and Development Office (FCDO) put it in the Twitter discussion.

Some grounds for optimism

While we agree that portfolio MEL is challenging, it’s not impossible. Some efforts in the wider international development sector give us reasons to be optimistic when there is political commitment at the right level. As CARE International’s Ximena Echeverria and Jay Goulden explain, it’s possible to demonstrate contributions to the Sustainable Development Goals (SDGs) across an organisation with over 1,000 projects per year. With standard indicators, capable staff, and serious effort, you can assess progress at a considerable scale. But there are also ways to break such enormous portfolios down into more manageable chunks.

One of us conducted a review of CARE’s advocacy and influencing across 31 initiatives (sampled from 208 initiatives overall) that were relevant to just one of CARE’s 25 global indicators up to 2020. This was based on a Reporting Tool adapted from Outcome Harvesting (a method which is increasingly popular in the TPA sector). CARE has continued to assess this in subsequent years, looking at 89 cases last year, because it saw the value in the exercise. As advocacy and influencing constitutes roughly half of CARE’s impact globally, it’s clearly worth evaluating, and worth assessing whether trends in what worked changed over time. Oxfam was also able to do something similar for its advocacy and influencing work, building off 24 effectiveness reviews (which relied on a Process Tracing Protocol; if you’ve read some of our other blogs, you’ll know we’re fans of this method).

Both reviews, like the Department for International Development’s (DFID) empowerment and accountability portfolio review of 50 projects, were Qualitative Comparative Analyses (QCA), or fuzzy-set QCA. We believe that QCA is a helpful approach for finding potential necessary and/or sufficient conditions, but such conditions are not always forthcoming (as DFID’s review showed); QCA findings also rely heavily on the quality of within-case evidence. We’re often searching for, but not quite finding, necessary and/or sufficient conditions. For this reason, there are limits to what QCA can do, and without adequate theory it can sometimes be premature.
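For readers unfamiliar with the mechanics, here is a minimal, hypothetical sketch in Python of how fuzzy-set QCA assesses candidate sufficient and necessary conditions using the standard consistency ratios. The case names, membership scores, and thresholds are illustrative, not drawn from the reviews above; as noted, the results are only as informative as the within-case evidence behind each score.

```python
# Minimal fuzzy-set QCA sketch (hypothetical data, illustrative only).
# Each case has a membership score (0-1) in a condition X and in the outcome Y.
cases = {
    # case:   (condition X, outcome Y)
    "case_A": (0.9, 0.8),
    "case_B": (0.7, 0.9),
    "case_C": (0.3, 0.2),
    "case_D": (0.8, 0.4),
}

def sufficiency_consistency(cases):
    """Share of condition membership 'covered' by the outcome.
    High values (conventionally >= 0.8) suggest X may be sufficient for Y."""
    num = sum(min(x, y) for x, y in cases.values())
    den = sum(x for x, _ in cases.values())
    return num / den

def necessity_consistency(cases):
    """Share of outcome membership 'covered' by the condition.
    High values (conventionally >= 0.9) suggest X may be necessary for Y."""
    num = sum(min(x, y) for x, y in cases.values())
    den = sum(y for _, y in cases.values())
    return num / den

print(f"sufficiency consistency: {sufficiency_consistency(cases):.2f}")
print(f"necessity consistency:   {necessity_consistency(cases):.2f}")
```

The point of the sketch is simply that the ratios flag candidate conditions; whether a "consistent" condition is meaningful still depends on theory and on the within-case evidence behind each score.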

Realist syntheses (or realist-informed syntheses) also provide a helpful option to assess what worked for whom, where and how. The “for whom” question should be of particular interest for the Hewlett Foundation’s new strategy refresh, as well as to all colleagues who strive to design portfolio MEL systems that are useful to their different stakeholders and decision-making needs. One great benefit of the approach is that it assumes contingent pathways to change (i.e., diverse mechanisms), emphasizing that context is a fundamental part of that change rather than something to be explained away. A more contingent realist perspective thus builds in a limited range of application for interventions (x will work under y conditions) rather than assuming the same tool, method, or strategy will work the same everywhere (the fatally flawed assumption that brought the sector to crisis point).

The Movement for Community-led Development (MCLD) faced a similar “existential threat” to the one facing the TPA sector, after another hot debate about the mixed results of Community Driven Development (CDD) in evaluations. Colleagues from 70 INGOs and hundreds of local CSOs around the world started a collaborative research effort, including a rapid realist review of 56 programmes, to understand the principles, processes, and impact of their work. They counted on leadership and some external funding, but often relied on voluntary time and good will. The process and the results include several relevant overlaps and insights for monitoring and evaluating portfolios of TPA work. The full report is due to be published on 6 October 2021.

As observers and participants in the MCLD process, we would like to underscore a factor that distinguishes this effort from many others: don’t ignore the “L” (learning). The group held regular calls to engage in a kind of “social learning,” resembling the learning endeavour the Wenger-Trayners describe as an ongoing process that creates space for actors to mutually exchange and negotiate at the edge of their knowledge, reflecting on what is known from practice as well as engaging with issues of uncertainty.

In the TPA space, where practice often entails advocacy grounded in expertise and/or values, including in setting research and learning agendas, creating this learning space requires, at a minimum, changing mindsets and challenging the normative assumptions and apparent agreements that constrain social learning. For another call to put “L” front and centre, see Alan Hudson’s feedback to the Hewlett Foundation’s strategy refresh.

Therefore, with sufficient political commitment and flexibility, appropriate methods and processes are out there to assess and support social learning in as few as 5 or as many as 1,000 cases or initiatives. But there also seems to be a more meaningful sweet spot somewhere between 5 and 50 initiatives.

How can portfolio evaluation contribute useful knowledge to the field? The GPSA’s journey

We’ll discuss an example of a funder the three of us know well (but do not speak for), one that has been unusually open about its complex MEL journey. The Global Partnership for Social Accountability (GPSA) started grant-making in 2013; by 2014 it had already designed 20 flexible grants tackling a broad range of development challenges (health, education, public financial management, etc.) in diverse contexts — from Mongolia and Tajikistan to Ghana and the Dominican Republic. The common thread across these grants was that, a priori, all were judged to be cases that would benefit from politically smart collaboration between public sector and civil society actors. The prioritisation of problems, and the precise approaches and tools to tackle them, was localised.

In its infancy, the GPSA lacked a comprehensive MEL system. The programme prioritized grant partners’ ability to determine their own appropriate approaches to MEL, and aggregation of results was less pressing. “Learning by doing” is something of a mantra at the GPSA, one that both the GPSA and its grant partners are expected to embrace. It encourages (uncommon levels of) openness about failures and allows ample room for iteration of collaborative social accountability processes undertaken by diverse grant partners across over 30 countries (to date). To pick up Bowman et al.’s typology, by 2016, MEL was a full-blown “jam session” among grant partners, resembling other decentralised systems (see the typology below).

Bowman et al. 2016

One practical challenge was that the GPSA had questions about the aggregation of results. The onus, as affirmed by former World Bank President Jim Kim, was to show that this “miracle” has real outcomes. Delivering on this challenge could go in different directions. As elsewhere, finding and implementing a compromise among decision-makers’ needs across the system is easier said than done, or resourced.

Over the years, the GPSA explored the implications of adaptive learning as a fundamental portfolio approach while responding to this demand; it intentionally experimented over time with gradual design, implementation, and course correction. A careful dance while fitting within World Bank rules, processes, and shifting approaches and norms.

A key part of this journey was about transforming the free-flow ‘jam session’ into a more harmonious school of music. The latter is an entity that includes a wide range of activities (instruments) implemented by multiple, diverse grant partners (music school members) that have a loosely defined common theory of action to work in diverse contexts to tackle a range of development problems (different musical genres and sheet music). The GPSA, along with funding partners and World Bank management, sector and country staff, makes initial choices in this regard — which is why strategy refresh season is important. They also provide financial and non-financial support, and can facilitate information exchange and compile activity reports for higher-ups. Yet, the portfolio’s day-to-day work is often decentralised, and line-managed by people in other organisations (grant partners). The GPSA, like other funders or fund managers, often has limited authority to direct the behavior of partners — but its role is not innocuous and thus should not be omitted from the story.

The GPSA is building upon layers of ‘jam session’ lessons to shape a school of music-type portfolio that can start to play more harmoniously under the direction of a MEL system developed over time. This has been a multi-year process in which “L”, in the sense described above, was front and centre in the GPSA’s criteria for funding applications, annual reports, and the revised theory of action and indicators in the results framework. It’s also more explicitly emphasised in the GPSA’s annual grant partner forums — GPSA alumni participate in these events, which enables retrospective insights and helps maintain ongoing engagement and support across civil society over the longer term, beyond the duration of a grant. The GPSA also hosts other virtual events and bilateral conversations.

Connecting the dots across these MEL tools, in 2016 the GPSA began to articulate key concepts ‘for the practice from the practice,’ such as collaborative social accountability. The GPSA needed a new common language to talk about and understand the work — what looks like yet another label was an effort to avoid cacophony. In 2018, the GPSA agreed to “test” key evaluation questions of interest to different stakeholders with a small number of evaluations (Expert Grup took the plunge first). Then, the GPSA learned that the priority of these questions resonated with grant partners as well. In 2019, grant partners watched presentations of 4 evaluations and came back to one of us saying: “now I understand what you mean that an evaluation can be useful, I want one like that!” Part of the trick here is that those select evaluation questions were also useful to management and a broader group of Global Partners.

Social learning within the school of music informed the next iteration of MEL tools, including refining the evaluation questions and others that can help us better grasp the elusive question of ‘what does sustainability and scale of social accountability look like in practice?’ All this MEL work and social learning with partners about practice precedes the current GPSA Manager Jeff Thindwa’s call for the field to take stock, match narratives to the work, and write the next chapter for social accountability. We agree that this kind of social learning is key to exploring TPA theories of action and change, developing tools to assess them, and, hopefully, challenging the doom-and-gloom narrative. In a slightly different guise, social learning underscores the importance of the principle of utilisation-focused evaluation as a key priority (rather than its specific methodological incarnations, per se), which has been reasserted in recent evaluation discourse (for instance, as a central theme at the Canadian Evaluation Society’s 2021 annual conference).

In the next post of this series, we’ll share more practical insights about the design of MEL systems emerging from our work across different portfolios. These are insights that we believe are useful for different types of schools of music, as well as for colleagues who privilege any of the methods discussed above and/or support the idea of bricolaging them.

Learning from consortia and portfolios: From cacophony to symphony

Kim Sobat, Cacophony

Authors: Florencia Guerzovich, Alix Wadeson, and Tom Aston

As we discussed in a blog last week, practitioners in the Transparency, Participation and Accountability (TPA) sector face an important question: how can portfolio-level Monitoring, Evaluation and Learning (MEL) help us to learn about the collection of evidence of TPA’s impacts? And in doing so, how might this help us move beyond supposed “existential threats” to the sector?

This second post in a three-part series looks in greater detail at how the narrative in the sector became stuck in a doom loop, despite the evidence that our community is more effective and resilient than the narrative suggests. TPA practitioners are doing work that could get the community learning and help avoid tragic consequences for the funding of this work. We also start discussing some of the building blocks for this alternative narrative. Our answer starts with MEL – better, smartly targeted MEL, not necessarily more of it. And we’ll share some more hopeful, yet also more technical, proposals in the next blog, based on our collective experience working on MEL with different consortia and portfolio programs.

The minor fall and the major lift?

In the last blog, we drew attention to the Hewlett Foundation’s possible new directions. Some of these directions include prioritizing in-country work in a more focused manner (narrowing), with a hope to contribute to structural and systemic change (elevating ambition), but also to go broader than public service delivery. These multiple directions work within individual projects in specific countries. We can learn from these, but we also believe that part of the answers to the headline question lies in how we learn from portfolios.

In recent years, organizational partnerships, consortium initiatives and program approaches have very much been in fashion. Yet, relatively limited attention has been given to how we should carry out MEL beyond individual projects. In our experience, it is often an afterthought: the design of robust consortium- and portfolio-level MEL lags behind, and only becomes a priority when implementing agencies are pressured to account for learning and evidence to demonstrate how the whole is greater than the sum of its parts.

As Anne Buffardi and Sam Sharp recently pointed out in relation to adaptive programming, “most measurement in adaptive management approaches were developed for and from individual projects.” This suggests we need to think harder about learning beyond individual projects. Similarly, in multi-site, multi-organisation structures, as Kimberly Bowman et al. explain:

In an ideal scenario, multi-actor efforts can be imagined as a symphony: an elaborate instrumental composition in multiple movements, written for a large ensemble which mixes instruments from different families (an orchestra). However, with so many components and actors there is also a risk of creating a cacophony: a harsh, discordant mixture of sounds.

A similar point can be made about funding portfolios. Funders often explicitly hedge bets, investing in potentially competing approaches in the hope that at least one will pay off. Sometimes this can entail funding those actors building the foundations on the inside as well as funding those throwing stones at the building from outside. It also entails funding researchers to carry out large Randomized Control Trials (RCTs) alongside their most ardent critics (and critiques). Logically, both run the risk of promoting dialogues of the deaf, strategically and methodologically.

How do we play music in harmony rather than in discord? 

Bowman et al. use the analogy of a school of music to refer to the actors and the diverse roles they play in portfolios. The school might include funders and their local “fixers,” national and international non-governmental organizations (I-NGOs), national or international contractors, think tanks, and universities, among others.

Funders tend not to explicitly ask questions about whether they are creating harmony or discord. While theories of change are supposed to represent how different actors might collectively contribute to a particular goal, funders (and research institutions) often make themselves invisible. Most MEL efforts are delegated directly to partners and grantees who carry out the work. After all, this is where the action is, right? 

To be sure, there are exceptions to this pattern: the evaluators of Open Society Foundations’ former Fiscal Governance Program asked key informants, including one of us, about the Program’s role and contribution, and so do the GPSA’s Theory of Action, Results Framework and project evaluations (see e.g. here).

Yet, as Gill Westhorp put it in a recent evaluation of a social accountability project in Indonesia, donors or research institutions may not exercise overt power in an intervention, but they can still bring multiple types of power and “authorising” to the table. These include independence; access to higher levels of government; access to media; and the fact that it is a donor organisation. All of these factors can support or hinder the work of local partners. Therefore, funders are also fundamentally part of any theory of change.

Asking about the role of the school of music creates challenges: who will collect the information to provide answers? As we read the strategy evaluation of Hewlett’s past portfolio that informs its ongoing review, we appreciated that Transparency and Accountability Initiative (TAI) members have tried aligning reporting and reducing reporting burdens on partners. Other efforts have moved reporting tasks up the chain. The State Accountability and Voice (SAVI) program in Nigeria, for example, explained how the program itself handled most reporting requirements for implementing partners, so that they could get on with implementing. We’ve been part of such efforts ourselves.

An alternative is for funders to take this MEL task upon themselves. Yet, we know, first hand, that doing so creates a lot of work for funders and fund managers. When they don’t have adequate in-house funding, capacity and an authorising environment to answer the big questions themselves, they risk creating a learning gap. Failing to invest in and build this space internally (or at least with a MEL partner) runs the risk of creating a cacophony, adding additional ammunition to critiques around “mixed” results. This outcome is concerning as it further undermines the argument and potential for investing in TPA work. 

MEL staff are often asked for something that can be generalized across a whole portfolio, to aggregate data in potentially misleading ways, and to create laundry lists of indicators that cater to an almost infinite list of advocacy agendas. Beyond this, we need to find a compromise between many legitimate needs, including: (1) the bottom line: report what’s needed so people get paid; (2) utility: provide useful data for partners to help improve programming; and (3) coherence: organise data and lessons in a meaningful way so that we can show that efforts at different levels add up to something greater than the sum of the parts. The politics of MEL is in reconciling often discordant, and under-resourced, demands. As we mentioned in the previous blog, the prospect of fully generalizable results is a pernicious myth in the TPA sector. Instead, we need theories to travel across decision-making levels of any given portfolio.

It is, of course, challenging to find the sweet spot between the detailed information needs of implementers at the frontline, where each case is different, and the elevator pitch necessary for higher management and boards. Yet, in the middle, program officers and portfolio managers might benefit from insights about patterns in particular contexts and sectors, and an appreciation of what could be transferable across subsets of investments (i.e., comparable parts of a portfolio). 

Complexity is doubtless another reason why it’s difficult to make sense of what’s going on. As one of us explained previously, stakeholders often disagree about what is right as well as what will work. And it is often hard to pin down whether we’re measuring the same things or not (as we’ll discuss in the next blog). Yet, as Megan Colnar rightly reminds us, complexity is unfortunately used by parts of the field as an excuse to consider TPA un-evaluable, drawing the self-defeating conclusion that we therefore shouldn’t really try.

A dialogue with the data?

Research also has a role to play in this sensemaking; yet how MEL and research complement one another in the sector is not always so straightforward. Although the UK Department for International Development (DFID) conducted an evaluation of its empowerment and accountability portfolio in 2016, a number of questions remained unanswered. It appeared logical that some of these gaps should be filled by commissioning research.

Colin Anderson, Jonathan Fox and John Gaventa looked at part of this portfolio, focusing on six case studies in Mozambique, Myanmar, and Pakistan through the DFID/FCDO-funded Action for Empowerment and Accountability Research (A4EA) program. Yet, Anderson et al. lamented that “a diversity of approaches to monitoring and evaluation prevents much rigorous comparison on the basis of the available evidence” as well as “obstacles to clear-sighted evaluation of impact, and the limits on informing new programming decisions.” 

DFID’s portfolio evaluation offered illustrative evidence on the promise of vertical integration and concluded that there was a “need to move beyond tactical approaches to achieve success at scale.” Yet, Anderson et al. argued that from the available data they weren’t able to test the proposition that “vertically integrated approaches offer more purchase than less integrated alternatives.” We might critique case selection (were these the right countries, projects, time series?). We might even critique the testability or validity of the theory (will assumptions about vertical integration built into other TPA portfolios be tested?). But, either way, this experience demonstrates that research is, unfortunately, no substitute for good MEL.

Research and MEL should converge around a dialogue with the data, but discursive agendas sometimes get in the way. This has been accentuated when researchers and advisory groups (of which we have, on occasion, been members) fail to critically interrogate prescriptive and causal assumptions. Failing to reflect on these has, lamentably, created echo chambers with the mere semblance of agreement. Assumptions need to be surfaced, discussed, and revisited, if they don’t hold empirically.

How to inform new programming decisions through social learning

Portfolio MEL takes time and resources beyond the technical definition of indicators. As the Public Service Accountability Monitor (PSAM) and its partners found out, consortium, portfolio, and other collective MEL efforts require coordination and negotiation among stakeholders with different needs and capacities (related: this from Varja Lipovseck). It’s as much about relationships and trust, staff time and information management as it is about defining shared targets and metrics. Portfolio MEL isn’t quick; it’s generally about the medium and long term. Its systems are also very important sources of institutional memory. They are key when one wants to look back at 5-10 years of work and tell a collective story – especially considering the rate of staff rotation in the TPA sector. If donors really care about understanding progress, learning and weaving a different narrative than they did a decade ago, they must fund these collective efforts beyond supporting one or two major research projects and/or evaluations every few years.

The related challenge is that funders and fund managers can exercise a different type of social learning leadership, one that changes design choices and increases the use of MEL systems. Few have a better vantage point on the music school than funders (perhaps consultants with many hats can sometimes make a similar claim). Funders and fund managers are best placed to provide the resourcing and generate the incentives for portfolio stakeholders to see the collective value of MEL, beyond performance audits. Funders can help spot opportunities and offer spaces to bring different stakeholders together where they are facing common challenges in comparable contexts and asking similar questions.

This last point captures a methodological challenge associated with broader trends in the current strategy refresh season. Some TPA funders such as Hewlett are considering moving their attention from global advocacy – where minimum common denominators across a vast set of countries matter – to supporting in-country work in only a few countries. How might we learn from and beyond these countries and become more than the sum of our parts?

Not everything is fully comparable. Musicians in one country will have comparable genres and notation (i.e., challenges and questions) to those in some countries in the portfolio but not others. Or they may have more in common with countries that fall outside the portfolio of the few lucky countries that receive funding.

Comparison is often an instinctive part of our learning activities. Convene a meeting among activists in different countries who work with parent-teacher associations and they’ll have lessons to share for days, despite jetlag. Seat them with another crowd and they’ll probably talk past each other or go to sleep. So, it’s important to consider what are meaningful communities which can have these conversations and which contribute to the conversation rather than detract from it. We also need to consider what is guiding those definitions (personal histories, anecdotes, organizational dynamics, evidence, etc.). There is a lot of methodological guidance on the comparative method for studying politics, policy and power, including sub-nationally, that can help us reflect more systematically on identifying groupings in a school of music’s MEL systems.

Defining meaningful groupings for bounded comparisons among subsets of musicians in a school of music is really hard work, but it’s doable and pays off. Until recently, the Partnership to Engage, Learn and Reform (PERL) program in Nigeria (following SAVI) worked in seven locations across district, state, regional, and federal levels and across various thematic sectors. Although the program had a single theory of change overall and all locations worked to promote more accountable, effective and evidence-informed governments, the challenges varied significantly. What might be learned from Kaduna in local governance reform, with a dominant settlement under Governor Mallam Nasir El-Rufai, might not perfectly translate to the political economy of Kano state, with its short-termist competitive clientelist settlement. Successful reform efforts in the economic hub, Lagos, might not travel well to conflict-affected Borno. While it’s possible to learn something about accountability in Nigeria overall, comparisons in more similar regions, and within particular sectors (health, education, etc.), emerged as a more productive way to promote triple-loop learning in the program, and to shed light not simply on the desirable, but the possible.

VNG International’s Inclusive Decisions at Local Level (IDEAL) program has been working over five years in seven quite diverse states facing fragility or conflict at different stages. IDEAL has a nested theory of change (i.e. different levels of detail in the theory of change), but faced challenges in its MEL due to precise but relatively inaccurate quantitative indicators at the impact level, as the program is adaptive. The team was searching for ideas about how to weave it all together with useful evidence for learning and reporting. After the mid-term review, two of us supported IDEAL staff to work on this challenge, with some recent success in integrating and mapping changes at outcome level to the theory of change. The program still faces a challenge to aggregate and compare its work in TPA at the output level because this is being done retroactively after several years, with country-level datasets of varying quality and access. The context and complexity in these seven countries, facing quite diverse manifestations of fragility and conflict, mean that local governance of environmental conservation in rural Mali seems too disparate to compare with local economic development planning in urban Palestine. There are, however, ways to design a MEL system that tracks outcomes and outputs across seemingly different activities and contexts into ‘buckets’ for relevant and aggregatable measurement, enabling comparison and learning that identifies trends, outliers and gaps in the theory of change (we explore this and the concept of “functional equivalents” in our next blog).
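To make the ‘buckets’ idea slightly more concrete, here is a minimal, hypothetical sketch in Python. It is not the IDEAL program’s actual MEL system; the bucket names and outcome records are invented for illustration. The point is simply that once seemingly disparate outcomes are tagged to functionally equivalent categories, they can be aggregated and compared across very different contexts.

```python
# Hypothetical sketch: tagging diverse outcomes to shared "buckets"
# (functional equivalents) so they can be aggregated across contexts.
from collections import Counter

# Illustrative bucket names; a real MEL system would derive these from the
# portfolio's theory of change.
BUCKETS = {
    "responsiveness": "Public actor responds to citizen input",
    "civic_capacity": "Civil society capacity to monitor and engage",
    "practice_change": "Change in government practice or service delivery",
}

# Invented outcome records from two deliberately dissimilar contexts.
outcomes = [
    {"country": "Mali", "description": "Commune adjusts conservation by-law after village assembly", "bucket": "responsiveness"},
    {"country": "Palestine", "description": "Municipality opens local economic development plan for comment", "bucket": "responsiveness"},
    {"country": "Mali", "description": "Women's groups trained to track natural resource budgets", "bucket": "civic_capacity"},
    {"country": "Palestine", "description": "Joint committee revises planning procedure", "bucket": "practice_change"},
]

# Aggregate by bucket, and by country and bucket, to spot trends, outliers
# and gaps against the theory of change.
by_bucket = Counter(o["bucket"] for o in outcomes)
by_country = Counter((o["country"], o["bucket"]) for o in outcomes)

for key, label in BUCKETS.items():
    print(f"{label}: {by_bucket.get(key, 0)} outcomes")
for (country, key), n in sorted(by_country.items()):
    print(f"  {country} / {key}: {n}")
```

The hard part, as the paragraph above suggests, is not the aggregation itself but agreeing on buckets that are genuinely functionally equivalent across contexts.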

The COPSAM Community of Practice, convened by PSAM, includes dozens of organizations multiplying TPA work across the SADC countries. The work encompasses hundreds of personal realities – all different from each other – but a common approach that is appropriate in each case. A few years ago, COPSAM identified weaknesses in the MEL system: there was a gap between practice and learning at the organizational as well as community levels. Knowledge that could support practice is tacit, held by a number of colleagues on the ground rather than in whatever MEL system each organization had. One of their partners had an evaluation go wrong, partly due to ill-suited methods. The “bad news” tested people, organizations, and the community’s resilience. The good news is that the group did not give up. They embarked on a learning journey with one of us that included partners gradually and selectively, focusing first on four partners (Policy Forum in Tanzania, United Purpose in Mozambique, Zambia Governance Foundation and SAPST in Zimbabwe) that had some common features among themselves and with the broader group. They then expanded the conversation to others, explicitly discussing commonalities and differences between those four pioneers and the broader community.

Many colleagues are already doing better, smarter portfolio MEL, challenging conventional wisdom, practices and boundaries, though often disconnected from each other. In the next post, we turn to some practical (and technical) lessons from portfolio MEL to help counter the supposed “existential threat” to the sector, and hopefully begin to transform discordant noises into something more harmonious.

Tales of triumph and disaster in the transparency, participation and accountability sector

Frederic Edwin Church, Cotopaxi

Authors: Thomas Aston, Florencia Guerzovich and Alix Wadeson.

It’s strategy refresh time

The Biden administration is figuring out whether and how to walk the talk on anti-corruption, and the Open Society Foundations and the Hewlett Foundation’s Transparency, Participation and Accountability (TPA) Programme are doing strategy refreshes. The World Bank’s Global Partnership for Social Accountability (GPSA) is considering a Strategic Review. These are some of the biggest players in the sector. Each player has its own “niche” and approach to building a portfolio. In considering some possible new directions, the Hewlett Foundation has put out a consultation. Al Kags shared some thoughts on what to fund and offered a wish list earlier this week. Hewlett asked an important set of questions for all of us, which we have slightly amended:

  • How should we measure progress toward outcomes and in priority countries in a given portfolio?
  • How can we best contribute useful knowledge to the field through grantmaking, commissioning evaluations, and facilitating peer learning?
  • And what can a portfolio’s monitoring and evaluation system do to link the answer to both questions together?

To answer these questions, we must first acknowledge a wider issue — null results terrify us. Every time a new Randomised Control Trial (RCT) offers anything less than unambiguously positive results, we have a groundhog-day debate about whether the whole sector is a worthy investment or not.

Nathanial Heller captured this trepidation well for yet another study in Uganda that was about to publish its results.

A handful of initiatives have given the impression to donors that transparency and accountability efforts don’t work. One of these was the Making All Voices Count (MAVC) programme, which some donors (unfairly) called a “failure,” point blank, in 2017. Further, as one of us explains, two studies in 2019, the Transparency for Development (T4D) project and another from Pia Raffler, Dan Posner and Doug Parkerson, found null results, and this caused collective consternation in the sector.

The conversation seems stuck in a vicious feedback loop. So, to demonstrate success, many rely on idiosyncratic cases and lean very heavily on a handful of country contexts where many RCTs have been conducted, or narrow the focus of study to common tools (e.g. scorecards) and/or outcome measures (see, for instance, an effort to standardize indicators). Many others have sought refuge through “pivots” and “innovation” rather than having a candid conversation about mixed evidence and what we might do (or not) to escape the feedback loop. As ex-International Development Secretary Rory Stewart recently argued [talking about Afghanistan], “we have to stop [saying] ‘either it was a disaster or it was a triumph’.”

The myth of homogeneous and generalisable success

Despite this sage advice, one expert recently told the Hewlett Foundation that a “lack of evidence about the impact of TPA initiatives is now an existential threat to the field.” And one thought leader was said to have remarked that “the window of opportunity for social accountability will remain open only if we can surface evidence that social accountability is worthy of continued support.”

There are literally a dozen evidence reviews of the TPA sector which refute this claim (we have read them, alongside hundreds of evaluations and studies). Evidence is certainly mixed, but it’s hardly absent. Part of the fear expressed recently is about heterogeneity. This is a nightmare for anyone that seeks to use evaluations to find generalisability from interventions about complex TPA processes. Many impact evaluators have opted to reduce interventions to a single tool, omitting too many components of the work, seeking findings about the “average beneficiary” that are universally valid and hold in all contexts. In the TPA sector, variation in outcomes in different contexts and sectors is something to be expected, not feared. We regularly assert that “context matters,” and yet we forget this when it actually matters. As Howard White and Edoardo Masset from the Centre for Excellence for Development Impact and Learning (CEDIL) highlight, we should focus on transferability — findings that tell us what contextual factors condition (or not) the transfer of a finding from one setting to another.

On balance, if you read the evidence reviews in the sector, the message is generally positive. A Qualitative Comparative Analysis (QCA) of the UK’s former Department for International Development’s (DFID) Empowerment and Accountability portfolio found positive results across nearly all 50 projects reviewed in 2016 (prior to the sector’s apparent fall from grace). But this was largely ignored — perhaps because it wasn’t an RCT. Other groundbreaking reviews in the sector, using realist methods which present an array of outcomes and take context into account in particular sectors, were also ignored. Either there is collective amnesia or a selective reading of the evidence, or experts’ expectations of “worthiness” are rather too elevated.

As Peter Evans of the UK Foreign, Commonwealth and Development Office (FCDO) explains, evidence reviews have their flaws. We would argue that many of them have unwarranted methodological biases and some make grand arguments without much empirical evidence. Evans is also right that “no-one ever opens an evidence review and finds the perfect answer to their question.” But, when evidence reviews don’t quite cover it, that doesn’t mean that we should resign ourselves to the wisdom of a few researchers’ hot takes, loud voices in our echo chambers, or give undue credence to a handful of expensive impact evaluations.

The supposed “existential threat” is not primarily empirical, but semantic and discursive.

The question for us remains — how can portfolio-level M&E in the TPA sector build a more inspiring narrative to help make the case to continue investing in the collection of evidence of TPA’s impacts over the medium to long term?

In our second blog post, we start answering this question. We share insights from our work as M&E consultants working with different portfolios and connecting the dots across projects and portfolios.

CGE-SC will discuss open government, social participation and transparency in a live stream.

Next Monday (7 June), at 3 pm, the Controladoria Geral do Estado de Santa Catarina will broadcast a live stream entitled “Governo Aberto OGP Local Brasil” on YouTube.
The broadcast will cover open government and the processes under way in Brazilian local governments, featuring the municipal governments of São Paulo and Osasco and the government of the State of Santa Catarina.

Those involved include Udesc Esag’s Politeia and DAP, the Controladoria-Geral do Estado, #act4delivery and various partners from civil society, academia and state and municipal governments, together with the Open Government Partnership (OGP), which coordinates open government initiatives and learning across many countries and regions of the world.


At this moment of so many challenges, there are people, local governments and organizations that continue working to open up our governments, build collaboration and trust, prevent setbacks and improve public management, together with citizens.

To watch the live stream, click here.

Partnership between the state of SC and the Open Government Partnership to create a transparency management plan

https://cge.sc.gov.br/governo-aberto/

The state of Santa Catarina has signed a partnership with the international organization Open Government Partnership (OGP) to develop the state’s Open Government program, with the aim of making the state management plan more efficient and transparent. The deadline for creating the plan is 31 July, and for its implementation, 31 October 2022. At the end of the project, the organization will evaluate the performance of the state of Santa Catarina.

To implement the partnership, the following organizations were invited: the Federação das Indústrias de Santa Catarina, Rede de Controle, Conselho Regional de Contabilidade, Conselho Regional de Administração, Tribunal de Contas do Estado, Transparência Internacional, Open Knowledge Brasil, Observatório Social de Santa Catarina and the Politeia Research Group at Udesc Esag.

The working methodology will be based on four themes, to be addressed in thematic round tables: spaces for discussion and for proposing improvements to public policies. These themes are:

• Active Transparency;

• Public Procurement;

• User Participation and Evaluation of Public Services;

• Coordination of Open Government and Social Oversight in Municipalities.

This Wednesday (2 June), the first virtual meeting took place, with more than 60 representatives from civil society and other spheres of the public sector, to begin drafting the commitments of the 1st SC Open Government Action Plan.

Santa Catarina is the first Brazilian state to join the OGP. In Brazil, the federal government (2011) and the Municipality of São Paulo (2016) were the first to join. The application for membership was an initiative of the Controladoria-Geral do Estado (CGE) with the support of the Secretaria de Integridade e Governança (SIG). The application was also endorsed by the Observatório Social de Santa Catarina and the Politeia Research Group at Udesc Esag, which continue to participate in the project’s coordinating group in the State of Santa Catarina.

Learn more: The partnership was also covered on the NSC – SC website; see here.

Webinar – HOW TO MAKE MID-LEVEL THEORY MORE USEFUL FOR SOCIAL ACCOUNTABILITY THAT CONTRIBUTES TO BUILDING BACK BETTER?


On Tuesday, 1 June 2021, the Politeia Research Group will take part in a webinar during gLOCAL Evaluation Week, during which knowledge from various parts of the world is shared on the monitoring and evaluation of public programs, services and policies.

Florencia Guerzovich, a collaborating researcher with the Politeia Group, will take part in the discussion alongside other partners.

Florencia Guerzovich is an independent consultant who has worked for the World Bank, the Open Society Foundations and the Ford Foundation, among other organizations. She currently works as Senior Consultant on Monitoring, Evaluation, Research and Learning at the World Bank’s Global Partnership for Social Accountability.

Click here for more details and to register.

GPSA will address social accountability as an instrument for pandemic recovery at its Global Partners Forum.

GPSA Global Partners Forum – Social Accountability for a Strong COVID-19 Recovery

The seventh Global Partners Forum, organized by the GPSA, will take place from 10 to 13 May 2021. Sessions will start at 8:30 am and end at 11:30 am, Brasília time.

The central theme will be the use of social accountability for recovery from the pandemic, with the aim of highlighting the importance of vaccination and discussing efforts led by civil society that complement the work of the public sector.

On Tuesday (the 11th), UDESC will be involved in organizing the third panel, on the connection between social accountability and public sector interventions. Plan to follow it live at 9 am (BRT).

See the detailed program here

See the list of speakers here

Register here!