Data transparency and partnerships in the COVID-19 pandemic: examples, challenges, and lessons learned

By Ana Beatriz Rodrigues, Leonardo F. G. Lima, Maria Roquelene Souza and Maiara Silva*

Since the start of the COVID-19 pandemic, there has been concern about the availability of information on the spread of the virus and the disease, its impacts, and the action plans to combat it.

During this period, between late 2019 and mid-2021, mechanisms that promote transparency of information and of decision-making criteria gained importance as ways to improve the response to and management of this major crisis with some degree of reasonableness, control, and predictability, amid so many uncertainties and risks.

Transparency became an ally for research and for strengthening the planning and implementation of plans to combat the disease and its consequences, whether vaccination plans, isolation policies, or prevention methods.

Beyond the relevance of transparency and accountability in public management, mechanisms that seek to demonstrate the legitimacy of initiatives by third-sector organizations, universities, and public-private partnerships also gained traction. Even in a crisis, efforts can be seen across many segments of society to tackle major problems such as the pandemic and its consequences.

There is no shortage of examples of co-production and partnership amid the crisis. One is the "ISUS" app, created by the Escola de Saúde Pública do Ceará in partnership with the Ceará state government. The app allows frontline health workers to stay informed and exchange experiences with one another.

Beyond the app, to mitigate the shortage of medical equipment, a public-private partnership between the Escola de Saúde Pública do Ceará and private organizations enabled the creation of a less aggressive respirator, the Elmo: an assisted-breathing helmet developed in Ceará that is non-invasive and safer for both health professionals and patients.

In cases like these, it is common to use mechanisms that confer legitimacy on the initiative, in order to show stakeholders that the resources employed are being put to good use and that the technologies are grounded in sound scientific knowledge, which strengthens the link between transparency and accountability.

In the municipality of Florianópolis, the Covidômetro was created during the pandemic: a daily-updated tool presenting relevant data on the COVID-19 pandemic in the city. It informs citizens about many aspects, such as suspected cases, confirmed cases, recovered cases, ICU bed occupancy, number of deaths, vaccination, and trends in each indicator. It also categorizes four risk stages: Moderate Potential Risk; High Potential Risk; Severe Potential Risk; and Extremely Severe (Gravíssimo) Potential Risk. The last is the most critical, and the city remained under this alert for a long time. Each stage corresponds to prevention and containment measures, though these were not always clear or consistently used as a reference for action.

Another initiative to guide and foster transparency during the pandemic comes from the Public Prosecutor's Office at the Court of Accounts of Santa Catarina (MPC/SC), which, among other actions, drafted and published the Nota de Orientação Administrativa Circular MPC 001/2021, on administrative measures for transparency in the vaccination process. These measures should set deadlines and criteria, establish monitoring mechanisms, and identify "queue jumpers" who, exploiting any lack of control and transparency in vaccination data, end up being immunized irregularly.

The MPC/SC offers three guidelines on data transparency: 1) publish, updated daily, a nominal list of all people vaccinated on the previous day, identifying name, CPF (taxpayer ID), vaccination site, role performed, and vaccine used; 2) include on the municipality's website a dedicated link, updated weekly, with the main data needed to track municipal vaccination coverage, preferably in dashboard format; and 3) publicize, through the city hall website and other official channels, ways to report vaccination "queue jumpers", preferably via the Ombudsman's office (Ouvidoria).

The MPC/SC stresses, however, that these guidelines do not replace the obligation to enter data into the specific system provided by the Ministry of Health; their aim is to broaden public transparency in the state, not to substitute one mechanism for another.

What we can learn from all this is that, despite the risks, challenges, and uncertainties of a pandemic of this magnitude, and despite the spread of disorienting false information, many public managers, private, academic, and third-sector organizations, and regulatory and oversight bodies are working to promote transparency and accountability. They strive to deliver as much information as possible to the population, researchers, and public authorities, so that the actions taken during the pandemic are visible and solutions can be found for the many problems we face. Even if all this is not enough to cope with current challenges, it is worth acknowledging the mistakes, successes, and lessons learned, and continuing to work collaboratively so that transparency, dialogue, and accountability become continuous and embedded in the relations among citizens, governments, and their partners.

* Text written in August 2021 by public administration students Ana Beatriz Rodrigues, Leonardo F. G. Lima, Maria Roquelene Souza and Maiara Silva, for the Accountability Systems course at Udesc Esag, taught by Professor Paula Chies Schommer in the first semester of 2021.

References

FLACH, Felipe; MATOS, Marllon de; FARIA, Thiago de; KRETZER, Vitor. Uma análise sobre a transparência nos planos de vacinação: como estamos em Florianópolis. Politeia Coprodução, 2021. Available at: <https://politeiacoproducao.com.br/uma-analise-sobre-a-transparencia-nos-planos-de-vacinacao-como-estamos-em-florianopolis/>. Accessed 12 August 2021.

SCHOMMER, Paula Chies. Qual a importância da transparência pública no tempo de pandemia? Politeia Coprodução, 2020. Available at: <http://politeiacoproducao.com.br/qual-a-importancia-da-transparencia-publica-no-tempo-de-pandemia-o-caso-da-prefeitura-de-florianopolis-que-construiu-um-mecanismo-de-transparencia-mas-tomou-decisoes-questionaveis-durante-a-sua-ope/>. Accessed 24 August 2021.

OPEN KNOWLEDGE BRASIL. Available at: <https://ok.org.br/projetos/transparencia-covid-19/>. Accessed 12 August 2021.

COVIDÔMETRO FLORIPA. Available at: <https://covidometrofloripa.com.br/>. Accessed 12 August 2021.

ISUS. Available at: <https://i.sus.ce.gov.br/>. Accessed 12 August 2021.

ELMO. Available at: <https://sus.ce.gov.br/elmo/>. Accessed 12 August 2021.

The Associação dos Pacientes Renais de Santa Catarina and how accountability reporting can build credibility for civil society organizations

By Guilherme Zomer, Juventino Neto, Luã Costa and Nathan Vieira*

Many see accountability reporting (prestação de contas) as a bureaucratic formality, but it can be of great importance to civil society organizations and a key factor in an organization's legitimacy.

ABOUT THE ASSOCIAÇÃO DOS PACIENTES RENAIS DE SANTA CATARINA (APAR)

Founded on 8 January 1997, the Associação dos Pacientes Renais de Santa Catarina (APAR) brings together patients with chronic kidney disease in the state of Santa Catarina and seeks, through a range of partnerships, to improve the quality of life of patients undergoing dialysis and of transplant recipients.

The organization runs kidney-disease prevention campaigns, monitors and works to ensure quality treatment in hemodialysis clinics, supports the delivery of basic and high-complexity medicines provided by the Unified Health System (SUS), and offers social assistance services to patients and their families.

Among its projects, the kidney-disease prevention campaigns stand out; these are run frequently to raise public awareness and provide information on the topic. The organization works through partnerships with public bodies, such as the Florianópolis City Hall, and with private companies, as in the "troco solidário" (solidarity change) campaign carried out with supermarkets, which helps fund food purchases for the people APAR serves. It also partners with other civil society organizations across Brazil to strengthen the work those organizations carry out.

ON ACCOUNTABILITY REPORTING IN PARTNERSHIPS WITH THE STATE

Developed over several years and in force since 2014, the Regulatory Framework for Civil Society Organizations (Marco Regulatório das Organizações da Sociedade Civil, MROSC) is the main instrument guiding the formalization of the responsibilities and activities to be carried out in partnerships between governments and CSOs, and governing how these organizations render accounts to government bodies.

The framework, part of a broad policy agenda aimed at improving the legal and institutional environment for civil society organizations and their partnerships with the State, defines accountability reporting in Law 13,019 of July 2014 as a procedure in which the execution of the partnership is analyzed and evaluated for legality, legitimacy, economy, efficiency, and effectiveness.

Through accountability reporting, compliance with the object of the partnership and the achievement of the planned goals and results can be verified. It comprises two phases:

a) presentation of the accounts, the responsibility of the civil society organization;

b) analysis of and a conclusive opinion on the accounts, the responsibility of the public administration, without prejudice to the work of oversight bodies.

ACCOUNTABILITY REPORTING BEYOND LEGAL OBLIGATIONS

For many organizations, as with APAR, accountability reporting goes beyond a legal obligation: it is a way to legitimize the organization's operations, both with partners and with the general public. Reporting that is well done, transparent, and accessible not only meets legal requirements but also shows partners and the public that the organization is serious, credible, and trustworthy. In a country like Brazil, where there have been many corruption cases, organizations are sometimes seen as fronts or vehicles for money laundering, which is entirely mistaken, since such cases are the exception, not the rule. But given that the repercussions are always greater when these exceptions occur, producing good accountability reports becomes essential to give civil society organizations credibility.

There are also those who deliberately criticize CSOs simply to delegitimize them, as some politicians do, because CSOs are often critical of governments, exercise social control over them, or work on sensitive topics such as the environment and human rights. This is one more reason for CSOs to be proactive in communicating what they do and how they do it. In addition, there is a question of accountability and responsiveness (responding to substantive expectations about the organization's cause, what it sets out to do, and why it exists) that organizations should strive to meet; this has more to do with results than with processes.

A number of platforms help CSOs with transparency and accountability reporting, for example OSC Fácil, Bússola Social, and the platform developed by Sísamo. There are also institutions that support CSO management. One example is OSC Legal, an initiative to strengthen civil society organizations by helping with their management and their relations with the private sector and the public administration. OSC Legal pursues this by promoting exchanges of experience, sharing useful information, and producing and disseminating content on social management and law.

Another example is the Instituto Comunitário da Grande Florianópolis (ICOM), which supports CSO management and encourages citizen participation in community life and the third sector. ICOM's main areas of work are technical and financial support for CSOs; social investment in the community; and the production and dissemination of knowledge. It also runs activities focused on strengthening, training, consulting, networking, and donating resources to CSOs in the Greater Florianópolis region.

THE IMPORTANCE OF AWARDS AND RELATIONSHIPS FOR CIVIL SOCIETY ORGANIZATIONS

Because of their importance to society and their contribution to the public good, nonprofit organizations can apply for titles, qualifications, and certifications from the government, after meeting certain requirements. These titles and certifications, in turn, can bring tax benefits, public recognition, and other incentives to maintain or expand their activities.

Some titles and qualifications that nonprofits can apply for at the federal level are the Federal Public Utility Title (Título de Utilidade Pública Federal); the Certificate of Charitable Social Assistance Entity (CEBAS); and the title of Civil Society Organization of Public Interest (OSCIP).

There are also awards that highlight and certify organizations working in the public interest, recognizing their organizational maturity, transparency, or way of working. A notable example is the Selo Doar, created by the Instituto Doar, which recognizes professionalism and transparency in Brazilian non-governmental organizations, based on criteria such as funding strategy, communication, accountability reporting, and governance.

At the state level, the Santa Catarina Social Responsibility Certification and the Social Responsibility Trophy stand out; the latter reached its 21st edition in 2021. The certification recognizes the work carried out and reported in the social balance sheets of the selected organizations. The Legislative Assembly of the State of Santa Catarina (Alesc) runs the certification and award in partnership with a Joint Social Responsibility Certification Commission, made up of technical representatives from public bodies and civil society.

Beyond recognizing institutions that embrace social responsibility, these honors provide breathing room and sustainable support for maintaining the public-interest services that CSOs offer.

Another strategy is to form cooperative networks, a growing phenomenon in organizational life that has become a topic of interest for academics, business people, and managers across the public, private, and third sectors.

In APAR's case, its partnership with the Associação Renal Vida is worth mentioning: two Santa Catarina associations with shared interests that join forces to strengthen their cause through prevention campaigns and the promotion of their services.

From all this, we can conclude that investing time and care in an organization's accountability reporting is essential if reporting is to become much more than a formality or an obligation. It is an instrument of credibility and a vector for attracting partnerships and visibility for civil society organizations.

* Text written by public administration students Guilherme Zomer, Juventino Neto, Luã Costa and Nathan Vieira, for the Accountability Systems course at Udesc Esag, taught by Professor Paula Chies Schommer, in 2021.

REFERENCES

ICOM. Sobre o ICOM. Icom Floripa, 2021. Available at: <https://www.icomfloripa.org.br/o-icom/>. Accessed 20 August 2021.

OSC LEGAL. Página Inicial. Osc Legal, 2021. Available at: <https://osclegal.org.br/>. Accessed 20 August 2021.

INSTITUTO DOAR. Selo Doar, 2021. Available at: <https://www.institutodoar.org/selo-doar/criterios/>.

INSTITUTO DOAR. Quem somos, 2021. Available at: <https://www.institutodoar.org/selo-doar/criterios/>.

GOVERNO FEDERAL. Marco Regulatório das Organizações da Sociedade Civil, 2017. Available at: <http://www.mds.gov.br/assuntos/assistencia-social/entidade-de-assistencia-social/marco-regulatorio-das-organizacoes-da-sociedade-civil-2013-mrosc>.

ASSEMBLEIA LEGISLATIVA DO ESTADO DE SANTA CATARINA. Certificado de Responsabilidade Social de Santa Catarina e Troféu Responsabilidade Social – Destaque SC, 2021. Available at: <http://responsabilidadesocial.alesc.sc.gov.br/>.

Orchestrating a MEL system for portfolios and programs: what we’re testing now

Authors: Alix Wadeson, Florencia Guerzovich and Thomas Aston

Arkhip Kuindzhi · Dnepr in the morning, 1881

As we support different organizations, think about their MEL systems, and reflect on our own work, we’re finding that we can measure progress and aggregate results to conduct a symphony that is larger than the sum of its ‘instruments’ (continuing with our ‘school of music’ metaphor). In our last post of this blog series, we discussed the importance of engaging with programs and portfolios in place, while working towards a new framework by incorporating “layers” of learning from experimentation, both at project and portfolio levels. We also learned about similarities with the thinking of other colleagues, such as the UNDP’s Accelerator Labs and Innovation teams. Our understanding is slowly evolving, but we are already learning various lessons; we hope these can inform others also working on MEL or managing ‘schools of music’, and continue this dialogue with peers and funders.

We have organized this learning and current thinking into five key areas: (i) construct validity and comparisons; (ii) collective and meaningful theory building; (iii) realistic timelines for realizing and measuring impact; (iv) purposeful and appropriate data collection and aggregation; and (v) MEL utility for all.

1) Are we measuring what we think we are measuring? And how can we compare and learn across projects and contexts?

We recommend starting by defining standard concepts, and indicators with specific guidance on what is (and is not) important to document at the project and portfolio level. This decision is often associated with a theory of change (ToC). There is some recent debate about the language, but it’s essentially a hypothesis of how and why change is expected to happen (or happened). We should also consider the main questions that stakeholders have about this theory (or theories), rather than generate concepts or indicators for their own sake.

Two of us are developing and testing a Monitoring, Evaluation, Reporting and Learning (MERL) guide for GPSA grant partners and evaluation consultants. This process has helped us reflect on common challenges in establishing other TPA portfolio MEL systems. For us, this process includes unpacking the key indicators that could prove useful to help us monitor, evaluate, iterate, and narrate a general trajectory of change across different contexts.

Also baked into this process is a set of core assumptions that the ‘school of music’ identified with its partners over the years. This included compromises for prioritizing certain assumptions over others, across the portfolio (e.g., zeroing in on the non-financial contribution of the funder as part of the effort rather than only on the work of local groups; or focusing on the development of relationships for meaningful action as opposed to focusing on the design of tools).

A set of core concepts is also embedded here, for example, 'sustainability'. The GPSA stakeholders defined both old and new concepts together, taking into account emergent practice as well as research in the sector. It was important to be clear about these definitions to ensure that stakeholders understand what exactly the portfolio intends to measure and learn about collectively, before getting into the 'how'.

Another example is “capacity building”, quite a generic term, but also one that is central to the GPSA’s work (and also for many other TPA projects). Therefore, the GPSA observed its practice and developed its own framework to define types of necessary capacities for effective social accountability processes. It’s important we are clear about what these key concepts mean for a given portfolio or program.

Such definitions are important to support the potential transfer of key ideas to different project contexts more easily, and to ensure we are monitoring similar dynamics (i.e., construct validity). These are also important to help avoid situations where actors are (perhaps unknowingly) talking past each other when reflecting about the work — e.g., equating capacity development to the top-down transfer of expert knowledge as opposed to the development of capacities to learn by doing with and from others. USAID also just launched a document explaining what they mean by locally-led development. So, we aren’t alone in this endeavor.

Definitions can be a purposive tool for a living process that enables us to find compromises to produce harmony among the different components of a ‘school of music’ (see below). Their codification is just a step to support MEL that can be revisited over time, in light of emergent practice. In the TPA sector, this isn’t meant to prescribe or proscribe, but rather, to ensure we’re talking about the same things, so we can also work to measure and learn about them consistently. We find that there are no perfect definitions and concepts can evolve with learning and practice. Therefore we are striving for ‘good enough’, while also being careful about conceptual stretching.

A ToC can be a useful instrument to prioritize concepts and indicators. But a portfolio-level ToC is often written in a way that does not speak to the specificities of concrete projects (see below). This is why it is useful to "localize" the ToC and associated Results Framework (RF) indicators into project-level ones early on. Coherent nested ToCs, defined core concepts, a priority set of indicators, and a general approach to coding and scoring them can help us learn about key features of a portfolio/program.

It’s also equally critical to be explicit about what is not necessary to measure. We believe all stakeholders should avoid the urge to add too many indicators, and to add ‘extras’ with caution. The aim here is twofold:

a) focus attention on a manageable number of priority areas (theory, questions, indicators, learning activities) for the different stakeholders of a given ‘school of music’; and

b) avoid burdening projects with too many indicators that offer limited added value to your priority areas. The latter seems to be a pervasive problem in the TPA projects we know. The GPSA team has reviewed thousands of funding applications, and over the years one of us curated MEL jam sessions with grant partners. Developing fit-for-purpose TPA indicators that can provide the meaningful evidence we seek is rarely easy, and this difficulty can contribute to 'overdoing it' on the indicators. However, we suspect that if agencies really measured all the indicators identified in their proposals, sometimes over 70, they would not have time and money left to do the actual work! Spreading resources this thin can also dilute the quality of indicator measurement across the board, particularly for small teams.

This may sound like cause for concern: a risk of putting diverse grants into a "MEL straitjacket," promoting check-box isomorphic mimicry, or promoting undue standardization. This is not our intent. So, to allow for diversity, one option is to use "functional equivalents," an approach used in comparative social science and law as well as in the TPA field (e.g., in the OECD Anti-Bribery Convention and Global Integrity's legendary reports and indicators). In practical terms, this is about determining the function of key aspects of TPA projects, rather than focusing on the form (i.e., name or label).

For example, many TPA portfolios seek to bring actors from civil society, public sector, and citizens to engage collaboratively on joint problem solving to address specific service delivery or policy failure problems. These processes can take many forms with different labels — from school-based management bodies to community health committees to higher level engagement structures between policy makers and CSO coalitions. They can be formal or informal, rigid or loose.

What matters is the function these processes play, and that the appropriate group of stakeholders is engaged in them to effectively address the identified problems. If these processes are critical to the joint purpose of a 'school of music', then each member should also ideally track an indicator focused on the activities and engagement of multi-stakeholder compacts, platforms or interfaces. However, it should be up to each member to determine what that looks like and how that works in practice (i.e., the functional equivalent). Our preference is a balance that promotes localization and appropriate contextual fit but does not simply propose that a thousand flowers (and indicators) bloom wild; this can lead to cacophony and may well be counterproductive to collective learning over the medium and long-term.
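To make the idea of functional equivalents concrete, here is a minimal sketch in Python. The project names, structure labels, and indicator name are all hypothetical; the point is only to show how differently labeled local structures can be coded against one shared functional indicator so the portfolio can compare them:

```python
# Sketch: mapping differently labeled project structures onto one shared
# "functional equivalent" indicator. All names and values are illustrative.

# Each project reports its own locally named multi-stakeholder structure.
project_reports = [
    {"project": "A", "structure": "school-based management body", "meetings_held": 8},
    {"project": "B", "structure": "community health committee", "meetings_held": 5},
    {"project": "C", "structure": "CSO-policymaker engagement platform", "meetings_held": 3},
]

# The portfolio tracks the *function* (joint problem solving), not the form.
FUNCTIONAL_INDICATOR = "multi_stakeholder_engagement"

def to_portfolio_record(report):
    """Translate a project-level record into the shared functional indicator."""
    return {
        "project": report["project"],
        "indicator": FUNCTIONAL_INDICATOR,
        "local_form": report["structure"],  # the local label is kept for context
        "value": report["meetings_held"],
    }

portfolio_data = [to_portfolio_record(r) for r in project_reports]

# The portfolio can now compare engagement across projects despite the
# different local forms the structures take.
total = sum(rec["value"] for rec in portfolio_data)
print(f"{FUNCTIONAL_INDICATOR}: {total} engagements across {len(portfolio_data)} projects")
```

Note that the local form is preserved alongside the shared indicator, which is the balance described above: comparison at portfolio level without erasing contextual variety.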

We suspect that the use of functional equivalent indicators may enable better comparison of processes with similar aims (they are part of the same overall program/portfolio) but that are not just copycats (i.e., they represent locally-led, contextually-tailored versions of the collective ‘music’). We hope this will help us to learn more about what works and does not, under which conditions. For example, how and why different contextual factors can sometimes lead to the same results or alternatively, how seemingly similar situations produce different results.

Over time, you can build a useful map of the portfolio, which can help to identify opportunities for structured comparisons. Ad-hoc case selection strategies, so common in TPA evaluations, can work well for communications, but less so for transparent, adaptable and accountable ‘schools of music’. They also get in the way of social learning, in the sense of grappling together with what we know and the uncertainty of the work.

Organizations should be prepared for course correction on the choice of core concepts and indicators until they reach the ‘Goldilocks’ balance.

We recognize that there’s a risk of too much standardization, as practitioners know well. However, we have seen in some TPA portfolios and programs that not having enough relevant data in many cases or datasets that cannot be meaningfully compared. It may take time to get the mix of concepts and indicators ‘right’, but it’s worth the effort in our view.

From a MEL practitioner perspective, the focus should be on balance and compromise. Unrealistic zero-sum debates across extremes do not help us to move forward nor help us to 'learn by doing'. As Kathy Bain picked up from our previous posts, "If we cannot support the scale up and learning from the many disparate but rather small scale success stories we all know about, we are falling short. Candid discussions and more purposeful experimentation on how best to do this, while learning from each other, is urgently needed".

2) Build theory collectively, yet with boundaries

Nested mid-level theories of change (with boundaries) can help provide focus and build political compromises. We have shared some of our experiences on the benefits of mid-level theory for field and portfolio MEL. Mapping assumptions can help prioritize change pathways within a portfolio. Being explicit can help us interrogate the validity of these assumptions, as well as recognize other pathways that co-exist beyond our portfolio (i.e., other genres of music that exist in other schools of music and may be in harmony or discord with our work).

In this way, when we talk theory, we are not thinking about only lobbying for our preferred musical genre as universal, but the possible benefits of alternative paths and what their tradeoffs might be given organizational and contextual circumstances. When portfolio ToCs are only made with strong normative assumptions for advocacy, fundraising or other objectives, they may inadvertently undermine the quality and effectiveness of MEL, reinforcing our discursive “existential threat.”

We also advise explicitly asking about the funder or fund manager's contribution and comparative advantage: even when 'schools of music' increasingly let 'local musicians' lead, there is much to learn from their common thread. We need to learn whether they add value to the symphony, or whether transaction costs, organizational dynamics, and/or other factors turn lofty goals into cacophony. If we inquire into, rather than assume, local partner coordination (i.e., an orchestra) and a funder or fund manager's role in it, we can learn how to better support the work.

For example, the Fund for Transparent Slovakia’s (FpTS) evaluation found that incentivizing joint projects among NGO partners did not pay off in their context, but supplementing grants with informal dialogues, including but not limited to partners, added value to the system (see p.61 here for a glimpse). However, it is important to note that much of the FpTS administrators’ staff time, responsible for the fund’s value-add beyond direct grants, was not covered by its administrative costs.

Clarifying a funder’s role in a given sector or context, where there are other actors trying different approaches, often also requires asking: what is our unique contribution to funding change within a system? Also, what ‘musicians’ and ‘musical genres’ are well suited to support and which ones are a better fit for another’s niche or specialty? In the case of the FpTS that means grappling with the opportunities and constraints of working with funders from the local private sector. For the GPSA, the institutional home at the World Bank cannot be overlooked. For many others, those opportunities and constraints will be shaped by the link with a government’s foreign policy, a founder, management and/or a board member, whose influence practitioners working at portfolio level know well.

3) Identify an appropriate time horizon for impact (and be realistic!)

Target conscientiously: Looping back to our first post reflecting on the feedback that the Hewlett Foundation received on its strategy: telling a narrative of progress is about showing results that stretch us collectively, without 'throwing the baby out with the bathwater'. We recognize that TPA work is often messy and takes patient investment, but the process can lead to success if funders and other stakeholders keep at it together, as Louise Cord, the Global Director for Social Sustainability and Inclusion at the World Bank, put it.

For example, in the short- and medium term, you can set targets for the journey that are doable but not easy to achieve (e.g., other actors’ uptake and adaptation of interventions’ lessons — i.e., embeddedness) rather than expect unrealistic ones (e.g. wholesale scale-up through the adoption and implementation of an intervention exactly as one designed it). This way we may avoid feeding the discursive existential crisis of the next decade.

Connect MEL across strategy cycles: We often say that TPA work is the story of a marathon, rather than a sprint. So, we can see it as a type of relay race between the conductors in a school of music. Strategy cycles start and end, often informed by path dependence and, hopefully, learning from predecessors. The challenge is that we often forget to talk about the baton passed between the conductors in this relay race.

Evidence suggests that scaling up innovations takes a decade, and that translating policy change into implementation tends to take at least five years. So it’s not realistic to expect high-level impact for communities within only a couple of years, as is often expected. For this reason, we should measure impact and progress over 5-, 10-, and even 20-year periods. For the medium term, we could use more in-depth reflection on iterations across those cycles — did the interpretations of the lessons from the last funding cycle that inform our current actions hold up over time, or not? For example, the Partnership to Engage, Reform and Learn (PERL) program is giving a series of webinars in October 2021 to share lessons from 20 years of different UK governance programs in Nigeria.

Similarly, a look at the World Bank’s approach to social accountability in the Dominican Republic since the 1990s finds that local and global contextual shifts, and TPA and sectoral lessons, all informed changes in the Bank’s approach in country and elsewhere. All too often, those long-term histories are reserved for a select few: those doing the reflections, enjoying the benefit of inter-generational reflection or learning from their predecessors, and/or shifting strategies from one approach to the next. In other words, we could use more comparisons to tell the story of these relays and the learning generated across strategy cycles over time.

4) Gather data purposefully and aggregate appropriately

Build filters before gathering data: The challenges of monitoring a portfolio’s work relate to its scope – the sheer volume of information and transaction costs associated with working with so many people and actors. The challenge is not so much around generating information as organizing it, constructing filters, and developing the systems to apply them so that the right information and indicators are available and consistently applied across project grants. As Clay Shirky (in Juskalian, 2008) asserts, “Without clear guidance, long qualitative narratives may be so variable that analysis, particularly comparative analyses, becomes extremely difficult”.

Aggregate data appropriately: An additional challenge with the portfolio structure may be demands for inappropriate aggregation: combining dissimilar investments, projects or outcomes to present several diverse projects in a simplified narrative. This is a challenge for TPA professionals, as many of us have faced requests to aggregate index scores (e.g., civic space) and results, or perhaps to find ways to add contributions to women’s reproductive health and sanitation, without considering whether we are adding apples and oranges and/or ignoring (negative) interaction effects. An over-simplified approach risks glossing over important differences and nuances that may be valuable for learning, while ignoring contextual factors and over-generalizing.

The use of a set of functionally equivalent indicators with the same units of measurement, based on common conceptual definitions within a portfolio of grants, can ease aggregation processes downstream. That is, the approach may help us to transfer meaningful, targeted information from implementers to managers, and subsequently use this knowledge to inform higher-level decision making and governance structures (e.g., boards, steering committees, funders, and public officials funding specific civil society work).
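To make the point concrete, here is a minimal, hypothetical sketch of how a shared indicator definition and unit can gate aggregation across grants. All grant names, indicators, and values below are invented for illustration; the idea is simply that reports which don’t match the common definition get flagged for qualitative review rather than silently added to the total.

```python
# Hypothetical sketch: aggregate a portfolio indicator only across grant
# reports that share the same conceptual definition AND unit of measurement.
# All grant names, indicators, and values are invented for illustration.

grant_reports = [
    {"grant": "health-mn", "indicator": "citizens_engaged", "unit": "people", "value": 1200},
    {"grant": "edu-gh", "indicator": "citizens_engaged", "unit": "people", "value": 450},
    {"grant": "pfm-do", "indicator": "citizens_engaged", "unit": "households", "value": 300},
]

def aggregate(reports, indicator, unit):
    """Sum only reports matching the common definition and unit;
    return the rest as flagged grants for qualitative review."""
    matching = [r for r in reports if r["indicator"] == indicator and r["unit"] == unit]
    flagged = [r["grant"] for r in reports if r not in matching]
    total = sum(r["value"] for r in matching)
    return total, flagged

total, flagged = aggregate(grant_reports, "citizens_engaged", "people")
print(total, flagged)  # 1650 ['pfm-do']
```

The design choice here mirrors the text: the filter is built before aggregation happens, so apples-to-oranges additions are surfaced rather than hidden in a headline number.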

5) Strive for utility to all: implementers, funders and the field as a whole

Construct a MEL system that works for implementation: There is a craft to portfolio management, including its MEL. It’s hard work and requires financial and management support. Failing to purposefully invest in sufficiently resourced MEL systems can backfire: it feeds, and potentially deepens, the field’s supposed “existential crisis”. We know (and experience ourselves) that organizations with very limited human capacity, financial resources, short time horizons and political space may find some of these examples unhelpful. We believe it’s important to design MEL systems that can actually be implemented, which also means making well-informed trade-offs and compromises.

In our MEL choices, we prioritize questions and concrete contextual features. Prioritization, given scarce resources, entails trade-offs, and short-, medium- and long-term risks which we can either manage or sweep under the rug. We learned that we should prioritize collaboratively and do internal advocacy to open space to create, course-correct and sustain compromise solutions over time that travel across the decision-making levels of the portfolio. This is often difficult because we are working with organizational restrictions, limited resources, technical criteria and, often, shifting politics within organizations and the systems in which they work.

In the face of these challenges, we should be transparent and manage risk, rather than set ourselves up for failure with unrealistic expectations. With an eye towards the portfolio-level narrative, funders and intermediary staff can help to frame, and/or co-produce with partners, strategic questions that are broadly relevant to most stakeholders. They can also help us to identify lessons that may be applicable, and could be tracked, across multiple interventions to tell a collective story.

Prioritize focus areas for monitoring and learning, with key evaluation questions to apply across the portfolio (i.e., create filters and frames): As Al Kags argues, “The question of fostering active citizenship and indeed responsive government, is a complex one with a kaleidoscope of nuances. It is more like a set of puzzles, each of which have layers of contexts.”

For example, how do we tackle common challenges such as increasing the likelihood of scale up of TPA work? These are often areas of interest for many actors across the portfolio — from project managers to funders. The documentation of these common areas isn’t detailed enough for implementers, but it can help them identify counterparts within a ‘school of music’ from whom they can learn and with whom they might collaborate and go deeper into common areas of interest. Then one can assess whether setting a top-down common approach, method, or broader parameters to answer the question has more advantages or disadvantages than letting each team define the methodology on its own.

Coda

For now, we’re pleased with the thoughtful dialogue generated so far on rethinking the TPA sector’s narratives of success and failure, and the role that funders and other portfolio managers might play as they design and implement their own MEL systems, policies and practices. Others are engaging in this discussion, offering their own perspectives. We recently conducted a webinar on building mid-level theory, and participants showed a surprising level of appetite for a candid conversation about how to build this in practice for the TPA field, for organizations of different sizes and types. In a recent webinar on the future of anticorruption work convened by USAID, Achraf Aouadi (I-Watch Tunisia) put the issues (from his perspective) on the table, as did Ambassador Power (from hers). So, this isn’t just the view of three consultants. Rather than dismiss TPA portfolios as too ‘hard to measure’, let’s rise to the challenge and learn from each other. We encourage others to join the discussion and let us know how you are managing these issues (and others) in your TPA programs.

We pose a polite challenge to funders out there: in addition to investing in your improved portfolio MEL systems in this new strategy cycle, you can also help the field by supporting relevant TPA actors to “play their MEL music together”, to facilitate collective strategic thinking and exchanging of tricks of the trade.

Towards MEL symphony in the transparency, participation, and accountability sector

Authors: Tom Aston, Florencia Guerzovich and Alix Wadeson

Paul Klee, Ancient Harmony

As we illustrated in the previous blog in this series, funders, fund manager organisations and implementing organisations in the Transparency, Participation and Accountability (TPA) sector are wrestling with the challenge to move beyond piecemeal project-level MEL to evidencing more cohesive programme and portfolio-level results which are greater than the sum of their parts. This is the holy grail, as Sam Waldock of the UK Foreign, Commonwealth and Development Office (FCDO) put it in the Twitter discussion.

Some grounds for optimism

While we agree that portfolio MEL is challenging, it’s not impossible. Some efforts in the wider international development sector give us reasons to be optimistic when there is political commitment at the right level. As CARE International’s Ximena Echeverria and Jay Goulden explain, it’s possible to demonstrate contributions to the Sustainable Development Goals (SDGs) across an organisation with over 1,000 projects per year. With standard indicators, capable staff, and serious effort, you can assess progress at a considerable scale. But there are also ways to break such enormous portfolios down into more manageable chunks.

One of us conducted a review of CARE’s advocacy and influencing across 31 initiatives (sampled from 208 initiatives overall) that were relevant to just one of CARE’s 25 global indicators up to 2020. This was based on a Reporting Tool adapted from Outcome Harvesting (a method which is increasingly popular in the TPA sector). CARE has continued to assess this in subsequent years, looking at 89 cases last year because it saw the value in the exercise. As advocacy and influencing constitutes roughly half of their impact globally, it’s obviously worth evaluating and assessing whether trends of what worked changed over time. Oxfam was also able to do something similar for its advocacy and influencing work, building off 24 effectiveness reviews (which relied on a Process Tracing Protocol; if you’ve read some of our other blogs, you’ll know we’re fans of this method).

Both reviews, like the Department for International Development’s (DFID) empowerment and accountability portfolio review of 50 projects, were Qualitative Comparative Analyses (QCA), or fuzzy-set QCA. We believe that QCA is a helpful approach for finding potentially necessary and/or sufficient conditions, but such conditions are not always forthcoming (as DFID’s review showed), and the method relies heavily on the quality of within-case evidence. We’re often searching for, but not quite finding, necessary and/or sufficient conditions. For this reason, there are limits to what QCA can do, and without adequate theory it can sometimes be premature.
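For readers less familiar with fuzzy-set QCA, the core consistency measures are simple set-theoretic ratios (following Ragin’s standard formulas). A minimal sketch; the membership scores below are invented for five hypothetical cases and are not drawn from the CARE, Oxfam, or DFID reviews discussed above:

```python
# Illustrative fuzzy-set QCA consistency scores (Ragin's standard formulas).
# Membership scores are invented for five hypothetical cases.

def consistency_sufficiency(x, y):
    """How consistently is condition X sufficient for outcome Y?
    sum(min(x_i, y_i)) / sum(x_i)"""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def consistency_necessity(x, y):
    """How consistently is condition X necessary for outcome Y?
    sum(min(x_i, y_i)) / sum(y_i)"""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)

# Fuzzy membership of five cases in condition X and outcome Y
# (0 = fully out of the set, 1 = fully in)
x = [0.9, 0.8, 0.6, 0.3, 0.1]
y = [1.0, 0.7, 0.8, 0.2, 0.4]

print(round(consistency_sufficiency(x, y), 2))  # 0.93
print(round(consistency_necessity(x, y), 2))    # 0.81
```

A common convention (our gloss, not a claim about the reviews above) is to treat consistency around 0.8 or higher as a meaningful relationship, which underlines the point in the text: such scores only mean something when good within-case evidence sits behind the membership calibrations.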

Realist syntheses (or realist-informed syntheses) also provide a helpful option to assess what worked, for whom, where and how. The “for whom” question should be of particular interest for the Hewlett Foundation’s new strategy refresh, as well as to all colleagues who strive to design portfolio MEL systems that are useful to their different stakeholders and decision-making needs. One great benefit of the approach is that it assumes contingent pathways to change (i.e., diverse mechanisms), emphasizing that context is a fundamental part of that change rather than something to be explained away. A more contingent realist perspective thus builds in a limited range of application for interventions (x will work under y conditions), rather than assuming the same tool, method, or strategy will work the same way everywhere (the fatally flawed assumption that brought the sector to crisis point).

The Movement for Community-led Development (MCLD) faced a similar “existential threat” to the TPA sector’s, after another hot debate about the mixed results of Community Driven Development (CDD) in evaluations. Colleagues from 70 INGOs and hundreds of local CSOs around the world started a collaborative research effort, including a rapid realist review of 56 programmes, to understand the principles, processes, and impact of their work. They counted on leadership and some external funding, but often relied on voluntary time and goodwill. The process and the results include several relevant overlaps and insights for monitoring and evaluating portfolios of TPA work. The full report is due to be published on 6 October 2021.

As observers and participants in the MCLD process, we would like to underscore a factor that distinguishes this effort from many others: don’t ignore the “L” (learning). The group held regular calls to engage in a kind of “social learning,” resembling the learning endeavour described by the Wenger-Trayners as an ongoing process that creates a space for actors to mutually exchange and negotiate at the edge of their knowledge, reflecting on what is known from practice as well as engaging with issues of uncertainty.

In the TPA space, where practice often entails advocacy grounded in expertise and/or values, including to set research and learning agendas, creating this learning space requires, at a minimum, changing the mindset and challenging normative assumptions and apparent agreements that constrain social learning. For another call to put “L” front and centre, see Alan Hudson’s feedback to the Hewlett Foundation’s strategy refresh.

Therefore, with sufficient political commitment and flexibility, the appropriate methods and processes are out there to assess and support social learning in as few as 5 to as many as 1,000 cases or initiatives. But there also seems to be a more meaningful sweet spot somewhere between assessing a group of 5 and 50 initiatives.

How can portfolio evaluation contribute useful knowledge to the field? The GPSA’s journey

We’ll discuss an example of a funder the three of us know well (but do not speak for), one that has been unusually open about its complex MEL journey. The Global Partnership for Social Accountability (GPSA) started grant-making in 2013; by 2014 it had already designed 20 flexible grants tackling a broad range of development challenges (health, education, public financial management, etc.) in diverse contexts — from Mongolia and Tajikistan to Ghana and the Dominican Republic. The common thread was that, a priori, all of these grants were judged to be cases that would benefit from politically smart collaboration between public sector and civil society actors. The prioritisation of a problem, and the precise approach and tools to tackle it, were localised.

In its infancy, the GPSA lacked a comprehensive MEL system. The programme prioritized grant partners’ ability to determine appropriate approaches to MEL on their own, and aggregation of results was less pressing. “Learning by doing” is something of a mantra at the GPSA, whereby both the GPSA and its grant partners are expected to embrace this approach. It encourages (uncommon levels of) openness about failures and allows ample room for iteration of collaborative social accountability processes undertaken by diverse grant partners across over 30 countries (to date). To pick up Bowman et al.’s typology, by 2016 MEL was a full-blown “jam session” among grant partners that resembled other decentralised systems (see the typology below).

Bowman, et al. 2016

One practical challenge was that the GPSA had questions about the aggregation of results. The onus, as affirmed by former World Bank President Jim Kim, was to show that this “miracle” has real outcomes. Delivering on this challenge could go different ways. As elsewhere, finding and implementing a compromise among decision-makers’ needs across the system is easier said than done, or resourced.

Over the years, the GPSA explored the implications of adaptive learning as a fundamental portfolio approach while responding to this demand; it intentionally experimented over time with gradual design, implementation, and course correction. A careful dance, all while fitting within World Bank rules, processes, and shifting approaches and norms.

A key part of this journey was about transforming the free-flow ‘jam session’ into a more harmonious school of music. The latter is an entity that includes a wide range of activities (instruments) implemented by multiple, diverse grant partners (music school members) that share a loosely defined common theory of action for working in diverse contexts to tackle a range of development problems (different musical genres and sheet music). The GPSA, along with funding partners and World Bank management, sector and country staff, makes initial choices in this regard — which is why strategy refresh season is important. They also provide financial and non-financial support, facilitate information exchange, and compile activity reports for higher-ups. Yet the portfolio’s day-to-day work is often decentralised and line-managed by people in other organisations (grant partners). The GPSA, like other funders or fund managers, often has limited authority to direct the behavior of partners; but its role is not neutral, and thus should not be omitted from the story.

The GPSA is building upon layers of ‘jam session’ lessons to shape a school of music-type portfolio that can start to play more harmoniously under the direction of a MEL system developed over time. This has been a multi-year process in which “L”, in the sense described above, was front and centre in the GPSA’s criteria for funding applications, annual reports, and the revised theory of action and indicators in the results framework. It is also more explicitly emphasised in the GPSA’s annual grant partner forums: GPSA alumni participate in these events, which enables retrospective insights and helps maintain ongoing engagement and support across civil society over the longer term, beyond the duration of a grant. The GPSA also hosts other virtual events and bilateral conversations.

Connecting the dots across these MEL tools, in 2016 the GPSA began to articulate key concepts ‘for the practice, from the practice,’ such as collaborative social accountability. The GPSA needed a new common language to talk about and understand the work: what looks like yet another label was an effort to avoid cacophony. In 2018, the GPSA agreed to “test” key evaluation questions of interest to different stakeholders with a small number of evaluations (Expert Grup took the plunge first). The GPSA then learned that the priority of these questions resonated with grant partners as well. In 2019, grant partners watched presentations of four evaluations and came back to one of us saying: “now I understand what you mean that an evaluation can be useful, I want one like that!” Part of the trick here is that those select evaluation questions were also useful to management and a broader group of Global Partners.

Social learning within the school of music informed the next iteration of MEL tools, including refining the evaluation questions and others that can help us better grasp the elusive question of ‘what does sustainability and scale of social accountability look like in practice?’ All this MEL work and social learning with partners about practice precedes the current GPSA Manager Jeff Thindwa’s call for the field to take stock, match narratives to the work, and write the next chapter for social accountability. We agree that this kind of social learning is key to exploring TPA theories of action and change, developing tools to assess them, and, hopefully, challenging the doom-and-gloom narrative. In a slightly different guise, social learning underscores that the principle of utilisation-focused evaluation should be a key priority (rather than its specific methodological incarnations, per se), a point reasserted in recent evaluation discourse (for instance, as a central theme at the Canadian Evaluation Society’s 2021 annual conference).

In the next post of this series, we’ll share more practical insights about the design of MEL systems emerging from our work across different portfolios. These are insights that we believe are useful for different types of schools of music, as well as for colleagues who privilege any of the methods discussed above and/or support the idea of bricolaging them.

Learning from consortia and portfolios: From cacophony to symphony

Kim Sobat, Cacophony

Authors: Florencia Guerzovich, Alix Wadeson, and Tom Aston

As we discussed in a blog last week, practitioners in the Transparency, Participation and Accountability (TPA) sector face an important question: how can portfolio-level Monitoring, Evaluation and Learning (MEL) help us to learn from the collection of evidence of TPA’s impacts? And in doing so, how might this help us move beyond supposed “existential threats” to the sector?

This second post in a three-part series looks in greater detail at how the narrative in the sector became stuck in a doom loop, despite evidence that our community is more effective and resilient than the narrative suggests. TPA practitioners are doing work that could get the community learning, avoiding tragic consequences for the funding of this work. We also start discussing some of the building blocks for this alternative narrative. Our answer starts with MEL: better, smartly targeted MEL, not necessarily more of it. And we’ll share some more hopeful, yet also more technical, proposals in the next blog, based on our collective experience working on MEL with different consortia and portfolio programs.

The minor fall and the major lift?

In the last blog, we drew attention to the Hewlett Foundation’s possible new directions. These include prioritizing in-country work in a more focused manner (narrowing), with a hope of contributing to structural and systemic change (elevating ambition), but also going broader than public service delivery. These multiple directions play out within individual projects in specific countries. We can learn from these, but we also believe that part of the answer to the headline question lies in how we learn from portfolios.

In recent years, organizational partnerships, consortium initiatives and program approaches have very much been in fashion. Yet relatively limited attention has been given to how we should carry out MEL beyond individual projects. In our experience, it often seems like an after-thought: the design of robust consortium- and portfolio-level MEL lags behind, becoming a priority only when implementing agencies are pressured to account for learning and evidence that demonstrates how the whole is greater than the sum of its parts.

As Anne Buffardi and Sam Sharp recently pointed out in relation to adaptive programming, “most measurement in adaptive management approaches were developed for and from individual projects.” This suggests we need to think harder about learning beyond individual projects. Similarly, in multi-site, multi-organisation structures, as Kimberly Bowman et al. explain:

In an ideal scenario, multi-actor efforts can be imagined as a symphony: an elaborate instrumental composition in multiple movements, written for a large ensemble which mixes instruments from different families (an orchestra). However, with so many components and actors there is also a risk of creating a cacophony: a harsh, discordant mixture of sounds.

A similar point can be made about funding portfolios. Funders often explicitly hedge their bets, investing in potentially competing approaches in the hope that at least one will pay off. Sometimes this can entail funding those actors building the foundations on the inside as well as funding those throwing stones at the building from outside. It can also entail funding researchers to carry out large Randomized Control Trials (RCTs) alongside their most ardent critics (and critiques). Logically, both run the risk of promoting dialogues of the deaf, strategically and methodologically.

How do we play music in harmony rather than in discord? 

Bowman et al. use the analogy of a school of music to refer to the actors and the diverse roles they play in portfolios. The school might include funders and their local “fixers,” national and international non-governmental organizations (NGOs and INGOs), national or international contractors, think tanks, and universities, among others.

Funders tend not to explicitly ask questions about whether they are creating harmony or discord. While theories of change are supposed to represent how different actors might collectively contribute to a particular goal, funders (and research institutions) often make themselves invisible. Most MEL efforts are delegated directly to partners and grantees who carry out the work. After all, this is where the action is, right? 

To be sure, there are exceptions to this pattern: the evaluators of Open Society Foundations’ former Fiscal Governance Program asked key informants, including one of us, about the Program’s role and contribution and so does the GPSA’s Theory of Action, Results Framework and project evaluations (see e.g. here).

Yet, as Gill Westhorp put it in a recent evaluation of a social accountability project in Indonesia, donors or research institutions may not exercise overt power in an intervention, but they can still bring multiple types of power and “authorising” to the table. These include independence; access to higher levels of government; access to media; and the fact that it is a donor organisation. All of these factors can support or hinder the work of local partners. Therefore, funders are also fundamentally part of any theory of change.

Asking about the role of the school of music creates challenges: who will collect the information to provide answers? As we read the strategy evaluation of Hewlett’s past portfolio that informs its ongoing review, we appreciated that Transparency and Accountability Initiative (TAI) members have tried aligning reporting and reducing reporting burdens on partners. Other efforts have moved reporting tasks up the chain. The State Accountability and Voice (SAVI) program in Nigeria, for example, explained how the program itself handled most reporting requirements for implementing partners, so that they could get on with implementing. We’ve been part of such efforts ourselves.

An alternative is for funders to take this MEL task upon themselves. Yet, we know, first hand, that doing so creates a lot of work for funders and fund managers. When they don’t have adequate in-house funding, capacity and an authorising environment to answer the big questions themselves, they risk creating a learning gap. Failing to invest in and build this space internally (or at least with a MEL partner) runs the risk of creating a cacophony, adding additional ammunition to critiques around “mixed” results. This outcome is concerning as it further undermines the argument and potential for investing in TPA work. 

MEL staff are often asked for something which we can generalize across a whole portfolio, to aggregate data in potentially misleading ways, and to create laundry lists of indicators that cater to an almost infinite list of advocacy agendas. Beyond this, we need to find a compromise between many legitimate needs, including: (1) the bottom line: report what’s needed so people get paid; (2) utility: provide useful data for partners to help improve programming; and (3) coherence: organise data and lessons in a meaningful way so that we can show that efforts at different levels add up to something greater than the sum of the parts. The politics of MEL lies in reconciling often discordant and under-resourced demands. As we mentioned in the previous blog, the prospect of fully generalizable results is a pernicious myth in the TPA sector. Instead, we need theories that travel across the decision-making levels of any given portfolio.

It is, of course, challenging to find the sweet spot between the detailed information needs of implementers at the frontline, where each case is different, and the elevator pitch necessary for higher management and boards. Yet, in the middle, program officers and portfolio managers might benefit from insights about patterns in particular contexts and sectors, and an appreciation of what could be transferable across subsets of investments (i.e., comparable parts of a portfolio). 

Complexity is doubtless another reason why it’s difficult to make sense of what’s going on. As one of us explained previously, stakeholders often disagree about what is right as well as what will work. And it is often hard to pin down whether we’re measuring the same things or not (as we’ll discuss in the next blog). Yet, as Megan Colnar rightly reminds us, complexity is unfortunately used as an excuse by parts of the field to consider TPA un-evaluable, drawing the self-defeating conclusion that we therefore shouldn’t really try.

A dialogue with the data?

Research also has a role to play in this sensemaking; yet how MEL and research complement one another in the sector is not always so straightforward. Although the UK Department for International Development (DFID) conducted an evaluation of its empowerment and accountability portfolio in 2016, a number of questions remained unanswered. It appeared logical that some of these gaps should be filled by commissioning research.

Colin Anderson, Jonathan Fox and John Gaventa looked at part of this portfolio, focusing on six case studies in Mozambique, Myanmar, and Pakistan through the DFID/FCDO-funded Action for Empowerment and Accountability Research (A4EA) program. Yet, Anderson et al. lamented that “a diversity of approaches to monitoring and evaluation prevents much rigorous comparison on the basis of the available evidence” as well as “obstacles to clear-sighted evaluation of impact, and the limits on informing new programming decisions.” 

DFID’s portfolio evaluation offered illustrative evidence on the promise of vertical integration and concluded that there was a “need to move beyond tactical approaches to achieve success at scale.” Yet Anderson et al. argued that, from the available data, they weren’t able to test the proposition that “vertically integrated approaches offer more purchase than less integrated alternatives.” We might critique case selection (were these the right countries, projects, time series?). We might even critique the testability or validity of the theory (will assumptions about vertical integration built into other TPA portfolios be tested?). Either way, this experience demonstrates that research is, unfortunately, no substitute for good MEL.

Research and MEL should converge around a dialogue with the data, but discursive agendas sometimes get in the way. This has been accentuated when researchers and advisory groups (of which we have, on occasion, been members) fail to critically interrogate prescriptive and causal assumptions. Failing to reflect on these has, lamentably, created echo chambers with the mere semblance of agreement. Assumptions need to be surfaced, discussed, and revisited, if they don’t hold empirically.

How to inform new programming decisions through social learning

Portfolio MEL takes time and resources beyond the technical definition of indicators. As the Public Service Accountability Monitor (PSAM) and its partners found out, consortium, portfolio, and other collective MEL efforts require coordination and negotiation among stakeholders with different needs and capacities (see also related reflections from Varja Lipovseck). It’s as much about relationships and trust, staff time and information management as it is about defining shared targets and metrics. Portfolio MEL isn’t quick; it’s generally about the medium and long term. Its systems are also very important sources of institutional memory. They are key when one wants to look back at 5-10 years of work and tell a collective story, especially considering the rate of staff rotation in the TPA sector. If donors really care about understanding progress, learning, and weaving a different narrative than they did a decade ago, they must fund these collective efforts, beyond supporting one or two major research projects and/or evaluations every few years.

A related challenge is for funders and fund managers to exercise a different type of social learning leadership, one that shapes design choices and increases the use of MEL systems. Few have a better vantage point on the music school than funders (perhaps consultants wearing many hats can sometimes make a similar claim). Funders and fund managers are best placed to provide the resourcing and generate the incentives for portfolio stakeholders to see the collective value of MEL, beyond performance audits. Funders can help spot opportunities and offer spaces that bring different stakeholders together where they face common challenges in comparable contexts and ask similar questions.

This last point captures a methodological challenge associated with broader trends in the current strategy refresh season. Some TPA funders such as Hewlett are considering moving their attention from global advocacy – where minimum common denominators across a vast set of countries matter – to supporting in-country work in only a few countries. How might we learn from and beyond these countries and become more than the sum of our parts?

Not everything is fully comparable. Musicians in one country will have genres and notation (i.e., challenges and questions) comparable to those in some countries in the portfolio but not others. Or they may have more in common with countries that fall outside the portfolio of the few lucky countries that receive funding.

Comparison is often an instinctive part of our learning activities. Convene a meeting among activists in different countries who work with parent-teacher associations and they’ll have lessons to share for days, despite jetlag. Seat them with another crowd and they’ll probably talk past each other or go to sleep. So it’s important to consider which communities can meaningfully hold these conversations, contributing to them rather than detracting from them. We also need to consider what guides those definitions (personal histories, anecdotes, organizational dynamics, evidence, etc.). There is a lot of methodological guidance on the comparative method for studying politics, policy, and power, including sub-nationally, that can help us reflect more systematically about identifying groupings in a school of music’s MEL systems.
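One way to think about identifying such groupings is as a search for cases that share enough contextual features to make comparison meaningful. The sketch below is purely illustrative: the program labels, features, and similarity threshold are hypothetical, not drawn from any actual portfolio.

```python
from itertools import combinations

# Hypothetical portfolio: each program is tagged with contextual
# features (sector, political settlement type, level of government).
contexts = {
    "A": {"education", "competitive-clientelist", "local"},
    "B": {"education", "dominant-settlement", "local"},
    "C": {"health", "competitive-clientelist", "national"},
    "D": {"education", "competitive-clientelist", "local"},
}

def comparable_pairs(contexts, min_shared=2):
    """Return pairs of programs sharing at least `min_shared` features."""
    pairs = []
    for a, b in combinations(sorted(contexts), 2):
        shared = contexts[a] & contexts[b]
        if len(shared) >= min_shared:
            pairs.append((a, b, shared))
    return pairs
```

Raising or lowering `min_shared` is the MEL design choice in miniature: a high threshold yields tight, bounded comparisons (A and D above), while a low one risks convening groups whose members talk past each other.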

Defining meaningful groupings for bounded comparisons among subsets of musicians in a school of music is hard work, but it’s doable and pays off. Until recently, the Partnership to Engage, Learn and Reform (PERL) program in Nigeria (following SAVI) worked in seven locations across district, state, regional, and federal levels, and across various thematic sectors. Although the program had a single overall theory of change, and all locations worked to promote more accountable, effective, and evidence-informed governments, the challenges varied significantly. What might be learned about local governance reform in Kaduna, with its dominant settlement under Governor Mallam Nasir El-Rufai, might not translate perfectly to the political economy of Kano state, with its short-termist, competitive clientelist settlement. Successful reform efforts in the economic hub, Lagos, might not travel well to conflict-affected Borno. While it’s possible to learn something about accountability in Nigeria overall, comparisons among more similar regions and within particular sectors (health, education, etc.) emerged as a more productive way to promote triple loop learning in the program, and to shed light not simply on the desirable but on the possible.

VNG International’s Inclusive Decisions at Local Level (IDEAL) program has worked over five years in seven quite diverse states facing fragility or conflict at different stages. IDEAL has a nested theory of change (i.e., different levels of detail in the theory of change), but faced challenges in its MEL because its impact-level quantitative indicators were precise yet relatively inaccurate for an adaptive program. The team was searching for ideas about how to weave it all together with useful evidence for learning and reporting. After the mid-term review, two of us supported IDEAL staff to work on this challenge, with some recent success in integrating and mapping changes at the outcome level to the theory of change. The program still faces a challenge in aggregating and comparing its TPA work at the output level, because this is being done retroactively after several years, with country-level datasets of varying quality and access. Given the context and complexity of seven countries facing quite diverse manifestations of fragility and conflict, local governance of environmental conservation in rural Mali can seem too disparate to compare with local economic development planning in urban Palestine. There are, however, ways to design a MEL system that tracks outcomes and outputs across seemingly different activities and contexts into ‘buckets’ for relevant, aggregatable measurement, enabling comparison and learning that identify trends, outliers, and gaps in the theory of change (we explore this and the concept of “functional equivalents” in our next blog).
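The ‘buckets’ idea can be sketched in a few lines of Python. Everything here is hypothetical: the activity names, counts, and bucket labels are illustrative stand-ins, not IDEAL’s actual categories or data.

```python
from collections import Counter

# Hypothetical mapping from context-specific activities to shared
# outcome "buckets" (functional equivalents in the theory of change).
BUCKETS = {
    "community forest monitoring": "citizen oversight",
    "participatory budget hearings": "citizen oversight",
    "local development plan consultations": "government responsiveness",
}

# Illustrative output records from two very different country programs.
records = [
    {"country": "Mali", "activity": "community forest monitoring", "count": 4},
    {"country": "Palestine", "activity": "local development plan consultations", "count": 6},
    {"country": "Mali", "activity": "participatory budget hearings", "count": 2},
]

def aggregate(records, buckets):
    """Sum output counts into shared buckets so disparate work can be compared."""
    totals = Counter()
    for record in records:
        totals[buckets[record["activity"]]] += record["count"]
    return dict(totals)
```

The hard part in practice is not the aggregation but agreeing on the mapping itself: deciding which locally distinct activities are functionally equivalent is a negotiation among stakeholders, not a coding exercise.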

The COPSAM Community of Practice, convened by PSAM, includes dozens of organizations multiplying TPA work across the SADC countries. The work encompasses hundreds of personal realities, all different from each other, united by a common approach adapted to each case. A few years ago, COPSAM identified weaknesses in the MEL system: there was a gap between practice and learning at both the organizational and community levels. Knowledge that could support practice was tacit, held by colleagues on the ground rather than in whatever MEL system each organization had. One partner had an evaluation go wrong, partly due to ill-suited methods. The “bad news” tested people, organizations, and the community’s resilience. The good news is that the group did not give up. They embarked on a learning journey with one of us that included partners gradually and selectively, focusing first on four partners (Policy Forum in Tanzania, United Purpose in Mozambique, Zambia Governance Foundation, and SAPST in Zimbabwe) that had some common features among themselves and with the broader group. They then expanded the conversation to others, explicitly discussing commonalities and differences between those four pioneers and the broader community.

Many colleagues are already doing better, smarter portfolio MEL, challenging conventional wisdom, practices, and boundaries, though often disconnected from each other. In the next post, we turn to some practical (and technical) lessons from portfolio MEL to help counter the supposed “existential threat” to the sector, and hopefully begin to transform discordant noises into something more harmonious.

Tales of triumph and disaster in the transparency, participation and accountability sector

Frederic Edwin Church, Cotopaxi

Authors: Thomas Aston, Florencia Guerzovich and Alix Wadeson.

It’s strategy refresh time

The Biden administration is figuring out whether and how to walk the talk on anti-corruption; the Open Society Foundations and the Hewlett Foundation’s Transparency, Participation and Accountability (TPA) Programme are doing strategy refreshes; and the World Bank’s Global Partnership for Social Accountability (GPSA) is considering a Strategic Review. These are some of the biggest players in the sector, and each has its own “niche” and approach to building a portfolio. In considering possible new directions, the Hewlett Foundation has put out a consultation. Al Kags offered some thoughts on what to fund, along with a wish list, earlier this week. Hewlett asked an important set of questions for all of us, which we have slightly amended:

  • How should we measure progress toward outcomes and in priority countries in a given portfolio?
  • How can we best contribute useful knowledge to the field through grantmaking, commissioning evaluations, and facilitating peer learning?
  • And what can a portfolio’s monitoring and evaluation system do to link the answers to both questions together?

To answer these questions, we must first acknowledge a wider issue: null results terrify us. Every time a new Randomised Control Trial (RCT) offers anything less than unambiguously positive results, we have a Groundhog Day debate about whether the whole sector is a worthy investment.

Nathaniel Heller captured this trepidation well in reference to yet another study in Uganda that was about to publish its results.

A handful of initiatives have given donors the impression that transparency and accountability efforts don’t work. One of these was the Making All Voices Count (MAVC) programme, which some donors (unfairly) called a “failure,” point blank, in 2017. Further, as one of us explains, two studies in 2019 (the Transparency for Development project, T4D, and another from Pia Raffler, Dan Posner and Doug Parkerson) found null results, causing collective consternation in the sector.

The conversation seems stuck in a vicious feedback loop. So, to demonstrate success, many rely on idiosyncratic cases and lean very heavily on a handful of country contexts where many RCTs have been conducted, or narrow the focus of study to common tools (e.g., scorecards) and/or outcome measures (see, for instance, an effort to standardize indicators). Many others have sought refuge in “pivots” and “innovation” rather than having a candid conversation about the mixed evidence and what we might do (or not) to escape the feedback loop. As ex-International Development Secretary Rory Stewart recently argued [talking about Afghanistan], “we have to stop [saying] ‘either it was a disaster or it was a triumph.’”

The myth of homogeneous and generalisable success

Despite this sage advice, one expert recently told the Hewlett Foundation that a “lack of evidence about the impact of TPA initiatives is now an existential threat to the field.” And one thought leader was said to have remarked that “the window of opportunity for social accountability will remain open only if we can surface evidence that social accountability is worthy of continued support.”

There are literally a dozen evidence reviews of the TPA sector which refute this claim (we have read them, alongside hundreds of evaluations and studies). The evidence is certainly mixed, but it’s hardly absent. Part of the fear expressed recently is about heterogeneity. This is a nightmare for anyone who seeks to use evaluations to find generalisability in interventions about complex TPA processes. Many impact evaluators have opted to reduce interventions to a single tool, omitting too many components of the work, seeking findings about the “average beneficiary” that are universally valid and hold in all contexts. In the TPA sector, variation in outcomes across different contexts and sectors is something to be expected, not feared. We regularly assert that “context matters,” and yet we forget this when it actually matters. As Howard White and Edoardo Masset of the Centre for Excellence for Development Impact and Learning (CEDIL) highlight, we should focus on transferability: findings that tell us what contextual factors condition (or not) the transfer of a finding from one setting to another.

On balance, if you read the evidence reviews in the sector, the message is generally positive. A Qualitative Comparative Analysis (QCA) of the UK’s former Department for International Development’s (DFID) Empowerment and Accountability portfolio found positive results across nearly all 50 projects reviewed in 2016 (prior to the sector’s apparent fall from grace). But, this was largely ignored — perhaps because it wasn’t an RCT. Other groundbreaking reviews in the sector using realist methods which present an array of outcomes and take context into account in particular sectors were also ignored. Either there is collective amnesia, a selective reading of the evidence, or experts’ expectations of “worthiness” may be rather too elevated.

As Peter Evans of the UK Foreign, Commonwealth and Development Office (FCDO) explains, evidence reviews have their flaws. We would argue that many of them have unwarranted methodological biases, and some make grand arguments without much empirical evidence. Evans is also right that “no-one ever opens an evidence review and finds the perfect answer to their question.” But when evidence reviews don’t quite cover it, that doesn’t mean we should resign ourselves to the wisdom of a few researchers’ hot takes or the loud voices in our echo chambers, or give undue credence to a handful of expensive impact evaluations.

The supposed “existential threat” is not primarily empirical, but semantic and discursive.

The question for us remains — how can portfolio-level M&E in the TPA sector build a more inspiring narrative to help make the case to continue investing in the collection of evidence of TPA’s impacts over the medium to long term?

In our second blog post, we start answering this question. We share insights from our work as M&E consultants working with different portfolios and connecting the dots across projects and portfolios.

Open government and coproduction as mechanisms for improving public services

By Monyze Weber, Júlia Merlo, Ana Cláudia Savoldi and Renan Berka*

After three decades of advances in citizen participation, transparency in public management, and the expansion of access to and quality of public services, Brazil and its federated entities have faced challenges in maintaining and extending gains in these areas.

A study by Michener, Contreras and Niskier (2018) on the implementation of Law No. 12,527/2011, known as the Access to Information Law (LAI) (BRASIL, 2011), shows that there have been many advances in transparency in Brazil, but that “state and municipal politicians mostly value opacity over transparency” (MICHENER; CONTRERAS; NISKIER, 2018, p. 613).

Transparency is “an important instrument of democracy which, as it reduces state secrets, expands the exercise of citizenship, while the culture of secrecy, conversely, weakens democracy” (OLIVEIRA; PFAFFENSELLER; PODESTÁ JÚNIOR, 2019, p. 61). For transparency to take hold, it must become something valued by citizens and public officials, and the state and its partners must implement mechanisms and instruments for transparency and access to data and information, since

[…] The reduction of administrative opacity gives citizens a greater opportunity to know about public affairs (res publica). With clear, available information about a public entity, citizens can oversee the acts of the public administration and be better prepared to exercise their right to participation (VAZ; RIBEIRO; MATHEUS, 2010, p. 50).

An important initiative in this area is the promotion of open government as public policy. The Open Government Partnership (OGP) was launched in 2011 with the goal of spreading and encouraging, globally, government practices related to transparency, access to public information, and social participation (OGP, 2011). In 2016, the OGP launched its Subnational Government Pilot Program, involving 15 participants, among them the City of São Paulo. Santa Catarina was the first Brazilian state to join the OGP, in the cohort launched in 2020.

Besides fostering democratic values and practices, the OGP starts from the recognition that today’s complex problems cannot be solved by governments alone. Citizens, civil society organizations, academia, and private companies can be partners in the search for solutions to public problems. A citizen is not just a taxpayer, the beneficiary of a policy, or the user of a service, but also someone who can take part in expanding the access to and quality of public services (FREITAS; DACORSO, 2014).

Beyond mechanisms and initiatives that foster transparency and social participation, it is also important to foster the engagement of civil servants and managers in public bodies and entities. These innovative initiatives are often not “legally implemented policies” and “do not appear in law.” Even where legal provisions exist, engagement is needed, along with building a culture of government open to interaction with citizens.

On change and contact with the new, Feitosa and Costa (2016, p. 2) note that “changes tend to bring uncertainty and turbulence to the organizational environment, which can generate resistance as the individual is taken from a known situation to an unknown one, affecting their commitment to the institution.” One alternative for dealing with the challenges of building new solutions and relationships is the cocreation and coproduction of public services, involving different actors in the public sector and in society. Working together is a way to soften the shock of new realities and challenges, as knowledge and resources are shared in networks; this can generate new ways of working and achieve greater efficiency and legitimacy, with solutions better fitted to the needs and resources of each context.

One initiative along these lines is #ACT4delivery, a nonprofit organization conceived by a network of professionals who identified the potential of different actors to coproduce public services through training, research, learning, and joint work.

According to Borges Júnior (2016, p. 11), “the concept of coproduction creates new parameters for the delivery of public services which, until then, in the traditional view of public production, were supposed to be produced exclusively by public agents.” Many texts and authors discuss coproduction as a mechanism and an alternative for a wide range of services. These and many other examples show that high-quality, excellent service delivery can come through partnerships.

#ACT4delivery is developing the course “A Prática da Coprodução de Serviços Públicos e Accountability” (The Practice of Public Service Coproduction and Accountability), which aims to mobilize public servants and organizations that are tackling complex public service delivery problems requiring the collaboration of multiple actors, helping them solve those problems with tools that show how coproduction can address a variety of challenges.

Views and approaches within public administration have been changing, including new ways of seeing public service, as well as new mechanisms and ways of relating and building networks. With them comes the opportunity for more people involved in public administration to rethink their ways of thinking and acting, opening space for new knowledge and methodologies as a way to improve existing processes.

Within governments there are already many civil servants and agents who promote change and connection, and who recognize their role in reaching bolder levels of open government and coproduction. Processes such as the OGP’s can stimulate connections among public servants internally, and between them and organizations and people outside government, helping to improve public administration and public services. Skeptics may want to consult the OGP’s guide for open government skeptics, which presents concrete results and examples across many areas and countries.

Although much progress has been made in transparency and participation, there is still a long way to go, from countering civil servants’ internal resistance and politicians’ resistance to transparency, to confronting today’s threats to democracy. Partnership and coproduction initiatives, such as those of #ACT4delivery and of the OGP with the government of Santa Catarina and its partners, show that transparency, participation, openness, and collaboration are relevant paths for building bridges, relationships, and results.

* Written by public administration students Monyze Weber, Júlia Merlo, Ana Cláudia Savoldi and Renan Berka for the course Sistemas de Accountability (Accountability Systems) at Udesc Esag, taught by Professor Paula Chies Schommer, in 2021.

REFERENCES

BORGES JÚNIOR, José Martins. A coprodução de serviços públicos na perspectiva do cidadão: um estudo no distrito federal brasileiro. 2016. 73 pp. Monograph (Specialization in Administration), Department of Administration, Universidade de Brasília, Brasília, 2016. Available at: https://bdm.unb.br/bitstream/10483/16019/1/2016_JoseMartinsBorgesJunior_tcc.pdf. Accessed: 20 Aug. 2021.

BRASIL. Lei n. 12.527, de 18 de novembro de 2011. Provides for access to information (Access to Information Law, LAI). Available at: http://www.planalto.gov.br/ccivil_03/_ato2011-2014/2011/lei/l12527.htm. Accessed: 19 Aug. 2021.

FEITOSA, Lívia Vanessa dos Santos; COSTA, Carlos Eugênio Silva da. Inovações no setor público: a resistência à mudança e o impacto causado no comportamento do indivíduo. In: SIMPÓSIO INTERNACIONAL DE GESTÃO DE PROJETOS, INOVAÇÃO E SUSTENTABILIDADE, 5., 2016, São Paulo. Anais do V SINGEP. São Paulo: Singep, 2016. v. 5, p. 1-16. Available at: https://singep.org.br/5singep/resultado/191.pdf. Accessed: 20 Aug. 2021.

FILGUEIRAS, Fernando. Além da transparência: accountability e política da publicidade. Lua Nova: Revista de Cultura e Política, n. 84, p. 65-94, 2011. Available at: https://www.scielo.br/j/ln/a/3Z88sCrZZbTrnKy5SW6j6MK/?lang=pt&format=pdf. Accessed: 10 Jul. 2021.

FREITAS, Rony Klay Viana de; DACORSO, Antonio Luiz Rocha. Inovação aberta na gestão pública: análise do plano de ação brasileiro para a open government partnership. Revista de Administração Pública, v. 48, n. 4, p. 869-888, Aug. 2014. Available at: https://www.scielo.br/j/rap/a/WHwnb95TWysQcnCQjvtsF3B/?lang=pt&format=pdf. Accessed: 19 Aug. 2021.

MICHENER, Gregory; CONTRERAS, Evelyn; NISKIER, Irene. Da opacidade à transparência? Avaliando a Lei de Acesso à Informação no Brasil cinco anos depois. Revista de Administração Pública, v. 52, n. 4, p. 610-629, Aug. 2018. Available at: https://www.scielo.br/j/rap/a/xJVxcSMSQpQ5qvjBsV7z7ph/?lang=pt&format=pdf. Accessed: 30 Jun. 2021.

OLIVEIRA, Alan Santos de; PFAFFENSELLER, Ana Claudia de Almeida; PODESTÁ JUNIOR, Arnaldo. Mecanismos de participação política, fiscalização e controle: o papel das ouvidorias e da lei de acesso à informação como instrumentos de comunicação governamental, transparência e publicidade. Revista Científica da Associação Brasileira de Ouvidores/Ombudsman, v. 2, n. 2, p. 55-69, 2019. Available at: http://www.abonacional.org.br/files/revista-abo_2019_web.pdf. Accessed: 23 Jun. 2021.

OPEN GOVERNMENT PARTNERSHIP. What’s in the 2020 Action Plans: discover trends, promising commitments, and more from the latest round of OGP action plans. Available at: https://www.opengovpartnership.org/whats-in-the-2020-action-plans/. Accessed: 12 Jun. 2021.

VAZ, José Carlos; RIBEIRO, Manuella Maia; MATHEUS, Ricardo. Dados governamentais abertos e seus impactos sobre os conceitos e práticas de transparência no Brasil. Democracia e Interfaces Digitais para a Participação Pública, v. 9, n. 1, p. 45-62, 2010. Available at: https://periodicos.ufba.br/index.php/ppgau/article/view/5111/3700. Accessed: 20 Aug. 2021.