Interview with Dr. Gloria Novovic

As a woman-owned small business committed to amplifying the voices of women in the communities where we work, Proximity continually observes how slow progress toward gender equality remains, despite ongoing efforts to develop new regulations and tools.

We recently sat down with Dr. Gloria Novovic, a feminist policy expert, to discuss the current gender landscape in international aid, the challenges being faced, and innovative new approaches.

Gloria, thank you so much for joining us. Would you like to start by telling us how you started working on gender in the context of global development and humanitarian work?

I was drawn to feminist approaches because of their focus on root causes: How are the norms shifting? What is driving more equitable discourses? And how are people’s lives impacted? These questions really came into focus for me when I was working at the World Food Programme. I frequently encountered emergency reports arguing, for example, that women’s access to cash had advanced gender equality – and I kept asking, “but how do we know?”

In 2018, as I was designing my doctoral research project, UN Women released a report, “Turning Promises into Action”, which criticized global development for its slow progress toward gender equality. I therefore designed my research to explore whether the Sustainable Development Goals (SDGs) were creating any room to do things differently. I spoke with almost 200 specialists about a crucial policy paradox: everyone is so committed to gender equality, yet limited progress is being observed across the board.

And what did you determine was the cause of that paradox?

In a nutshell, a great deal of effort has been invested in integrating gender equality into programming without fundamentally transforming humanitarian or development work; we tend to end up with the famous “add-women-and-stir” approach. The predominant – but somewhat misguided – assumption is that actors are actively ignoring gender equality commitments. Many feminists feel like they’ve spent 20 years repeating the same information, and they are disheartened by the slow progress. Throughout my research and work, I’ve heard so many versions of the same rhetorical question: “How many training modules, how many workshops and gender equality policies do we need before people get it?”

However, what I’ve found as a researcher and practitioner is that global development mechanisms are not designed for the social transformation necessary to advance gender equality. Institutions resist change because their incentive structures protect the status quo, so to make any progress we need to be motivated to transform both our institutions and the system of international cooperation as a whole.

The change is already underway. The shift towards integrated planning, implementation, and learning offers hope. With more interconnected SDGs and national priorities, we are seeing the expansion of cross-sectoral working groups that can lead to actionable gender strategies.

Policies in favor of gender equality are, on their own, not enough. An overwhelming number of humanitarian and development practitioners I interviewed pointed to vague gender targets that bear no relation to the actual context. For example, the ambition to ensure gender-responsive urban transport is great, but what does it mean in practice? Urbanists and engineers lack “gender expertise”, and gender specialists are unaware of other technical constraints. When these people are brought to the same table, however, they are able to co-construct actionable plans that advance gender equality objectives. This brings us to the promising notion of gender expertise as the result of collaboration among gender specialists, technical experts, decision-makers, and funders.

What do you feel the future of gender work in development and humanitarian programming will look like? And what are innovative organizations currently doing to push the limits of established gender practices?

Actors are realizing that they have to go beyond what can be measured in the immediate term and find new ways to “do gender” in change-resistant development and humanitarian systems.

Positive trends are emerging. Gender-targeted outcomes are now a part of many UN actors’ personal performance plans. This incentivizes technical specialists to consult gender advisors to identify precisely how and what they will do differently in support of gender equality – going beyond abstract commitments and valuing local experience and perspectives.

Interestingly, we are also seeing increased pressure on donors to shift their structures. International organizations like the Frida Fund for Young Feminists and Mama Cash are, for example, perfecting and sharing models of participatory grant-making.

International NGOs in countries with feminist approaches to international assistance are also leveraging their donor influence and negotiating mechanisms that build on these innovative approaches. Kvinna till Kvinna in Sweden and the Equality Fund in Canada, for example, are working more directly with local feminist organizations in ways that insist on local agency and lower barriers to donor funding.

With more and more donors now contemplating what the “locally led development” agenda means for them, we can expect these trends to gain traction in the coming months and years.

What do you see as the persistent challenges that really need to be addressed to enhance our approaches to gender?

We all know what gender equality agendas require: increased, flexible, predictable funding that allows local groups, organizations, and communities to support their own agendas on their own terms. Yet, funding mechanisms have remained fundamentally unchanged in the last 60 years. Donors are consequently being called to reconsider how agendas are set, how funding is distributed, how decision-making agency is allocated, and how progress is defined and measured.

Local actors are at the forefront of the sector’s transformation. Organizations are turning down funding when it undermines feminist principles, erodes community agency, or isn’t aligned with local priorities. Instead, they are focusing on movement building and coordination. The key is to ensure every new project builds on the one before, and that it backs – or is backed by – what other allies are doing. It involves seeing other organizations as allies, not competition, and co-constructing broader coalitions.

Organizations that don’t define themselves first and foremost as being feminists (including non-government organizations, consultancy organizations, think-tanks, and foundations) are also doing their part by sharing access to needed resources (e.g. office space) with more grassroots feminist organizations, pushing boundaries with feminist evaluations, and convening multi-stakeholder approaches in which marginalized groups are not only invited, but also supported to intervene.

This is what Rebeca Tatham and I argue is the role of feminist researchers as well. In our recent article in the International Feminist Journal of Politics, we acknowledge the challenge of engaging in mainstream discussions while prioritizing local perspectives that often present a completely different viewpoint on the issue. It requires more time and resources, careful planning, and coordination, but it is a requisite for impact and accountability.

None of this is easy, but it is possible, especially if we move away from zero-sum, competitive thinking and look at strategic partnerships as avenues of change.

*This interview text was edited for conciseness and clarity.

– Gloria Novovic is an independent consultant specializing in global governance and feminist policy. She holds a PhD in Political Science and International Development and has worked across the humanitarian-development spectrum and different institutional and geographical arenas. She can be reached at


It may seem pedestrian to talk about program adaptation in international development programs, and yet the nuts and bolts that are necessary for an adaptive program are often missing.

This blog offers experience-based insights to encourage program managers, donors, and MEL colleagues to revisit the foundations on which program adaptation is based.

I suggest three foundational blocks in this endeavor:

1. Gathering ‘useful’ monitoring data

Monitoring systems and tools typically yield information about whether an expected activity has been completed or a product/output has been delivered. In some instances, projects gather bespoke information for case studies, which are qualitative and (largely) anecdotal in nature. This information is useful to hold programs to account, largely assessing whether partners have spent money on the things they said they would. However, this information offers little insight into whether the program is doing the ‘right’ sorts of things and whether key assumptions in the theory of change are legitimate. For example, it might not tell us if training is the right activity to deliver institutional change.

Tip: Pay attention to the information that your monitoring system is generating and ask yourself if the information tells you if you’re doing the ‘right’ things to deliver real change. If the answer is ‘no’, then you need to reconsider what data your monitoring tools are gathering and amend the tools.

2. Ensuring analytical capabilities of data-users

What good is fantastic, reliable, granular data if the data users are unable to analyze and utilize the information? Most programs commission evaluations, strategic reviews, case studies, learning reviews, and so forth. These steps yield invaluable data, but we need to find a way to bring this information together with the monitoring data to allow program teams to reflect on what is working, why (or why not), and what else they can do to catalyze change. In addition to bringing this information together in digestible formats, we need to consider whether program teams and donors possess the necessary skills to leverage the information and thereby inform decision-making for program adaptation.

Tip: Map out all your data sources and draw out key themes, messages, and insights. Work with program implementation teams to ask the important questions:

(i) What is working and why/why not?

(ii) What else can we do to catalyze change?

(iii) Are our assumptions about the context and motivations/behaviors of actors legitimate?

3. Balancing learning and reporting

MEL systems are often set up primarily to report on what activities money has been spent on and the tangible products that have been delivered. This is not to suggest that MEL systems only focus on accountability, but the importance of reporting (logframes, results frameworks, etc.) means that accountability often steers the MEL system. Unfortunately, when accountability drives the MEL system, learning takes a backseat; there are only so many hats a MEL system can wear! This, in turn, skews the types of tools and methods that a program team is likely to use. For example, a MEL system skewed towards accountability is likely to focus more on determining attribution rather than, for example, understanding whether activities were appropriate or changes are sustainable. There is no silver bullet for balancing these (often) competing agendas, but the recognition that learning is essential must be made explicit and allowed to help drive the design of a MEL system.

Tip: In addition to quantitative indicators (which largely serve accountability), consider qualitative data collection and analysis. You can use methods such as outcome harvesting and contribution analysis as part of the MEL toolset.

Adaptive programming is the holy grail of a MEL system. We can continue to discuss semantics, methods, and value, but there are some first principles whose presence is a sign that program teams and donors are using their MEL systems to prioritize reflection and adaptation.

Dr. Deepti Sastry is a monitoring, evaluation, and learning expert with over 15 years of experience in the international NGO, government, civil society, and private sectors. She has extensive experience working with UK and EU-funded aid programmes, with emphasis on MEL for programmes in and on fragile and conflict-affected states, private sector development, and impact investing. While a MEL purist, Deepti is passionate both about good-quality, robust MEL tools and processes and about optimizing the value of these tools and processes to leverage insights and adaptations. In addition, Deepti is experienced with and uses numerous methodological approaches such as mixed-methods and qualitative evaluation design, appreciative inquiry, the Qualitative Impact Protocol (QuIP), outcome mapping, and qualitative research methods.


I turn 50 this month (eep!). If you told my teenage self that I would one day decide what to buy, where to go on holiday, and where to work based on Amazon, Tripadvisor, and Glassdoor ratings, I don’t think I’d have believed you. And yet – here we are.

The growing pervasiveness of these digital ratings has pushed me to consider whether they could be leveraged by the humanitarian research community. How would this system work? Would it be a good idea?

How Might It Look?

The ideal outcome of applying a Tripadvisor model to aid is being able to compare interventions of all shapes and sizes around the world.

The “AidAdvisor” would therefore have to work at scale, collecting information from as many people as possible. To achieve that scope and give access to all community members, the system would probably bypass monitors, allowing anyone to take part whenever they wanted.

We’d need to embrace digital tools and platforms, such as WhatsApp, Facebook, or an endless list of other options.

Mobile users in countries receiving aid could directly use these platforms to provide feedback. For instance, QR codes could be placed at food distribution sites, allowing recipients to scan the code and rate their satisfaction with the distribution. These sites could even be directly embedded in Google Maps.
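The tallying behind such a one-click rating flow could be very simple. Here is a minimal sketch (the function name, site identifiers, and data shape are all hypothetical, not part of any existing platform) of how scanned-QR-code star ratings might be aggregated per distribution site:

```python
from collections import defaultdict

def aggregate_ratings(submissions):
    """Aggregate one-click star ratings (1-5) per distribution site.

    submissions: iterable of (site_id, stars) pairs, e.g. one pair
    recorded each time a recipient scans a site's QR code and taps
    a rating. Returns {site_id: (average, count)}; out-of-range
    values (bad scans, tampering) are ignored.
    """
    totals = defaultdict(lambda: [0, 0])  # site_id -> [sum, count]
    for site_id, stars in submissions:
        if 1 <= stars <= 5:
            totals[site_id][0] += stars
            totals[site_id][1] += 1
    return {site: (s / n, n) for site, (s, n) in totals.items() if n}

# Hypothetical feedback from two distribution sites; the 0 is discarded.
sites = [("site-A", 4), ("site-A", 5), ("site-B", 2), ("site-A", 0)]
print(aggregate_ratings(sites))  # {'site-A': (4.5, 2), 'site-B': (2.0, 1)}
```

Reporting the count alongside the average matters here: as noted below, small sample sizes would be one of the scheme's real weaknesses, so a 5.0 from three taps should never be read the same way as a 4.2 from three thousand.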

There would need to be a central coordinator – one that communities know and trust. Probably the UN. The involvement of the UN would assure communities that the initiative was serious and their data was safe. NGOs and CSOs could link into this core UN set-up, accessing free, user-friendly software.

Of course, we’d need to ensure the inclusion of the vulnerable and illiterate. For this, we might learn from Twaweza’s ‘Sauti za Wananchi’ initiative, which distributes mobile phones and solar chargers to communities without them.

Would It Be a Good Idea?

It’s hard to say at this point. There are so many challenges. One obstacle would be fairness. Some projects are just more difficult to implement than others. I’ve evaluated fantastic projects that would receive low scores simply because they were in war zones where nothing worked perfectly. Recipients in this context are unlikely to give 5-star ratings.

A platform like this could also over-simplify – or even belittle – the work of civil society. Do human rights organizations exist in the same universe as companies offering packaged vacations in Cancun?

Another issue is that the direct beneficiaries are often not communities but a limited number of individuals managing a project. If, for example, we were reviewing a small project providing teacher training, we would want feedback from the teachers at least as much as from the children. Surely a healthier way to give feedback in this situation would be directly and intimately between the teachers and the funder.

For some projects, the idea may not work, and that’s okay. And for many projects, sample sizes (and, certainly, statistical precision and representation) would represent significant problems.

But the upsides are also alluring! Above all, it would give communities a stronger voice. And it could do it efficiently! For me, a one-click survey offers a more respectful, sustainable way forward than lengthy surveys.

Foregoing monitors could lead to data quality issues, but it would also reduce social desirability bias. Of course, many vulnerable people don’t have phones, but a growing number do.

Perhaps the greatest benefit would be the ‘back-end’. I can only imagine the feverish debate that the initiative would create among senior officials in DC, Brussels, and London – eyes on stalks as they see which programs were rated better and worse by those who matter most. If we add the ability to upload videos (just like you upload photos of grimy bathrooms on Tripadvisor), the whole thing reaches another level.

Would evaluators start leaning on this? Could it eventually replace Third-Party Monitoring? Would I need to look for a new career? I think we’d glean real insights at the demographic, geographic, and thematic levels.

For the cynics out there who believe there’s just too much competition baked into the system for this to fly, the automotive industry already did it. Years ago, car makers realized that progress would require knowing who was better than whom, at what, and why. So, they took a bold step and asked one company, JD Power, to manage the whole industry’s customer satisfaction data. It revolutionized the industry and made consumer feedback easy.

On balance, I think it’s time for someone to pilot this idea. My gut tells me that investing in transparency is a good bet.

“AidAdvisor” may only give us a small part of the data picture – and one we’d hopefully learn to use judiciously. But I think we have sleepwalked into the creation of a humanitarian data space made up of myriad isolated data sets. Much of the real value seems buried and yet to be unleashed!

Richard Harrison has 25+ years of experience leading MEL and research projects for the UK, EU, UN, US, Canada and World Bank, working on the ground in over 30 countries spanning the Middle East, Africa, and Asia.
