
DRG TIG Week: Learning How to Evaluate Meaningful Outcomes in Collaborative Human Rights Digital Security Programs by Deanna Kolberg-Shah, Leah Squires, and Megan Guidrey

Left to right: Guidrey, Kolberg-Shah & Squires

Hi all! We are MEL specialists at Freedom House (Deanna Kolberg-Shah and Leah Squires) and Internews (Megan Guidrey), and we have been collaborating to develop an evaluation framework for digital security support programs in the DRG space as part of the USAID-funded Human Rights Support Mechanism (HRSM). HRSM is a seven-year Leader with Associates award designed to implement cutting-edge human rights programming globally. Led by Freedom House in partnership with the American Bar Association Rule of Law Initiative, Internews, Pact, and Search for Common Ground, HRSM facilitates cross-project learning through a learning agenda grounded in best practices identified across 37 associate award projects.

We originally hoped to evaluate digital security support interventions from across the HRSM portfolio, but we faced challenges stemming from an unclear and often unobservable definition of “success” in a digital security intervention. First, digital security is not something an organization achieves once but something it must maintain, so no endline capacity assessment can credibly claim that an organization is meaningfully “more secure” than before. Second, “success” is often unobservable: thwarted digital security violation attempts are at best a weak proxy measure, and when we do “see” digital disruptions, they may reflect bad actors’ new tactics rather than a program “failure.” Finally, in human rights programming, digital security interventions typically target organizations, but the actions that determine digital security are aggregates of individual decision-making, which creates a potential “ecological fallacy” in the causal linkage.

To develop a Digital Security Framework, we reviewed all HRSM programming between 2018 and 2023 in an iterative learning process. We drew on diverse but observational evidence about what digital security success means, grounded in the real-world experiences and needs of front-line human rights organizations (HROs) and in input from seasoned digital security specialists. The framework identifies three core digital security outcomes that can be measured as a result of digital security program assistance:

  1. Awareness: changing participants’ beliefs that digital threats pose a legitimate risk to organizational and personal safety.
  2. Knowledge: changing participants’ understanding of different digital security threats and of the appropriate actions and tools to mitigate those risks (how to identify and appropriately prioritize actions within the threat context).
  3. Adoption: changing participants’ ability and willingness to develop and implement digital security practices that address organizational and/or personal risks, allowing organizations to build sustainable, resilient, digitally secure systems and processes.

Hot Tips

For evaluating digital security interventions:

  • Identify realistic definitions of success for the scope of the intervention (for a shorter-term project or in particular operating contexts, awareness raising alone may be the critical outcome);
  • Avoid overemphasizing observable behavior change, as the digital security practices encouraged through interventions are difficult to observe without violating reasonable expectations of individual privacy;
  • Think critically about the use case for baseline and endline assessments of digital security “adoption,” as these assessments are technically complex and require resource tradeoffs;
  • Focus on identifying key barriers to achieving outcomes to ensure participant and user buy-in.

For creating your own frameworks:

  • Bring topic experts and program beneficiaries together in the same room to define success;
  • Pull cases from a wide variety of contexts;
  • Apply a pragmatic lens so the framework is easy to use, e.g., organize it by a relevant timeline, geography, or audience;
  • Provide an abridged version so that people can easily reference the framework in their day-to-day practice.

Rad Resources

The American Evaluation Association is hosting Democracy, Human Rights & Governance (DRG) TIG Week with our colleagues in the Democracy, Human Rights & Governance Topical Interest Group. All contributions to aea365 this week come from our DRG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors; they do not necessarily represent those of the American Evaluation Association or any other contributors to this site.
