
When Evaluation Needs Transformational Change – Watch Out! by Romeo Santos

Hi, my name is Romeo Santos, and I am a council member at the International Evaluation Academy. I'm one of the founders of the Asia Pacific Evaluation Association and served as its president in 2018–2019. I started dabbling in monitoring and evaluation (M&E) in 2000.

I wrote this blog as a form of reflection. You may agree or disagree with the points I raise; either way, I'm open to critiques and suggestions. Please feel free to contact me.

Uncovering Hidden Data to Address Organizational Slack: Decolonizing Efficiency-Centric Evaluation by Mita Marra

I'm Mita Marra, an economics professor at the University of Naples in Italy specializing in policy evaluation in regional development, innovation, and the work-family interface. My engagement with the evaluation community has spanned more than two decades. I have served as Editor-in-Chief of the international journal Evaluation and Program Planning since 2019 and as President of the Italian Evaluation Association (AIV) from 2013 to 2017. Currently, I am a member of the Board of the European Evaluation Society and the Council of the International Evaluation Academy.

Analyzing Qualitative Data with Relevant Frameworks for Program Evaluation by Liane M. Ventura

Hi, I’m Liane M. Ventura, MPH. I am a Research Associate in the Center for Applied Research and Evaluation in Women’s Health at East Tennessee State University. My primary role is leading a longitudinal qualitative research study to evaluate a statewide contraceptive access initiative. I also have a community health consulting practice where I provide technical assistance to practice-based organizations, including program evaluation services.

Three Top Tips for SDG Evaluations by Dorothy Lucks

Hello, AEA365 community. My name is Dorothy Lucks, an inaugural member of EVALSDGs, a credentialed evaluator, a Fellow of the Australian Evaluation Society, and Executive Director of Sustainable Development Facilitation (SDF) Global, a social enterprise that works to facilitate change through evaluations. At SDF Global, we have a strong focus on the Sustainable Development Goals (SDGs).

How Systems Thinking in Evaluation Supports Localization by Kim Norris

Hi, I'm Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for the International Development Division of the American Institutes for Research (AIR). As co-chair of the Systems in Evaluation Topical Interest Group (SETIG), I get excited about using systems thinking to improve evaluation work. Here, I am reminded of how systems thinking in evaluation (STE) helps move us toward localization.

Washington Evaluators Affiliate Week: Evidence Act: Building Internal Evaluation Capacity for Social Impact Organizations by Quisha Brown

Hello, I’m Quisha Brown, author of “Racial Equity Lens Logic Model & Theory of Change” – a transformative guidebook offering step-by-step instructions on building a people-centered Progressive Outcomes Scale Logic Model (POSLM). In a world where social mission organizations struggle to demonstrate evidence of their effectiveness, the need to enhance the Evidence-Based Policymaking Act in …


Washington Evaluators Affiliate Week: Research Partnerships for Better Evidence-based Policy Making by Matt St. John

How can the US federal government bring new perspectives on evidence generation and evaluation to improve programs and policy? I'm Matt St. John, Program Evaluation Specialist with Guidehouse and Evidence Act advisor to the US Department of State (DOS), and I would like to share one initiative that I am working on with my colleagues …


Washington Evaluators Affiliate Week: Accountability and Learning Perspectives on the Evidence Act by Terell Lasane

My name is Terell Lasane, and I am the Assistant Director of the Center for Evaluation Methods and Issues (CEMI) in the Applied Research and Methods team at the U.S. Government Accountability Office (GAO). Language matters. And that's particularly true when unpacking the Evidence Act. Early in my evaluation career, I evaluated public programs for state, local, and federal entities. When I worked with these organizations, I always emphasized that fulfilling reporting requirements for accountability provided unique opportunities for program learning, and that the two functions should be paired whenever appropriate. The Evidence Act supports the actionable intelligence that evaluation activity can generate, and the legislation provides a valuable framework for marrying accountability with program learning and program improvement. Evaluation practitioners have long recognized the importance of this marriage for better government at all levels.

Washington Evaluators Affiliate Week: How the Evidence Act Has Spurred Action in the Federal Government by Natalie Donahue

Hi! I'm Natalie Donahue. I am the Chief of Evaluation in the State Department's Bureau of Educational and Cultural Affairs' Monitoring, Evaluation, Learning, and Innovation (MELI) Unit and the Past President of Washington Evaluators (WE). The Evidence Act has had a great impact on federal evaluation practices. Over the past five years, we've seen federal agencies create learning agendas, increase capacity-building efforts, update (or, in some cases, create) evaluation policies and accompanying guidance documents, and increase collaborative efforts around evaluation, both internally and with other agencies.

Washington Evaluators Affiliate Week: Looking Back and Going Forward with the Evidence Act by Valerie Jean Caracelli

My name is Valerie Jean Caracelli, and I am a Senior Social Science Analyst in the Center for Evaluation Methods and Issues, Applied Research and Methods team at the U.S. Government Accountability Office (GAO). As we mark the fifth anniversary of the Foundations for Evidence-Based Policymaking Act of 2018, it is useful to reflect on federal evaluation and its use in decision making before the Evidence Act's passage. In 2013, a series of evaluation questions was introduced into a generalizable survey of federal civilian managers and supervisors to obtain their perspectives on several results-oriented management topics, including the extent of, and barriers to, evaluation use. The survey results indicated that just over a third (37 percent) of federal managers reported that an evaluation had been completed in the past five years on any program, operation, or project they were involved in. GAO concluded that the lack of evaluations may be the greatest barrier to agencies' ability to inform program management and policy making.