
Multiple Perspectives: Reflections from Two University-Based Evaluation Centers in their First Year by Paula Ogston-Nobile, Jesse Senechal and Nena Bloom

Hi, we are Paula Ogston-Nobile, Associate Director, and Jesse Senechal, Executive Director, of the Institute for Collaborative Research and Evaluation (ICRE) at Virginia Commonwealth University (VCU), and Nena Bloom, Director of Northern Arizona University Evaluation Services (NES). We are excited to share our reflections and lessons learned as first-year University-Based Evaluation Centers (UBCs).

Evaluation for Transformation: Developing Capacities of Young and Emerging Evaluators by Claudia Olavarria

I am Claudia Olavarria, an evaluator and consultant with the Global Evaluation Initiative, working on Evaluation Capacity Development in Latin America, the Caribbean, and globally. I am passionate about feminist approaches to evaluation and advocate for the inclusion of youth in evaluation. I am part of the council of the International Evaluation Academy (IEAC) …


When Evaluation Needs Transformational Change – Watch Out! by Romeo Santos

Hi, I’m Romeo Santos, a council member at the International Evaluation Academy. I am one of the founders of the Asia Pacific Evaluation Association and served as its president from 2018 to 2019. I started dabbling in monitoring and evaluation (M&E) in 2000.

I wrote this blog as a form of reflection. You may agree or disagree with the points I raise, and I’m open to critiques and suggestions. Please feel free to contact me.

Uncovering Hidden Data to Address Organizational Slack: Decolonizing Efficiency-Centric Evaluation by Mita Marra

I’m Mita Marra, an economics professor at the University of Naples in Italy, specializing in policy evaluation in regional development, innovation, and the work-family interface. My engagement with the evaluation community has spanned more than two decades. I have served as Editor-in-Chief of the international journal Evaluation and Program Planning since 2019, and as President of the Italian Evaluation Association (AIV) from 2013 to 2017. Currently, I am a member of the Board of the European Evaluation Society and the Council of the International Evaluation Academy.

Analyzing Qualitative Data with Relevant Frameworks for Program Evaluation by Liane M. Ventura

Hi, I’m Liane M. Ventura, MPH. I am a Research Associate in the Center for Applied Research and Evaluation in Women’s Health at East Tennessee State University. My primary role is leading a longitudinal qualitative research study to evaluate a statewide contraceptive access initiative. I also have a community health consulting practice where I provide technical assistance to practice-based organizations, including program evaluation services.

Three Top Tips for SDG Evaluations by Dorothy Lucks

Hello, AEA365 community. I’m Dorothy Lucks, an inaugural member of EVALSDGs, a credentialed evaluator, a Fellow of the Australian Evaluation Society, and Executive Director of Sustainable Development Facilitation (SDF) Global, a social enterprise that works to facilitate change through evaluations. At SDF Global, we have a strong focus on the Sustainable Development Goals (SDGs).

How Systems Thinking in Evaluation Supports Localization by Kim Norris

Hi, I’m Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for the International Development Division at American Institutes for Research (AIR). As co-chair of the Systems in Evaluation Topical Interest Group (SETIG), I get excited about using systems thinking to improve evaluations. Here, I want to share how systems thinking in evaluation (STE) helps move us toward localization.

Washington Evaluators Affiliate Week: Evidence Act: Building Internal Evaluation Capacity for Social Impact Organizations by Quisha Brown

Hello, I’m Quisha Brown, author of “Racial Equity Lens Logic Model & Theory of Change” – a transformative guidebook offering step-by-step instructions on building a people-centered Progressive Outcomes Scale Logic Model (POSLM). In a world where social mission organizations struggle to demonstrate evidence of their effectiveness, the need to enhance the Evidence-Based Policymaking Act in …


Washington Evaluators Affiliate Week: Research Partnerships for Better Evidence-based Policy Making by Matt St. John

How can the US federal government bring new perspectives of evidence generation and evaluation to improve programs and policy? I’m Matt St. John, Program Evaluation Specialist with Guidehouse and Evidence Act advisor to the US Department of State (DOS), and I would like to share one initiative that I am working on with my colleagues …


Washington Evaluators Affiliate Week: Accountability and Learning Perspectives on the Evidence Act by Terell Lasane

My name is Terell Lasane, and I am the Assistant Director of the Center for Evaluation Methods and Issues (CEMI) in the Applied Research and Methods team at the U.S. Government Accountability Office. Language matters, and that’s particularly true when unpacking the Evidence Act. Early in my evaluation career, I evaluated public programs for state, local, and federal entities. When I worked with these organizations, I always emphasized that fulfilling reporting requirements for accountability provided unique opportunities for program learning, and that these functions should be paired whenever appropriate. The Evidence Act supports the actionable intelligence that can be garnered from evaluation activity, and the legislation provides a valuable framework for marrying accountability with program learning and improvement. Evaluation practitioners have long recognized the importance of this marriage for better government at all levels.