
About my Research Focus & a Reflection on Identity as an Evalpreneur or Evaluation Consultant by Nicolas Uwitonze

Hello, my name is Nicolas Uwitonze, and I am a second-year PhD student in the Department of Agricultural, Leadership, and Community Education at Virginia Tech, USA. In my previous blog, I shared my brief story in the field of evaluation and mentioned that my dissertation journey contributes toward becoming an evaluation consultant/entrepreneur. In this blog, I would like to expand a little on that conversation.

If you are excited to learn more about my research focus on “Evalpreneurship in Africa,” or would like to engage in a discussion about who an ‘evalpreneur’ is and how evalpreneurs differ from ‘evaluation consultants,’ I hope that this blog is of great help!

Putting Descartes Before the Report: Telling your Evaluative Story with the Grid Design System by Rose Konecky

Hello, I’m Rose Konecky, Evaluation and Learning Consultant at TCC Group. I’m here to turn you into a creator of visualization masterpieces. Really!

As evaluators, we always have a story to tell, but we sometimes limit ourselves to words (which, of course, are important) and canned chart creators (also important!). I’m here to show you that we can leverage so much more visual storytelling power than that if we use innovative design principles. And don’t worry – a lack of artistic talent won’t stand in your way. In fact, the technique I’m about to describe is more of a science than an art. It is called the Cartesian Grid System, and you can leverage it with or without talent. All you need to do is follow five concrete steps.
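
To make the grid approach concrete before diving into those steps, here is a minimal sketch of snapping report panels to a Cartesian grid with Python's matplotlib. The grid dimensions, panel contents, and numbers are illustrative assumptions, not the five steps from this post.

```python
# Minimal sketch: laying out a one-page visual story on a Cartesian grid
# using matplotlib's GridSpec. All panel choices and data are invented
# for illustration.
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure(figsize=(8.5, 11))              # letter-size "report page"
grid = GridSpec(4, 3, figure=fig, hspace=0.6, wspace=0.4)

# Headline spans the entire top row of the grid.
title_ax = fig.add_subplot(grid[0, :])
title_ax.axis("off")
title_ax.text(0, 0.5, "Program Reach Grew in Year 2",
              fontsize=20, fontweight="bold")

# A focal chart occupies a 2x2 block of cells...
main_ax = fig.add_subplot(grid[1:3, 0:2])
main_ax.bar(["Year 1", "Year 2"], [420, 610])
main_ax.set_ylabel("Participants served")

# ...while a supporting panel snaps to the remaining column.
side_ax = fig.add_subplot(grid[1:3, 2])
side_ax.pie([70, 30], labels=["Returning", "New"])

# A source note anchors the bottom row.
note_ax = fig.add_subplot(grid[3, :])
note_ax.axis("off")
note_ax.text(0, 0.5, "Source: illustrative data only.", fontsize=9)

fig.savefig("grid_story.png", dpi=150)
```

Because every element is anchored to grid cells rather than placed by eye, the page stays aligned and balanced no matter how the individual panels change.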

The American Journal of Evaluation at the 2023 AEA Conference by Laura R. Peck

Greetings, AEA365 readers! I am Laura Peck, Co-Editor of the American Journal of Evaluation, recently appointed along with Rodney Hopson to serve a full three-year term leading our journal. Rodney and I are thrilled to have received a huge response to our invitation to engage in the journal’s leadership and work, and we are pleased to have appointed a new Editorial Team, including one returning and four new Associate Editors, one returning and 12 new Section Editors, and 14 returning and 34 new members of the Editorial Advisory Board. From among the applications, an additional 28 scholars and practitioners are standing by to serve as reviewers, cite and submit work to the journal, publish in it, and advocate for it. This is not an exclusive team! Indeed, we look forward to bringing seasoned and new voices and perspectives together to advance our journal’s relevance and impact. We hope those of you interested in the journal will connect and join us in some way.

Spurious Precision – Leading to Evaluations that Misrepresent and Mislead by Burt Perrin

Sometimes it is helpful to be very precise. In other cases, though, precision can be irrelevant at best and quite likely misleading, destroying rather than enhancing the credibility of your evaluation – and of you. Hi, I’m Burt Perrin, and I’d like to discuss what considerations such as these mean for evaluation practice.

If one is undergoing brain surgery, one would hope that it would be done with precision, based upon established knowledge about how it should be done. But one can be no more precise than the underlying data permit, and attempting greater precision than the data support is where too many evaluations go wrong.
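
To illustrate (with made-up numbers, not an example from this post): suppose 85 survey respondents yield a sample proportion of roughly 47%. Reporting four decimal places implies a certainty the data cannot support, since the 95% margin of error is about 11 percentage points.

```python
# Illustrative only: match reporting precision to the uncertainty in the
# estimate. All numbers are invented.
import math

n = 85                        # survey respondents
p_hat = 0.47352941            # raw sample proportion at full float precision
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of a proportion
margin = 1.96 * se            # approximate 95% margin of error

print(f"Spurious precision: {p_hat:.4%}")                    # 47.3529%
print(f"Honest reporting:   {p_hat:.0%} +/- {margin:.0%}")   # 47% +/- 11%
```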

Shifting the Evaluation Lens to Localization – Progress You Can See by Kim Norris

Hi, I’m Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for American Institutes for Research (AIR)’s International Development Division. Part of my role is to lead a MEL practice. As part of our initial strategy, our practice team decided to focus on localizing our work. For us, this means we seek out ways to increase local partnering and leadership in and around MEL efforts – from business development to MEL direction and execution. This involves local team leadership, capacity strengthening, and engagement on local terms.

No More Crappy Survey Reporting – Best Practices in Survey Reporting for Evaluations by Janelle Gowgiel, JoAnna Hillman, Mary Davis, and Christiana Reene

Janelle, JoAnna, Mary, and Christiana here, evaluators from Emory Centers for Public Health Training and Technical Assistance. We had the opportunity to present a session entitled No More Crappy Surveys at last year’s AEA Summer Evaluation Institute. We are on a mission to rid the world of crappy surveys, and are here to share some of our Hot Tips and Rad Resources to do so.

If you haven’t already, check out the first and second blog posts in this series, No More Crappy Surveys – Best Practices in Survey Design for Evaluations and No More Crappy Survey Analysis – Best Practices in Survey Analysis for Evaluations. Today, we’ll follow up with tips on how to report your survey findings to different audiences and how to engage partners throughout the survey process.
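
As a small illustration of audience-tailored reporting (a sketch of ours, not drawn from the Emory team's session materials): the same survey result can be rendered as a detailed table for technical readers and as a one-line takeaway for community partners.

```python
# Hypothetical sketch: one survey question, two report formats.
# The responses are invented for illustration.
import pandas as pd

responses = pd.Series(
    ["Agree", "Agree", "Neutral", "Disagree", "Agree",
     "Agree", "Neutral", "Agree", "Disagree", "Agree"],
    name="The training met my needs",
)

# Technical audience: full distribution with counts and percentages.
table = pd.DataFrame({
    "n": responses.value_counts(),
    "%": (responses.value_counts(normalize=True) * 100).round(1),
})
print(table)

# Community partners: one plain-language headline.
agree_pct = (responses == "Agree").mean() * 100
print(f"\n{agree_pct:.0f}% of participants agreed the training met their needs.")
```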

Reflections from a Youth Evaluator by Yasemin Simsek

Greetings! I am Yasemin Simsek, a master’s candidate in American University’s Measurement and Evaluation program. The Quantitative Methods in Evaluation course required me to partner with an organization to identify a research need, collect and analyze data, and write a report. I had the incredible opportunity to work with the Neema Project, a nonprofit organization dedicated to empowering women experiencing poverty, gender-based violence, or teen pregnancy in Kitale, Kenya through services such as skills training, counseling, and faith-based support.

Measuring DEI in Our Own Workforce: Lessons from Four Studies Across Two Years by Laura Kim and Brooke Hill

We are Laura Kim (Senior Consultant at the Canopy Lab) and Brooke Hill (Senior Program Manager at Social Impact). Laura is part of the team that works on Canopy’s Inclusion and Leadership series, which explores the forces that influence who gets to advance in international development and why. Brooke is the technical lead for the BRIDGE survey and co-leads the Equity Incubator, a lab studying equity and inclusion through data.

Enriching the Local Evaluation Story Using “Most Significant Change” Adaptations by Kim Norris

Hi, I’m Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for American Institutes for Research (AIR)’s International Development Division. As I described in my earlier post above, our MEL practice has made localizing our work – increasing local partnering and leadership in and around MEL efforts – a strategic focus.

Our team is keenly aware that stories are best told by those who have lived them, and that we are at risk of losing the evaluation story without significant local engagement. We have learned how using the Most Significant Change method (MSC) can more actively involve local participants in identifying, analyzing and interpreting significant changes since program inception, and can help to uncover hidden and emergent aspects of an intervention’s relevance and effectiveness.

The Story of Systemic Racism and Playgrounds: How KABOOM! uses data to overcome playspace inequity by Isaac Castillo and Colleen Coyne

Hello! We are Isaac D. Castillo and Colleen Coyne, and we represent the Learning and Evaluation team at KABOOM!. And we have a question for you: what do playgrounds, data, systemic racism, maps, and evaluation all have in common?

Playgrounds should serve as a sanctuary for children – an escape from everyday pressures where they can just be kids. But not every child in the United States has access to a safe and high-quality playground. At KABOOM!, we refer to these disparities in access and quality as playspace inequity. KABOOM! builds playgrounds in partnership with others across the United States to end playspace inequity, so more kids can grow up happy and healthy. But how do we measure playspace inequity? That is where data, maps, storytelling, and evaluation come in.
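
As one hypothetical illustration of how such a measure might be built (this is not KABOOM!'s actual methodology or data; every column name and figure below is invented): playground counts can be joined to census-tract child populations so that access rates, and their relationship to neighborhood demographics, become visible and mappable.

```python
# Hypothetical sketch: quantify playspace access by census tract.
# NOT KABOOM!'s methodology; all data and thresholds are invented.
import pandas as pd

tracts = pd.DataFrame({
    "tract_id":               ["A", "B", "C", "D"],
    "children_0_12":          [1200, 800, 1500, 600],
    "playgrounds":            [6, 1, 2, 4],
    "pct_residents_of_color": [22, 81, 74, 18],
})

# Access rate: playgrounds per 1,000 children in each tract.
tracts["playgrounds_per_1k_kids"] = (
    tracts["playgrounds"] / tracts["children_0_12"] * 1000
).round(2)

# Flag tracts below an arbitrary access threshold to map and investigate.
tracts["low_access"] = tracts["playgrounds_per_1k_kids"] < 2.0
print(tracts[["tract_id", "playgrounds_per_1k_kids",
              "pct_residents_of_color", "low_access"]])
```

Joining a table like this to tract boundary files is what turns the numbers into the maps that drive the storytelling.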