

Blue Marble Evaluation Questions by Charmagne Campbell-Patton, Hannah McMillan, Mike Moore, Michael Quinn Patton, and Rees Warne

Greetings, fellow evaluators! We are members of the Blue Marble Evaluation Network, a global group engaged in asking questions about the future of our Earth and evaluation’s role in supporting a future that is just and regenerative. The Blue Marble refers to the view of Earth from space, an image of our shared planetary home without borders, boundaries, or divisions.
At the 2019 annual conference of the American Evaluation Association, ARCevaluation of Menomonie, Wisconsin (now Catalyst), sponsored a poetry contest. The winning entry, shown below, was submitted by Evgenia Valuy.

About my Research Focus & a Reflection on Identity as an Evalpreneur or Evaluation Consultant by Nicolas Uwitonze

Hello, my name is Nicolas Uwitonze, and I am a second-year PhD student in the Department of Agriculture Leadership and Community Education at Virginia Tech, USA. In my previous blog, I shared my brief story in the field of evaluation and mentioned that my dissertation journey contributes toward becoming an evaluation consultant/entrepreneur. In this blog, I would like to expand a little on that conversation.

If you are excited to learn more about my research focus on “Evalpreneurship in Africa,” or would like to engage in a discussion about who an “evalpreneur” is and how evalpreneurs differ from evaluation consultants, I hope that this blog is of great help!

Putting Descartes Before the Report: Telling your Evaluative Story with the Grid Design System by Rose Konecky

Hello, I’m Rose Konecky, Evaluation and Learning Consultant at TCC Group. I’m here to turn you into a creator of visualization masterpieces. Really!

As evaluators, we always have a story to tell, but we sometimes limit ourselves to words (which, of course, are important) and canned chart creators (also important!). I’m here to show you that we can leverage so much more visual storytelling power than that if we use innovative design principles. And don’t worry – a lack of artistic talent won’t stand in your way. In fact, the technique I’m about to describe is more of a science than an art. It is called the Cartesian Grid System, and you can leverage it with or without talent. All you need to do is follow five concrete steps.
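To give a flavor of what grid-based layout means in practice, here is a minimal sketch in Python using matplotlib’s GridSpec. It is an illustration of placing report visuals on a Cartesian grid, not Rose’s actual five-step method; the panel contents, labels, and file name are hypothetical placeholders.

```python
# Illustrative sketch: laying out an evaluation "story" on a Cartesian grid.
# The data and panel titles are made up for demonstration purposes.
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure(figsize=(8, 5))
grid = GridSpec(nrows=2, ncols=3, figure=fig, hspace=0.4, wspace=0.3)

# A headline finding spans the full top row of the grid...
headline = fig.add_subplot(grid[0, :])
headline.bar(["Year 1", "Year 2", "Year 3"], [42, 55, 71])
headline.set_title("Program reach grew each year (illustrative data)")

# ...while supporting detail panels each occupy one cell in the bottom row.
for col, label in enumerate(["Site A", "Site B", "Site C"]):
    panel = fig.add_subplot(grid[1, col])
    panel.plot([1, 2, 3], [10, 12, 15])
    panel.set_title(label, fontsize=9)

fig.savefig("evaluation_story_grid.png", dpi=150)
```

The design choice the grid enforces is deliberate alignment: every visual element sits in a cell or a span of cells, so the reader’s eye moves through the story in a predictable order rather than wandering across an ad hoc collage.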

The American Journal of Evaluation at the 2023 AEA Conference by Laura R. Peck

Greetings, AEA365 readers! I am Laura Peck, Co-Editor of the American Journal of Evaluation, recently appointed along with Rodney Hopson to serve a full three-year term leading our journal. Rodney and I are thrilled to have received a huge response to our invitation to engage in the journal’s leadership and work. We are pleased to have appointed a new Editorial Team, including one returning and four new Associate Editors, one returning and 12 new Section Editors, and 14 returning and 34 new members of the Editorial Advisory Board. From among the applications, an additional 28 scholars and practitioners are standing by to serve as reviewers, cite work in the journal, submit work to the journal, get published in the journal, and serve as advocates for the journal. This is not an exclusive team! Indeed, we look forward to bringing seasoned and new voices and perspectives together to advance our journal’s relevance and impact. We hope those of you interested in the journal will connect and join us in some way.

Spurious Precision – Leading to Evaluations that Misrepresent and Mislead by Burt Perrin

Sometimes it is helpful to be very precise. In other cases, though, precision can be irrelevant at best and quite likely misleading, and it can destroy, rather than enhance, the credibility of your evaluation, and of you. Hi, I’m Burt Perrin, and I’d like to discuss what considerations such as these mean for evaluation practice.

If one is undergoing brain surgery, one would hope that this would be done with precision based upon established knowledge about how this should be done. But one can be no more precise than the underlying data permit, and attempting to be more precise than that is where too many evaluations go wrong.
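To make the point concrete, here is a minimal sketch in Python with made-up survey numbers: reporting a proportion from a small sample to four decimal places implies a precision the data cannot support, while a rounded figure with a margin of error conveys what the data actually warrant.

```python
# Illustrative example of spurious vs. honest precision. The sample size and
# response count are hypothetical.
import math

respondents = 85          # hypothetical number of survey respondents
satisfied = 57            # hypothetical count of "satisfied" responses

p = satisfied / respondents
# Approximate 95% margin of error for a proportion from a simple random sample
margin = 1.96 * math.sqrt(p * (1 - p) / respondents)

print(f"Spuriously precise: {p * 100:.4f}% satisfied")
print(f"Honest reporting:   roughly {p * 100:.0f}% satisfied "
      f"(+/- {margin * 100:.0f} percentage points)")
```

With 85 respondents, the margin of error is about 10 percentage points, so the four-decimal figure communicates a certainty the data simply do not have.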

No More Crappy Survey Reporting – Best Practices in Survey Reporting for Evaluations by Janelle Gowgiel, JoAnna Hillman, Mary Davis, and Christiana Reene

Janelle, JoAnna, Mary, and Christiana here, evaluators from Emory Centers for Public Health Training and Technical Assistance. We had the opportunity to present a session entitled No More Crappy Surveys at last year’s AEA Summer Evaluation Institute. We are on a mission to rid the world of crappy surveys, and are here to share some of our Hot Tips and Rad Resources to do so.

If you haven’t already, check out the first and second blog posts in this series, No More Crappy Surveys – Best Practices in Survey Design for Evaluations (you can check it out here) and No More Crappy Survey Analysis – Best Practices in Survey Analysis for Evaluations (which you can read here). Today, we’ll be following up with some tips on how to report your survey findings to different audiences and tips to engage partners throughout the survey process.

Reflections from a Youth Evaluator by Yasemin Simsek

Greetings! I am Yasemin Simsek, a master’s candidate in American University’s Measurement and Evaluation program. The Quantitative Methods in Evaluation course required me to partner with an organization to identify a research need, collect and analyze data, and write a report. I had the incredible opportunity to work with the Neema Project, a nonprofit organization dedicated to empowering women experiencing poverty, gender-based violence, or teen pregnancy in Kitale, Kenya through services such as skills training, counseling, and faith-based support.

Measuring DEI in Our Own Workforce: Lessons from Four Studies Across Two Years by Laura Kim and Brooke Hill

We are Laura Kim (Senior Consultant at the Canopy Lab) and Brooke Hill (Senior Program Manager at Social Impact). Laura is part of the team that works on Canopy’s Inclusion and Leadership series, which explores the forces that influence who gets to advance in international development and why. Brooke is the technical lead for the BRIDGE survey and co-leads the Equity Incubator, a lab studying equity and inclusion through data.

Sharing How the Inaugural AEA Student Evaluation Case Competition Went by Dana Linnell, Steve Mumford, Carolina De La Rosa Mateo, Julian Nyamupachitu, Rana Gautam, Jennifer Yessis, Christine Roseveare, and Asma Ali

We are the Student Evaluation Case Competition Working Group (Dana Linnell, Steve Mumford, Carolina De La Rosa Mateo, Julian Nyamupachitu, Rana Gautam, Jennifer Yessis, Christine Roseveare, and Asma Ali). We’re excited to tell you about the inaugural competition!

No More Crappy Survey Analysis – Best Practices in Survey Analysis for Evaluations by Janelle Gowgiel, JoAnna Hillman, Mary Davis, and Christiana Reene

Janelle, JoAnna, Mary, and Christiana here, evaluators from Emory Centers for Public Health Training and Technical Assistance. We had the opportunity to present a session entitled No More Crappy Surveys at last year’s AEA Summer Evaluation Institute. We are on a mission to rid the world of crappy surveys, and are here to share some of our Hot Tips and Rad Resources to do so.

If you haven’t already, check out the first blog post in this series, No More Crappy Surveys – Best Practices in Survey Design for Evaluations (you can check it out here). Today, we’ll be following up with some tips on how to analyze your surveys (which, of course, you’ve made sure are not crappy!). Stay tuned for our final post of this series, on how to report your findings to different audiences.