
Evaluation of Global Health Security in Militancy-Hit Areas of Pakistan Bordering Afghanistan – Lessons Learned by Saeed Ahmed

Hello, learned readers and evaluation enthusiasts, and a warm welcome to my blog post for the AEA conference. Let me introduce myself: I am Dr. Saeed Ahmad from Pakistan, a public health specialist focused on the monitoring and evaluation of public health projects. My work revolves around implementation of the US-led Global Health Security Agenda (GHSA) in Pakistan for better health for all. This blog post draws on my monitoring and evaluation of the GHSA in militancy-hit areas of Pakistan bordering Afghanistan.

In the realm of global health security, few regions pose challenges as significant as the militancy-hit areas along the Pakistan-Afghanistan border. These areas are not only afflicted by violence and conflict but also face unique socio-cultural and religious barriers that affect public health interventions. As a dedicated Health Emergency Management Consultant with extensive experience in public health, disease surveillance, and capacity building, I embarked on a groundbreaking project to evaluate US CDC-led Global Health Security indicators and polio programs in these troubled territories. In this blog, we will delve into the innovative approach employed, the invaluable lessons learned, and the implications of this work for policymakers, stakeholders, and similar conflict-affected regions around the world.

Blue Marble Evaluation Questions by Charmagne Campbell-Patton, Hannah McMillan, Mike Moore, Michael Quinn Patton, and Rees Warne

Greetings, fellow evaluators! We are members of the Blue Marble Evaluation Network, a global group engaged in asking questions about the future of our Earth and evaluation’s role in supporting a future that is just and regenerative. The Blue Marble refers to the view of Earth from space, an image of our shared planetary home without borders, boundaries, or divisions.
At the 2019 annual conference of the American Evaluation Association, ARCevaluation of Menomonie, Wisconsin (now Catalyst), sponsored a poetry contest. The winning entry, shown below, was submitted by Evgenia Valuy.

Captivate Your Crowd with Audience Engagement Principles by Sheila B. Robinson

Hi! I’m Sheila B. Robinson, Ed.D. of Custom Professional Learning, LLC. I’m a speaker, educator, consultant, and yes, a program evaluator too!

In my first career as an educator, I dedicated years to co-teaching and coaching/mentoring teachers. I was intrigued by teachers who possessed a talent for engaging students. Was it their lively personalities, raw charisma, carefully crafted lesson plans, or the particular activities the students were doing that made the difference? I began observing closely and taking notes. As I transitioned from the classroom to the conference room creating and facilitating professional development courses for teachers and school leaders, the answer became clear: it’s all of the above!

Applying Digital Development Principles to Locally Contextualize Evaluations by Kim Norris

Hi, I’m Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for American Institutes for Research (AIR)’s International Development Division. Part of my role is to lead a MEL practice. As part of our initial strategy, our practice team decided to focus on localizing our work. For us, this means seeking out ways to increase local partnering and leadership in and around MEL efforts – from business development to MEL direction and execution. This involves local team leadership, capacity strengthening, and engagement on local terms.

About my Research Focus & a Reflection on Identity as an Evalpreneur or Evaluation Consultant by Nicolas Uwitonze

Hello, my name is Nicolas Uwitonze, and I am a second-year PhD student in the Department of Agriculture Leadership and Community Education at Virginia Tech, USA. In my previous blog, I narrated my brief story in the field of evaluation and mentioned that my dissertation journey contributes towards becoming an evaluation consultant/entrepreneur. In this blog, I would like to expand a little on that conversation.

If you are excited to learn more about my research focus on “Evalpreneurship in Africa,” or would like to engage in a discussion about who an ‘evalpreneur’ is and how evalpreneurs differ from ‘evaluation consultants’, I hope that this blog is of great help!

Putting Descartes Before the Report: Telling your Evaluative Story with the Grid Design System by Rose Konecky

Hello, I’m Rose Konecky, Evaluation and Learning Consultant at TCC Group. I’m here to turn you into a creator of visualization masterpieces. Really!

As evaluators, we always have a story to tell, but we sometimes limit ourselves to words (which, of course, are important) and canned chart creators (also important!). I’m here to show you that we can leverage so much more visual storytelling power than that if we use innovative design principles. And don’t worry – a lack of artistic talent won’t stand in your way. In fact, the technique I’m about to describe is more of a science than an art. It is called the Cartesian Grid System, and you can leverage it with or without talent. All you need to do is follow five concrete steps.

The American Journal of Evaluation at the 2023 AEA Conference by Laura R. Peck

Greetings, AEA365 readers! I am Laura Peck, Co-Editor of the American Journal of Evaluation, recently appointed along with Rodney Hopson to serve a full three-year term leading our journal. Rodney and I are thrilled to have received a huge response to our invitation to engage in the journal’s leadership and work, and we are pleased to have appointed a new Editorial Team: one returning and four new Associate Editors, one returning and 12 new Section Editors, and 14 returning and 34 new members of the Editorial Advisory Board. From among the applications, an additional 28 scholars and practitioners are standing by to serve as reviewers, cite and submit work to the journal, get published, and serve as advocates for the journal. This is not an exclusive team! Indeed, we look forward to bringing seasoned and new voices and perspectives together to advance our journal’s relevance and impact. We hope those of you interested in the journal will connect and join us in some way.

Spurious Precision – Leading to Evaluations that Misrepresent and Mislead by Burt Perrin

Sometimes it is helpful to be very precise. In other cases, though, precision can be irrelevant at best and quite likely misleading – and it can destroy, rather than enhance, the credibility of your evaluation, and of you. Hi, I’m Burt Perrin, and I’d like to discuss what considerations such as these mean for evaluation practice.

If one is undergoing brain surgery, one would hope that it is done with precision, based upon established knowledge of how it should be done. But an evaluation can be no more precise than the underlying data permit – and claiming more precision than the data support is where too many evaluations go wrong.
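
To make that concrete, here is a minimal sketch in Python – with hypothetical numbers, not drawn from any actual evaluation – showing how the margin of error from a modest survey sample swamps the extra decimal places an evaluator might be tempted to report:

```python
# A minimal, hypothetical sketch: reporting "43.3333% satisfied" from a
# small survey conveys far more precision than the data actually support.
import math

n = 120            # hypothetical sample size
p = 52 / n         # observed proportion "satisfied"

# Approximate 95% margin of error for a proportion (normal approximation).
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"Point estimate: {p:.4%}")             # 43.3333% -- looks very exact
print(f"95% margin of error: +/- {moe:.1%}")  # about +/- 8.9%
low, high = round((p - moe) * 100), round((p + moe) * 100)
print(f"Honest summary: roughly {round(p * 100)}% (plausibly {low}%-{high}%)")
```

With a plausible range of roughly 34% to 52%, the four decimal places in the point estimate are spurious precision: they imply a certainty the data simply do not contain.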

Shifting the Evaluation Lens to Localization – Progress You Can See by Kim Norris

Hi, I’m Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for American Institutes for Research (AIR)’s International Development Division. Part of my role is to lead a MEL practice. As part of our initial strategy, our practice team decided to focus on localizing our work. For us, this means seeking out ways to increase local partnering and leadership in and around MEL efforts – from business development to MEL direction and execution. This involves local team leadership, capacity strengthening, and engagement on local terms.

No More Crappy Survey Reporting – Best Practices in Survey Reporting for Evaluations by Janelle Gowgiel, JoAnna Hillman, Mary Davis, and Christiana Reene

Janelle, JoAnna, Mary, and Christiana here, evaluators from Emory Centers for Public Health Training and Technical Assistance. We had the opportunity to present a session entitled No More Crappy Surveys at last year’s AEA Summer Evaluation Institute. We are on a mission to rid the world of crappy surveys, and are here to share some of our Hot Tips and Rad Resources to do so.

If you haven’t already, check out the first and second blog posts in this series, No More Crappy Surveys – Best Practices in Survey Design for Evaluations and No More Crappy Survey Analysis – Best Practices in Survey Analysis for Evaluations. Today, we’ll follow up with tips on how to report your survey findings to different audiences and how to engage partners throughout the survey process.