
Search Results for: diluzio

Captivate Your Crowd with Audience Engagement Principles by Sheila B. Robinson

Hi! I’m Sheila B. Robinson, Ed.D. of Custom Professional Learning, LLC. I’m a speaker, educator, consultant, and yes, a program evaluator too!

In my first career as an educator, I dedicated years to co-teaching and coaching/mentoring teachers. I was intrigued by teachers who possessed a talent for engaging students. Was it their lively personalities, raw charisma, carefully crafted lesson plans, or the particular activities the students were doing that made the difference? I began observing closely and taking notes. As I transitioned from the classroom to the conference room, creating and facilitating professional development courses for teachers and school leaders, the answer became clear: it's all of the above!

Applying Digital Development Principles to Locally Contextualize Evaluations by Kim Norris

Hi, I'm Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for American Institutes for Research (AIR)'s International Development Division. Part of my role is to lead a MEL practice. As part of our initial strategy, our practice team decided to focus on localizing our work. For us, this means seeking out ways to increase local partnering and leadership in and around MEL efforts, from business development to MEL direction and execution. This involves local team leadership, capacity strengthening, and engagement on local terms.

About my Research Focus & a Reflection on Identity as an Evalpreneur or Evaluation Consultant by Nicolas Uwitonze

Hello, my name is Nicolas Uwitonze, and I am a second-year PhD student in the Department of Agricultural, Leadership, and Community Education at Virginia Tech, USA. In my previous blog, I briefly narrated my story in the field of evaluation and mentioned that my dissertation journey contributes toward my becoming an evaluation consultant/entrepreneur. In this blog, I would like to expand a little on that conversation.

If you are excited to learn more about my research focus on "Evalpreneurship in Africa," or would like to engage in a discussion about who an "evalpreneur" is and how evalpreneurs differ from evaluation consultants, I hope that this blog is of great help!

Putting Descartes Before the Report: Telling your Evaluative Story with the Grid Design System by Rose Konecky

Hello, I’m Rose Konecky, Evaluation and Learning Consultant at TCC Group. I’m here to turn you into a creator of visualization masterpieces. Really!

As evaluators, we always have a story to tell, but we sometimes limit ourselves to words (which, of course, are important) and canned chart creators (also important!). I’m here to show you that we can leverage so much more visual storytelling power than that if we use innovative design principles. And don’t worry – a lack of artistic talent won’t stand in your way. In fact, the technique I’m about to describe is more of a science than an art. It is called the Cartesian Grid System, and you can leverage it with or without talent. All you need to do is follow five concrete steps.

The American Journal of Evaluation at the 2023 AEA Conference by Laura R. Peck

Greetings, AEA365 readers! I am Laura Peck, Co-Editor of the American Journal of Evaluation, recently appointed along with Rodney Hopson to serve a full three-year term leading our journal. Rodney and I are thrilled to have received a huge response to our invitation to engage in the journal's leadership and work, and we are pleased to have appointed a new Editorial Team: one returning and four new Associate Editors, one returning and 12 new Section Editors, and 14 returning and 34 new members of the Editorial Advisory Board. From among the applications, we have an additional 28 scholars and practitioners standing by to serve as reviewers, cite and submit work, get published, and advocate for the journal. This is not an exclusive team! Indeed, we look forward to bringing seasoned and new voices and perspectives together to advance our journal's relevance and impact. We hope those of you interested in the journal will connect and join us in some way.

Spurious Precision – Leading to Evaluations that Misrepresent and Mislead by Burt Perrin

Sometimes it is helpful to be very precise. In other cases, though, precision can be irrelevant at best and quite likely misleading, and it can destroy, rather than enhance, the credibility of your evaluation, and of you. Hi, I'm Burt Perrin, and I'd like to discuss what considerations such as these mean for evaluation practice.

If one is undergoing brain surgery, one would hope that it would be done with precision, based upon established knowledge of how it should be done. But one can be no more precise than the underlying data permit, and claiming greater precision than that is where too many evaluations go wrong.

Shifting the Evaluation Lens to Localization – Progress You Can See by Kim Norris


No More Crappy Survey Reporting – Best Practices in Survey Reporting for Evaluations by Janelle Gowgiel, JoAnna Hillman, Mary Davis, and Christiana Reene

Janelle, JoAnna, Mary, and Christiana here, evaluators from Emory Centers for Public Health Training and Technical Assistance. We had the opportunity to present a session entitled No More Crappy Surveys at last year’s AEA Summer Evaluation Institute. We are on a mission to rid the world of crappy surveys, and are here to share some of our Hot Tips and Rad Resources to do so.

If you haven't already, check out the first and second blog posts in this series, No More Crappy Surveys – Best Practices in Survey Design for Evaluations and No More Crappy Survey Analysis – Best Practices in Survey Analysis for Evaluations. Today, we'll follow up with tips on reporting your survey findings to different audiences and on engaging partners throughout the survey process.

Reflections from a Youth Evaluator by Yasemin Simsek

Greetings! I am Yasemin Simsek, a master’s candidate in American University’s Measurement and Evaluation program. The Quantitative Methods in Evaluation course required me to partner with an organization to identify a research need, collect and analyze data, and write a report. I had the incredible opportunity to work with the Neema Project, a nonprofit organization dedicated to empowering women experiencing poverty, gender-based violence, or teen pregnancy in Kitale, Kenya through services such as skills training, counseling, and faith-based support.

Measuring DEI in Our Own Workforce: Lessons from Four Studies Across Two Years by Laura Kim and Brooke Hill

We are Laura Kim (Senior Consultant at the Canopy Lab) and Brooke Hill (Senior Program Manager at Social Impact). Laura is part of the team that works on Canopy’s Inclusion and Leadership series, which explores the forces that influence who gets to advance in international development and why. Brooke is the technical lead for the BRIDGE survey and co-leads the Equity Incubator, a lab studying equity and inclusion through data.