
Search Results for: jargon

Climate Ed Eval Week: Susan Lynds on Considering Scientific Jargon to Avoid Communication Barriers

My name is Susan Lynds and I am a program evaluator at the Cooperative Institute for Environmental Sciences at the University of Colorado, Boulder.  I have been part of evaluation efforts for several tri-agency climate education programs funded by the National Science Foundation, the National Aeronautics and Space Administration, and the National Oceanic and Atmospheric …


Health Evaluation TIG Week: Simple GenAI Tips and Tricks for Evaluators by Molly Linabarger

My name is Molly Linabarger and I am a part of the Evaluation and Research for Action practice at Deloitte Consulting LLP. I serve as an evaluator for government clients, helping them to assess, understand, and communicate the implementation and impact of their programs. You likely have heard about Generative Artificial Intelligence, or GenAI. Precursor …


Washington Evaluators Affiliate Week: How the Evidence Act Has Spurred Action in the Federal Government by Natalie Donahue

Hi! I’m Natalie Donahue. I am the Chief of Evaluation in the State Department’s Bureau of Educational and Cultural Affairs’ Monitoring, Evaluation, Learning, and Innovation (MELI) Unit, and I am the Washington Evaluators (WE) Past President. The Evidence Act has had a great impact on federal evaluation practices. Over the past five years we’ve seen federal agencies create learning agendas, increase capacity-building efforts, update (or, in some cases, create) evaluation policies and accompanying guidance documents, and increase collaborative efforts around evaluation – both internally and with other agencies.

No More Crappy Survey Reporting – Best Practices in Survey Reporting for Evaluations by Janelle Gowgiel, JoAnna Hillman, Mary Davis, and Christiana Reene

Janelle, JoAnna, Mary, and Christiana here, evaluators from Emory Centers for Public Health Training and Technical Assistance. We had the opportunity to present a session entitled No More Crappy Surveys at last year’s AEA Summer Evaluation Institute. We are on a mission to rid the world of crappy surveys, and we are here to share some of our Hot Tips and Rad Resources to do so.

If you haven’t already, check out the first and second blog posts in this series, No More Crappy Surveys – Best Practices in Survey Design for Evaluations and No More Crappy Survey Analysis – Best Practices in Survey Analysis for Evaluations. Today, we’ll follow up with tips on how to report your survey findings to different audiences and how to engage partners throughout the survey process.

Making Complex Content Clear: AI’s Potential for Readability in Evaluation by Jeff Kosovich

Hello! I’m Jeff Kosovich and I am a senior evaluator at the Center for Creative Leadership. One of the challenges of producing technical reports and surveys meant for people outside your area of expertise is avoiding unnecessary complexity and jargon. I’m currently testing the effectiveness of tools like ChatGPT as a time-saving method of making surveys and reports more accessible.

Qualitative Interviews: the Deep Breath Evaluations Need by Maya Lefkowich and Michaela Raab

Hi, we are Maya Lefkowich and Michaela Raab, and we are evaluators with 30 combined years of evaluation experience. Maya is an arts-based methodologist and evaluator based in Vancouver (Canada) passionate about equity-driven and joy-centred evaluation. Michaela, working from Berlin (Germany), is a senior evaluator and facilitator in international cooperation who enjoys making sense of complex projects around the world. We met online and immediately connected on the question: how can evaluators make the most of interviews?

OL-ECB TIG Week: Putting Capacity Back in Capacity-Building by Gretchen Biesecker

Hi, I am Gretchen Biesecker, Principal Consultant with Bee’s Knees Consulting LLC in Somerville, MA. A large part of my practice focuses on evaluation capacity-building with nonprofits small and large, including AmeriCorps programs across the U.S. AmeriCorps is a federal agency that “brings people together to tackle the country’s most pressing challenges through national service and volunteering.” Through a national network, AmeriCorps enrolls 200,000 Americans each year to meet critical needs in education, the environment, disaster services, and public health, among other areas.

ChatGPT: Considering the Role of Artificial Intelligence in the Field of Evaluation (Part 1) by Silva Ferretti

Hello! I am Silva Ferretti, an independent consultant working mostly with development and humanitarian organizations. I am keen to understand “how change really happens” – in practice and in complex settings. I craft my approaches to be learning-focused, participatory, fresh, creative, fun… yet deep!

By now, you have likely heard of ChatGPT, an Artificial Intelligence model that interacts in a conversational format. I have been playing with it for some time now. Not only am I amazed by it, but I am also surprised by the lack of debate regarding AI’s role in development and humanitarian program management. It is a game changer. We as a field should be looking into it NOW.