Greetings! I’m Ann Emery from Innovation Network in Washington, DC. I also tweet and blog about my adventures as an evaluator for nonprofits and foundations.
Lessons Learned: Are you a recent graduate or novice evaluator? If so, you’ve probably already experienced tremendous professional and personal growth. Congratulations! However, you’ve probably also learned that evaluation is challenging! As I followed the #evalAHA hashtag on Twitter during the AEA conference, I started reflecting on my own aha moments. When faced with new evaluation challenges, celebrating my past aha moments gives me fuel and pushes me forward.
Hot Tips: Here are the aha moments I experienced during my first few years in evaluation:
- Evaluation isn’t research. This realization is especially common for those of us who entered evaluation from the social sciences. Hallie Preskill’s graphic comparing research and evaluation and Jane Davidson’s article “Unlearning Some of Our Social Scientist Habits” have been invaluable during my transition from research into evaluation.
- Data will always be missing and messy. Despite your best precautions, you’ll need to budget plenty of time for data cleaning. One of my favorite resources is Missing Data: A Gentle Introduction by Patrick McKnight, Katherine McKnight, Souraya Sidani, and Aurelio José Figueredo.
- Evaluation takes time. Planning an evaluation and carrying it out can take months (if not years!), and the resulting programmatic and policy changes can take years (if not decades!). I remind myself that evaluation is a marathon – not a sprint – and I rejoice in all victories, no matter how small.
- Qualitative methods rock. Qualitative data are meaningful and useful for program staff, and qualitative approaches are often a better fit for newer or not-yet-evaluated programs than quantitative approaches.
- Randomized controlled trials are no longer the gold standard, and they aren’t appropriate for every program.
- Jargon’s unacceptable – well, most of the time. Just when I figured out how to really, truly banish jargon from my reports, I started working on an evaluation project for economists, and they loved reading statistical details. That old saying “The only constant is change, continuing change, inevitable change” certainly rings true in my own evaluation practice. I revisit and question my assumptions, approaches, and techniques with every new project.
Lessons Learned: Have you had similar aha moments? What types of evaluations, events, and experiences have prompted your own insights?
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
I’d say one of my big eval AHA moments was when I was in grad school learning about Program Eval and realized how incredibly diverse this field is. I went from “what the heck is that?” (I started out concentrating in I/O Psych, then picked up program eval) to “it’s everywhere!”
Maureen, Great point! I’m in the “it’s everywhere!” phase too. Most recently, I’m discovering lots of similarities between how my MBA friends help businesses track their progress and how evaluation friends and I help non-profits, schools, and other organizations track their progress. -Ann
I like your hot tips, thank you Ann. Especially what you said about qualitative methods, RCTs, and jargon.
Coming from applied research and evaluation, I would note that applied research, like evaluation, can also be conducted to support decision-making about policies and programs.
Ann,
Nice post. Regarding the research-evaluation split, the thing that made it click for me was being told: research is designed to create knowledge; evaluation is designed to support decision-making. There is overlap, and many of the methods are the same, but keeping that functional distinction in mind has always helped me.
Great post Ann! I agree with all of your Hot Tips and appreciate the resources you offer. I wasn’t familiar with the book Missing Data, so I’ll add that one to my list! As for RCTs, I’m a big fan of Michael Quinn Patton and his argument for methodological appropriateness as the “gold standard.” This argument can be read in the Claremont debates with Michael Scriven (http://ccdl.libraries.claremont.edu/cdm/singleitem/collection/lap/id/70), and in Patton’s book Utilization-Focused Evaluation 4th Ed. (http://www.sagepub.com/books/Book229324).
Sheila, I wasn’t familiar with Michael Quinn Patton’s argument for methodological appropriateness as the gold standard. What a great way to explain it! Thanks for sharing. Ann