Hi, my name is Danielle Hegseth, from the Improve Group. In the fall of 2012 I was fresh out of grad school and still green in my role as a research analyst. In my short time in the field I had quickly gained experience with the ups and downs of meeting client needs in a world of high expectations and limited resources. As a result, I was thankful for the opportunity to attend the upcoming AEA conference and professional development workshops, in the hope that evaluation professionals could shed light on the challenges of real-world evaluation.
Only a few months prior, I was sitting in a graduate-level stats class learning all the different ways we can determine trends or outcomes on a national level. Examples included: generous time horizons, census data (a lot of census data), and reports presented to the legislature. Examples did not include: imperfect or missing data, stakeholders with limited research and evaluation knowledge, or impossible time horizons. Nor did they include a client with a low-stakes budget serving a high-risk population and a funder-imposed need for a report that says "our program works."
At the conference I attended a full-day workshop on applications of multiple regression. It was refreshing to talk about that kind of technical analysis in the context of evaluation, which was quite different from much of the statistical discussion I'd experienced in academia.
Lesson Learned: Positive outcomes in evaluation are more than just t-tests
Interpreting statistical tests for program evaluation looks a lot different from the more traditional determinations of statistical significance, or even the elusive causation. In my AEA workshop the instructor talked about these challenges and about how to examine both the statistical integrity of data and its role in the larger program context in order to report on outcomes most appropriately. These workshops (and the conference) allowed for discussions around what positive outcomes look like in all their forms and, most critically, what they mean for our clients and their programs.
Lesson Learned: While it may sound like a cliché, real-world evaluation truly is an art and a science, balancing academy smarts with street smarts
Our clients face many challenges with their programs and come to us for help organizing, analyzing, and creating practical meaning, work that requires both training and experience. My biggest lesson learned from AEA was not a revelation but a reinforcement: this work is delicate and valuable, but most importantly, imperfect. While I highly regard my academic education, I look forward to my continuing education in the field, aided by my Improve Group colleagues, clients, and fellow evaluation professionals.
The American Evaluation Association is celebrating Minnesota Evaluation Association (MNEA) Affiliate Week with our colleagues in the MNEA AEA Affiliate. The contributions all this week to aea365 come from our MNEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
“Positive outcomes in evaluation are more than just t-tests.” ← Yes!