My name is Tao Gong, and I’m an Associate Professor in the Department of Social Sciences at the University of Maryland Eastern Shore. I’m the 2019-2020 Minority Serving Institution Fellow. For the last few years I’ve been teaching Evaluation and Policy Research, covering topics such as evaluation basics, types of evaluation research, evaluation designs and methodology (quantitative, qualitative, and mixed methods), and ethics in evaluation. Having participated in the Fellowship program, I want to share some lessons I learned in teaching these topics.
Lesson Learned #1: Formulate course objectives using the American Evaluation Association (AEA)’s competencies. Prior to starting the Fellowship program, I was not familiar with the expectations and competencies of a competent evaluator. I taught my evaluation course mainly from a technical perspective, which did not ground my teaching pedagogy in evaluation practice and created a disconnect between the competencies required for the profession and the course objectives. For example, one of my course objectives was to describe the importance of involving stakeholders throughout the evaluation process. After learning the evaluator competencies, I revised the objective to be more concrete and tangible: involving stakeholders in designing, implementing, interpreting, and reporting evaluations as appropriate.
Lesson Learned #2: My teaching expectations used to focus mainly on technical competencies, such as evaluation design and data analysis methods, without recognizing the importance of including stakeholders’ cultural values and beliefs in all aspects of the evaluation process and grounding my teaching pedagogy in a culturally responsive perspective. In summer 2019, the AEA provided me an opportunity to attend the Evaluation Institute, where several workshops trained me to understand that evaluation practice is culturally based and taught me the steps for conducting evaluation using the culturally responsive evaluation framework. After returning from the workshops, I made several changes to my evaluation class, including emphasizing that data collection and analysis must use credible, feasible, and culturally appropriate procedures, and that it is critical to engage stakeholders in interpreting results, reviewing drafts, and suggesting explanations.
Lesson Learned #3: I used to believe that evaluation projects should be designed around the methodology, so my teaching focused too much on the technical side rather than recognizing the importance of evaluation theories in guiding evaluation practice. Evaluation theory is useful for framing evaluation questions and guiding project design and analytical methods. Applying an evaluation theory as the framework for evaluation practice can guide project implementation, help explain program outcomes, and ground results in the relevant literature. Based on this understanding, I revised my syllabus to include several key evaluation theories, such as applying the culturally responsive evaluation framework and discussing the role of program theory and its value in evaluation research.
The American Evaluation Association is hosting AEA Minority Serving Institution (MSI) Fellowship Experience week. The contributions all this week to aea365 come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
I appreciated reading your post and agree with the value you emphasized in the three learnings you highlighted. When you stated, “My teaching expectations used to focus mainly on the technical competencies, such as the design and data analysis methods of evaluation without recognizing the importance of including stakeholders’ cultural values and beliefs in all aspects of the evaluation process and grounding the teaching pedagogy in a culturally responsive perspective,” I connected this to an article I read about process use. The more evaluators become schooled in the structure, culture, and politics of their program and policy communities, the better prepared they are to be strategic about the factors most likely to affect use (Shulha & Cousins, 1997). I believe your post contributes to the emphasis on use and to the complexities involved in achieving evaluation use through process. Overall, involving stakeholders, being culturally responsive, and carefully considering theory in an evaluation design all seem to work toward the utilization of evaluation in a purposeful and meaningful manner for everyone involved in the evaluation process. Thank you for your perspective and for sharing your learning experience.
Shulha, L., & Cousins, B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18(3), 195-208.