AEA365 | A Tip-a-Day by and for Evaluators


My name is Mika Yamashita, and I am a program chair of the Mixed Methods Evaluation Topical Interest Group (TIG). The Mixed Methods Evaluation TIG was founded in 2010 to be a space for members to “examine the use of mixed methods evaluation through reflective analysis of philosophy, theory and methodology that is developing in the field of mixed methods” (petition submitted to AEA in 2010). Evaluation 2012 will be the third year in which we sponsor sessions.

Mixed Methods Evaluation TIG members who presented at past conferences contributed this week’s posts. A majority of those presentations focused on findings from mixed methods evaluations, analysis of data collection and analysis methods, and strategies used by evaluation teams, so this week’s posts will cover these topics. On Monday, Tayo Fabusuyi and Tori Hill will highlight the framework used for the evaluation of a minority leadership program. On Tuesday, Leanne Kallemeyn and her colleagues at Loyola University will share lessons learned from, and tips for, conducting integrated analysis. On Wednesday, Kristy Moster and Jan Matulis will walk us through how their evaluation team members worked to analyze data from multiple sources. On Thursday, Hongling Sun will share lessons learned from conducting a mixed methods evaluation. Finally, on Friday, Terri Anderson will share her evaluation team’s experience using the National Institutes of Health’s guide, Best Practices for Mixed Methods Research in the Health Sciences, to understand an unexpected evaluation result.

Rad Resources: Listed below are resources I found helpful for learning about mixed methods evaluation.


The American Evaluation Association is celebrating Mixed Methods Evaluation TIG Week. The contributions all week come from Mixed Methods Evaluation TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·

I’m Leslie Goodyear, and I’m a Program Officer at the National Science Foundation, in the Division of Research on Learning in Formal and Informal Settings (DRL). The programs in DRL include: Informal Science Education (ISE), Discovery Research K-12 (DR K-12), Research and Evaluation on Education in Science and Engineering (REESE), and Innovative Technology Experiences for Students and Teachers (ITEST). I have a hot tip about how to become a proposal reviewer for NSF/DRL.

NSF’s proposal merit review process generally includes review by outside experts. For DRL, experts in Science, Technology, Engineering and Mathematics (STEM) education, research methods, learning sciences, evaluation, and other areas are typically brought together in panels. They discuss the relative merits of the proposals and offer their best thinking to NSF program officers based on two primary review criteria: intellectual merit and broader impacts. Advice from reviewers and panels is critical to informing program officers, who make the recommendations for awards.

As a panel reviewer, you’ll read about 15 to 20 proposals (each about 15 pages long); write reviews for about six to eight of them; join 10 to 12 colleagues for a two-day review panel in Arlington, Va., home to NSF; discuss the proposals and the reviews; and rate the proposals’ priority for funding. Reviewers who travel to NSF are paid a stipend for the days they serve on the panel, and their travel is covered by NSF; ad hoc reviewers, who normally review just one or two proposals without serving on a panel, are not paid. In addition to providing a valuable service to NSF and the field, you’ll learn a lot about what makes a good proposal and how the NSF review process works. Most people who participate consider it a great professional development opportunity.

Because DRL programs require project evaluation, the proposals submitted include evaluation plans. Thus, DRL always needs experienced, competent evaluation professionals to gauge the quality of these plans. We primarily look for evaluators who have experience conducting evaluations of STEM education programs. We also look for evaluators with strong methodological training, experience with formal or informal educational settings (in-school or out-of-school), expertise in evaluating research, and practical expertise in evaluating community programs.

Hot tip: If you’d like to be considered for serving as a proposal reviewer, first go to the NSF website (below) and learn about our programs by reading the program solicitations. Then send me, Leslie Goodyear (lgoodyea@nsf.gov), your CV and a cover letter describing yourself, your expertise and experience, and the program(s) for which you’d like to serve as a reviewer. I will then forward them to the appropriate cluster within the division.

NSF DRL Website: http://nsf.gov/div/index.jsp?div=DRL

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.
