AEA365 | A Tip-a-Day by and for Evaluators


Hello! I am Rebecca Teasdale, a doctoral student in Educational Psychology specializing in evaluation methodology at the University of Illinois at Urbana-Champaign. I’m also a librarian and have served as an administrator and science librarian in public libraries. My current work focuses on the evaluation of interest-driven learning related to science, technology, engineering and math (STEM) that takes place in public libraries and other informal learning settings.

I first became involved with the Building Informal Science Education (BISE) project as an intern at the Science Museum of Minnesota while I was pursuing a certificate in evaluation studies at the University of Minnesota. (See my blog post, “Measuring behavioral outcomes using follow-up methods,” to learn more). Now, I’m using the BISE database to support my research agenda at Illinois by identifying methods for evaluating the outcomes of public library STEM programming.

Evaluation practice is just getting started in the public library context, so few librarians are familiar with evaluation methods for measuring mid- and long-term outcomes of informal science education (ISE) projects. I used the BISE reports as a window into (a) the types of outcomes that ISE evaluators study, (b) the designs, methods, and tools that they use, and (c) the implications for evaluating the outcomes of STEM programs in public libraries.

Lessons Learned:

  • I’ve found little standardization among the evaluation reports in the BISE database. Therefore, rather than provide a single model for libraries to replicate or adapt, the BISE database offers a rich assortment of study designs and data collection methods to consider.
  • Just 17% of the reports in the BISE database included the follow-up data collection necessary to examine mid- and long-term outcomes. As library evaluators, we should be sure to design studies that examine these longer-term effects as well as more immediate outcomes.
  • Collecting follow-up data can be challenging in informal learning settings because participation is voluntary, participants are frequently anonymous, and engagement is often short-term or irregular. The reports in the BISE database offer a number of strategies that library evaluators can employ to collect follow-up data.
  • All five impact categories from the National Science Foundation-funded Framework for Evaluating Impacts of Informal Science Education Projects are represented in the BISE database. I’m currently working to identify some of the methods and designs for each impact category that may be adapted for the library context. These impact categories include:
    • awareness, knowledge or understanding
    • engagement or interest
    • attitude
    • behavior
    • skills

Rad Resource:

  • I encourage you to check out the BISE project to inform evaluation practice in your area of focus and to learn from the wide variety of designs, methods, and measures used in ISE evaluation.

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Amy Grack Nelson, Evaluation & Research Manager at the Science Museum of Minnesota. I’m part of a really cool National Science Foundation-funded project called Building Informal Science Education, or as we like to refer to it – BISE. The BISE project is a collaboration between the University of Pittsburgh, the Science Museum of Minnesota, and the Visitor Studies Association. This week we’ll share what we learned from the project and what project resources are freely available for evaluators to use.

Within the field of evaluation, there are a limited number of places where evaluators can share their reports. One such resource is informalscience.org. Informalscience.org provides evaluators access to a rich collection of reports they can use to inform their practice and learn about a wide variety of designs, methods, and measures used in evaluating informal education projects. The BISE project team spent five years diving deep into 520 evaluation reports that were uploaded to informalscience.org through May 2013 in order to begin to understand what the field could learn from such a rich resource.

Rad Resources:

  • On the BISE project website, you’ll find lots of rad resources we developed. Our BISE Coding Framework was created to code the reports in the BISE project database; its coding categories and related codes align with key features of evaluation reports and the coding needs of the BISE authors. You’ll also find our BISE NVivo Database and a related Excel file in which we’ve coded all 520 reports using the BISE Coding Framework, along with a tutorial on how to use the BISE NVivo Database and a worksheet to help you think about how you might use the resource in your own practice. You can also download a zip file of all of the reports to have them at your fingertips. (For one way you might explore the coded data, see the sketch after this list.)
  • This project wouldn’t be possible without the amazing resource informalscience.org. If you haven’t checked out this site before, you should! And if you conduct evaluations of informal learning experiences, consider sharing your report there.
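
To make the idea of querying the coded reports concrete, here is a minimal sketch in Python with pandas. It is not the official BISE workflow: the file name and the column names ("impact_category", "followup_data") are hypothetical placeholders, so substitute the actual labels you find in the BISE Coding Framework and the downloaded spreadsheet.

```python
# A minimal sketch (not the official BISE workflow) of exploring the coded
# reports after downloading the Excel export from the BISE project website.
# The file name and column names are hypothetical placeholders; substitute
# the actual labels from the BISE Coding Framework.
import pandas as pd

# Load the coded-report spreadsheet (hypothetical file name).
reports = pd.read_excel("bise_coded_reports.xlsx")

# How many reports were coded to each impact category?
print(reports["impact_category"].value_counts())

# Which reports include follow-up data collection, useful for studying
# mid- and long-term outcomes?
followup = reports[reports["followup_data"] == "yes"]
print(f"{len(followup)} of {len(reports)} reports include follow-up data")
```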

Lessons Learned:

  • So what did we learn through the BISE project? That you can learn A LOT from others’ evaluation reports. In the coming week you’ll hear from four authors who used the BISE database to answer a question they had about evaluation in the informal learning field.
  • What lessons can you learn from our collection of evaluation reports? Explore the BISE Database for yourself and post comments on how you might use our resources.

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Courtney Blackwell, Heather King, and Jeanne Century from Outlier Research & Evaluation at the University of Chicago. For the last 3.5 years, we have been researching and evaluating computer science education efforts.

Computer Science (CS) is becoming a buzzword in education, with educators, policymakers, and industry developers promoting CS as key to developing 21st Century skills and a pathway to employment. While CS is not new to education, the spotlight on it is. In 2014, over 50 U.S. school districts, including the seven largest, pledged to make CS education available to all students.

As with all buzzwords, most people have their own vague idea of what CS means, but even experts working within CS education do not yet have a clear, agreed-upon definition. If evaluators are going to accurately measure the effects of CS education efforts on teaching and learning, and accumulate knowledge and understanding, we need a clear definition of what “CS education” is. Until CS educators create shared definitions themselves, we, as evaluators, can do our part by ensuring our logic models, strategies, and measures clearly and specifically describe the innovation (computer science education) so that our work can inform others and further the field.

Lessons Learned: Evaluating an ill-defined intervention is not an uncommon problem. In the case of CS, however, the capacity to articulate that definition is limited by the state of the field. As evaluators, we have to find alternatives. In our evaluation of Code.org’s computer science education efforts, we ask students to provide their own definition of CS at the beginning of our questionnaires. Then, we provide a specific definition for them to use for the remainder of the questionnaire. This way, we capture student interpretations of CS and maintain the ability to confidently compare CS attitudes and experiences across students. Similarly, we begin interviews with teachers, school leaders, and district leaders by asking, “How do you define computer science education?”

Hot Tip: Always ask participants to define what they mean by computer science.

Rad Resource #1: A recent survey by the Computer Science Teachers Association (CSTA) found that high school leaders don’t share a common definition of CS education. This suggests that school leaders may promote their schools as providing “computer science” when in fact they are providing activities that would not be considered CS at the college and professional levels.

Rad Resource #2: Check out LeadCS.org, a new website about to launch, for definitions of key terms in computer science education. The website offers a range of tools for K-12 school and district leaders and their partners who seek to begin or improve CS education programs.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Jill Denner from Education, Training, Research (ETR), a non-profit organization that does research, evaluation, program development, and professional development. We partner with schools and colleges to increase the interest and capacity of girls and Latino/a students to pursue computer science.

Computer Science Education in K-12 is a relatively new space. It is a young discipline that is trying to distinguish itself from other Science, Technology, Engineering, and Mathematics (STEM) fields. And rightfully so. The “T” is different in many ways: There is less diversity in “T” classes and programs. Most programs do not have clear goals or a logic model to describe how their activities will lead to identified goals. There are many different learning outcomes, but few validated measures, established theories, or clear stakeholders who can drive key decisions about evaluation design, sampling, and measurement.

Hot Tips: Evaluation can make real contributions to a field that is in its infancy, but we need to know who the stakeholders or audience are and what they want to know. In CS education there are different views, but most want to know who is benefiting, and why or why not. For example:

  • Funders require evaluation to document return on investment. These include the US National Science Foundation, private foundations like the Motorola Solutions Foundation, and tech companies like Google.
  • Educators and program developers want evaluation to help them improve impact, design new programs, and secure more funding.
  • Policymakers want to know which programs or policies to invest in.
  • Researchers want to inform theory, test hypotheses, and fill gaps in knowledge.

Cool Tricks: How can you do good evaluation without established theories, logic models, or measures? This issue is particularly relevant for a field that places a high priority on increasing diversity. The following issues are important to consider when evaluating CS education:

  • Culturally responsive evaluation can help evaluators avoid perpetuating unconscious bias about the type of person who belongs in computing fields
  • Getting demographic information is important, but asking students about their gender or race/ethnicity before questions about computing might trigger stereotype threat and affect responses
  • Studying only individual factors misses the relational and institutional factors that affect participation and program impact

Rad Resources:

  • Mark Guzdial’s blog covers issues central to computer science education, including his article on the challenges facing computing education.
  • Kimberly Scott and her colleagues have developed a theory of culturally responsive computing
  • Talking points on unconscious bias in the classroom from the National Center for Women & Information Technology (NCWIT) can help evaluators avoid triggering stereotype threat
  • Google’s recent reports on computer science education provide landscape data on key issues.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello, we are Eric Snow, Marie Bienkowski and Daisy Rutstein, from the Center for Technology in Learning at SRI Education. In our work in computer science education research and evaluation we are routinely asked to help clients implement assessments that support valid inferences about students’ computational thinking-oriented learning outcomes. We have learned many lessons from these experiences and would like to share some Lessons Learned and Hot Tips with the AEA STEM-CS community.

The new Exploring Computer Science (ECS) and Computer Science Principles (CSP) curricula are spreading throughout U.S. high schools via NSF-sponsored pilots and, combined with the advocacy efforts of organizations such as Code.org, will continue their expansion. As these CS curricula reach more schools and students, teachers implementing the instructional activities need high-quality assessments so they can make valid inferences about students’ computational thinking (CT) practices and better support student learning of those practices.

Lessons Learned: Assessments are used in different ways for different purposes. Assessment “use” means interpreting scores and acting on, or making inferences from, the interpretation. Some uses of assessments, each with their own purpose and supported inferences, are listed below.

Formative Use

  • Purpose: discerning student misconceptions and/or preparation for future learning.
  • Score interpretation: where a student is in his or her learning of particular concepts, pointing to instructional actions to improve learning or dislodge misconceptions.

Summative Use

  • Purpose: obtaining an overall score indicating whether or not students have grasped the important concepts taught.
  • Score interpretation: overall proficiency of the student.

Teacher Evaluation

  • Purpose: determining how effective a teacher is at teaching the material of interest.
  • Score interpretation: effectiveness of the teacher and his/her instruction.

Research or Project Evaluation

  • Purpose: determining the efficacy/effectiveness of one or more education interventions.
  • Score interpretation: differentiating students or teachers, or determining growth of teachers or students.

Our experience has taught us that the use of assessments and their results needs to be approached with caution because there may be negative consequences of using an assessment for a purpose for which it has not been validated.

Hot Tips: Evaluators can help clients ensure that the assessments they want to use are aligned with the purposes for which the assessments were designed and validated by:

  • Co-designing a clear logic model relating program inputs, processes and short- and long-term outcomes. This will help clarify the purposes of any assessments that need to be administered.
  • Helping clients recognize that assessments are not “plug-and-play,” and helping them obtain the resources they need to critically evaluate the appropriateness of existing assessments for their measurement needs.
  • Helping clients use assessment results in ways consistent with the intended purpose(s) of the assessment.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi, I’m Wendy DuBow, a senior research scientist and director of evaluation at the National Center for Women & Information Technology (NCWIT). Our mission is to increase the meaningful participation of women in technology fields. We focus on sharing theory- and evidence-based practices with stakeholders in education and industry to support them as they recruit, retain, and promote girls and women in tech. In my position, I see a lot of K-20 interventions aimed at increasing women in tech and, alongside them, a wide variety of measurement instruments.

Lesson Learned: Using Social Cognitive Career Theory. Most of the evaluations I see don’t take advantage of theory or past empirical evidence to ground their assessments. It would be great to share more theory- or evidence-based evaluation approaches. The social cognitive career theory (SCCT) model has been widely used to explain people’s educational and career interests in STEM. We wanted to specifically assess students in computer science-related programs, so we developed an instrument that uses SCCT to assess five constructs: interest, self-efficacy, outcome expectations, perceived social supports and barriers, and intent to persist in computing. Our survey has been used in a number of different educational settings, with middle and high school students as well as students at the college level and above. Of course, there are many other valid and reliable instruments available to evaluators of STEM education programs, but it can be hard to find them when you’re pressed for time in the proposal writing or instrument development stages. For expediency and for the larger good of sharing data and measuring interventions systematically, I would very much like to see STEM education evaluators and researchers have a shared repository of instruments. To this end, I’m holding two sessions at the Chicago AEA meeting to discuss this idea (see Get Involved below).
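
If you work with an SCCT-style instrument, a common analysis step is computing subscale scores for each construct. Here is a minimal sketch in Python, assuming hypothetical item names, a hypothetical construct-to-item mapping, and a 1-5 Likert scale; the actual NCWIT instrument’s items and scoring guidance may differ, so follow the documentation that comes with the survey.

```python
# A minimal sketch of scoring SCCT-style construct subscales from survey
# responses. Item names, the Likert scale, and the construct-to-item mapping
# are hypothetical; follow the actual instrument's scoring guidance.
import pandas as pd

# Hypothetical mapping of constructs to survey item columns.
constructs = {
    "interest": ["int_1", "int_2", "int_3"],
    "self_efficacy": ["se_1", "se_2", "se_3"],
    "outcome_expectations": ["oe_1", "oe_2"],
    "supports_barriers": ["sb_1", "sb_2", "sb_3"],
    "intent_to_persist": ["ip_1", "ip_2"],
}

responses = pd.read_csv("scct_responses.csv")  # hypothetical survey export

# Compute each respondent's mean score per construct (higher = stronger).
scores = pd.DataFrame(
    {name: responses[items].mean(axis=1) for name, items in constructs.items()}
)
print(scores.describe())
```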

Hot Tip: Our SCCT survey instrument is publicly available upon request.

Cool Trick: We currently use SurveyMonkey for online surveys and also have access to Qualtrics, so if you use either of these tools, we can share our SCCT survey directly with your pro account, already formatted, though you can customize it as you see fit. We just ask that you acknowledge NCWIT in any presentations or write-ups of the data.

Rad Resource: A variety of STEM assessment tools have already been collected in the engineering field.

Lessons Learned: Be sure that all of the SCCT survey constructs match the intended outcomes of the program, and tailor the wording of the parenthetical explanations of each item to the program being evaluated.

Get involved: Please come to the AEA 2015 Think Tank “Improving the Quality and Effectiveness of Computer Science Education Evaluation Through Sharing Survey Instruments” and the multi-paper session “Four Approaches to Measuring STEM Education Innovations: Moving Toward Standardization and Large Data Sets.”

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi! My name is Taylor Martin and I’m a Professor in Instructional Technology and Learning Sciences at Utah State University. I also run the Active Learning Lab (activelearninglab.org) where I work with a great bunch of colleagues and students researching and evaluating active and fun learning experiences for kids of all ages around programming, computational thinking, mathematics and engineering.

We work with kids and teachers learning in MakerSpaces and FabLabs (http://makermedia.com) and in day camps and schools programming in visual programming languages like Scratch (https://scratch.mit.edu). In both settings, instructors and teachers see kids creating cool and complex products of their own design, whether it’s a 3D-printed geometric shape, a miniature animal or person, or really anything they dream up, or a Scratch animation or game. What’s harder to see is the complex computational thinking that goes into making these objects. As teachers and instructors, we also see how excited and engrossed kids often are in these activities, and in general we think that should make these really promising environments for learning.

Hot Tip: We can often evaluate students’ level or type of engagement better when we measure it without interrupting their activity. Think about it: how excited about programming a game in Scratch would you be if I kept asking you every five minutes, “On a scale of 1-10, how engaged are you right now?” People like Ryan Baker, Sidney D’Mello, and others have been using machine learning to build detectors for states of engagement like concentration, boredom, or frustration to avoid this issue.
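
To illustrate the basic shape of a detector, here is a toy sketch in Python with scikit-learn. It is not Baker’s or D’Mello’s actual method: the log features, the label column, and the file are invented for illustration, and real detectors are trained against carefully validated human field observations and much richer feature sets.

```python
# A toy sketch of an engagement "detector": a classifier that predicts a
# coded engagement state (e.g., concentration, boredom, frustration) from
# interaction-log features. Feature names, labels, and the file are
# hypothetical; real detectors are built from validated field observations.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

logs = pd.read_csv("interaction_logs.csv")  # hypothetical labeled log export

features = logs[["actions_per_minute", "idle_seconds", "errors", "undo_count"]]
labels = logs["observed_state"]  # e.g., "concentration", "boredom", "frustration"

detector = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(detector, features, labels, cv=5).mean())
```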

Rad Resource: For evaluating learning, people like Val Shute, Matthew Berland, and Marcelo Worsley have been creating novel ways to figure out what people know at any given time based on what they are doing. One example is applying machine learning and data mining to the backend data produced while a kid plays a game like Physics Playground. People are also starting to create generalized tools, such as adageapi.org, that can plug into games as they are being developed and provide built-in data capture and analysis across different platforms. Another example is collecting sensor data and pulling it together to figure out what people are doing in a variety of environments like MakerSpaces.
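
A first step in that kind of backend-data work is turning raw telemetry (one event per row) into per-player features that downstream data mining or a stealth-assessment model can consume. Here is a minimal sketch under invented assumptions; the event names, columns, and file are illustrative only and are not the Physics Playground or ADAGE schema.

```python
# A minimal sketch of aggregating raw game telemetry into per-player
# features for downstream data mining. Event names, columns, and the file
# are invented for illustration.
import pandas as pd

events = pd.read_csv("game_events.csv")  # columns: player_id, event, level, timestamp

per_player = events.groupby("player_id").agg(
    levels_attempted=("level", "nunique"),
    total_events=("event", "size"),
    solutions=("event", lambda e: (e == "level_solved").sum()),
    restarts=("event", lambda e: (e == "level_restart").sum()),
)
per_player["solve_rate"] = per_player["solutions"] / per_player["levels_attempted"]
print(per_player.head())
```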

I’d love to hear back from others with the resources they’ve developed and discovered in this space.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome to STEM TIG week! My name is Jason Ravitz and I conduct research and manage evaluations of educational outreach projects at Google. This week our blogs focus on computer science (CS) education, which may officially be counted as a STEM field by an act of Congress.

The CS First project is one curriculum that is available for use by schools, camps and after-school programs. In one case, Google has funded Boys and Girls Clubs of America to deploy AmeriCorps VISTAs to build capacity and use this curriculum in summer camps and afterschool clubs. The goals of CS First include increasing confidence and providing a sense of belonging in technology for underrepresented students.

This has been exciting work, but I’ve found there are challenges when evaluating informal STEM and CS programs. Not least is having to ask teachers or volunteers to administer pre-post tests of content learning. To everyone involved, this feels like it defeats the purpose, which is to have fun and not feel like school.

Cool Trick: Try to make assessment instructional and fun. We chose five basic-level assessment items for a pre-test and asked volunteers to try them once and report how it went. Meanwhile, to make these assessments less burdensome, I came up with a way to punctuate each question with a fun activity to illustrate what kids would be learning. We had various ideas, like playing “Simon says…” as a way of demonstrating commands and loops. These would work even better with clickers, and maybe as part of the curriculum. There are more vetted activities, with accompanying research, at CS Unplugged. This is a “cool trick” because it can make assessment a learning activity that feels less like school. Even if our ideas weren’t generally used with the pre-tests, they showed we were listening to concerns about over-testing and its potential impact on what should be a fun club climate.

Hot Tip: Plan ahead with the curriculum provider. We are coordinating with the curriculum developer to incorporate information from their assessments and produce reports. This is non-trivial, but it will be very beneficial for our evaluation. Among its assessments, CS First has a way of scoring students’ code. The system they use, which you can try online, is Dr. Scratch, developed by researchers in Spain. We hope our external pre-post tests can quickly be used to validate the embedded assessments (or refine them) so our external assessments can be retired.

We all want to hear what challenges others are facing and to hear your solutions. Thanks to Kimberle Kelly and AEA’s STEM Education and Training TIG for help organizing this week of blogs. Please join us at our conference sessions and tell us what you think!

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Anna Douglas and I conduct evaluation and assessment research with Purdue University’s Institute for Precollege Engineering Research, also known as INSPIRE. This post is about finding and selecting assessments to use in the evaluation of engineering education programs.

Recent years have seen an increase in science, technology, engineering, and mathematics (STEM) education initiatives and emphasis on bringing engineering learning opportunities to students of all ages. However, in my experience, it can be difficult for evaluators to locate assessments related to learning or attitudes about engineering. When STEM assessment instruments are found, oftentimes they do not include anything specifically about engineering. Fortunately, there are some places devoted specifically to engineering education assessment and evaluation.

Rad Resource: INSPIRE has an Assessment Center website, which provides access to engineering education assessment instruments and makes the evidence for validity publicly available. In addition, INSPIRE has links to other assessment resources, such as Assessing Women and Men in Engineering, a program affiliated with Penn State University.

Rad Resource: ASSESS Engineering Education is a search engine for engineering education assessment instruments.

If you don’t find what you are looking for in the INSPIRE, AWE, or ASSESS databases, help may still be available.

Lesson Learned #1: If something is important enough to be measured for our project, someone has probably measured it (or something similar) before. Even though evaluators may not have access to engineering education or other educational journals, one place to search is Google Scholar, using keywords related to what you are looking for. This helps you 1) locate research being conducted in a similar engineering education area (the researchers may have used some type of assessment) and 2) locate published instruments, which one would expect to have a degree of validity evidence.

Lesson Learned #2: People who develop surveys generally like others to use them. It’s a compliment. It is OK to contact the authors for permission to use the survey and the validity evidence they have collected, even if you cannot access the article. At INSPIRE, we are constantly involved in the assessment development process. When someone contacts us to use an instrument, we view that as a “win-win”: the evaluator gets a tool, our instrument gets used, and with the sharing of data and/or results, we get further information about how the instrument is functioning in different settings.

Lesson Learned #3: STEM evaluators are in this together. Another great way to locate assessment instruments is to post to the STEM TIG group on LinkedIn or to pose the question on the EvalTalk listserv. This goes back to Lesson Learned #1: most of the important outcomes are being measured by others.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Kirk Knestis here, CEO of Hezel Associates, a research and evaluation firm specializing in education innovations. Like many of you, I’ve participated in “evaluation versus research” conversations. That distinction is certainly interesting, but our work studying science, technology, engineering, and math (STEM) education leaves me more intrigued with what I call the “NSF Conundrum”: confusion among stakeholders (not least National Science Foundation [NSF] program officers) about the expected role of an “external evaluator” as described in a proposal or implemented for a funded project. This has been a consistent challenge in our practice, and it is increasingly common among other agencies’ programs (e.g., the Departments of Education or Labor). The good news is that a solution may be at hand…

Lessons Learned – The most constructive distinction here is between (a) studying the innovation of interest, and (b) studying the implementation and impact of the activities required for that inquiry. For this conversation, call the former “research” (following NSF’s lead) and the latter “evaluation”—or more particularly “program evaluation,” to further elaborate the differences. Grantees funded by NSF (and increasingly by other agencies) are called “Principal Investigators.” It is presumed that they are doing some kind of research. The problem is that their research sometimes looks like, or gets labeled “evaluation.”

Hot Tip – If it seems like this is happening (purposes and terms are muddled), reframe planning conversations around the differences described above—again, between research, or more accurately “research and development” (R&D) of the innovation of interest, and assessments of the quality and results of that R&D work (“evaluation” or “program evaluation”).

Hot Tip – When reframing planning conversations, take into consideration the new-for-2013 Common Guidelines for Education Research and Development developed by NSF and the US Department of Education’s Institute of Education Sciences (IES). The Guidelines delineate six distinct types of R&D, based on the maturity of the innovation being studied. More importantly, they clarify the “justifications for and evidence expected from each type of study.” Determine where in that conceptual framework the proposed research is situated.

Hot Tip – Bearing that in mind, explicate ALL necessary R&D and evaluation purposes associated with the project in question. Clarify questions to be answered, data requirements, data collection and analysis strategies, deliverables, and roles separately for each purpose. Define, budget, assign, and implement the R&D and the evaluation, noting that some data may support both. Finally, note that the evaluation of research activities poses interesting conceptual and methodological challenges, but that’s a different tip for a different day…

Rad Resource – The BetterEvaluation site features an excellent article framing the research-evaluation distinction: Ways of Framing the Difference between Research and Evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

