AEA365 | A Tip-a-Day by and for Evaluators

Category: STEM Education and Training

This is part of a two-week series honoring our living evaluation pioneers in conjunction with Labor Day in the USA (September 5).

My name is Andrea Guajardo, MPH, and I am the Director of Community Health at CHRISTUS Santa Rosa Health System in San Antonio, Texas. I am also the co-Chair of the Multiethnic Issues in Evaluation (MIE) TIG and a founding member of the LA RED TIG.

Why I chose to honor this evaluator:

LA RED honors Mariana Enriquez, PhD, as a Living Pioneer in Evaluation for her significant contributions to AEA and to the discipline of evaluation. As a program evaluation consultant, she focuses on education and public health programs across Colorado.

Mariana was born and raised in Mexico City as one of seven siblings. She began her evaluation career in the United States as Program Director for a small non-profit, exploring the impact of parenting classes on Spanish- and English-speaking families. This early experience led her to pursue evaluation more deeply as a career, and in doing so she has blazed a trail for Latinx evaluators and for those practicing evaluation in Latinx communities.

As a bilingual and bicultural evaluator, she has native knowledge of the communities in which she works and functions as a bridge – un puente – to the wider, mainstream community. Her perspective informs the unique discipline of Latinx evaluation and provides cultural translation and understanding between these two communities.

Mariana was a member of the AEA Committee on Honors and Awards (2012-2014) and served as its 2013 Chair. She also served as Chair of the Pipeline Students program at AEA in 2008 and is currently a member of the American Journal of Evaluation Editorial Advisory Board. Her mentorship of the LA RED TIG supports the continued personal and professional development of Latinx evaluators at AEA.

As an independent consultant, Mariana’s current work includes STEM and English Language Learning education at local universities in Colorado and, with a communications agency, a state-wide public health campaign. Her work has been funded by the National Science Foundation, the Institute of Education Sciences, and the Department of Human Services.

Rad Resources:

Get Involved: To learn more about evaluation theory and practice by, for, and with Latinx communities, join LA RED by emailing lared.tig@gmail.com.

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring Evaluation’s Living Pioneers. The contributions this week are tributes to our living evaluation pioneers who have made important contributions to our field and even positive impacts on our careers as evaluators. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Kirk Knestis, CEO of Hezel Associates. The US Office for Human Research Protections defines “research” as any “systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.” (Emphasis mine.) We often get wrapped up in narrow distinctions like populations, but I’m increasingly of the opinion that the clearest test of whether a study is “evaluation” or “research” is whether it is supposed to contribute to generalizable knowledge—in this context, about teaching and learning in science, technology, engineering, and math (STEM).

The National Science Foundation (NSF) frames this as “intellectual merit,” one of the two merit review criteria against which proposals are judged: a project’s “potential to advance knowledge” in its program’s area of focus. The Common Guidelines for Education Research and Development expand on this, elaborating how each of its six types of R&D might contribute, in terms of theoretical understandings of the innovation being studied and its intended outcomes for stakeholders.

For impact research (Efficacy, Effectiveness, and Scale-up studies), dissemination must include “reliable estimates of the intervention’s average impact” (p. 14 of the Guidelines), that is, findings from inferential tests of quantitative data. Dissemination might, however, be about theories of action (relationships among variables, whether preliminary, evolving, or well-specified), or about an innovation’s “promise” to be effective later in development. This is, I argue, the most powerful aspect of the Common Guidelines typology; it elevates Foundational, Early-Stage/Exploratory, and Design and Development studies to the status of legitimate “research.”
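To make “reliable estimates of the intervention’s average impact” concrete: in its simplest form this is an inferential comparison of outcomes between groups. Here is a minimal sketch (made-up scores, invented variable names, and a plain difference in means; the Guidelines do not prescribe any particular test or tool) of how such an estimate might be reported in Python:

```python
# Minimal sketch: estimating an intervention's "average impact" as a difference
# in group means with a 95% confidence interval. Scores are made up.
import numpy as np
from scipy import stats

treatment = np.array([78.0, 85, 91, 74, 88, 82, 79, 90])   # treated group outcomes
comparison = np.array([72.0, 80, 76, 69, 83, 75, 71, 78])  # comparison group outcomes

impact = treatment.mean() - comparison.mean()  # estimated average impact

# Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(treatment, comparison, equal_var=False)

# 95% confidence interval from the Welch standard error and degrees of freedom.
v1 = treatment.var(ddof=1) / len(treatment)
v2 = comparison.var(ddof=1) / len(comparison)
se = np.sqrt(v1 + v2)
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(treatment) - 1) + v2 ** 2 / (len(comparison) - 1))
margin = stats.t.ppf(0.975, df) * se

print(f"Average impact: {impact:.2f} points "
      f"(95% CI {impact - margin:.2f} to {impact + margin:.2f}, p = {p_value:.3f})")
```

In a real impact study the analysis would of course follow the study design (randomization, clustering, covariates), but the reportable product is the same kind of estimate-plus-uncertainty shown here.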

So, that guidance defines what might be disseminated. Questions will remain about who will be responsible for dissemination, when it will happen, and by what channels it will reach desired audiences.

Lessons Learned:

It will likely be necessary for the evaluation research partner to work with client institutions to help them with dissemination. Many grant proposals require dissemination plans, but those plans are typically the purview of the grantee, PI, or project manager rather than the “evaluator.” These individuals may well need help describing study designs, methods, and findings in materials to be shared with external audiences, so think about how deliverables can contribute to that purpose (e.g., tailoring reports for researchers, practitioners, and/or policy-makers in addition to project managers and funders).

Don’t wait until a project is ending to worry about dissemination of learnings. Project wrap-ups are busy enough, and interim findings or information about methods, instruments, and emerging theories can make substantive contributions to broader understandings related to the project.

Rad Resource:

My talented colleague-competitor Tania Jarosewich (Censeo Group) put together an excellent set of recommendations for high-quality dissemination of evaluation research findings for a panel I shared with her at Evaluation 2014. I can’t do it justice here, so go check out her slides from that presentation in the AEA eLibrary.

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Dana Gonzales and Lonnie Wederski, institutional review board (IRB) members at Solutions IRB, specialists in the review of evaluation research.

Why talk about IRB review for evaluations of science, technology, engineering, and math (STEM) education projects? Most simply, federally funded projects may require it. You may also ask, “Why aren’t all of these evaluations exempt?” IRB reviewers apply the Code of Federal Regulations (CFR) in their decisions, and many STEM evaluations include children. Under CFR rules, only a narrow range of research involving children is exempt from review, such as research using educational tests or observations of public behavior in which the investigator does not participate. Interviews and focus groups with minors are unlikely to qualify for exempt review, as they are seldom part of the normal educational curriculum. Randomization to a control group would not meet exempt category requirements for the same reason. Both would, however, qualify for expedited review if there is no more than minimal risk to participants.

So, do you need to use an IRB? Ask these questions:

  • Is IRB required by the grant or foundation funding the project?
  • Does the school district require IRB review?
  • Do you intend to disseminate findings in a publication requiring IRB review?

If the answer to any of those questions is “yes,” you need an IRB—at which point uncertainty strikes! Maybe this is the first time you’ll use an IRB (you are not alone) or you remember unpleasant experiences with an academic IRB. Fear not, evaluators! Many IRB reviewers understand the differences between clinical studies and evaluations. Some specialize in evaluations, employing reviewers with expertise in the methods evaluators use, who recognize that phenomenology, grounded theory, ethnography, and autoethnography are valid study approaches. Who wants to educate an IRB when you are paying them? 


Hot Tips:

  • Have questions regarding the ethics of recruitment or consent? Some independent IRBs will brainstorm with you and answer “what if” questions. Ask for a complimentary consultation with a reviewer.
  • Ready to submit your evaluation for review? Ask the IRB if free pre-review of study documents is provided, to save time prior to formal review. Ask for a list of the documents required by the IRB.
  • Most important, know the review timeframe in advance! If the IRB requires two weeks for review, you need to plan accordingly. Some IRBs routinely review exempt and expedited studies in 24-48 hours, so timeframes can vary widely.

We hope you found the information provided helpful.



My name is Lori Wingate and I am Director of Research at The Evaluation Center at Western Michigan University. I also lead EvaluATE, the evaluation resource center for the National Science Foundation’s Advanced Technological Education (ATE) program.

NSF established the ATE program in response to the Scientific and Advanced-Technology Act of 1992, which called for “a national advanced technician training program, utilizing the resources of the nation’s 2-year associate-degree-granting colleges.” ATE’s Congressional origin, characterization as a training (not research) program, and focus on 2-year colleges set it apart from other NSF programs. Research is not the driving force of the program—it existed for 10 years before inviting proposals for research.

Since 2003, Targeted Research on Technician Education has been one of several ATE program tracks. Anecdotally, I know the program has found it challenging to attract competitive research proposals. Common problems include university-based researchers treating 2-year colleges as “guinea pigs” on which to try out their ideas, and 2-year college faculty being short on research expertise.

While few of ATE’s ~250 projects are targeted research, all must be evaluated. NSF underscored the importance of evaluation when it began supporting the Evaluation Resource Center in 2008. Since 2010, the program has required that proposal budgets include funds for independent evaluators.

At the 2014 ATE PI conference, I moderated a session on ATE research and evaluation in which the Common Guidelines for Education Research and Development figured prominently. These guidelines were developed by NSF and the Institute of Education Sciences as a step toward “improving the quality, coherence, and pace of knowledge development in [STEM] education,” but some participants questioned their relevance to the ATE program. Recent evidence suggests more education is needed: while just 7 of 202 respondents to the 2016 survey of ATE PIs identified their projects as “targeted research,” 58 spent some of their budgets on research activities. Of those, almost half had either never heard of the Common Guidelines (21%) or had heard of them but not read them (28%). I sense that PIs based at 2-year colleges may see the growing emphasis on research as a threat to the program’s historic focus on training technicians. They seem to have embraced evaluation but may not be sold on research.

Lessons Learned:

  • The time is ripe for evaluators with strong research skills to collaborate with ATE PIs on research.
  • Evaluation results (project-specific knowledge) may serve as a foundation for future research (generalizable knowledge), thus connecting evaluation to research.




I am Laurene Johnson from Metiri Group, a research, evaluation, and professional development firm focusing on educational innovations and digital learning. I often work with school district staff to provide guidance and research/evaluation contributions to grant proposals, including those for submission to the National Science Foundation (NSF).

Programs like Discovery Research PreK-12 (DRK-12) present some interesting challenges for researchers and evaluators. Since I work at an independent research and evaluation firm, I don’t implement programs; I study them. This means that in order to pursue such funding, and research things I think are cool, I need to partner with school or district staff who do implement programs. They likely implement them quite well, and may even have some experience obtaining grant funding to support them. This is both a real advantage in writing an NSF proposal and a real challenge. A successful research partnership (and proposal) involves helping the practitioners understand where their program fits into the entire proposed project. It can be difficult for these partners to accept that NSF is funding the research, and funding their program or innovation only because I’m going to research it. This can be a huge shift for people who have previously received funding to implement programs. Depending on the origin of the program, the individual I’m partnering with might also have a real attachment to it, which can make it even more difficult to explain that it’s going to “play second fiddle” to the research in a proposal.

This is not an easy conversation to have but, if researchers are successful, we can likely open up many more doors in terms of partnership opportunities in schools.

Hot Tip: Be prepared to have the research-versus-implementation conversation multiple times. In particular, someone who has written many successful proposals will tend to revert to what he or she knows and is comfortable with as the writing progresses.

Lesson Learned: Even if prior evaluations have indicated a program might be effective, the client must clearly explain the research base behind its design and components. My experience is that many programs in schools are designed around staff experience of what works rather than a foundation in what research says works (treating instruction as an art rather than a science). That may be fine for implementing the program, but it falls short of funders’ expectations when it comes to designing an innovation in a research context.

Hot Tip: Try to get detailed information about the program in very early conversations, so you can write the research description as completely as possible. Deliver this to the client as essentially a proposal template, with the components they need to fill in clearly marked.



I’m Andrew Hayman, Research Analyst for Hezel Associates. I’m Project Leader for Southern Illinois University Edwardsville’s National Science Foundation (NSF) Innovative Technology Experiences for Students and Teachers (ITEST) program, Digital East St. Louis.

The ITEST program was established in 2003 to address shortages of technology workers in the United States, supporting projects that “advance understanding of how to foster increased levels of interest and readiness among students for occupations in STEM.” The recent revision of the ITEST solicitation incorporates components of the Common Guidelines for Education Research and Development to clarify expectations for research plans, relating two types of projects to that framework:

  • Strategies projects are for new learning models, and research plans should align with Early-Stage/Exploratory or Design and Development studies.
  • Successful Project Expansion and Dissemination (SPrEaD) projects should have documented successful outcomes from an intervention requiring further examination and broader implementation, lending SPrEaD projects to Design and Development or Impact studies.

Integration of the Common Guidelines into the NSF agenda presents opportunities for evaluators with research experience because grantees may not possess internal capacities to fulfill research expectations. Our role in a current ITEST Strategies project includes both research and evaluation responsibilities designed to build our partner institution’s research capacity. To accomplish this, our research responsibilities are significant in Year 1 of the grant, including on-site data collections, but decrease annually until the final grant year, when we serve as a research “critical friend” to the grantee.

I presented at a recent ITEST conference about our role in research and evaluation activities, to an audience made up primarily of evaluators. As expected, some questioned whether we can serve in dual roles effectively, while others, including NSF program officers, were supportive of the model. Differences in opinion among ITEST stakeholders regarding research responsibilities suggest it may take time for evaluators to carve out a significant research role in ITEST. However, NSF’s commitment to rigorous research as framed by the Common Guidelines, coupled with the limited research capacity of some institutions, suggests possibilities for partnerships.

Lesson Learned:

  • Define research responsibilities clearly for both the institution and the evaluators. Separation of research and evaluation activities is critical, with separate study protocols, instruments, and reports mapped out for the entire project. A third party may be required to evaluate the research partnership.




Greetings, evaluation professionals! Kirk Knestis, CEO of Hezel Associates, back this time as guest curator of an AEA365 week revisiting challenges associated with untangling purposes and methods between evaluation and research and development (R&D) of education innovations. While this question is being worked out in other substantive areas as well, we deal with it almost exclusively in the context of federally funded science, technology, engineering, and math (STEM) learning projects, particularly those supported by the National Science Foundation (NSF).

In the two years since I shared some initial thoughts in this forum on distinctions between “research” and “evaluation,” the NSF has updated many of its solicitations to specifically reference the then-new Common Guidelines for Education Research and Development. This is, as I understand it, part of a concerted effort to increase emphasis on research—generating findings useful beyond the interests of internal project stakeholders. In response, proposals have been written and reviewed, and some have been funded. We have worked with dozens of clients, refined practices with guidance from our institutional review board (IRB), and even engaged external evaluators ourselves when serving in the role of “research partner” for clients developing education innovations. (That was weird!) While we certainly don’t have all of the answers in the complex and changing context of grant-funded STEM education projects, we think we’ve learned a few things that might be helpful to evaluators working in this area.

Lesson Learned: This evolution is going to take time, particularly given the number of stakeholder groups involved in NSF-funded projects—program officers, researchers, proposing “principal investigators” who are not researchers by training, external evaluators, and, perhaps most importantly, the panelists who score proposals on an ad hoc basis. While the increased emphasis on research is a laudable goal, consistent with the NSF merit criterion of “intellectual merit,” these groups are far from consensus about terms, priorities, and appropriate study designs. On reflection, my personal enthusiasm and orthodoxy regarding the Guidelines put us far enough ahead of the implementation curve that we have often found ourselves struggling. The NSF education community is making progress toward higher-quality research, but the potential for confusion and proposal disappointment is still very real.

Hot Tip: Read the five posts that follow to delve into the nuances of what my colleagues and I are collectively learning about how we can improve our practices in the context of evolving operational distinctions between R&D and external program evaluation of STEM education innovations. This week’s posts explore what we *think* we’re learning across three popular NSF education programs, in the context of IRB review of our studies, and where the importance of dissemination is concerned. I hope they are useful.



Hello all! This is Shelly Engelman and Tom McKlin, evaluators at The Findings Group, LLC, a privately owned applied research and evaluation firm with a focus on STEM education.

The primary objective of many programs that we evaluate is to empower a broad range of elementary, middle, and high school students to learn STEM content and reasoning skills. Many of our programs theorize that increasing exposure to and content knowledge in STEM will translate into more diverse students persisting through the education pipeline. Our evaluation questions often probe the affective (e.g., emotions, interests) and cognitive (e.g., intelligence, abilities) aspects of learning and achievement; however, the conative (volition, initiative, perseverance) side of academic success has been largely ignored in educational assessment. While interest and content knowledge do contribute to achieving goals, psychologists have recently found that grit—defined as perseverance and passion for long-term goals—is potentially the most important predictor of success. In fact, research indicates that the correlation between grit and achievement is twice as large as the correlation between IQ and achievement.

Lessons Learned: Studies investigating grit have found that “gritty” students:

  • Earn higher GPAs in college, even after controlling for SAT scores,
  • Obtain more education over their lifetimes, even after controlling for SES and IQ,
  • Outperform other contestants at the Scripps National Spelling Bee, and
  • Withstand the first grueling year as cadets at West Point.

Even among educators, research suggests that teachers who demonstrate grit are more effective at producing higher academic gains in students.

Rad Resource Articles:

Hot Tip: Grit may be assessed with the 8-item Grit Scale developed and validated by Duckworth and colleagues (2009).
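For readers who have not scored such an instrument before, here is a minimal sketch of the usual approach, assuming a 1-5 response format with half of the eight items reverse-coded (as in the published Grit-S); the reverse-coded positions and the example ratings are placeholders, so follow Duckworth’s scoring instructions before using the scale in practice.

```python
# Sketch of scoring an 8-item grit-style scale (1 = "not at all like me",
# 5 = "very much like me"). Reverse-coded item positions are illustrative only;
# consult the published Grit-S scoring guide for real use.
REVERSE_CODED = {2, 4, 6, 8}  # hypothetical positions of reverse-coded items

def grit_score(ratings):
    """Return the mean grit score (1-5) for one respondent's eight ratings."""
    if len(ratings) != 8 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("Expected eight ratings between 1 and 5.")
    scored = [6 - r if i in REVERSE_CODED else r
              for i, r in enumerate(ratings, start=1)]
    return sum(scored) / len(scored)

print(grit_score([4, 2, 5, 1, 4, 2, 5, 3]))  # made-up respondent; higher = grittier
```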

Future Consideration: The major takeaway from studies on grit is that conative skills like grit often have little to do with traditional ways of measuring achievement (via timed content-knowledge assessments) but explain a larger share of individual variation in achievement over a lifetime. As we design evaluation plans for programs hoping to improve achievement and transition students through higher education, we may consider measuring the degree to which these programs affect the volitional components of goal-oriented motivation. Recently, two schools have developed programs to foster grit in students. Read their stories below:

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Jim Van Haneghan (Professor at the University of South Alabama) and Jessica Harlan (Senior Program Evaluation Specialist at the Johns Hopkins University School of Medicine). Over the past several years we have been studying a middle school integrated STEM curriculum called Engaging Youth through Engineering (EYE). We have identified one element of impact that is much more complicated to determine than initially thought: the influence of the program on student interest in STEM.

We would like to share two lessons learned from our work. First, interest develops through different stages, and it is challenging to measure these as distinct phases. Hidi and Renninger (2006) differentiate more fleeting situational interest, which has an external locus of causality, from sustained interest, which has an internal locus of causality. Considering students’ specific level of interest is important for evaluators because students involved in STEM programs (especially ones where students can choose whether to participate) may need a program to address “interest” differently depending upon their phase of interest development. A program that creates initial interest may differ, in focus and in its impact on students at each interest level, from a program that sustains interest. Additionally, when designing assessments of “interest,” evaluators need to go beyond items that ask only about initial interest.

The second lesson is that, when looking at interest’s role, the program being evaluated is only one of many influences that might facilitate or detract from students developing sustained interest. For example, our EYE modules were part of 6th, 7th, and 8th grade for the students we examined, but represented at most about 5% of the days of an entire middle school career. While the modules might have led to stronger interest in STEM, as we continue to investigate EYE in a larger-scale study we have to ask whether EYE effects could be moderated by other factors (e.g., other STEM opportunities, having high- or poor-quality teachers in the STEM areas, or poorly sequenced or ill-developed curriculum in regular math and science). When we asked students whether EYE or their regular math and science classes had a greater impact on their interest in STEM, most agreed their regular classes had the greater impact. Failure to consider these other factors could result in evaluators making a Type III error: erroneously attributing differences between groups to program participation rather than to other factors.
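One way to guard against that misattribution (a sketch only, not the analysis we ran for EYE; the file name and column names below are invented) is to model the interest outcome with both program participation and other plausible influences, including an interaction term to test whether the program effect depends on, say, students’ other STEM opportunities:

```python
# Sketch: testing whether a program's apparent effect on STEM interest is
# moderated by another influence. Data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_interest.csv")
# Assumed columns:
#   interest_post  post-program STEM interest score
#   interest_pre   baseline STEM interest score
#   in_program     1 if the student participated in the program, else 0
#   other_stem     count of other STEM opportunities the student reported

# "in_program * other_stem" expands to both main effects plus their interaction.
model = smf.ols("interest_post ~ interest_pre + in_program * other_stem", data=df).fit()
print(model.summary())

# A substantial in_program:other_stem coefficient would suggest the apparent
# program effect varies with students' other STEM exposure, a caution against
# crediting interest gains to the program alone (the Type III error above).
```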

Rad Resources:

The National Academies Press has several books that helped us frame important questions about EYE:

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Rebecca Teasdale, a doctoral student in Educational Psychology specializing in evaluation methodology at the University of Illinois at Urbana-Champaign. I’m also a librarian and have served as an administrator and science librarian in public libraries. My current work focuses on the evaluation of interest-driven learning related to science, technology, engineering and math (STEM) that takes place in public libraries and other informal learning settings.

I first became involved with the Building Informal Science Education (BISE) project as an intern at the Science Museum of Minnesota while I was pursuing a certificate in evaluation studies at the University of Minnesota. (See my blog post, “Measuring behavioral outcomes using follow-up methods,” to learn more). Now, I’m using the BISE database to support my research agenda at Illinois by identifying methods for evaluating the outcomes of public library STEM programming.

Evaluation practice is just getting started in the public library context, so few librarians are familiar with evaluation methods for measuring mid- and long-term outcomes of informal science education (ISE) projects. I used the BISE reports as a window into understanding (a) the types of outcomes that ISE evaluators study, (b) the designs, methods, and tools that they use, and (c) the implications for evaluating the outcomes of STEM programs in public libraries.

Lessons Learned:

  • I’ve found little standardization among the evaluation reports in the BISE database. Therefore, rather than provide a single model for libraries to replicate or adapt, the BISE database offers a rich assortment of study designs and data collection methods to consider.
  • Just 17% of the reports in the BISE database included the follow-up data collection necessary to examine mid- and long-term outcomes. Library evaluators, in particular, should ensure that we design studies that examine these longer-term effects as well as more immediate outcomes.
  • Collecting follow-up data can be challenging in informal learning settings because participation is voluntary, participants are frequently anonymous, and engagement is often short-term or irregular. The reports in the BISE database offer a number of strategies that library evaluators can employ to collect follow-up data.
  • All five impact categories from the National Science Foundation-funded Framework for Evaluating Impacts of Informal Science Education Projects are represented in the BISE database. I’m currently working to identify some of the methods and designs for each impact category that may be adapted for the library context. These impact categories include:
    • awareness, knowledge or understanding
    • engagement or interest
    • attitude
    • behavior
    • skills

Rad Resource:

  • I encourage you to check out the BISE project to inform evaluation practice in your area of focus and to learn from the wide variety of designs, methods, and measures used in ISE evaluation.

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

