AEA365 | A Tip-a-Day by and for Evaluators

I’m Andrew Hayman, Research Analyst for Hezel Associates. I’m the Project Leader for Digital East St. Louis, Southern Illinois University Edwardsville’s National Science Foundation (NSF) Innovative Technology Experiences for Students and Teachers (ITEST) project.

The ITEST program was established in 2003 to address shortages of technology workers in the United States, supporting projects that “advance understanding of how to foster increased levels of interest and readiness among students for occupations in STEM.” The recent revision of the ITEST solicitation incorporates components of the Common Guidelines for Education Research and Development to clarify expectations for research plans, relating two types of projects to that framework:

  • Strategies projects are for new learning models, and research plans should align with Early-Stage, Exploratory, or Design and Development studies.
  • Successful Project Expansion and Dissemination (SPrEaD) projects should have documented successful outcomes from an intervention requiring further examination and broader implementation, lending SPrEaD projects to Design and Development or Impact studies.

Integration of the Common Guidelines into the NSF agenda presents opportunities for evaluators with research experience because grantees may not possess internal capacities to fulfill research expectations. Our role in a current ITEST Strategies project includes both research and evaluation responsibilities designed to build our partner institution’s research capacity. To accomplish this, our research responsibilities are significant in Year 1 of the grant, including on-site data collections, but decrease annually until the final grant year, when we serve as a research “critical friend” to the grantee.

I presented at a recent ITEST conference about our role in research and evaluation activities for an audience primarily of evaluators. As expected, some questioned whether we can serve effectively in dual roles, while others, including NSF program officers, were supportive of the model. Differences of opinion regarding research responsibilities amongst ITEST stakeholders suggest it may take time for evaluators to carve out a significant research role in ITEST projects. However, NSF’s commitment to rigorous research as framed by the Common Guidelines, coupled with the limited research capacity of some institutions, suggests possibilities for partnerships.

Lesson Learned:

  • Define research responsibilities clearly for both the institution and evaluators. Separation of research and evaluation activities is critical, with separate study protocols, instruments, and reports mapped out for the entire project. A third party may be required to evaluate the research partnership.

Rad Resource:

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Greetings, evaluation professionals! Kirk Knestis, CEO of Hezel Associates, back this time as guest curator of an AEA365 week revisiting challenges associated with untangling purposes and methods between evaluation and research and development (R&D) of education innovations. While this question is being worked out in other substantive areas as well, we deal with it almost exclusively in the context of federally funded science, technology, engineering, and math (STEM) learning projects, particularly those supported by the National Science Foundation (NSF).

In the two years since I shared some initial thoughts in this forum on distinctions between “research” and “evaluation,” the NSF has updated many of its solicitations to specifically reference the then-new Common Guidelines for Education Research and Development. This is, as I understand it, part of a concerted effort to increase emphasis on research—generating findings useful beyond the interests of internal project stakeholders. In response, proposals have been written and reviewed, and some have been funded. We have worked with dozens of clients, refined practices with guidance from our institutional review board (IRB), and even engaged external evaluators ourselves when serving in the role of “research partner” for clients developing education innovations. (That was weird!) While we certainly don’t have all of the answers in the complex and changing context of grant-funded STEM education projects, we think we’ve learned a few things that might be helpful to evaluators working in this area.

Lesson Learned: This evolution is going to take time, particularly given the number of stakeholder groups involved in NSF-funded projects—program officers, researchers, proposing “principal investigators” who are not researchers by training, external evaluators, and perhaps most importantly the panelists who score proposals on an ad hoc basis. While the increased emphasis on research is a laudable goal—reflected in the NSF merit review criterion of “Intellectual Merit”—these groups are far from consensus about terms, priorities, and appropriate study designs. On reflection, my personal enthusiasm and orthodoxy regarding the Guidelines put us far enough ahead of the implementation curve that we’ve often found ourselves struggling. The NSF education community is making progress toward higher-quality research, but the potential for confusion and proposal disappointment is still very real.

Hot Tip: Read the five posts that follow to delve into the nuances of what my colleagues are collectively learning about how we can improve our practices in the context of evolving operational distinctions between R&D and external program evaluation of STEM education innovations. This week’s posts explore what we *think* we’re learning across three popular NSF education programs, in the context of IRB review of our studies, and with regard to the importance of dissemination. I hope they are useful.

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Kirk Knestis, CEO of Hezel Associates, back again, following up on a previous post about how evaluators’ work in STEM education settings is being influenced by the Common Guidelines for Education Research and Development introduced by the National Science Foundation (NSF) and U.S. Department of Education. Hezel Associates studies education innovations, so we regularly support organizations proposing grant-funded R&D projects in science, technology, engineering, and math (STEM) education. Sometimes we’re a research partner (typically providing Design and Development research, Type #3 in the Guidelines); in other cases we serve as an external evaluator (more accurately, “program evaluator”) assessing the implementation and impact of proposed project activities, including the research.

Lessons Learned – Working with a wide variety of clients (more than 70 proposals so far in 2014!) has left me convinced that an evaluator—or research partner, if your job is framed that way—can do a few specific things to add substantial value to the development of a client’s proposal. Someone in an external evaluator/researcher role can do more than simply “write the evaluation section,” potentially improving the likelihood of proposal success.

Hot Tips –

  1. Help designers explicate the theory of action of their innovation (intervention, program, technology, etc.) being tested and developed. Any research study aligned with the Guidelines (for example, many if not most NSF projects) will be expected to build on a clearly defined theoretical basis. Evaluators ought to be well equipped to facilitate development of a logic model to serve that purpose, illustrating connections between elements or features of the innovation and its intended outcomes.
  2. Define the appropriate “type” of research. The Common Guidelines provide a typology of six purposes for research, ranging from Foundational Research, which contributes to basic understandings of teaching and learning, to Scale-up Research, which examines whether the innovation retains its effectiveness for a variety of stakeholders when implemented in different settings “out in the wild” without substantial developer support. A skilled evaluator can help the client select the appropriate kind of research given the maturity of the innovation and other factors.
  3. Help clarify distinctions between “research” and “evaluation” purposes, roles, and functions. Clarity on the type of research required will inform study design, data collection, analysis, and reporting decisions. A good evaluator should be able to help determine the expertise required for the research, requirements for external evaluation of that work, and the narrative explaining roles, responsibilities, and work plans required for a proposal.

Rad Resource – If you work with education clients, become familiar with the Common Guidelines for Education Research and Development. Some complex conversations loom, but the Guidelines will be an important consideration in discussions about research and evaluation in education in the coming years.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Greetings! We are Lily Zandniapour of the Corporation for National and Community Service (CNCS) and Nicole Vicinanza of JBS International. We work together with our colleagues at CNCS and JBS to review and monitor the evaluation plans developed and implemented by programs participating in the CNCS Social Innovation Fund (SIF). The SIF is one of six tiered-evidence initiatives introduced by President Obama in 2010. The goals of the SIF are twofold: 1) to invest in promising interventions that address social and community challenges, and 2) to use rigorous evaluation methods to build and extend the evidence base for funded interventions.

Within the SIF, CNCS funds intermediary grantmaking organizations that then re-grant the SIF funding to subgrantee organizations. These subgrantees implement and participate in evaluations of programs that address community challenges in the areas of economic opportunity, youth development, or health promotion.

Rad Resource: Go to http://www.nationalservice.gov/programs/social-innovation-fund to see more about the work of the Social Innovation Fund.

SIF grantees and subgrantees are required to evaluate the impact of their programs, primarily using experimental and quasi-experimental designs to assess the relationship between each funded intervention and the impact it targets. To date, there are over 80 evaluations underway within the portfolio.

Lesson Learned: A key challenge we’ve encountered is making sure that CNCS, JBS, intermediaries, subgrantees and external evaluators all know what is required for a plan to demonstrate rigor in the SIF. To address this, CNCS and JBS worked together to develop the SIF Evaluation Plan (SEP) Guidance document based on a checklist of criteria that evaluators, participating organizations, and reviewers for intermediaries and CNCS could all use when developing and reviewing a plan.

Over the past three years, this Guidance document has been used to structure and review over 80 evaluation plans, and it has proved highly valuable in helping evaluators, programs, and funders to build a shared understanding of what this type of impact evaluation plan includes.

Rad Resource: Have a look at the SIF Evaluation Plan (SEP) Guidance! It includes a detailed checklist for writing an impact evaluation plan, references and links to resources for each section of the plan, sample formats for logic models, timelines, and budgets, and a glossary of research terms.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I’m Kate McKegg, Director of The Knowledge Institute Ltd, a member of the Kinnect Group, writing here with Nan Wehipeihana. We want to share what we have learned about explaining developmental evaluation (DE).

Evaluation isn’t something that our clients or our communities fully understand, and it can create anxiety. So, when we suggest that a client or community undertake a developmental evaluation, this can be extra puzzling for folks.

Rad Resource: We usually begin by reinforcing some key messages about what evaluation is:

[Graphic: key messages about what evaluation is]

Hot Tip:  In our experience, stressing the importance of systematic, well informed evaluative reasoning is a key step in convincing people that DE is evaluative, and not just some kind of continuous quality improvement process.

Hot Tip:  We explain why we think DE is best suited to their situation, meaning:

  • There is something innovative going on, something is in development and people are exploring, innovating, trying things out and creating something they hope will make a difference
  • The situation is socially and/or technically complex, and rapidly changing.  People are experimenting with new ideas, new ways of doing things, approaches, different relationships and roles – and this is likely to be happening for a while
  • There is a high degree of uncertainty about what is likely to work, in terms of process, practice and outcomes.  Which pathway the initiative might take is not yet clear, i.e., what the future holds is still unknown
  • The situation is emergent, i.e., there are continually emerging questions, challenges, successes and issues for people to deal with in real time.

Hot Tip:  Finally, we explain the key features of DE. We typically focus on the following 4 features:

  • DE has a systems orientation, i.e., understanding a DE challenge systemically involves paying attention to relationships, different perspectives, and boundaries; this orientation is ideally suited to working with complexity and emergence
  • DE involves cycles of learning to inform action using real time data, as part of an ongoing process of development – probing, venturing, sensing, learning, and re-learning

Rad Resource: Adaptive action and reflection graphic:

[Graphic: adaptive action and reflection cycle]

  • DE typically has an emergent evaluation design, so that it can be responsive to changing needs, issues, and challenges as they arise
  • With DE, the evaluator typically becomes part of the team, bringing evaluative thinking together with evidence in ways that help key stakeholders understand the quality and value of something in real time.

Rad Resource: The Australasian Evaluation Society (AES) Best Evaluation Policy and Systems Award, 2013, was for a Developmental Evaluation we conducted of He Oranga Poutama, a Māori sport and recreation initiative. You can read about it here.

This week, we’re diving into issues of Developmental Evaluation (DE) with contributions from DE practitioners and authors. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I am Michael Quinn Patton, author of Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. I am an independent consultant based in Saint Paul, Minnesota. I have been doing and writing about evaluation for over 40 years. This week features posts by colleagues and clients engaged in various developmental evaluation initiatives.

Rad Resource: Developmental evaluation (DE) informs and supports innovative and adaptive development in complex dynamic environments. DE brings to innovation and adaptation the processes of asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data to support project, program, product, and/or organizational development with timely feedback. The first chapter of the Developmental Evaluation book is available online.

Hot Tip: Understand the difference between formative and developmental evaluation. Developmental evaluation is NOT ongoing formative evaluation. This is a common confusion. Developmental evaluation supports adapting and changing an innovation for ongoing development. Formative evaluation supports improving a model and, as originally conceptualized, serves the purpose of getting ready for summative evaluation (Michael Scriven, 1967, “The Methodology of Evaluation”).

Hot Tip: Developmental evaluation is NOT the same as development evaluation.  This is another common confusion.  Development evaluation refers to evaluations done in developing countries. Some development evaluation is developmental, but by no means all.

Hot Tip: Developmental evaluation may be called by other names: adaptive evaluation, real time evaluation, or emergent evaluation.  I often hear from folks that they’ve been doing DE without calling it that.  Here’s an example just published in the journal EVALUATION.

Cool Trick: Go to the AEA Public eLibrary and search for developmental evaluation.  You’ll find lots of presentations and examples.

This week, we’re diving into issues of Developmental Evaluation (DE) with contributions from DE practitioners and authors. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

· ·

Greetings, fellow evaluators! Our names are Veena Pankaj and Myia Welsh, and we work for Innovation Network, a Washington, DC-based evaluation firm. While Innovation Network has always used a participatory approach to evaluation, we recently came to the realization that much of the ‘participatory-ness’ of our evaluation projects was limited to evaluation planning and data collection. We suspected that an additional richness of context could be gained by including stakeholders in the analysis process.

We started by involving stakeholders in the analysis and interpretation of the data on a few projects.  This helped us move from simply offering a final evaluation report with findings and recommendations, to embracing a practice that brought the client’s own perspective into the analysis.

Hot Tip: In determining whether participatory analysis may be a good fit for your evaluation needs, consider the following questions:

1. Quality: How might participatory analysis improve the quality of findings/recommendations?

2. Stakeholders: What might be the positive outcomes of engaging evaluation stakeholders?

3. Timeline & Resources: Will the participatory analysis approach fit within the project timeline and available resources?

Our experience in using this approach has helped us to:

  • Present first drafts of data and/or findings, giving stakeholders the chance to provide context and input on findings or recommendations;
  • Help sustain stakeholder interest and engagement in the evaluation process;
  • Identify which findings and recommendations are the most meaningful to stakeholders; and
  • Increase the likelihood that findings and recommendations will be put to practical use.

Hot Tip: Conducting participatory analysis can be tricky. You are not just presenting ideas to stakeholders; you are facilitating a discussion process. Make sure you have an agenda in place, specific questions you’d like the stakeholders to consider, and clearly communicated goals for the meeting. Having these items in place will allow you to focus on the richness of the discussion itself.

Rad Resource #1: Participatory Analysis: Expanding Stakeholder Involvement in Evaluation. This recently released white paper examines the use of participatory analysis with three different organizations. Each example includes a description of purpose; the design, planning, and implementation process; the effect on the overall evaluation; and lessons learned.

Rad Resource #2: Participatory Evaluation: How It Can Enhance Effectiveness and Credibility of Nonprofit Work. For a different perspective, check out this article from the Nonprofit Quarterly. It discusses participatory evaluation practices in a community-based setting.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

My name is Michael Quinn Patton and I am an independent evaluation consultant. That means I make my living meeting my clients’ information needs. Over the last few years, I have found increasing demand for innovative evaluation approaches to evaluate innovations. In other words, social innovators and funders of innovative initiatives want and need an evaluation approach that they perceive to be a good match with the nature and scope of innovations they are attempting.  Out of working with these social innovators emerged an approach I’ve called developmental evaluation that applies complexity concepts to enhance innovation and support evaluation use.

Hot Tip: Innovations are different from standard projects and programs.  Innovators are often different from people implementing typical programs.  Innovators are in a hurry, value rapid, real time feedback, have a high tolerance for ambiguity, embrace uncertainty, learn quickly, and adapt rapidly to changed conditions. They’re not always sure where they’re heading, so they resist being boxed in by concrete, pre-set targets. They’re propelled into action more by vision than by clear, specific and measurable outcomes. They want an evaluation approach attuned to their fast pace and innovative spirit. They are at home in complex dynamic systems. Such systems characterize the world in which they live and work. Thus, they want an evaluation approach attuned to complexity.

Hot Tip: Complex situations challenge traditional evaluation practices. Complexity can be defined as situations in which how to achieve desired results is not known (high uncertainty), key stakeholders disagree about what to do and how to do it, and many factors interact in a dynamic environment, undermining efforts at control and making predictions and static models problematic. Complexity concepts include nonlinearity (small actions can produce large reactions), emergence (patterns emerge from self-organization among interacting agents), and dynamic adaptations (interacting elements and agents respond and adapt to each other).

Hot Tip: Developmental evaluation aims to meet the needs of social innovators by applying complexity concepts to enhance innovation and use. Developmental evaluation focuses on what is being developed through innovative engagement.

Rad Resources:
•    Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, by Michael Quinn Patton (Guilford Press, 2010).*
•    A Developmental Evaluation Primer, by Jamie Gamble (2008). Montréal: The J.W. McConnell Family Foundation.
•    DE 201: A Practitioner’s Guide to Developmental Evaluation, by Elizabeth Dozois, Marc Langlois, and Natasha Blanchet-Cohen. Montréal: The J.W. McConnell Family Foundation.
•    AEA Annual Conference professional development workshop on Developmental Evaluation, with Michael Quinn Patton, November 8-9, San Antonio.

*AEA members receive 20% off all books ordered directly from Guilford. If you are a member, sign in to the AEA website at http://eval.org/ and select “Publications Discount Codes” from the “Members Only” menu to access the discount codes and process.
