AEA365 | A Tip-a-Day by and for Evaluators

My name is Di Cross from Clarivate Analytics. We conduct evaluations of scientific research funded by government agencies, non-profits, academic institutions or industry.

I cringe when I hear mention of ‘unbiased analysis’. What an oversimplification to state that an analysis (or evaluation) is unbiased! Everyone carries their own biases. Some exist as part of our brain’s internal wiring to enable us to go about our day without being paralyzed by the tremendous amount of information that our sensory systems constantly receive.

But what specifically do I mean by bias?

In statistics, bias in an estimator is the difference between the expected value of the estimator and the population parameter it is intended to estimate. For example, the arithmetic average of random samples taken from a normal distribution is an unbiased estimator of the population average. As even Wikipedia points out, ‘bias’ in statistics does not carry with it the same negative connotation it has in common English. However, this holds only in the absence of systematic errors.
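
To make the statistical definition concrete, here is a minimal simulation sketch (my own illustration with made-up numbers, not from the original post): the sample mean comes out unbiased, while the naive ‘divide by n’ variance estimator comes out biased.

```python
import numpy as np

# Hypothetical illustration: repeat many small samples and compare the average
# value of each estimator to the true parameter it targets.
rng = np.random.default_rng(42)
true_mean, true_var, n, reps = 10.0, 4.0, 20, 100_000

means, naive_vars = [], []
for _ in range(reps):
    sample = rng.normal(true_mean, np.sqrt(true_var), size=n)
    means.append(sample.mean())            # unbiased: E[sample mean] = true mean
    naive_vars.append(sample.var(ddof=0))  # biased: E[.] = true_var * (n - 1) / n

print(f"average sample mean:    {np.mean(means):.3f}  (true mean {true_mean})")
print(f"average naive variance: {np.mean(naive_vars):.3f}  "
      f"(true variance {true_var}; expected {(n - 1) / n * true_var:.3f})")
```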

Systematic errors are more akin to the common English definition of bias: ‘a bent or tendency’, ‘an inclination of temperament or outlook; especially…a personal and sometimes unreasoned judgment, prejudice; an instance of such prejudice.’

So what do we do?

Hot Tip #1: Don’t panic!

Do not fool yourself into thinking that you can design and conduct evaluations which are 100% free of bias. Accept that there will be bias in some element of your evaluation. But of course, do your best to minimize bias where you can.

Hot Tip #2: Develop a vocabulary about bias

There are many sources of bias. Students in epidemiology, the discipline from which I approach evaluation, study selection bias, measurement error including differential and non-differential misclassification, confounding, and generalizability. There are also discussions of bias specific to evaluation.

Hot Tip #3: Adjust your design where possible

After identifying potential sources of bias in your study design, address them as early in your evaluation as possible – preferably during the design phase. Alternatively, addressing bias might also mean performing analysis differently, or skipping to Hot Tip #4.

(Note: There is something to be said for accepting a biased estimator – or, dare I say, a biased study design – over one that is unbiased. This might be because the unbiased estimator is vastly more expensive than a biased one that isn’t too far off the mark. Or it might be for reasons of risk: Wouldn’t you rather consistently underestimate the time it takes to bake a batch of cookies, rather than be right on average but risk having to throw away a charred batch half of the time?)
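
A quick, hypothetical way to see the risk argument in code (the numbers are invented): an unbiased guess of baking time overshoots – and chars the batch – about half the time, while a guess biased low rarely does.

```python
import numpy as np

rng = np.random.default_rng(7)
true_time = 12.0   # minutes the cookies actually need (hypothetical)
reps = 100_000

unbiased_guess = rng.normal(true_time, 1.0, reps)           # right on average
biased_low_guess = rng.normal(true_time - 2.0, 1.0, reps)   # consistently ~2 minutes low

# A batch chars whenever the guess exceeds the true baking time.
print(f"unbiased guess:   charred {np.mean(unbiased_guess > true_time):.0%} of batches")
print(f"biased-low guess: charred {np.mean(biased_low_guess > true_time):.0%} of batches")
```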

Hot Tip #4:  Be transparent

Where it is not possible to address bias, describe it and acknowledge that it exists. Take it into consideration in your interpretation. As a prior AEA blog writer put it, ‘out’ yourself. Be forthcoming about sources of bias and communicate their effect on your evaluation to your audience.

The American Evaluation Association is celebrating Research, Technology and Development (RTD) TIG Week with our colleagues in the Research, Technology and Development Topical Interest Group. The contributions all this week to aea365 come from our RTD TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Sara Dodson, and I work in the Office of Science Policy at the National Institutes of Health (NIH), where I have led the development of a series of case studies on biomedical innovations. Today I’ll kick off the Research Technology & Development TIG week by discussing the challenging work of tracing public research investments to societal impact, framed by the question most germane to NIH, “How do we – a federal agency who supports scientific research on the basic side of the R&D spectrum – track progress towards our mission of improving health?” The complexities of this question are rooted in several factors: the long, variable timelines of translating research into practice, the unpredictable and nonlinear nature of scientific advancement, and the intricate ecosystem of health science and practice (the multitude of research funders & policymakers, academic & industry scientists, product developers, regulators, health practitioners, and a receptive public), just to name a few.

My team started out with two core objectives: 1) develop a systematic approach to tracing the contribution of NIH and other key actors to health interventions, and 2) identify a rich tapestry of data sources that provide a picture of biomedical innovation and its attendant impacts.

Hot Tip:

To make this doable, we broke off bite-sized pieces, conducting case studies on specific medical interventions.  We chose an existing intervention in health practice and performed both a backward trace and forward trace.  Moving backward from the intervention, we searched for and selected pivotal research milestones, reaching back into basic research findings that set the stage for progress and documenting the role of NIH and others along the way.  Moving forward, we looked for evidence of the intervention’s influence on health, knowledge, and other societal impacts.
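
As a rough sketch of the idea (a toy data structure of my own, not NIH’s actual tooling), the milestones can be organized as a directed graph and walked backward or forward from the intervention:

```python
from collections import defaultdict

# Toy milestone graph: edges point from earlier work to the later work it enabled.
edges = [
    ("basic research finding", "preclinical study"),
    ("preclinical study", "clinical trial"),
    ("clinical trial", "intervention approved for practice"),
    ("intervention approved for practice", "clinical guideline adoption"),
    ("clinical guideline adoption", "population health improvement"),
]

forward, backward = defaultdict(list), defaultdict(list)
for earlier, later in edges:
    forward[earlier].append(later)
    backward[later].append(earlier)

def trace(start, links):
    """Collect every milestone reachable from `start` along the given direction."""
    found, stack = [], [start]
    while stack:
        node = stack.pop()
        for nxt in links[node]:
            if nxt not in found:
                found.append(nxt)
                stack.append(nxt)
    return found

intervention = "intervention approved for practice"
print("Backward trace:", trace(intervention, backward))
print("Forward trace: ", trace(intervention, forward))
```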

[Table: information categories, the types of evidence searched for, and example data sources used]

Rad Resources:

Dozens of data sources proved useful.  For each of the information categories we examined, the table above illustrates the types of evidence that we searched for and some of the data sources that we utilized.[1]

Lessons Learned:

Data needs – The development of more comprehensive and structured datasets (e.g., data related to FDA-approved drugs, biologics, and devices) with powerful search and export capabilities is needed.  Even further, wide-scale efforts to mine and structure citations in various sources – like FDA approval packages, patents, and clinical guidelines – would be very useful.

Tool needs – These studies are data- and time-intensive, requiring a couple of months of full-time effort to conduct.  Sophisticated data aggregators could help semi-automate the process of identifying “milestones” and linking medical interventions to changes in population-level health outcomes and other societal impacts.

Uses – There are many potential uses of these studies, including for science communication and for revealing patterns of successful research-to-practice pathways & the influence of federal funding and policies.  We have published a handful of case studies as Our Stories on the Impact of NIH Research website – I invite you to take a look!

[1] Note that only open access data sources are included here. We also made use of proprietary data sources and databases available to NIH staff.

The American Evaluation Association is celebrating Research, Technology and Development (RTD) TIG Week with our colleagues in the Research, Technology and Development Topical Interest Group. The contributions all this week to aea365 come from our RTD TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


AEA365 Curator note: We generally feature posts by AEA staff and AEA365 Curators on Saturdays, and are now pleased to offer occasional Saturday blog posts from our esteemed AEA Board members!

Hi, I am Dominica McBride with Become: Center for Community Engagement and Social Change, and I serve you on the AEA Board of Directors.

John F. Kennedy said, “It is from numberless diverse acts of courage and belief that human history is shaped. Each time a [person] stands up for an ideal, or acts to improve the lot of others, or strikes out against injustice, he sends forth a tiny ripple of hope, and crossing each other from a million different centers of energy and daring those ripples build a current which can sweep down the mightiest walls of oppression and resistance.”

In the midst of national leaders acting against our values as an organization, explicitly marginalizing many who find a professional home in AEA and harming communities that many of us serve, I believe we are called as professionals and human beings to make ripples.

In the face of a grim reality, I have hope, especially given what I know about us as evaluators. We are connected to various organizations that are connected to many people, from residents to leaders. We’re able to critically and empirically explore the intersection of our content area and the sociopolitical context and how we may use our position and expertise to move forward on a broader issue. We have a unique set of skills – to gather information, think critically, analyze, synthesize and communicate. We are able to partner with organizations and leaders in many ways to use our skillset towards action around an issue.

With this potential, there are various possibilities for a new or refined role for evaluators to make a necessary difference in this environment. For example, we could:

  • Advocate or mobilize our partners, clients and communities to move in a common direction
  • Build resilience in the systems and institutions that are being depleted of resources
  • Help communities construct new systems and programs that work for them and, in many cases, could be run by them

Hot Tips:

Begin one-on-one meetings with your clients, partners, colleagues or fellow community members. Remember to reach out and listen to those not often included in evaluation, such as returning citizens from incarceration, single mothers struggling to get by, and disenfranchised youth. Listen for recurring themes about what matters to them and what may motivate them to act collectively.

After those meetings, convene groups around that common issue to develop a plan of action and ground that action in evidence.

 

Rad Resource:

To learn more about advocacy, mobilizing and organizing and for examples on successful collective action, read Jane McAlevey’s book No Shortcuts: Organizing for Power in the New Gilded Age.

*If you’re interested in exploring or working together around these possibilities, please reach out to me at dmcbride@becomecenter.org or 312-394-9274.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

I’m Andrea Nelson Trice, President of Trice & Associates, an evaluation and consulting firm. This case came from my research for a book project on the human dimensions of social enterprise success.

Tony, a successful entrepreneur, visited several emerging markets and determined that dependency on fire for light is unacceptable in the 21st century. The health and safety dangers alone make an alternative essential. He knew how to design low-cost solar lights, so he quit his job and began building a social enterprise to address this problem.

He received a grant to give away thousands of his lights with the goal of priming the pump in multiple markets. Now, two years later, we’re brought in to evaluate the enterprise’s impact. The problem is, the company is far from breaking even. “I don’t know how many more things I can try,” Tony says. “People just aren’t buying our lights.”

As evaluators, do we simply pull out a standard template to evaluate the work, or do we risk asking deeper, more difficult questions around assumptions that are driving the enterprise? In interviews with Tony and emerging market social entrepreneurs, I’ve heard very different perspectives about “the problem.”

Rad Resources:

  • Perhaps one of the most important things we can contribute as program evaluators is helping to identify faulty assumptions that guide the work. Here is my website, which includes more on this.
  • Increase your understanding of cultural differences. One of my favorite resources is from Professor Geert Hofstede, whose research team highlights national cultural differences.

Hot Tips:

  • As a Do-It-Yourself culture, we often assume we can make sense of cultural differences on our own. That’s rarely the case. Expats who have lived in a culture for years can be great resources.
  • Consider the Amish. It may seem futile to market solar lights to people who have no problem with their current light sources. But how often do we unintentionally overlay our values onto another culture as we work to solve a “pressing need?”

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is Heather Esper, senior program manager, and Yaquta Fatehi, senior research associate, from the Performance Measurement Initiative at the William Davidson Institute at the University of Michigan. Our team specializes in performance measurement to improve organizations’ effectiveness, scalability, and sustainability and to create more value for their stakeholders in emerging economies.

Our contribution to social impact measurement (SIM) focuses on assessing poverty outcomes in a multi-dimensional manner. But what do we mean by multi-dimensional? For us, this refers to three things. It first means speaking to all local stakeholders when assessing change by a program or market-based approach in the community. This includes not only stakeholders that interact directly with the organization, such as customers or distributors from low-income households, but also those that do not engage with the venture – like farmers who do not sell their product to the venture, or non-customers. Second, it requires moving beyond measuring only economic outcome indicators; it includes studying changes in capability and relationship well-being of local stakeholders. Capability refers to constructs such as the individual’s health, agency, self-efficacy, and self-esteem. Relationship well-being refers to changes in the individual’s role in the family and community and also in the quality of the local physical environment. Third, measuring multi-dimensional outcomes means assessing positive as well as negative changes on stakeholders and on the local physical and cultural environment.

We believe assessing multidimensional outcomes better informs internal decision-making. For example, we conducted an impact assessment with a last-mile distribution venture and focused on understanding the relationship between business and social outcomes. We found a relationship between self-efficacy and sales, and self-efficacy and turnover, meaning if the venture followed our recommendation to improve sellers’ self-efficacy through trainings, they would also likely see an increase in sales and retention.
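
For readers curious what testing such a relationship can look like, here is a toy correlation sketch with invented numbers (not the venture’s data):

```python
import numpy as np

# Invented pilot data: self-efficacy scale scores (1-5) and monthly sales for 40 sellers.
rng = np.random.default_rng(1)
self_efficacy = rng.normal(3.5, 0.6, 40)
sales = 50 + 30 * self_efficacy + rng.normal(0, 15, 40)

r = np.corrcoef(self_efficacy, sales)[0, 1]
print(f"Pearson correlation between self-efficacy and sales: r = {r:.2f}")
```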

Rad Resources:

  1. Webinar with the Grameen Foundation on the value of capturing multi-dimensional poverty outcomes
  2. Webinar with SolarAid on qualitative methods to capture multi-dimensional poverty outcomes
  3. Webinar with Danone Ecosystem Fund on quantitative methods to capture multi-dimensional poverty outcomes

Hot Tips:  Key survey development best practices:

  1. Start with existing questions developed and tested by other researchers when possible and modify as necessary with a pretest.
  2. Pretest using cognitive interviewing methodology to ensure a context-specific survey and informed consent. We tend to use a sample size of at least 12.
  3. For all relevant questions, test reliability and variability using the data gathered from the pilot. We tend to use a sample size of at least 25 to conduct analyses such as Cronbach’s alpha for multi-item scale questions (see the sketch below).
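
As a minimal sketch of tip 3 (the 4-item scale and pilot sample below are invented), Cronbach’s alpha can be computed directly from a respondents-by-items matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot: 25 respondents answering a 4-item scale on a 1-5 Likert range.
rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(25, 1))                              # shared trait per respondent
pilot = np.clip(np.rint(latent + rng.normal(0, 0.7, size=(25, 4))), 1, 5)

print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")
```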

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! We are Brian Beachkofski and Jeannie Friedman, Pay for Success (PFS) advisors at Third Sector Capital Partners. We spend most of our time assessing feasibility and designing social sector programs with rigorous evaluations and evidence-based interventions embedded into their contracting structure.

PFS is an innovative contracting model (shown in the figure below) that drives government resources toward high-performing social programs. The PFS model is designed to merge performance measurement using administrative data and rigorous evaluation of long-term outcomes into the contracting structure. This helps ensure that funding is directed toward programs that succeed in measurably improving the lives of people most in need.

 

Hot Tips:

  • Balance Factors in Evaluation Design: A randomized controlled trial (RCT) was once considered necessary for PFS evaluation, but now it is generally recognized that there is no one-size-fits-all answer. Factors such as operational complexities, sample size, observation windows, budget constraints, and service providers’ needs and limitations should be balanced against one another.
  • Focus on Outcomes: Aligning incentives around outcomes is a good first step. Providing interim insight on how the project progresses against those metrics allows the team to make improvements and act on those incentives. This feedback loop is fueled by interim outcome metrics and real-time program delivery modifications. Consistently keeping outcomes in mind maximizes final outcomes for those who are in need. Salt Lake County’s Homes Not Jail illustrates how different evaluation techniques apply to PFS.
  • Separate Payment from Policy: An evaluation intended to inform a payment decision is different from one informing a policy decision. A PFS project needs to clarify whether the evaluation is to inform payment, where quantifiable impact matters most, or future policy, where causation is paramount. A good example of when we did that was in Santa Clara County’s Project Welcome Home.
  • Engage Stakeholders Early: Most PFS projects serve populations with complex, multi-faceted needs that cross multiple government agencies and community partners, which makes defining measurable and meaningful outcomes challenging. Collaboratively refining these goals into defined metrics can gain stakeholder buy-in from all partners.
  • Use a Pilot Period: Operationalizing data sharing, referral pathways, and randomization protocols involves new skills for many projects. PFS projects are often a government’s first time releasing administrative data to outside organizations. Protection requirements and prior practices can make data-sharing feel uncomfortable. A pilot period builds trust and experience in a collaborative shared-data project, easing the full project’s operations.

Rad Resources: Learn more about PFS and projects:

  • Introduction to Pay for Success to learn more about how the model works
  • Third Sector’s blog, with the latest news and thoughts on PFS
  • PFS Resource page, with links to more resources

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, we are Mishkah Jakoet and Amreen Choda from Genesis Analytics.

Social Impact Measurement (SIM) is important for the legitimacy, advancement, and management of impact investing. SIM can also help align incentives among stakeholders and improve communication. While innovative finance has matured over the past decade, similar advancement in SIM is complicated by diverse approaches, methods, and tools responding to various stakeholders. Unfortunately, much of SIM focuses on outputs, uses limited evaluative thinking, and doesn’t consider how change happens.

Lessons Learned:

To best capitalize on the currency of SIM, investors and development practitioners/evaluators need to bridge the gap between their practices. At the 8th African Evaluation Association Conference in Uganda last month, participants agreed that the evaluation profession has much to offer to overcome the challenges inherent in SIM. With support from The Rockefeller Foundation, Genesis Analytics curated the Innovations in Evaluation strand to start building this bridge by facilitating dialogue between investors and evaluators. A discussion output is below:


  • For many years, the evaluation profession emphasized attribution of impact, but there is now a greater focus on contribution, which matters to investors looking to enhance the impact of their funds.
  • Investors use impact measurements for different objectives at different stages of the investment cycle. Evaluators must be flexible and responsive to meet these needs.
  • Some investors have been reluctant to embrace SIM because they think a randomized controlled trial (RCT) is the only option, yet they worry about the ethics of randomly assigning a treatment group. Evaluators and investors should share knowledge, particularly to explore the value of options beyond an RCT, and jointly develop a contextualized definition of impact and a SIM technique based on this definition.

Rad Resources:

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I’m Robert Picciotto, director general of the Independent Evaluation Group at the World Bank from 1992 to 2002. I oversaw evaluation there and in its sister institution, the International Finance Corporation, which uses equity finance to promote private sector development.

Evaluators charged with assessing the growing impact of the private sector in the social sphere face a significant challenge.  The United Nations Conference on Trade and Development estimates that $4 trillion to $5 trillion in annual investments are needed to support the United Nations’ Sustainable Development Goals. Because development aid and charity represent only a fraction of this investment, private investment is a critical component of achieving these goals. Impact investing is a highly promising strategy, although currently only a fraction of all investment is channelled toward social impact.

Ethical investors expect a financial return and want corporate decisions to take full account of the public interest. Their concerns range widely and can include the environment, consumer protection, community relations, human rights, etc. Reliable information about the social and environmental effects of ethical investments is necessary to meet their needs. Luckily, development assistance agencies have already laid the groundwork by collecting and reporting data to satisfy public and philanthropic funders who demanded results as a condition of continued funding.

As highlighted at the Impact Convergence Conference, impact investing has largely focused on articulating goals and tracking progress through a battery of indicators. By relying primarily on self-assessment, it is failing to do the independent analysis required in financial reporting. 

The promise of impact investing could be fulfilled by bridging the impact investment and development evaluation worlds to share their vast operational experience, local country knowledge, and technical expertise in evaluation and investment. The multilateral development banks provide a starting place from their years of embedding independent evaluation into their corporate governance to identify, appraise, and fund major social programs.

Lessons Learned:

The Sustainable Development Goals and their adoption for goal-setting by the impact investing community may signal an emerging relationship between social impact assessors and experienced development and evaluation practitioners. For example, ethical investors vitally concerned with results could seek the comfort of accurate social reporting by investing in derivatives that package social interventions funded by the loans and credits of multilateral development banks into Sustainable Development Goal bonds. These investment vehicles would be backed by existing systems to attest to the social value of each intervention. Finally, evaluation societies worldwide could establish communities of practice connecting impact assessment professionals and development evaluators.

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is Leah Goldstein Moses and Jill Lipski Cain of The Improve Group, an evaluation firm based in Minnesota. We are members of The Improve Group’s new practice area focused on how market-based strategies are used to make a social impact. We have the pleasure of introducing this week’s focus on social impact measurement (SIM) for innovative finance and market solutions, with contributions from the SIM Topical Interest Group.

Measuring social impact offers a new perspective into the bigger forces that affect all of us. As we interact with social media, consume goods and services, and work, it’s exciting to consider how our actions have the potential for positive or negative social impact.

In our role as evaluators, we have had several opportunities to learn about social impact. We consulted with companies and foundations interested in making a social impact to understand the business case for social investment; how to synthesize the investors’ interests with social outcomes; and the importance of knowing the field, the appetite of consumers, and where to push the boundaries in the name of social change.  To further learn about social impact, we worked pro bono with a handful of newly formed public benefit corporations (PBCs) in Minnesota to craft logic models and articulate their intended social change.

As you’ll read all week, there is great potential and need for SIM in the private sector. We continue to be inspired, and we are excited about the learning possibilities through the TIG – like the blog posts coming your way – because of the almost overwhelming complexity of the ways in which the private sector intersects with social impact. For example:

  • The financing/investment instrument itself can be intended to lead to a social impact – by clearly articulating outcomes, and then only paying for those results.
  • Products and services can have a positive social impact, such as higher-efficiency/lower cost lighting, beauty products designed for underserved communities, etc. When The Improve Group became a Public Benefit Corporation in 2016, this was the social impact model we pursued.
  • Companies can attempt to have a social impact through their hiring, recruitment, and retention strategies – bringing more opportunities to specific targeted groups. As described in a recent Atlantic article, many companies want to benefit from broader perspectives – and avoid discriminating, whether intentionally or not.
  • Companies can choose to source products or services with a social impact in mind; for example, Minnesota-based Peace Coffee sources all its coffee from small-scale cooperatives.
  • Companies can attempt to have a social impact through their brand and marketing strategies, as Dove did with its Real Beauty campaign, launched in 2004 and followed with other initiatives. This strategy can flop, too, as with some now-infamous examples in recent months.

Rad Resource:

Our company joined our local Impact Hub to continue learning from other social impact organizations. The hub is part of an international network with resources in many cities around the world.

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we are Tim Sheldon and Jane Fields, Research Associates at the Center for Applied Research and Educational Improvement (CAREI) at the University of Minnesota. We serve as external evaluators for EngrTEAMS, a five-year, $8 million project funded by the National Science Foundation. The project is a partnership involving the University of Minnesota’s Science, Technology, Engineering, and Mathematics Education Center (the STEM Center) and Center for Compact and Efficient Fluid Power (CCEFP); Purdue University’s Institute for P-12 Engineering Research and Learning (INSPIRE!); and several school districts. EngrTEAMS is designed to increase students’ learning of science content, as well as mathematical concepts related to data analysis and measurement, by using an engineering design-based approach to teacher professional development and curriculum development.

Context:

As the external evaluators for this project, we based our evaluation framework on Guskey’s five levels of professional development (PD) evaluation (Guskey, 2002). He suggests evaluating (1) participant perceptions of the PD; (2) the knowledge and skills gained by participants; (3) the support from, and impact on, the organization; (4) participants’ use of their new knowledge and skills; and (5) the impact on student outcomes. In Guskey’s model, the aspects to be evaluated begin after delivery of the PD; that is, the framework does not specifically suggest assessing differences in participants or organizations prior to the delivery of the PD.

In the case of EngrTEAMS and other PD we have evaluated, we have noticed that even though participants receive the same training (i.e., the same “treatment”), their capacity to apply the new knowledge and skills (Guskey level 4) is not the same. What might explain this? We suggest that one way to better understand and explain these differences in implementation (and eventually student outcomes) is to also better understand participants and their organizations prior to the PD. Not all participants start the PD in the same place; for example, participants come to the PD with different levels of prior knowledge, different attitudes about the PD, different classroom management abilities, and different levels of organizational support.

Lesson learned:

When possible, assess implementation readiness of participants and their organizations prior to the delivery of the PD. This may include obtaining information about organizational readiness to support novel approaches, as well as participants’ prior content knowledge and classroom experience, their perception of school or district buy-in, and participants’ attitudes about the training and future adoption of what they will be learning.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

