AEA365 | A Tip-a-Day by and for Evaluators

My name is Dr. Moya Alfonso, MSPH. I am an Associate Professor at the Jiann-Ping Hsu College of Public Health at Georgia Southern University and the University Sector Representative and Board Member for the Southeast Evaluation Association (SEA). I would like to offer you a few tips on engaging stakeholders in participatory evaluation, based on my 16 years of experience engaging stakeholders in community health research and evaluation.

Participatory evaluation is an approach that engages stakeholders in each step of the process. Rather than the trained evaluator solely directing the evaluation, participatory evaluation requires a collaborative approach. Evaluators work alongside stakeholders in developing research questions, deciding upon an evaluation design, designing instruments, selecting methods, gathering and analyzing data, and disseminating results. Participatory evaluation results in stronger evaluation designs and greater external validity because community members have a high level of input into the entire process. It also strengthens buy-in to the results and increases use of the evaluation products.

Rad Resource: Explore the University of Kansas Community Tool Box for introductory information on participatory evaluation.

Hot Tips: Here are a few tips for engaging stakeholders:

  • Establish a diverse stakeholder advisory group: Community stakeholders have a range of skills that can contribute to the evaluation process. For example, I worked with 8th grade youth on a participatory research project and assumed that I would need to conduct the statistical analysis of survey data.  To my surprise, one of the youths had considerable expertise and was able to conduct the analysis with little assistance. With training and support, community stakeholders can contribute and exceed your expectations.
  • Keep stakeholders busy: A common problem in working with advisory groups is attrition. Keep community stakeholders engaged with evaluation tasks that use their unique skill sets. Matching assignments to existing skill sets empowers community stakeholders and results in increased buy-in and engagement.
  • Celebrate successes: Celebrating successes over the course of the evaluation is a proven strategy for keeping stakeholders engaged. Rather than waiting until the end of the evaluation, reward stakeholders regularly for the completion of evaluation steps.
  • Keep your ego in check: Some highly trained evaluators might find handing over the reins to community stakeholders challenging because they’re used to running the show. Participatory evaluation requires evaluators to share control and collaborate with community stakeholders. Try to keep an open mind and trust in the abilities of community stakeholders to participate in the evaluation process with your support and guidance.  You’ll be amazed at what you can achieve when stakeholders are fully engaged in evaluation research! 

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Dr. Michelle Chandrasekhar and I serve as Board Secretary for the Southeast Evaluation Association (SEA). My work experience includes higher education, state government, and, most recently, local, state, and federal criminal justice agencies. Working in different venues has reminded me that our evaluation reports share several key elements across disciplines, audiences, and purposes. Below are two of these common elements.

  • What we produce must be faultless. In talking about the report strategies she used at the U.S. General Accounting Office’s Program Evaluation and Methodology Division, Eleanor Chelimsky told a 2006 AEA Conference audience that the reports her office produced had to be accurate. If there was any kind of error, it could provide justification for ignoring or refuting the report.

Hot Tip: Hard-to-read reports are not used. Carefully proofread your writing, logic, and results. Use a checklist and get multiple people to review the document. Ask clients for examples of previous reports they have liked or hated, and use those as references when developing future reports.

  • The audience that reads your report has a different agenda from yours. Chelimsky also said that politicians (and, we can agree, any decision-maker) understand evaluation within the context of their own agendas. Evaluators need to be aware of those agendas and skilled at presenting a credible case for their work.

Hot Tip: Reports tell a story and should be written bearing in mind the interests of your audience and what they do and do not know. Tell your audiences about The Characters (Who asked for this report? Who is involved?), The Setting (Why was this report requested? Why was the data collected?), The Plot (What are the research questions? What is the study design?), The Conflict (What are the issues or caveats?), and The Resolution (What are the results and recommendations?). Yes, even an internal report can include recommendations – you know the data!


The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we’re Southeast Evaluation Association (SEA) members Taylor Ellis, a doctoral student and lead evaluator, and Dr. Debra Nelson-Gardell, an Associate Professor, providing consultation at the School of Social Work at The University of Alabama. We form a team tasked with evaluating a program providing community-based, family-inclusive intervention for youth with sexual behavior problems (youngsters whom lay people might call juvenile sex offenders). This post focuses on lessons we learned about approaching resistance in program evaluation.

Taut and Alkin (2002) reported that people stereotypically view program evaluation as “being judged…that the evaluation is used to ‘get me’, that it is not going to be used to assist me but is perceived to be negative and punitive in its nature” (p. 43). Our program evaluation faced derailment because the program had never been evaluated before, or perhaps because of the inevitability of resistance to evaluation. Accepting the resistance as normal, we tried addressing it. But our efforts didn’t work as we had hoped. Below are the hard lessons learned through “hard knocks.”

Lessons Learned:

  • The Importance of Stakeholder Input: Stakeholders need to believe evaluators will listen to them. Early in the evaluation process, stakeholders were interviewed about their ideas for program improvement to promote engagement in the process. What the interviews lacked was follow-up that showed stakeholders how their input affected the evaluation.
  • Remember and (Emphatically) Remind Stakeholders of the Evaluation’s Purpose/Goals: During the evaluation, the purpose was lost from view because stakeholders were not reminded of it. Project updates to stakeholders should have been more intentional about showing movement toward that purpose. We lost sight of the forest as we negotiated the trees. This lack of constant visioning led many stakeholders to view the evaluation implementation as an unnecessary hassle.
  • The Illusion of Control: Easily said, not easily done: Don’t (always) take it personally. Despite our efforts, a great deal of resistance, pushback, and dissatisfaction remained. After weeks of feeling at fault, we found out that things were happening behind the scenes over which we had no control, but that directly affected the evaluation.

Knowing these lessons earlier could have made a difference, and we intend to find out.  Our biggest lesson learned:  Resist being discouraged by (likely inevitable) resistance, try to learn from it, and know that you are not alone.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hi all! My name is Sheena Horton, President-Elect and Board Member for the Southeast Evaluation Association (SEA). As I have been learning more about the traits of great leaders and how leaders mobilize others, I have found one element that is frequently mentioned: a leader’s influence.

Influence may seem like an obvious determinant of a leader’s success; you’re not a leader if no one will follow you. Think about a colleague for whom you would work hard without hesitation, and then think about one for whom you would not. Why do you want to help the first colleague but avoid the second? What makes some leaders more effective than others? How do leaders influence others?

Hot Tips:

  • Ask. Show interest in your colleagues. Ask about their day, goals, and challenges. Build rapport and be people-focused instead of task-focused. Understanding their needs will help you convey to them the benefits of listening to you.
  • Listen. Effective leaders take the time to listen. There is a difference between leading and simply managing. Managers command action while leaders inspire it. Leading is to be focused on others – not yourself.
  • Visualize the other side. Try to understand the other person’s perspective and motivations. By doing so, you will be in a better position to address their concerns, tap into their motivations, and utilize their strengths and interests to build a more effective and mutually beneficial working relationship.
  • Be proactive. Identify, monitor, and manage risks to your team’s success. Ask your team what they need to complete their tasks, and make sure they have what they need to get things done. Address issues quickly and directly.
  • Build credibility through your actions. Consistency is key; unpredictability weakens your ability to influence and lead. Build trust and credibility by following through on what you say. Be the person that others seek out for solutions. Provide reasons for the actions you want taken.
  • Show appreciation. A simple “thank you” or “good job” can go a long way. Express your interest and investment in your team’s growth and success by providing constructive feedback. This feedback provides valuable insight, builds trust, and is an opportunity to motivate. Be supportive by mentoring or providing training or assistance.

Remember: Leadership is not about you. It’s about them. Leadership is about influencing others so they will want to help you.


The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



I hope your New Year’s celebrations were filled with laughter and rest – getting you ready for another year of projects and adventures!

Last year I offered some ideas for when creative block strikes, as well as some ideas for generating meaningful online content (other than a blog post). I thought I would revisit those ideas, especially for folks interested in contributing to this blog in the coming year.

Rad Resources: As the saying goes, sometimes images speak volumes more than text. There are some beautiful free stock photo sites out there, and even more free and user-friendly design sites. Can you convey some of your information via an infographic or graph? This may free up some space for you to dive deeper into a concept or offer background on a project. Images also help create white space (a good thing!) and a more readable screen.

Hot Tip: For those brave souls, try getting in front of a camera!  Vlogs (or video blogs) are a fantastic way to share your knowledge and expertise with readers or followers.  Videos don’t have to be long and can include visual aids and graphics to make them even more appealing.  There are a number of affordable video editing apps – I’ve used iMovie for personal projects and it could not be easier to use.  Videos can be hosted on sites like YouTube or Vimeo and then embedded in blog posts or on websites.  

Lesson Learned: Did you (or will you) host a Twitter chat or hashtag campaign? Share your insights without having to revisit every tweet by using curation tools like Storify. You can pull together the highlights and evolution of an online conversation, giving you a reference point for synthesis and historical perspective.

Creating engaging content is not all about getting more page views or Likes or Retweets (although that’s a part) – it’s also about getting out of your comfort zone in order to share your perspective with the world.  People learn and absorb information in so many ways.  Sometimes reading an evaluation report isn’t feasible, but listening to or watching you talk about the project is!  Different types of content connect with different types of people.   

How have you experimented with different media?  Or do you have a goal this year to try something new?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome to the final installment of the Design & Analysis of Experiments TIG-sponsored week of AEA365.  It’s Laura Peck of Abt Associates, here again to address some complaints about experiments.

Experiments have limited external validity

Experimental evaluation designs are often thought to trade internal validity (the ability to claim cause and effect between program and impact) for external validity (the ability to generalize results). Although plenty of experiments do limit generalizing to their sample, there is good news from the field. Recent scholarship reveals techniques—retrospective analyses and prospective planning—that can improve generalizability. You can read more about these advances in recent articles.

Experiments take too long

Experimental evaluations have a bad reputation for taking too long.  Certainly there are some evaluations that track long-term outcomes and, by definition, must take a long time. That may be a criticism of any evaluation charged with considering long-term effects.  A recent push within the government is challenging the view that experiments take too long: the White House Social and Behavioral Sciences Team is helping government identify “nudge” experiments that involve tweaking processes and influencing small behaviors to affect short-term outcomes.  It is my hope that these efforts will improve our collective ability to carry out faster experimental research and extend the method to other processes and outcomes of interest.

Another reason experiments may take a long time is that enrolling a study sample takes time. This depends on specific program circumstances, and it does not necessarily need to be the case. For example, the first round of the Benefit Offset National Demonstration enrolled about 80,000 treatment individuals into its evaluation at one time, with the treatment group receiving a notification letter about the new program rules. Such a change can produce a large sample build-up in a very short time.

Experiments cost too much

A rule of thumb is that evaluation should comprise one-tenth of a program’s budget. So, for a program that costs $3 million per year, $300,000 should be invested in its evaluation. If the evaluation shows that the program is ineffective, then society will have spent $300,000 to save $3 million per year in perpetuity. Efforts are underway to make low-cost experiments feasible in many fields, for example by using administrative data, including data integrated from systems across agencies.
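To make that arithmetic concrete, here is a minimal sketch in Python of the one-tenth rule of thumb, using the illustrative figures from the paragraph above (the variable names and the $3 million budget are assumptions for illustration, not part of any specific evaluation):

```python
# Sketch of the "evaluation ~= one-tenth of the program budget" rule of thumb.
# Figures are illustrative, matching the example in the text above.

annual_program_cost = 3_000_000   # program costs $3 million per year
evaluation_share = 0.10           # rule of thumb: roughly 10% of the budget

evaluation_budget = annual_program_cost * evaluation_share
print(f"Suggested evaluation budget: ${evaluation_budget:,.0f}")   # $300,000

# If the evaluation finds the program ineffective, each future year of
# redirected program spending is a recurring saving against a one-time cost.
print(f"Potential saving: ${annual_program_cost:,.0f} per year in perpetuity, "
      f"for a one-time evaluation cost of ${evaluation_budget:,.0f}")
```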

The Bottom Line

Experimental evaluations need not be more time-consuming or costly than other kinds of impact evaluation; and the future is bright for experimental evaluations to meet high standards regarding external validity.

This week’s worth of posts shows that the many critiques of experiments are not damning when carefully scrutinized, thanks to recent methodological advances in the evaluation field.

Rad Resource:

For additional detail on today’s criticisms of experiments and others that this week-long blog considers, please read On the Feasibility of Extending Social Experiments to Wider Applications.

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, again!  It’s Steve Bell here, that evaluator with Abt Associates who is eager to share some insights regarding the learning potential of social experiments. In a week-long blog series, we are examining concerns about social experiments to offer tips for how to avoid common pitfalls and to support the extension of this powerful research method to wider applications.

Today we turn to three apparent drawbacks in what experiments can teach us.  Perhaps you’ve heard these concerns:

  • “You can’t randomize an intervention that seeks to change a whole community and its social systems.”
  • “If you put some people into an experiment it will affect other people you’ve left out of the study.”
  • “The impacts of individual program components are lost in the overall ‘with/without’ comparison provided by a social experiment.”

Examination of these three perspectives implies that none of them should deter the use of randomized experiments.

First, evaluations of community-wide interventions are prime candidates for application of the experimental method if the policy questions to be addressed are sufficiently important to justify the resources required.  The U.S. is a very large nation, with tens of thousands of local communities or neighborhoods that could be randomly assigned into or out of a particular community-level policy or intervention.  There is no feasibility constraint to randomizing many places, only a willingness constraint.  And sure, community saturation interventions make data collection more difficult and expensive, and any impacts that do occur are harder to find because they tend to be diffused across many people in the community.  However, these drawbacks afflict any impact evaluation of a saturation intervention, not just randomized experiments.

Second, in an interconnected world, some consequences of social policies inevitably spill over to individuals not directly engaged in the program or services offered. This is a measurement challenge. All research studies, including experimental studies, that are based exclusively on data for individuals participating in an intervention and a sample of unaffected non-participants will miss some of the intervention’s effects.  Randomization does not make spillover effects more difficult to measure.

The up/down nature of experimental findings is thought to limit the usefulness of social experiments as a way to discover how a program can be made more effective or less costly through changes in its intervention components. One response is obvious: randomize more things, including components. Multi-stage random assignment can also be used to answer questions about the effects of different treatment components when program activities naturally occur in sequence.

The bottom line:  Don’t let naysayers turn society away from experimental designs without first thinking through what is achievable.

Up for our final discussion tomorrow: The “biggest complaints” about experiments debunked.

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello!  I am Lisette Nieves, founder of Year Up NY, a service organization that has happily and successfully used an experimental evaluation to assess program effectiveness.  Today’s blogpost reflects on administrative challenges that need not get in the way of using experiments in practice.  I strongly believe in the nonprofit sector and what it does to support individuals in overcoming obstacles and building competencies to be successful. I also know that people in this sector want to know the impact of their efforts. With this understanding in mind, choosing to use an experimental evaluation at Year Up NY was not difficult, and the journey offered three key lessons. 

Lesson #1: Evaluation involves change, and change poses challenges.

Although everyone on the team agreed to support evaluation, the frontline team members—those who worked closest with our young adults—found it difficult to deny access to those seeking program enrollment. Team members’ buy-in was especially challenging once the names of prospective participants were attached to an experimental pool, personalizing the imminent selection process into treatment and control groups.  As a committed practitioner and program founder, I found it important to surface questions, ask for deeper discussions around the purpose and power of our evaluation, and create the space for team members to express concerns. Buy-in is a process with individualized timetables; staff may need multiple opportunities to commit to the evaluation effort.

Lesson #2: Program leaders tend to under-communicate when change is happening.

Leading a site where an experimental evaluation was taking place forced me to use language that shepherded staff through a high-stakes change effort. Team members worried whether the results would surprise us (although prior monitoring implied we were on track). The evaluation became central to weekly meetings, where staff engaged in healthy discussion about our services and how we were doing. With information on attrition patterns, even the most cautious staff members began to fully buy in to the experimental evaluation. In the end, evaluation was about making us stronger and demonstrating impact—two key values that we as a team were wedded to with or without an experimental evaluation.

Lesson #3: Experimental evaluation is high stakes, but it can be hugely informative.

An experimental evaluation has many requirements, and some of them are challenging (but not insurmountable) to implement among social service providers nationwide. But I have no regrets about engaging in an experimental evaluation: we learned more about our organization and systems than we would have otherwise. Experimental evaluation made us a true learning organization, and for that reason I encourage other organizations to consider taking their evaluation efforts further.

Up for discussion tomorrow:  more things you thought you couldn’t learn from an experiment but can!

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello.  I am Steve Bell, Research Fellow at Abt Associates specializing in rigorous impact evaluations, here to share some thoughts about experimental evaluations in practice.  In this week-long blog series, we are examining concerns about social experiments to offer tips for how to avoid common pitfalls and to support the extension of this powerful research method to wider applications.

Today, we ask whether randomization necessarily distorts the intervention that an experiment sets out to evaluate. A potential treatment group distortion occurs when the experiment excludes a portion of a program’s normally-served population to form a research “control” group. As a result, either (1) the program serves fewer people than usual, operating below normal capacity, or (2) it serves people who ordinarily would not be served. The first scenario can be problematic if the slack capacity allows programs to offer participants more services than usual, artificially enhancing the intervention when compared to its normal state. The second scenario can be problematic if the people who are now being served are different from those ordinarily served. For example, if a program changes its eligibility criteria—say, lowering educational background requirements—then a different group of people is served, and this might lead to larger or smaller program impacts than would be the case for the standard program targets. Fortunately, Olsen, Bell, and Nichols (2016) have proposed a way to identify which individuals would ordinarily have been served so that impact results can be produced for just that subset.

The problem of a different-than-usual participant population diminishes in degree as the control group shrinks in size relative to the studied program’s capacity.  With few control group members in any site, the broadening of the pool of people served by the program is less substantial.  This supports another solution: where feasible, an evaluation should spread a fixed number of control group members across many local programs, creating only a few individual control group cases in any one community.  This is a desirable option as well for program staff who are often hesitant to turn away many applicants to form a control group.

In sum, social experiments need not distort the programs they set out to study.

Up for discussion tomorrow: Practitioner insights on how to overcome some common administrative challenges to running an experiment.

Rad Resource:

For additional detail on this issue of the fidelity of policy comparisons, as well as other issues that this week-long blog considers, please read On the Feasibility of Extending Social Experiments to Wider Applications.

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello AEA365 readers!  I am Laura Peck, founder and co-chair of the AEA’s recently-established (and growing) Design & Analysis of Experiments TIG.  I work at Abt Associates as an evaluator in the Social & Economic Policy Division and director of Abt’s Research & Evaluation Expertise Center.  Today’s AEA365 blogpost recaps what experimental evaluations typically tell us and highlights recent research that helps tell us more.

As noted yesterday, dividing eligible program participants randomly into groups—a “treatment group” that gets the intervention and a “control group” that does not—means the difference in the groups’ outcomes is the intervention’s “impact.” This is the “average treatment effect” of the “intent to treat” (ITT). The ITT is the effect of the offer of treatment, regardless of whether those offered “take up” the offer. There can also be interest in (a) the effect of taking up the offer and (b) the impact of other, post-randomization milestone events within the overall treatment, two areas where pushing experimental evaluation data can tell us more.

The ITT effect is commonly considered to be the most policy relevant:  in a world where program sponsors don’t mandate participation but instead make services available, the ITT captures the average effect of making the offer.

Fortunately, a widely accepted approach exists for converting the ITT into the effect of the treatment-on-the-treated (TOT). The ITT can be rescaled by the participation rate—under the assumption that members of the treatment group who do not participate (“no-shows”) experience none of the program’s impact. For example, if the ITT estimate shows a $1,000 improvement in earnings in a study where 80% of the treatment group took up the training, then the TOT effect would be $1,250 ($1,000 divided by 0.80) for the average participant.
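As a minimal sketch of that rescaling, here is a short Python function using the hypothetical $1,000 ITT estimate and 80% take-up rate from the example above (the function name and checks are illustrative, not taken from any particular evaluation package):

```python
def tot_from_itt(itt_estimate: float, takeup_rate: float) -> float:
    """Rescale an intent-to-treat (ITT) impact estimate into a
    treatment-on-the-treated (TOT) estimate, assuming no-shows
    experience none of the program's impact."""
    if not 0 < takeup_rate <= 1:
        raise ValueError("take-up rate must be between 0 and 1")
    return itt_estimate / takeup_rate

# Hypothetical figures from the example above: $1,000 ITT, 80% take-up.
print(tot_from_itt(1000.0, 0.80))  # 1250.0 -> $1,250 for the average participant
```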

In addition, an active body of research advances methods for understanding mediators—those things that happen after the point of randomization that subsequently influence program impact. For example, although improving earnings may be a job training program’s ultimate goal, we might want to know whether earning a credential generates additional earnings gains.  Techniques that leverage the experimental design to produce strong estimates of the effect of some mediator include: capitalizing on cross-site and cross-participant variation, instrumental variables (including principal stratification), propensity score matching, and analysis of symmetrically-predicted endogenous subgroups (ASPES).  These use existing experimental data and increasingly are being planned into evaluations.

From this examination of the challenge of the day, we conclude that social experiments can provide useful information on the effects of participation and the effects of post-randomization events in addition to the standard (ITT) average treatment effect.

Up for discussion tomorrow:  are the counterfactual conditions that experiments create the right ones for policy comparisons?

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


