AEA365 | A Tip-a-Day by and for Evaluators

This is John LaVelle, Louisiana State University, and Yuanjing Wilcox, EDucation EValuation EXchange, members of AEA’s Competencies Task Force. Task Force members recently shared the 2/24/16 draft AEA evaluator competencies in five domains: professional, methodology, context, management, and interpersonal. Feedback in coming months will enable us to finalize the set in preparation for two very important engagement activities: (1) a survey of all members to determine the extent to which they agree that these competencies are the right ones for AEA, and (2) a formal vote on the competencies, including a process for their routine revision, thereby making them an official AEA document.

Hot Tip: Keep your eyes open, because the Task Force is working on creating professional development materials to enable evaluators, wherever they work, to use the competencies to reflect on their practice and to assess specific needs. We believe that it is in this reflection process that the explicit value of the competencies will shine, as evaluators use them to shape effective practice. For example:

  • Novice evaluators, those entering the field who want to identify areas of strength and need for development
  • Accidental evaluators, people who may not have formal training, but who are responsible for conducting evaluations
  • Professionals in transition, such as those who may be experts in a particular field, but who want to become competent evaluators in that specific area
  • Experienced professional evaluators, who want to stay abreast of changes in the field’s practice and theory

We envision an individual assessment process similar to that used for the Essential Competencies for Program Evaluators (http://www.cehd.umn.edu/OLPD/MESI/resources/ECPESelfAssessmentInstrument709.pdf) and an interactive process that groups of evaluators (e.g., members of a firm, students in a cohort) could use to customize the competencies to their specific settings.

Lessons Learned: Feedback on the first draft of the AEA competencies raised the question of the extent to which individual evaluators need to demonstrate each competency, given that many evaluators work in collaborative groups. We added one competency (Interpersonal Domain 5.7) to acknowledge that teamwork skills are essential for many evaluators. We believe that whether the entire set of competencies should apply to individual evaluators or to teams is context-dependent; we invite people to use the competencies as suits their settings and practice.

Rad Resources: If you are interested in a quick orientation to the world of evaluator competencies, consider these readings:

  • King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies.  American Journal of Evaluation, 22(2), 229-247.
  • Russ-Eft, D., Bober, M. J., de la Teja, I., Foxon, M. J., & Koszalka, T. A. (2008). Evaluator competencies: Standards for the practice of evaluation in organizations.  San Francisco, CA: Jossey-Bass.
  • Wilcox, Y., & King, J. A. (2014). A professional grounding and history of the development and formal use of evaluator competencies. Canadian Journal of Program Evaluation, 28(3), 1-28.
  • Buchanan, H., & Kuji-Shikatani, K. (2014). Evaluator competencies: The Canadian experience. Canadian Journal of Program Evaluation, 28(3), 29-47.

Hot Tip: See you at #eval17 where we hope to unveil the final draft competencies!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! This is Sheena Horton again, President-Elect and Board Member for the Southeast Evaluation Association (SEA). I wanted to close out SEA’s AEA365 week by providing you with a few tips for stimulating and maintaining your professional growth. Just as evaluation is an everyday activity, our own professional growth should be approached as an everyday opportunity. Isaac Asimov once said, “People think of education as something they can finish.” Learning is a lifelong commitment.

The extent to which we seek growth opportunities should not be limited by our current positions, schedules, finances, networks, or fears and hesitations, but be defined by the depth of our intellectual curiosity, aspirations, and commitment to evaluating and bettering ourselves.

Hot Tips:

  • Search YouTube regularly for quick tips or full lessons to develop your knowledge or skills in a specific area, such as in Excel. There are also many free virtual courses and trainings offered at Coursera, edX, MIT OpenCourseWare, FindLectures, and Udemy.
  • Follow the professional development strategy that George Grob suggested at a past AEA Conference: Every year, pick one hard skill and one soft skill to develop over the course of the year.
  • Choose a few bloggers to follow to pick up daily tips and stay up to date on the latest evaluation news. Take it a step further and volunteer to write for a blog or newsletter! AEA365 blog posts are short and allow you to perform a high-level review of a topic of interest or share your experiences and tips with others. SEA’s seasonal newsletter accepts a variety of submissions on evaluation and professional development topics, and article length can vary from a sidebar to a feature article.
  • Volunteer for AEA and SEA short- or long-term projects, or sign up for programs, conferences, and workshops. AEA’s next conference is scheduled for November 6th-11th, 2017 in Washington, DC. SEA will hold its 2-day Annual Workshop on February 27th-28th, 2017 in Tallahassee, FL, and, in addition to its normal programming, will offer a secondary track featuring Essential Skills training sessions, including “Evaluation Planning and Design,” “Relating Costs and Results,” and “Effective Presentations.”

Rad Resources:

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, again! My name is Dr. Michelle Chandrasekhar and I serve on the Southeast Evaluation Association (SEA) Executive Board. Every year in February, SEA holds an Annual Workshop that offers attendees networking opportunities and a variety of presentations and panel discussions on evaluation issues. At SEA’s 2015 Annual Workshop, the Board facilitated a round table discussion and asked participants to discuss common challenges encountered in conducting evaluations.

Hot Tips: Below is a summary of the observations and tips from SEA’s Workshop round table discussions. Overall, attendees indicated that evaluators need to know how to do the following:

  1. Talk about Evaluation.
  • Build buy-in and rapport – for example, use stories to explain numbers.
  • Create or find case studies or examples that help evaluators talk to others.
  • Communicate the value of evaluation to leadership.
  • Manage the politics – particularly in how data is presented or when analyzing sensitive data.

2. Plan for Good Evaluation.

  • Demonstrate cultural competence – this means going beyond language barriers.
  • Develop good logic models and get them validated up front.
  • Establish relationships among key people in the client’s organization, as well as among fellow evaluators who can help you problem solve.
  • Include front-line people in the conversation to find problems and solutions, or to review reports.
  • Make recommendations that use Return on Investment concepts.
  • Work within the confines of a grant rather than what the evaluator or client may want to do.

3. Manage Evaluations.

  • Manage multiple projects in various stages – use project management tools and update items in your toolbox (reports, communication protocols, client capacity building information).
  • Manage time and people to stay on track – understand the amount of effort needed for a project and that it isn’t practical to make it perfect.
  • Work within the budget (estimate the billable hours, time frame, and amount to charge) and include the client in the process.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


My name is Dr. Moya Alfonso, MSPH. I’m an Associate Professor at the Jiann-Ping Hsu College of Public Health at Georgia Southern University and the University Sector Representative and a Board Member for the Southeast Evaluation Association (SEA). I would like to offer you a few tips on engaging stakeholders in participatory evaluation, based on my 16 years of experience engaging stakeholders in community health research and evaluation.

Participatory evaluation is an approach that engages stakeholders in each step of the process. Rather than the trained evaluator solely directing the evaluation, participatory evaluation requires a collaborative approach. Evaluators work alongside stakeholders in developing research questions, deciding upon an evaluation design, designing instruments, selecting methods, gathering and analyzing data, and disseminating results. Participatory evaluation results in stronger evaluation designs and greater external validity because community members have a high level of input throughout the entire process. It also strengthens buy-in to the results and promotes greater use of the evaluation products.

Rad Resource: Explore the University of Kansas Community Tool Box for introductory information on participatory evaluation.

Hot Tips: Here are a few tips for engaging stakeholders:

  • Establish a diverse stakeholder advisory group: Community stakeholders have a range of skills that can contribute to the evaluation process. For example, I worked with 8th grade youth on a participatory research project and assumed that I would need to conduct the statistical analysis of survey data.  To my surprise, one of the youths had considerable expertise and was able to conduct the analysis with little assistance. With training and support, community stakeholders can contribute and exceed your expectations.
  • Keep stakeholders busy: A common problem in working with advisory groups is attrition. Keep community stakeholders engaged with evaluation tasks that use their unique skill sets. Matching assignments to existing skill sets empowers community stakeholders and results in increased buy-in and engagement.
  • Celebrate successes: Celebrating successes over the course of the evaluation is a proven strategy for keeping stakeholders engaged. Rather than waiting until the end of the evaluation, reward stakeholders regularly for the completion of evaluation steps.
  • Keep your ego in check: Some highly trained evaluators might find handing over the reins to community stakeholders challenging because they’re used to running the show. Participatory evaluation requires evaluators to share control and collaborate with community stakeholders. Try to keep an open mind and trust in the abilities of community stakeholders to participate in the evaluation process with your support and guidance.  You’ll be amazed at what you can achieve when stakeholders are fully engaged in evaluation research! 

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Dr. Michelle Chandrasekhar and I serve as Board Secretary for the Southeast Evaluation Association (SEA).  My work experience includes higher education and state government, and recently with local, state, and federal criminal justice agencies. Working in different venues reminded me that our evaluation reports share several key elements across disciplines, audiences, and purposes. Below are two of these common elements.

  • What we produce must be faultless. In talking about the reporting strategies used at the U.S. General Accounting Office’s Program Evaluation and Methodology Division, Eleanor Chelimsky told a 2006 AEA Conference audience that the reports her office produced had to be accurate. If there was any kind of error, it could provide justification for ignoring or refuting the report.

Hot Tip: Hard-to-read reports are not used. Carefully proofread your writing, logic, and results. Use a checklist and have multiple people review the document. Ask clients for examples of previous reports they have liked or hated, and review and reference them when developing future reports.

  • The audience that reads your report has a different agenda from yours. Chelimsky also said that politicians (and we can agree, any decision-maker) understand evaluation within the context of their own agendas. Evaluators need to be aware of those agendas and skilled at presenting a credible case for their work.

Hot Tip: Reports tell a story and should be written bearing in mind the interests of your audience and what they do and do not know. Tell your audiences about The Characters (Who asked for this report? Who is involved?), The Setting (Why was this report requested? Why was the data collected?), The Plot (What are the research questions? What is the study design?), The Conflict (What are the issues or caveats?), and The Resolution (What are the results and recommendations?). Yes, even an internal report can include recommendations – you know the data!

Rad Resources: Check out these links for further reading:

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we’re Southeast Evaluation Association (SEA) members Taylor Ellis, a doctoral student and lead evaluator, and Dr. Debra Nelson-Gardell, an Associate Professor, providing consultation at the School of Social Work at The University of Alabama. We form a team tasked with evaluating a program providing community-based, family-inclusive intervention for youth with sexual behavior problems (youngsters whom lay people might call juvenile sex offenders). This post focuses on our lessons learned regarding our approach to resistance in program evaluation.

Taut and Alkin (2002) reported that people stereotypically view program evaluation as “being judged…that the evaluation is used to ‘get me’, that it is not going to be used to assist me but is perceived to be negative and punitive in its nature” (p. 43). Our program evaluation faced derailment, whether because the program had never been evaluated before or simply because resistance to evaluation is inevitable. Accepting the resistance as normal, we tried addressing it. But our efforts didn’t work as we had hoped. Below are the hard lessons learned through “hard knocks.”

Lessons Learned:

  • The Importance of Stakeholder Input: Stakeholders need to believe evaluators will listen to them. Early in the evaluation process, we interviewed stakeholders and asked about their ideas for program improvement in order to promote engagement in the process. What we failed to do was show stakeholders how their input shaped the evaluation.
  • Remember and (Emphatically) Remind Stakeholders of the Evaluation’s Purpose/Goals: During the evaluation, the purpose was lost because we did not remind stakeholders of it. Project updates to stakeholders should have been more intentional about showing movement toward that purpose. We lost sight of the forest as we negotiated the trees. This lack of constant visioning led many stakeholders to view the evaluation implementation as an unnecessary hassle.
  • The Illusion of Control: Easily said, not easily done: Don’t (always) take it personally. Despite our efforts, a great deal of resistance, pushback, and dissatisfaction remained. After weeks of feeling at fault, we found out that things were happening behind the scenes over which we had no control, but that directly affected the evaluation.

Knowing these lessons earlier could have made a difference, and we intend to find out.  Our biggest lesson learned:  Resist being discouraged by (likely inevitable) resistance, try to learn from it, and know that you are not alone.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hi all! My name is Sheena Horton, President-Elect and Board Member for the Southeast Evaluation Association (SEA). As I have been learning more about the traits of great leaders and how leaders mobilize others, I have found one element that is frequently mentioned: a leader’s influence.

Influence may seem like an obvious determinant of a leader’s success; you’re not a leader if no one will follow you. Think about a colleague for whom you would work hard without hesitation, and then think about a colleague for whom you would not. Why do you want to help the first colleague but avoid the second? What makes some leaders more effective than others? How do leaders influence others?

Hot Tips:

  • Ask. Show interest in your colleagues. Ask about their day, goals, and challenges. Build rapport and be people-focused instead of task-focused. Understanding their needs will help you convey to them the benefits of listening to you.
  • Listen. Effective leaders take the time to listen. There is a difference between leading and simply managing. Managers command action while leaders inspire it. Leading is to be focused on others – not yourself.
  • Visualize the other side. Try to understand the other person’s perspective and motivations. By doing so, you will be in a better position to address their concerns, tap into their motivations, and utilize their strengths and interests to build a more effective and mutually beneficial working relationship.
  • Be proactive. Identify, monitor, and manage risks to your team’s success. Ask your team what they need to complete their tasks, and make sure they have what they need to get things done. Address issues quickly and directly.
  • Build credibility through your actions. Consistency is key; unpredictability weakens your ability to influence and lead. Build trust and credibility by following through on what you say. Be the person that others seek out for solutions. Provide reasons for the actions you want taken.
  • Show appreciation. A simple “thank you” or “good job” can go a long way. Express your interest and investment in your team’s growth and success by providing constructive feedback. This feedback provides valuable insight, builds trust, and is an opportunity to motivate. Be supportive by mentoring or providing training or assistance.

Remember: Leadership is not about you. It’s about them. Leadership is about influencing others so they will want to help you.

Rad Resources:

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



I hope your new year’s celebrations were filled with laughter and rest – getting you ready for another year of projects and adventures!  

Last year I offered some ideas for when creative block strikes, as well as some ideas for generating meaningful online content (other than a blog post). I thought I would revisit those ideas, especially for folks interested in contributing to this blog in the coming year.

Rad Resources: As the saying goes, images can speak volumes more than text. There are some beautiful free stock photo sites out there, and even more free and user-friendly design sites. Can you convey some of your information via an infographic or graph? This may free up some space for you to dive deeper into a concept or offer background on a project. Images also help create white space (a good thing!) and a more readable screen.

Hot Tip: For those brave souls, try getting in front of a camera!  Vlogs (or video blogs) are a fantastic way to share your knowledge and expertise with readers or followers.  Videos don’t have to be long and can include visual aids and graphics to make them even more appealing.  There are a number of affordable video editing apps – I’ve used iMovie for personal projects and it could not be easier to use.  Videos can be hosted on sites like YouTube or Vimeo and then embedded in blog posts or on websites.  

Lesson Learned: Did you (or will you) host a Twitter chat or hashtag campaign?  Share your insights without having to revisit every tweet using curating tools like Storify.  You can pull together the highlights and evolution of an online conversation, offering you a chance to have a reference point for synthesis and historical perspectives.  

Creating engaging content is not all about getting more page views or Likes or Retweets (although that’s a part) – it’s also about getting out of your comfort zone in order to share your perspective with the world.  People learn and absorb information in so many ways.  Sometimes reading an evaluation report isn’t feasible, but listening to or watching you talk about the project is!  Different types of content connect with different types of people.   

How have you experimented with different media?  Or do you have a goal this year to try something new?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome to the final installment of the Design & Analysis of Experiments TIG-sponsored week of AEA365.  It’s Laura Peck of Abt Associates, here again to address some complaints about experiments.

Experiments have limited external validity

Experimental evaluation designs are often thought to trade internal validity (the ability to claim cause-and-effect between program and impact) for external validity (the ability to generalize results). Although plenty of experiments do limit generalization to their sample, there is good news from the field. Recent scholarship reveals techniques, both retrospective analyses and prospective planning, that can improve generalizability. You can read more about these advances in recent articles, here, here, and here.
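
To make the retrospective idea concrete, here is a minimal sketch in Python of one common reweighting approach: adjust the experimental sample so that its covariate mix matches a target population before computing the impact estimate. Everything in it (the covariate, the population shares, and the effect sizes) is invented for illustration and is not drawn from the articles linked above.

```python
# Illustrative sketch: reweight an experimental sample to a target population.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Simulated study sample: a binary covariate (urban vs. rural), random
# assignment to treatment, and an outcome whose true effect differs by stratum.
sample = pd.DataFrame({
    "urban": rng.binomial(1, 0.7, n),   # study sample is 70% urban
    "treat": rng.binomial(1, 0.5, n),
})
sample["outcome"] = (
    2.0 * sample["treat"]                            # effect of 2 among urban
    - 1.0 * sample["treat"] * (1 - sample["urban"])  # effect of 1 among rural
    + rng.normal(0, 1, n)
)

# Suppose the policy-relevant population is only 40% urban: weight each
# stratum so the sample's mix matches that population.
target_share = {1: 0.4, 0: 0.6}
sample_share = sample["urban"].value_counts(normalize=True).to_dict()
sample["weight"] = sample["urban"].map(lambda u: target_share[u] / sample_share[u])

def impact(df, weights):
    """Weighted difference in mean outcomes, treatment minus control."""
    treated = (df["treat"] == 1).to_numpy()
    y = df["outcome"].to_numpy()
    return (np.average(y[treated], weights=weights[treated])
            - np.average(y[~treated], weights=weights[~treated]))

print("Impact for the study sample itself:", round(impact(sample, np.ones(n)), 2))
print("Impact reweighted to the target population:",
      round(impact(sample, sample["weight"].to_numpy()), 2))
```

In real applications the weights would be built from many covariates, often with propensity-style models, but the logic is the same: estimate impacts within strata, then weight them to reflect the population you want to generalize to.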

Experiments take too long

Experimental evaluations have a bad reputation for taking too long. Certainly some evaluations track long-term outcomes and, by definition, must take a long time, but that is a criticism of any evaluation charged with considering long-term effects, not of experiments in particular. A recent push within the government is challenging the view that experiments take too long: the White House Social and Behavioral Sciences Team is helping government identify “nudge” experiments that involve tweaking processes and influencing small behaviors to affect short-term outcomes. It is my hope that these efforts will improve our collective ability to carry out faster experimental research and extend the method to other processes and outcomes of interest.

Another reason experiments may take a long time is that enrolling a study sample takes time. This depends on specific program circumstances, however, and it does not have to be the case. For example, the first round of the Benefit Offset National Demonstration enrolled about 80,000 treatment individuals into its evaluation at one time, with the treatment group receiving a notification letter about the new program rules. Such a change can produce a large sample build-up in a very short time.

Experiments cost too much

A rule of thumb is that evaluation should comprise one-tenth of a program budget. So, for a program that costs $3 million per year, $300,000 should be invested in its evaluation.  If the evaluation shows that the program is ineffective, then society will have spent $300,000 to save $3 million per year in perpetuity.  Efforts are underway to ensure that low-cost experiments become feasible in many fields, such as using administrative data, including integrating data from systems across agencies.
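
For readers who like to see the arithmetic spelled out, here is a tiny sketch of that rule of thumb; the dollar figures are simply the illustrative ones from the paragraph above, not from any actual study.

```python
# Rule-of-thumb arithmetic: evaluation at ~10% of the annual program budget.
program_cost_per_year = 3_000_000   # annual program budget ($), illustrative
evaluation_share = 0.10             # rule of thumb: about one-tenth

evaluation_cost = evaluation_share * program_cost_per_year
print(f"One-time evaluation budget: ${evaluation_cost:,.0f}")

# If the evaluation finds the program ineffective and it is discontinued,
# the evaluation pays for itself within the first year of avoided spending.
for years in (1, 2, 3):
    net_savings = years * program_cost_per_year - evaluation_cost
    print(f"Net savings after {years} year(s): ${net_savings:,.0f}")
```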

The Bottom Line

Experimental evaluations need not be more time-consuming or costly than other kinds of impact evaluation, and the future is bright for experimental evaluations to meet high standards regarding external validity.

This week’s worth of posts shows that the many critiques of experiments are not damning when carefully scrutinized, thanks to recent methodological advances in the evaluation field.

Rad Resource:

For additional detail on today’s criticisms of experiments and others that this week-long blog considers, please read On the Feasibility of Extending Social Experiments to Wider Applications.

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, again!  It’s Steve Bell here, that evaluator with Abt Associates who is eager to share some insights regarding the learning potential of social experiments. In a week-long blog series, we are examining concerns about social experiments to offer tips for how to avoid common pitfalls and to support the extension of this powerful research method to wider applications.

Today we turn to three apparent drawbacks in what experiments can teach us.  Perhaps you’ve heard these concerns:

  • “You can’t randomize an intervention that seeks to change a whole community and its social systems.”
  • “If you put some people into an experiment it will affect other people you’ve left out of the study.”
  • “The impacts of individual program components are lost in the overall ‘with/without’ comparison provided by a social experiment.”

Examination of these three perspectives implies that none of them should deter the use of randomized experiments.

First, evaluations of community-wide interventions are prime candidates for application of the experimental method if the policy questions to be addressed are sufficiently important to justify the resources required.  The U.S. is a very large nation, with tens of thousands of local communities or neighborhoods that could be randomly assigned into or out of a particular community-level policy or intervention.  There is no feasibility constraint to randomizing many places, only a willingness constraint.  And sure, community saturation interventions make data collection more difficult and expensive, and any impacts that do occur are harder to find because they tend to be diffused across many people in the community.  However, these drawbacks afflict any impact evaluation of a saturation intervention, not just randomized experiments.

Second, in an interconnected world, some consequences of social policies inevitably spill over to individuals not directly engaged in the program or services offered. This is a measurement challenge. All research studies, including experimental studies, that are based exclusively on data for individuals participating in an intervention and a sample of unaffected non-participants will miss some of the intervention’s effects.  Randomization does not make spillover effects more difficult to measure.

The up/down nature of experimental findings is thought to limit the usefulness of social experiments as a way to discover how a program can be made more effective or less costly through changes in its intervention components. One response is obvious: randomize more things, including components. Multi-stage random assignment can also be used to answer questions about the effects of different treatment components when program activities naturally occur in sequence.
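
As a rough illustration of “randomizing more things,” here is a short sketch of a 2x2 factorial assignment in which two hypothetical program components (call them coaching and an incentive) are randomized independently, so each component’s contribution can be estimated on its own.

```python
# Illustrative 2x2 factorial random assignment; component names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 400

participants = pd.DataFrame({"id": np.arange(n)})
# Randomize each component independently, producing four cells:
# neither component, coaching only, incentive only, or both.
participants["coaching"] = rng.binomial(1, 0.5, n)
participants["incentive"] = rng.binomial(1, 0.5, n)

# Cell sizes; comparing mean outcomes across cells isolates each component's
# effect as well as the effect of the full package.
print(participants.groupby(["coaching", "incentive"]).size())
```

Multi-stage designs extend the same idea by randomizing again at later points, when program activities occur in sequence.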

The bottom line:  Don’t let naysayers turn society away from experimental designs without first thinking through what is achievable.

Up for our final discussion tomorrow: The “biggest complaints” about experiments debunked.

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

