AEA365 | A Tip-a-Day by and for Evaluators


We’re Ningqin Wu and Amy Chen, both coordinators at the Asia-Pacific Finance and Development Institute (AFDI) in Shanghai, China. AFDI is a member of the CLEAR Initiative (Centers for Learning on Evaluation and Results) and hosts the East Asia CLEAR Center. CLEAR promotes evaluation capacity building through regional centers across the globe. This week’s blogs are by CLEAR members.

Much of the work at our center involves training, with participants coming from across the globe, but especially from China and other parts of Asia. We’d been looking for an easy way to stay in touch with participants before, during, and after courses. We turned to a popular instant messaging service – in our case WeChat – to serve as our main tool for connecting with course participants. Below we share more about how we use it.

WeChat – like many other similar apps – is a powerful mobile communication tool that connects users across the globe. It supports voice, video, photo, and text messages. We can chat in Chinese with our Chinese participants and in English with our international participants. We mainly use it to build “mobile learning communities” with the members of each of our courses, such as our annual course, SHIPDET – the Shanghai International Program for Development Evaluation Training.

  • Before courses, we send detailed instructions on how to install the app and invite participants to join. We send logistics details and reminders about deadlines. If participants have any questions, they can contact us directly – and the group can see the responses, which is helpful for everyone to read.
  • During the class, we and the instructors share files and other relevant information in our groups. This supports their learning after the training is over. The participants use it to plan social outings and share community info. We also share end-of-course evaluation links through the app so participants can complete course surveys.
  • After the courses and when participants return to work, we use WeChat to stay connected and promote upcoming courses among those alumni. We share resources – such as links to new publications or conferences – with the participants. We’ve found that if instructors are active users, the groups will tend to stay more connected.

Hot Tip:

  • Remember that not everyone has a smartphone or feels comfortable connecting in a group. So make provisions – such as sending information via email – for those who prefer not to participate through instant messaging.

Rad Resources:

  • Different apps are more popular in some regions than others, so explore what people in your region might already be using, such as WhatsApp, iMessage, and others.

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Leah Goldstein Moses, Founder and CEO of The Improve Group and adjunct professor at the University of Minnesota’s Humphrey School of Public Affairs. The Improve Group’s evaluation practice embeds capacity building – building the skills, systems, and tools organizations need to conduct evaluations. In the last few years, we have been teaching more and more in academic and community settings.

To put it bluntly, when I was a novice teacher, I stunk: my pace, content, and activities were off mark.

Hot Tips (learned through observation, trial and error, and wise advice from veteran teachers):

  1. Learn about your audience. In a session at a leadership conference years ago, I mistakenly assumed that attendees were nonprofit leaders who just needed an overview of evaluation so they could delegate tasks to others. Instead, attendees were managers who would be directly involved in implementing evaluations. If I had been better prepared, I would have designed activities more focused on practical tips for evaluation. Now, I learn about attendees ahead of time by talking to session organizers, interviewing prior participants, or surveying attendees.
  2. Set the learning agenda with participants to make sure the content is relevant. Depending on how the lessons are structured, I might ask them to share specific things they are working on via email ahead of time or have them start with an informal conversation in pairs about what they hope to learn. As ideas come in, I sort them (e.g., issues about design, data collection, and reporting) and adapt.
  3. Make expectations clear. At one of the very first workshops I taught, a woman right in front spent the entire session checking her email. It was a small group, and her inattention was noticeable and distracting. Now, at the beginning of each session, I lay the groundwork to help participants engage fully: scheduled breaks will be long enough to check email or return calls, and participants can step out if needed.
  4. Tell stories. You can give as much content as you want, but it needs to be clear how it can be applied to participants’ work. I’ve learned stories bring the content to life. Some are short vignettes to briefly illustrate something like non-response bias. Longer stories explore more complex issues, like cultural responsiveness. I encourage participants to share their own stories to help others understand what they are facing or model their successes.
  5. Use a rule of thirds. Students have different ways of hearing, processing, and thinking about using information, so I divide time equally among presentation, interactive large-group activities, and individual or small-group reflection. They also need time to rest; I build in a 15-minute break after 90 minutes of content.

Interested in further conversation? Join us at the conference: http://bit.ly/1MSzH95

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Awab on How to Gauge Learning of a Training

My name is Awab and I work as a Monitoring & Evaluation Specialist with the Higher Education Commission (HEC), Islamabad.

Gauging what participants learn from a training is always a challenge. Recently, we faced this challenge when the HEC trained about 1,600 top managers of Pakistani universities. The trainings were conducted through implementation partners (IPs). We asked the IPs to conduct pre- and post-training tests so that we would know how much the participants learned from these trainings. The IPs conducted the pre- and post-tests, analyzed the data, and reported the difference between the pre-test and post-test scores. Since post-test scores are always greater than pre-test scores (in some of our cases, by more than 100%), the analysis painted a rosy picture of the trainings and everything looked fine (as shown in Figure 1).

Figure 1: Comparison of Pre & Post-tests, shared by one of the IPs.


When the training reports were passed on to the M&E Unit, we rejected this analysis because it did not give us enough information to judge the quality of the training or to plan for the future.

Hot Tips: We started by asking the right questions. We told the IPs that, from the pre- and post-test analyses, we were interested in the answers to three questions: (i) what was the pre-existing learning level of the participants?; (ii) what is the net learning attributable to the training?; and (iii) what is the learning gap we need to bridge in future training?

Cool Tricks: The answers to the three questions could be obtained by analyzing the pre- and post-test scores in a very simple manner and putting the data in a stacked bar chart. We developed a model for the analysis and shared it with the IPs. The results were surprisingly interesting. The model gave a clear picture of the pre-existing learning, the net learning, and the learning gap. Thus, we were able not only to credit the IPs for the net learning attributable to them but also to hold them accountable for the learning gap and to plan for future training.

Figure 2: Learning-based Model of Pre & Post-tests analysis.

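To make the model concrete, here is a minimal sketch of the same analysis in Python (the post shares Excel sheets for this; the batch names, scores, and percentage scale below are illustrative assumptions, not HEC data). It computes the three quantities for each training and stacks them in a single bar chart, mirroring Figure 2.

```python
# Minimal sketch of the learning-based pre/post-test model.
# Assumption: scores are expressed as percentages of the maximum possible score.
import matplotlib.pyplot as plt

trainings = ["Batch A", "Batch B", "Batch C"]   # hypothetical training batches
pre_scores = [42, 55, 38]                        # average pre-test score (%)
post_scores = [71, 80, 64]                       # average post-test score (%)

pre_existing = pre_scores                                                  # (i) pre-existing learning
net_learning = [post - pre for pre, post in zip(pre_scores, post_scores)]  # (ii) net learning from training
learning_gap = [100 - post for post in post_scores]                        # (iii) gap still to bridge

fig, ax = plt.subplots()
ax.bar(trainings, pre_existing, label="Pre-existing learning")
ax.bar(trainings, net_learning, bottom=pre_existing, label="Net learning")
ax.bar(trainings, learning_gap,
       bottom=[p + n for p, n in zip(pre_existing, net_learning)],
       label="Learning gap")
ax.set_ylabel("Score (% of maximum)")
ax.set_title("Learning-based model of pre/post-test analysis")
ax.legend()
plt.show()
```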

Lessons Learned:

In evaluations, it is always good to ask yourself how you are going to use the data. Asking the right questions is half the solution.

For further details on how to gauge learning from a training, and to download the Excel sheets for data analysis based on this model, please click on the following links:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Urmy Shukla, Capacity Building Manager at the CLEAR South Asia Regional Center, hosted by J-PAL South Asia at the Institute for Financial Management and Research. Since our 2011 start with CLEAR, we’ve developed a wide range of activities aimed at improving monitoring and evaluation (M&E) capacity throughout the region, including 90 trainings for partners such as the Indian civil services, state governments, NGOs, donor agencies, and academic institutions. Each training requires a significant amount of planning and preparation, including a needs assessment of participants’ skills and the partners’ role in evaluation, the development of customized content, and delivery of the course itself. As such, we want to ensure that our trainings are meeting their objectives.

How do we know if our trainings are ‘working’?

As evaluators, we know that there are several steps to plan for, and later assess, effectiveness of our activities. Most importantly, we need to:

  • define a theory of change and/or results framework for program activities, focusing on desired outcomes
  • measure/assess the desired outcomes

For evaluation capacity development, these aren’t always easy to design and implement. But we’re taking several steps to assess the effectiveness of our trainings, including developing an organization-specific results framework and tracer surveys to track past training participants. We’re testing our approach as we go, and below we share some practical and strategic tips.

Hot Tips: For training tracer studies:

  • Clearly define training objectives from the outset. These objectives should go beyond skills gained to include what you hope participants will do after the training, within what is reasonably feasible in that time frame.
  • Develop a way to systematically organize your multiple objectives. This will make it easier for you to design future tracer surveys and needs assessments. We categorize our objectives by (a) partner type (those who do evaluations, use evaluations for decision-making, fund evaluations, and/or commission evaluations) and (b) knowledge, attitude, or behavior (KAB). From this, we have developed a database of tracer survey questions, which can be easily filtered for each type of training (a minimal sketch of such a filterable question bank follows this list).
  • Get partner buy-in early. Getting people to participate in a tracer study a year or two after the training can be hard, so give advance notice at the training that a tracer study will occur. Then have some contact with trainees – through newsletters, announcements, listservs – after the training to keep contact info current and so they remain familiar with you.
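As an illustration only – the partner-type and KAB categories mirror the post, but the specific questions, field names, and filter function below are invented for this sketch – such a question bank can be kept as a simple tagged table and filtered for each training:

```python
# Minimal sketch of a filterable tracer-survey question bank (entries are hypothetical).
# Each question is tagged by partner type and by knowledge/attitude/behavior (KAB).
question_bank = [
    {"partner": "does_evaluations", "kab": "behavior",
     "text": "Have you designed or run an evaluation since the training?"},
    {"partner": "uses_evaluations", "kab": "behavior",
     "text": "Have you used evaluation findings in a program decision?"},
    {"partner": "commissions_evaluations", "kab": "knowledge",
     "text": "Can you describe the key elements of evaluation terms of reference?"},
]

def questions_for(partner_type, kab=None):
    """Return the questions relevant to one training's audience."""
    return [q["text"] for q in question_bank
            if q["partner"] == partner_type and (kab is None or q["kab"] == kab)]

# e.g., a tracer survey for a training aimed at evaluation users, behavior questions only:
print(questions_for("uses_evaluations", kab="behavior"))
```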

Rad Resources:

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Boubacar Aw, Coordinator of the Regional Centers for Learning on Evaluation and Results (CLEAR) for Francophone Africa, hosted at the Centre Africain d’Etudes Superieures en Gestion (CESAG) in Dakar, Senegal. I am writing today to offer practical tips on how to develop teaching materials through a Training of Trainers (ToT) model. These tips are especially helpful when you are trying to develop teaching materials adapted to different contexts.

Lessons Learned Through Experience:

There are numerous teaching materials on M&E in English. The main challenge for Francophone Africa is to develop materials in French – there is work to do! It is not just about translation; it is about adapting materials to the Francophone African context, with “real example” case studies that make them useful to practitioners in the field. A great way to develop such materials is through a ToT approach.

Before a ToT program begins, teaching materials are prepared by a team of master trainers. During a ToT event, trainers use these materials for the training. At the same time, trainees are asked to divide themselves into groups according to the modules that interest them and to provide feedback on the teaching materials. Moreover, trainees share their own experiences in M&E and provide “real examples.” Such examples are incorporated into the teaching materials as case studies.

During the ToT event, a mock training is organized so that trainees can test the materials and case studies right away. When trainees go back to their own countries and workplaces, they can test the materials further and suggest any necessary adjustments to the trainers.

Hot Tips:

  • Involving trainees in developing teaching materials turns out to be a very effective way to adapt the materials to a Francophone African context.
  • Organizing a mock training during a ToT event is a good way to identify necessary modifications to teaching materials. Trainees also feel more at ease using case studies they themselves suggested during a mock training.
  • It is important to have one trainer responsible for harmonizing and finalizing the teaching materials!

Rad Resources:

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Michelle Paul Heelan, Ph.D., an evaluation specialist and organizational behavior consultant with ICF International. In my fifteen years assisting private corporations and public agencies to track indicators of organizational health, I’ve found that moving toward more sophisticated levels of training evaluation is challenging – but here at ICF we’ve identified effective strategies to measure the application of learning to on-the-job behavior. This post provides highlights of our approach.

A challenge in training evaluation is transcending organizations’ reliance on participant reactions and knowledge acquisition to assess the impact of training. Training is offered for a purpose beyond learning for learning’s sake – yet we often lack the data to show the extent to which that purpose has been achieved once participants return to their jobs. In our approach, we confront a key question: how do we (as empirically based evaluation experts) gather data that demonstrate the on-the-job impact of training?

Hot Tip #1: The work occurs during the training design phase – Nearly all of the essential steps in our approach happen during training design, or these steps must be reverse-engineered if the training is being acquired rather than designed in-house.

Hot Tip #2: A structured collaboration among three parties creates the foundation for the evaluation – Evaluation experts, instructional design experts, and organizational stakeholders (e.g., business unit leaders, training/development champions) must identify desired business goals and the employee behaviors hypothesized as necessary to achieve those business goals.  In practice, this is more difficult than it seems.

Hot Tip #3: Evaluation data collection instruments and learning objectives are developed in tandem – We craft learning objectives that, when achieved, can be demonstrated in a concrete, observable manner. During the design phase, we identify the behavioral variables expected to be affected by individuals’ participation for each of the learning objectives.

Hot Tip #4: The behavioral application of learning is best measured by multiple perspectives – For each variable, we create survey items for ratings from multiple perspectives (i.e., participants and at least one other relevant party, such as supervisors or peers). Using multiple perspectives to evaluate behavioral changes over time is an essential component of a robust evaluation methodology. Investigating the extent to which other parties assess a participant’s behavior similarly to their own self-assessment helps illuminate external factors in the organizational environment that affect training results.
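As a rough illustration of what this comparison can look like in analysis – a hedged sketch, with the rating scale, behavioral variable names, and scores invented rather than drawn from ICF’s instruments – self-ratings can be lined up against supervisor ratings for each behavioral variable, flagging large gaps for follow-up on environmental factors:

```python
# Sketch: compare self- vs. supervisor ratings for each behavioral variable
# (hypothetical 1-5 ratings collected some months after the training).
self_ratings = {"gives_constructive_feedback": 4.2,
                "applies_new_process": 3.8,
                "coaches_team_members": 4.5}
supervisor_ratings = {"gives_constructive_feedback": 3.6,
                      "applies_new_process": 3.9,
                      "coaches_team_members": 3.2}

for variable, self_score in self_ratings.items():
    other_score = supervisor_ratings[variable]
    gap = self_score - other_score
    flag = "  <- look for environmental factors" if abs(gap) >= 0.5 else ""
    print(f"{variable}: self={self_score:.1f}, supervisor={other_score:.1f}, gap={gap:+.1f}{flag}")
```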

Hot Tip #5: Training goals are paired with evaluation variables to ensure action-oriented results – This method also permits the goals of the training to drive what evaluation variables are measured, thereby maintaining clear linkages between each evaluation variable and specific training content elements.

Benefits of Our Approach:

  • Ensures evaluation is targeted at those business results of strategic importance to stakeholders
  • Isolates the most beneficial adjustments to training based on real-world application
  • Provides leadership with data directly useful for training budget decisions

Rad Resource: Interested in learning more?  Attend my presentation entitled “Essential Steps for Assessing Behavioral Impact of Training in Organizations” with colleagues Heather Johnson and Kate Harker at the upcoming AEA conference – October 19th, 1:00pm – 2:30pm in OakLawn (Multipaper Session 900).

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week with our colleagues in the BLP AEA Topical Interest Group. The contributions all this week to aea365 come from our BLP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. Want to learn more from Michelle and colleagues? They’ll be presenting as part of the Evaluation 2013 Conference Program, October 16-19 in Washington, DC.


Hi! I’m Mary Arnold, a program evaluator with the Extension 4-H Youth Development program at Oregon State University. This AEA 365 entry kicks off a week of postings by members of the Extension Education Evaluation TIG, which is starting its 32nd year! Extension is the outreach arm of the public Land-Grant universities and its mission is to provide research-based education programming to the public. In essence, we take the university to the people.

The evaluation needs of Extension are vast and complex. From evaluating small local programs, to large grant-funded endeavors, to calculating the public benefit of our programs, as an organization we are always learning about effective evaluation. Extension recently shared its organizational learning through a New Directions for Evaluation Issue devoted to evaluation in complex organizations, which could be useful to other such organizations.

Rad Resource: New Directions for Evaluation: Program Evaluation in a Complex Organizational System: Lessons Learned from Cooperative Extension, No. 120, 2008.

In my Extension role, I spend a great deal of time engaged in evaluation capacity building. Over the years I have discovered several important elements of capacity building that lead to success:

Hot Tip: Use a four-fold framework for building evaluation capacity:

  1. Develop and use logic models to ensure sound program planning and create evaluation plans. This helps everyone to be clear on the program, its intent, and outcomes.
  2. Provide one-on-one help to educators with their own evaluation projects. Learning is best accomplished when applied to a real project that means something to the learner.
  3. Facilitate small-group collaborations on a real project, allowing members to learn and practice new skills within the cycle of evaluation.
  4. Conduct larger-scale, multi-site evaluations for your organization that allow everyone to participate at some level.

Rad Resource: My experience using this evaluation capacity framework is detailed in an article in the June 2006 American Journal of Evaluation.

Rad Resource: Building Evaluation Capacity: 72 Activities for Teaching and Training by Hallie Preskill and Darlene Russ-Eft (2005 Sage Publications). This detailed book is my “go-to” whenever I plan a new evaluation capacity building training. The activities are detailed, creative, and engaging for adult learners of evaluation.

Rad Resource: The Logic Model Guidebook: Better Strategies for Great Results (2nd ed.) by Lisa Wyatt Knowlton and Cynthia C. Phillips (2013, Sage Publications). This book focuses on the application of logic models in the real world. Using more complex models in capacity building trainings has helped learners appreciate more deeply that logic modeling is more than a simple exercise without real meaning.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! My name is Laura Keene, owner of Keene Insights, a one-woman consulting shop based in sunny Los Angeles, California. Over the past nine years, I’ve worked as both an internal and an external evaluator in a variety of settings. For many organizations, especially smaller nonprofits, working with me is their first introduction to evaluation. The success of these projects hinges on good capacity building as I work to overcome their fears and give them the tools they need to make design decisions and use findings. (See Reid Zimmerman’s post for a great description of evaluation fears).

Sometimes this work requires sessions that are solely focused on training. Other times we move back and forth between learning, discussion, and decision making. In either case, I try to integrate activities that tap into adult-learning principles and actively engage the group as much as possible. This makes the training more effective and much more fun.

Hot Tips:

In the excitement of planning great activities, don’t forget to keep these basic training principles in mind:

  • Put yourself in the shoes of your participants. What is their background? What is their perspective on the topic? What do they know already? What are they expecting to gain from the training overall?
  • Keep the purpose of the activity in the forefront. What’s the main point? Why does it matter? All components should support the purpose.
  • Be as prepared as humanly possible. Knowing the activity inside and out gives you the flexibility to handle the unexpected and adapt where needed.

Rad Resource:

Building Evaluation Capacity by Hallie Preskill and Darlene Russ-Eft is a great source for activities and ideas. It has step-by-step instructions for activities that focus on everything from exploring the differences between evaluation and research to understanding and interpreting data. Handouts included!

For a more comprehensive training guide check out Bruce Klatt’s Ultimate Training Workshop Handbook.

The American Evaluation Association is celebrating Organizational Learning & Evaluation Capacity Building (OL-ECB) TIG Week with our colleagues in the OL-ECB AEA Topical Interest Group. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hey there. I’m Stephanie Evergreen, AEA’s eLearning Initiatives Director and general data communications geek. Susan Kistler has a family obligation this weekend, so I’m stepping in to share with you the newest developments in AEA’s Potent Presentations Initiative (p2i).


You’ve heard about p2i, right? It is a new initiative to help AEA members improve their presentation skills, particularly around delivering conference presentations. We come together once or twice a year to teach each other about our practices and processes, so shouldn’t we do everything we can to make it easy to learn from our presentations? That’s why p2i will feature online and in-person training before and during the annual conference around the three facets of presenting: message, design, and delivery.

We have just launched p2i.eval.org, which will be the hub of this activity.

Rad Resource: Our home page features our upcoming webinar-based training on how to prepare for and deliver an Ignite session. When you receive the proposal status notice for your Ignite session on July 3, head to our site to sign up for one of the two trainings, either on July 17 at 11:30am ET or July 26 at 4pm ET.

Rad Resource: Our first tool to help you rock your conference session is the Presentation Preparation Checklist. Download this PDF to find out what to prepare when, keep yourself on track, and minimize the last-minute rush many people experience leading up to a conference presentation. The checklist includes time frames specific to this year’s annual conference, October 22-28.

Rad Resource: During the conference we’ll provide a demonstration on research-based effective practices around slide design. But you don’t want to wait until then to begin working on your session slides. So we’ve released the handout for that demonstration already. Head to the p2i site to snag the Slide Design Guidelines (with extra tips for handouts, too). It covers how to handle fonts, graphics, colors, and arrangement and includes links for step-by-step instructions (we’ll add links each month) and awesome extensions of these guidelines from your AEA colleagues.


I’m Susan Kistler, the American Evaluation Association’s Executive Director, and aea365’s Saturday contributor. Our eStudy director just announced the lineup for January and February!

Lesson Learned: AEA’s eStudy offerings are online, real-time, webinar-based training right from your desktop with no need to fly, or get lost in traffic, or lose extra time away from work, or even change out of your PJs if you are so inclined. Facilitators are chosen from among the best of those offering AEA workshops and the topics are ones that are most sought-after by registrants.

Hot Tip: Registration is open to both AEA members and nonmembers, and students receive significantly discounted registration rates. For one registration fee, you may attend one or all of the sessions for a particular workshop. Here’s the January/February lineup.

Social Network Analysis
Tuesdays January 10, 17, 24, & 31, 1:00 – 2:30 PM Eastern Time
This eStudy provides an introduction to social network analysis theories, concepts, and applications within the context of evaluation, including network concepts, methods, and the software that provides for analysis of network properties. We’ll use real world examples and discussion to facilitate a better understanding of network structure, function and data collection.
Presenter: Kimberly Fredericks conducts social network analyses in her role as associate professor at The Sage Colleges. Kim is a regular author and speaker, including co-editing a New Directions for Evaluation issue on Social Network Analysis in Program Evaluation.
Cost: $150 Members, $200 Nonmembers, $80 Students

Applications of Correlation and Regression: Mediation, Moderation, and More
Wednesdays February 8, 15, 22, & 29, 1:00 – 2:30 PM Eastern Time
Regression analyses are used to describe multivariate relationships, test theories, make predictions, and model relationships. We’ll explore data issues that may impact correlations and regression, selecting appropriate models, preparing data for analysis, running SPSS analyses, interpreting results, and presenting findings to a nontechnical audience.
Presenter: Dale Berger of Claremont Graduate University is a lauded teacher of workshops and classes in statistical methods and recipient of the outstanding teaching award from the Western Psychological Association.
Cost: $150 Members, $200 Nonmembers, $80 Students

Empowerment Evaluation
Tuesday & Thursday February 21 & 23, 3:00 – 4:30 PM Eastern Time
Empowerment evaluation builds program capacity, fosters program improvement, and produces outcomes. It teaches people how to help themselves by learning how to evaluate their programs. This eStudy will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.
Presenter: David Fetterman is president and CEO of Fetterman & Associates. He is the founder of empowerment evaluation and the author of over 10 books including Empowerment Evaluation Principles in Practice with Abraham Wandersman.
Cost: $75 Members, $100 Nonmembers, $40 Students

See the full descriptions and register for one, two, or all three online here.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association.

