AEA365 | A Tip-a-Day by and for Evaluators


Sheena Horton

Do you consider meetings to be where time and productivity go to die? You’re not alone! We have all been in ineffective, mind-numbing meetings, but both leaders and attendees are responsible for a meeting’s success. I’m Sheena Horton, Consultant/Project Manager for MGT Consulting Group and President of the Southeast Evaluation Association, and I have some tips for how we can all contribute to having more mindful, productive meetings.


Hot Tips and Rad Resources – As a Leader:

  • Before scheduling a meeting, consider whether you need one. Is an email more appropriate for the topic or audience? Have a clear purpose and expectations for what is to be gained from the meeting.
  • Determine who needs to attend to accomplish the desired goals. The fewer, the better! Small groups encourage a more conversational tone and greater engagement from all attendees than large groups do.
  • Avoid scheduling meetings at the last minute, and avoid long meetings. Allow time for yourself and attendees to prepare for the meeting. Be considerate of your attendees’ other commitments. Keep meetings short; 35-45 minutes is ideal. People become restless, tired, and disengaged during longer meetings. Your meeting is too long if you need breaks!
  • Always provide an agenda to attendees at least 1-2 days before the meeting. Do not delegate the meeting planning; it’s your meeting – own it! Create a simple bulleted agenda or download a free template from Microsoft (https://templates.office.com/en-us/Agendas) or TidyForm (https://www.tidyform.com/agenda-template.html), or use planning tools like Agreedo (https://www.agreedo.com). List agenda items as questions to encourage brainstorming, and avoid packing in too many items. After sharing the agenda, ask attendees if they have any items to add or questions. This will help you prepare answers ahead of the meeting.
  • Designate a notetaker. A record is vital for ensuring what was accomplished during a meeting is not forgotten or lost and is useful for upholding accountability regarding any assigned tasks. Minutes should be distributed to attendees promptly after the meeting.
  • Be mindful of meeting start/stop times and moderate as appropriate. Stick to your agenda. Set expectations for the meeting’s purpose and for how the meeting will be conducted. Robert’s Rules of Order (http://www.robertsrules.org) is a common resource used to govern meeting procedures. Allow for creativity and interaction among attendees, such as mind mapping using post-it notes or online tools like MindMeister (https://www.mindmeister.com).
  • Determine outcomes for agenda items. Resolve one agenda item before moving to the next. Determine the who, what, and when for assigned tasks. Follow up on action items after the meeting and track progress. Smartsheet offers great templates (https://www.smartsheet.com/15-free-task-list-templates) for tracking tasks.

Hot Tips and Rad Resources – As an Attendee:

  • Respond promptly to meeting invitations, review the agenda, and arrive on time. Consider agenda items carefully, brainstorm what you can contribute, and send questions to your meeting leader at least a day before the meeting.
  • Avoid distractions during meetings and be engaged! Make the time spent worthwhile for everyone.
  • Follow through on assigned action items. Adhere to deadlines and keep your meeting leader informed of progress.

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


John Murphy

Greetings, fellow evaluation enthusiasts! My name is John Murphy and I am an Evaluation Associate on the Evaluation Research and Measurement team at Cincinnati Children’s Hospital Medical Center. Our team, historically focused on program evaluation, was placed into the driver’s seat of the employee engagement surveys after a departmental reorganization. Our strengths in problem-solving, data literacy, and survey design generated this opportunity for us, but we lacked what The Power of People calls “HR sixth sense”, the ability to intuit the most relevant variables out of the myriad available to human resources (HR) professionals. That, along with the need to understand the different functions and relationships within the human resources department of a large organization, made for a challenging period of growth for our team.


After six months of struggle and success, we are sharing a few discoveries that might help others prioritize how they use their valuable resources of time and energy.

Lesson Learned 1: Find “thought leaders” with institutional and HR experience who can open your mind to the intricacies of employee experience.

This can be simple, like having lunch with a veteran manager, or more complicated, like a series of roundtables complete with PowerPoint or Prezi presentations and flipcharts. As long as it helps you find out what drives engagement, consider it time well spent.

Lesson Learned 2: Resist the desire to change everything and “make it your own.” Instead, focus on understanding the reasoning behind decisions that have been made.


We inherited the employee engagement survey process from knowledgeable staff members, and while we were briefly tempted to make widespread changes, we resisted. Our predecessors had many great processes in place. Understanding those processes and making incremental changes saved us the time of vetting new processes and introducing them to a large organization. The thought leaders we cultivated gave us the perspective we needed to see how past decisions affected the organization as a whole. For example, before we decided to revise what seemed to be an extraneous question, we talked to organization leaders and found that the results from that item were being used in decision-making for one part of the hospital.


Heather Esper and Yaquta Fatehi

Hello, this is Heather Esper and Yaquta Fatehi of the Performance Measurement Institute at the University of Michigan (http://wdi.umich.edu/about). Our team specializes in performance measurement to improve organizations’ effectiveness, scalability, and sustainability, and to create more value for their stakeholders in low- and middle-income countries.

The William Davidson Institute (WDI) uses social well-being indicators to address business challenges and improve organizational learning. We believe assessing multidimensional outcomes of well-being helps inform better internal decision making within businesses. These multidimensional outcomes move beyond economic indicators such as income and savings to include capability and relationship well-being indicators. Capability refers to constructs such as the individual’s health, agency, self-efficacy, and self-esteem. Relationship well-being refers to changes in the individual’s role in the family and community as well as the quality of the local physical environment.


For example, we conducted an impact assessment of a last mile distribution venture and focused on understanding the relationship between business and social outcomes. Through a mixed methods design, we found a relationship between employee self-efficacy (one’s belief in one’s ability to do certain tasks) and two major challenges the venture was facing: turnover and sales. We recommended the venture augment its current employee trainings to increase self-efficacy, which we hoped would in turn increase retention and improve sales. Based on this finding, we also recommended that certain high-priority social well-being indicators, such as self-efficacy, be monitored quarterly alongside key business performance indicators. In another engagement, we relied heavily on the organization’s proposed theory of change to offer examples and solutions for tracking and linking socio-economic and business impacts. For example, in one enterprise, we found that its ability to retain current micro-distributors and recruit new ones could be influenced by sharing the improvement in standard of living experienced by the micro-distributors’ children.
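To make the quantitative strand of such a design concrete, here is a minimal sketch of how the self-efficacy relationships might be tested, assuming a hypothetical per-employee dataset; the file and column names (employee_survey.csv, self_efficacy, retained, monthly_sales) are illustrative, not WDI’s actual instruments:

```python
# A sketch only: hypothetical data and column names, not WDI's instruments.
import pandas as pd
from scipy import stats

df = pd.read_csv("employee_survey.csv")  # one row per employee (hypothetical)

# Point-biserial correlation: self-efficacy score vs. retention (0/1)
r_ret, p_ret = stats.pointbiserialr(df["retained"], df["self_efficacy"])

# Pearson correlation: self-efficacy score vs. monthly sales
r_sales, p_sales = stats.pearsonr(df["self_efficacy"], df["monthly_sales"])

print(f"self-efficacy vs. retention: r={r_ret:.2f}, p={p_ret:.3f}")
print(f"self-efficacy vs. sales:     r={r_sales:.2f}, p={p_sales:.3f}")
```

In practice the qualitative strand would guide which relationships to test and how to interpret them; the correlations alone establish association, not cause.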

Hot Tip: Social well-being data can provide insights for organizational learning. Businesses can use ‘pause and reflect’ sessions to examine such data across different teams to draw new insights, as well as discuss challenges and identify lessons learned related to collecting such data to enhance efficiency and rigor.

Lesson Learned: Embedding social metrics into existing processes often requires a monitoring and evaluation champion (i.e., a senior staff member) at the organization to help facilitate social metric data collection and utilization.

Rad Resources:

  • Webinar with guest Grameen Foundation (http://wdi.umich.edu/knowledge/multi-dimensional-impacts-enhancing-poverty-alleviation-performance-the-importance-of-implementing-multidimensional-metrics) on the value of capturing multi-dimensional poverty outcomes
  • Webinar with guest Solar Aid (http://wdi.umich.edu/knowledge/enhancing-poverty-alleviation-performance-amplifying-the-voice-of-local-stakeholders) on qualitative methods to capture multi-dimensional poverty outcomes
  • Webinar with guest Danone Ecosystem Fund (wdi.umich.edu/knowledge/quantitative-methodology-enhancing-poverty-alleviation-performance-quantifying-changes-experienced-by-local-stakeholders) on quantitative methods to capture multi-dimensional poverty outcomes
  • Written for UNICEF, this guide (http://devinfolive.info/impact_evaluation/img/downloads/Theory_of_Change_ENG.pdf) explains the Theory of Change tool and its use. Better Evaluation shares this guide (http://www.betterevaluation.org/en/resources/guide/facilitators_sourcebook_theory_of_change) on how to conduct a Theory of Change workshop, with a section on using the tool to select indicators

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, my name is Erika Cooksey. I am an internal evaluator at Cincinnati Children’s. I’ve been a member of AEA since 2011, and I co-chair the Social Work TIG.

A change in leadership can be both a positive and a stressful experience for the systems it affects. A leader who is new to an organization brings new energy, ideas, and priorities, including philosophies about how data should be used and presented. Internal evaluators with organizational history often have established and vetted methods of data collection and reporting. In a perfect world these methods would align with the needs of the new leader and they would move forward together in harmony. In most work environments, however, a change in leadership requires a change in the way evaluators do business. Navigating the waters of change can be complicated, but establishing a good working relationship with new leadership early on makes working through change more manageable.

Hot Tip: Learn what they value

Meet with the new leader to gain an understanding of what’s important to them. Assess where they are on the spectrum of understanding evaluation. Ask specific questions about their background; results and reports that were useful at their last job; how they used data to make decisions in the past; and what they need to know in the next 60-90 days to understand the work ahead. Learning more about how they relate to data will help you assess the current state of your work and whether you should consider other methods of data collection and reporting.


Hot Tip: Approach change with an open mind

It’s important that evaluators critically assess the need for change. This process requires time, focus, and input from others. Create an environment that fosters open and honest discussion about your work. Positive feedback and accolades are great, yet it’s often the feedback that wasn’t easy to receive that is the most valuable.

Some behavioral tips for receiving feedback: encourage others to voice concerns and suggestions, support a questioning attitude, and be a gracious recipient of feedback. If making a change is the right way to go, consider providing prototypes to the new leader and other stakeholders to get feedback.

Rad Resource: Maintain professional standards

Refer to AEA’s Guiding Principles as you review your methods of evaluation. Maintain high standards and ensure that your data collection practices preserve credibility and integrity. Think critically about how changes in the evaluative model will impact the business. Make decisions regarding needed changes based on your analysis. AEA reminds us to “continually seek to maintain and improve their (our) competencies, in order to provide the highest level of performance in their evaluations.” While change is sometimes difficult, it can lead to a better and more efficient approach.

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Sarah Stawiski

My name is Sarah Stawiski, and I am a Senior Researcher and Program Evaluator at the Center for Creative Leadership (CCL). At CCL, we provide leadership solutions to individuals, organizations, and communities through programs, coaching, and other services. We make a promise to our clients – we will help them achieve results. The results that matter to our clients may range from becoming a more effective communicator to building stronger leadership networks in communities to transforming an organizational culture. When evaluating these programs, results certainly matter, but to know how best to achieve these results, we have to remember that context also matters.


When we work with individual leaders, what they do after they complete a program or coaching engagement is critical. Will they only remember the friends they made and the fun they had in the program, or will they actually go back and apply what they’ve learned to make meaningful changes in their leadership practices? Many factors related to individual differences and program design will determine how much of the learning experience “sticks.” However, the work environment they return to is very important and should not be left out of the equation. There is extensive literature on the importance of context when it comes to learning transfer in general. A review suggests there are multiple dimensions of work context that have been empirically connected to the extent to which learners can apply what they learn and make lasting changes in their behavior (e.g., psychological safety, development climate, learning transfer climate). At CCL, one aspect of context that can influence the extent to which a program “works” and leaders actually make positive changes to their behavior is supervisor support for development.


Rad Resource: My colleagues Steve Young, Heather Champion, Michael Raper and Phillip Braddy recently published a white paper called Adding More Fuel to the Fire: How Bosses Can Make or Break Leadership Development Programs (https://www.ccl.org/wp-content/uploads/2017/03/how-bosses-can-make-or-break-leadership-development.pdf) showing that in some of our leadership programs, the more participants felt their development as a leader was supported by their supervisors back at work, the more they were able to apply what they learned and develop as leaders.


More recently, we have focused on other aspects of context that we know or suspect to be important to promoting the “stickiness” of learning, such as the extent to which individuals support and challenge one another and the extent to which senior leaders are perceived as making leadership development a priority. Collecting this additional data allows us to have a more holistic conversation about what our clients can do to get the most benefit from their investment in leadership development.
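For readers who want to see the shape of such an analysis, here is a minimal sketch, assuming a hypothetical follow-up survey with one row per program participant; the variable names (applied_learning, supervisor_support, peer_support, senior_leader_priority) are illustrative, not CCL’s actual measures:

```python
# A sketch only: hypothetical variables, not CCL's actual model or data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("program_followup.csv")  # hypothetical follow-up survey

# Regress self-reported application of learning on context measures
model = smf.ols(
    "applied_learning ~ supervisor_support + peer_support + senior_leader_priority",
    data=df,
).fit()
print(model.summary())
```

Even a simple model like this can anchor that more holistic conversation with clients: the coefficients show which aspects of context travel with applied learning, and which do not.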


Lesson Learned: Collecting data about context opens the doors to better conversations with clients about how to strengthen the effectiveness of leadership programs.


The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Jennifer Dewey, and I chair the Business, Leadership, and Performance TIG, working closely with Kristy Moster, the TIG’s Program Chair. We’re highlighting key issues important to our members this week.

These issues – leadership, business priorities, knowledge management, and workforce engagement – address categories in the Baldrige Excellence Framework (http://www.nist.gov/baldrige/publications/baldrige-excellence-framework), whose purpose is to help all organizations determine: 1) Is my organization doing as well as it could? 2) How do I know? 3) What and how should my organization improve or change?

The Framework includes the Criteria for Performance Excellence and Core Values and Concepts. The Framework promotes a systems perspective, i.e., managing all the components of an organization as a unified whole to achieve its mission, ongoing success and performance excellence.

[Figure: Baldrige Excellence Framework]

The Criteria for Performance Excellence includes an Organizational Profile that describes an organization’s background and sets the context for the methods used to accomplish work and resulting outcomes. The leadership process triad (Leadership, Strategy, and Customers) emphasizes a leadership focus on strategy and customers. The results triad (Workforce, Operations, and Results) includes workforce-focused processes, key operational processes, and the performance results they generate. “Integration” at the center of the figure indicates the system elements are interrelated. The system foundation (Measurement, Analysis, and Knowledge Management) is critical to a fact-based, knowledge-driven, agile system to improve performance and competitiveness. All actions lead to Results related to products and processes, customers, workforce, leadership and governance, and financial and market outcomes.

Hot Tip: The Criteria does not prescribe how users should structure their organization or its operations. Through the Organizational Profile, users describe what is important to them, such as their mission, vision, and values; customer, supplier, and partner relationships; and regulatory requirements, competitive environment, and strategic context.

A set of Core Values and Concepts, starting with a systems perspective supported by visionary leadership, supports the Criteria. The next seven values are the hows of an effective system. The final two values (ethics and transparency, and delivering value and results) are the outcomes of using the Baldrige Excellence Framework.


Created by Congress in 1987, the Baldrige Performance Excellence Program is managed by the National Institute of Standards and Technology (NIST), an agency of the Department of Commerce. The Criteria has expanded to cover the Business/Non-Profit, Education, and Healthcare sectors.

Rad Resource: The Baldrige Excellence Framework and its supporting materials are available from NIST (http://www.nist.gov/baldrige/publications/baldrige-excellence-framework).

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week. All posts this week are contributed by members of the BLP Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Kylie Hutchinson.  I am an independent evaluation consultant with Community Solutions Planning & Evaluation.  In addition to evaluation consulting and capacity building, I tweet at @EvaluationMaven and co-host the monthly evaluation podcast Adventures in Evaluation, along with my colleague @JamesWCoyle.

When I started out in evaluation 26 years ago, I was focused on being a good methodologist and statistician.  After deciding to work primarily with NGOs I learned the importance of being a good program planner.  Employing a participatory approach required me to become a competent facilitator and consensus-builder.  These days, the increased emphasis on utilization and data visualization is forcing me to upgrade my skills in communications and graphic design.  New developments in mobile data collection are making me improve my technical skills.  A recent foray into development evaluation has taught me the important role that a knowledge manager plays in evaluation. Finally, we are starting to understand evaluation capacity development as a process rather than a product, so now I need expertise in organizational development, change management, and the behavioral sciences.  Whoa.

Don’t get me wrong, I’m not complaining.  Every day I wake up and think how lucky I am to have picked such a diverse career as evaluation.  But with all these responsibilities on my plate, my toolbox is starting to get full, and it sometimes keeps me awake at night.  How can I manage to be effective at all of these things?  Should I worry about being a Jack of all trades, master of none?

Hot Tip:  You don’t have to do it all.  Determine your strengths and outsource your weaknesses. Pick several areas of specialization and ask for assistance with the others.  This help may come in the form of other colleagues or departments.  For example, if you think you need help with change management, sub-contract an organizational development consultant to your team.  If you work in an organization with a communications or graphic design department, don’t forget to call on their expertise when you need it.

Hot Tip:  Take baby steps.  If you want to practice more innovative reporting, don’t assume you have to become an expert in communication strategies overnight. Select one or two new skills you want to develop annually and pick away at those.

Hot Tip:  If you can, strategically select evaluations that will expose you to a new desired area, e.g., mobile data collection or a new software package.

Rad Resource:  Even if you’re not Canadian, the Canadian Evaluation Society’s Competencies for Canadian Evaluation Practice provide a great basis from which to reflect on your skills.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings!  We are Tom McQuiston (USW Tony Mazzocchi Center) and Tobi Mae Lippin and Kristin Bradley-Bull (New Perspectives Consulting Group).  We have collaborated for over a decade on participatory evaluation and assessment projects for the United Steelworkers (labor union).  And we have grappled mightily with how to complete high-quality data analysis and interpretation in participatory ways.

Hot Tip: Carefully determine up front what degree of full evaluation team participation there will be in data analysis.  Some practical considerations include:  the amount of team time, energy, interest, and analysis expertise that is available; the levels of data analysis being completed; the degree of project focus on team capacity-building; and the project budget and timeline.  How these and other considerations get weighed is, of course, also a product of the values undergirding your work and the project.

Hot Tip: Consider preparing an intermediate data report (a.k.a. “half-baked” report) that streamlines the analysis process for the full team.  Before the full team dives in, we:  review the raw quantitative data; run preliminary cross-tabs and statistical tests; refine the data report content to include only the — to us — most noteworthy data; remove extraneous columns spit out of SPSS; and assemble the tables that should be analyzed together — along with relevant qualitative data — into reasonably-sized thematic chunks for the team.
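As a rough illustration of that streamlining, here is a minimal sketch in Python/pandas, assuming a hypothetical SPSS export and illustrative column names (survey.sav, job_category, hazard_exposure); the actual workflow described above runs these steps in SPSS itself:

```python
# A sketch of "half-baked" report prep: hypothetical file and column names.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_spss("survey.sav")  # reading SPSS files requires pyreadstat

# Preliminary cross-tab of two illustrative survey variables
xtab = pd.crosstab(df["job_category"], df["hazard_exposure"])

# Quick chi-square test to flag tables worth the full team's attention
chi2, p, dof, expected = chi2_contingency(xtab)
if p < 0.05:
    # Keep only the noteworthy table, stripped of extraneous output,
    # for assembly into a thematic chunk with related qualitative data
    xtab.to_csv("halfbaked_job_by_hazard.csv")
```

The point is not the statistics but the triage: the team sees a handful of noteworthy, readable tables instead of raw output.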

Hot Tip: Team time is a precious commodity, so well-planned analysis/interpretation meetings are essential.  Some keys to success include:

  1. Invest in building the capacity of all team members.  We do this through a reciprocal process of us training other team members in, say, reading a frequency or cross-tab table or coding qualitative data and of them training us in the realities of what we are all studying.
  2. Determine time- and complexity-equivalent analyses that sub-teams can work on simultaneously.  Plan to have the full team thoughtfully review sub-team work.
  3. Stay open to shifting in response to the team’s expertise and needs.  An empowered team will guide the process in ever-evolving ways.

Some examples of tools we have developed — yes, you, too, can use Legos™ in your work — can be found at: http://newperspectivesinc.org/resources.

We never fail to have many moments of “a-ha,” “what now” and “wow” in each participatory process.  We wish the same for you.

The American Evaluation Association is celebrating Best of aea365 week. The contributions all this week are reposts of great aea365 blogs from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Michelle Baron, an Independent Evaluation Strategist. In my work in higher education, I’ve encountered a mixture of evaluation champions and critics. Today I’d like to address the importance of strategic planning in building a culture of evaluation.

Strategic planning is considered by many to be an organizational road map: it outlines the organizational vision and mission, establishes clear and attainable objectives and goals, and develops processes for how to achieve them. Strategic planning and evaluation go hand in hand in moving the organization and its programs forward to benefit its stakeholders. Strategic planning is simply crucial to the evaluation process: without a road map of criteria, standards, and goals, it’s almost impossible to achieve desired success.

Evaluators have a unique role in helping organizations with both ends of the spectrum: creating a foundation through strategic planning, and then conducting evaluations to examine and monitor progress.

Hot Tip #1: Start at the top. Buy-in from top management for strategic planning is of the utmost importance for its success.

Hot Tip #2: Conduct a SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats) of the entity or its programs/services. Doing so not only exposes people to a variety of ideas and questions to consider, but can also indicate the level of support for those topics.

Cool Trick: Brainstorming sessions are often an excellent starting point for the organization itself or a smaller group within it. The evaluator or a designated member of the organization can facilitate the discussion by developing questions beforehand that may serve as prompts, such as those dealing with objectives, goals, and resources.

Rad Resource #1: Strategic Planning for Public & Nonprofit Organizations by John Bryson, and related books by the same author, provide the groundwork and tools necessary for organizations to develop and sustain their strategic planning process.

Rad Resource #2: The Fifth Discipline: The Art and Practice of the Learning Organization by Peter Senge helps leaders establish the foundation and philosophy behind strategic planning, and helps them develop their long-term thinking for organizational growth.

With these tools and resources, evaluators may be better prepared to assist organizations in strategic planning, and better positioned to build support for, and improve the effectiveness of, the evaluations they conduct for those organizations.

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week with our colleagues in the BLP AEA Topical Interest Group. The contributions all this week to aea365 come from our BLP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Michelle Paul Heelan, Ph.D. I am an evaluation specialist and organizational behavior consultant with ICF International.  In my fifteen years assisting private corporations and public agencies to track indicators of organizational health, I’ve found that moving towards more sophisticated levels of training evaluation is challenging – but here at ICF we’ve identified effective strategies to measure the application of learning to on-the-job behavior.  This post provides highlights of our approach.

A challenge in training evaluation is transcending organizations’ reliance on participant reactions and knowledge acquisition to assess the impact of training.  Training is offered for a purpose beyond learning for learning’s sake – yet we often lack data that show the extent to which that purpose has been achieved once participants return to their jobs.  In our approach, we confront a key question: how do we, as empirically-based evaluation experts, gather data that demonstrate the on-the-job impact of training?

Hot Tip #1: The work occurs during the training design phase – Nearly all essential steps of our approach happen during training design, or these steps must be reverse-engineered if one is acquiring training.

Hot Tip #2: A structured collaboration among three parties creates the foundation for the evaluation – Evaluation experts, instructional design experts, and organizational stakeholders (e.g., business unit leaders, training/development champions) must identify desired business goals and the employee behaviors hypothesized as necessary to achieve those business goals.  In practice, this is more difficult than it seems.

Hot Tip #3: Evaluation data collection instruments and learning objectives are developed in tandem – We craft learning objectives that, when achieved, can be demonstrated in a concrete, observable manner. During the design phase, for each learning objective we identify the behavioral variables expected to be affected by individuals’ participation.

Hot Tip #4: The behavioral application of learning is best measured by multiple perspectives – For each variable, we create survey items for ratings from multiple perspectives (i.e., participants and at least one other relevant party, such as supervisors or peers). Using multiple perspectives to evaluate behavioral changes over time is an essential component of a robust evaluation methodology. Investigating the extent to which other parties assess a participant’s behavior similarly to their own self-assessment helps illuminate external factors in the organizational environment that affect training results.
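One way such self-other comparisons might be operationalized is sketched below, with hypothetical column names (self_pre, supervisor_pre, and so on); the actual instruments and scales are ICF’s own and are not shown in this post:

```python
# A sketch only: hypothetical column names and an assumed 1-5 rating scale.
import pandas as pd

df = pd.read_csv("training_ratings.csv")  # one row per participant

# Gap between self-rating and supervisor rating on the same behavior,
# measured before and after the training
df["gap_pre"] = df["self_pre"] - df["supervisor_pre"]
df["gap_post"] = df["self_post"] - df["supervisor_post"]

# Large post-training disagreement can flag environmental factors
# in the organization that are dampening (or inflating) the results
flagged = df[df["gap_post"].abs() >= 1.5]
print(flagged[["participant_id", "gap_pre", "gap_post"]])
```

A persistent self-supervisor gap does not say who is right; it says where to look.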

Hot Tip #5: Training goals are paired with evaluation variables to ensure action-oriented results – This method also permits the goals of the training to drive what evaluation variables are measured, thereby maintaining clear linkages between each evaluation variable and specific training content elements.

Benefits of Our Approach:

  • Ensures evaluation is targeted at those business results of strategic importance to stakeholders
  • Isolates the most beneficial adjustments to training based on real-world application
  • Provides leadership with data directly useful for training budget decisions

Rad Resource: Interested in learning more?  Attend my presentation entitled “Essential Steps for Assessing Behavioral Impact of Training in Organizations” with colleagues Heather Johnson and Kate Harker at the upcoming AEA conference – October 19th, 1:00pm – 2:30pm in OakLawn (Multipaper Session 900).

The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week with our colleagues in the BLP AEA Topical Interest Group. The contributions all this week to aea365 come from our BLP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. Want to learn more from Michelle and colleagues? They’ll be presenting as part of the Evaluation 2013 Conference Program, October 16-19 in Washington, DC.
