AEA365 | A Tip-a-Day by and for Evaluators

TAG | quality

Greetings! I am Carla Forrest, a staff member and Lean Six Sigma Black Belt at Sandia National Laboratories, Albuquerque, New Mexico. My professional work involves configuration management of scientific and engineering knowledge and information. My passion, however, lies in using appreciative approaches to improve workplace performance.

Rad Resource

Recently I read “The 5 Languages of Appreciation in the Workplace” by Dr. Gary Chapman and Dr. Paul White. The authors categorize the five appreciative languages as: (1) words of affirmation; (2) quality time; (3) acts of service; (4) tangible gifts; and (5) physical touch. In the workplace, we often overlook the impact that appreciative inquiry and language have on organizational and individual performance. Authentic appreciation, when expressed in the primary appreciative language of the individual, can be a strong motivator, trust builder, and empowering influence, often uplifting the individual and organization into high performance.

Hot Tip

Appreciation is not recognition or reward. The focus of appreciation is intrinsic; the focus of recognition and reward is extrinsic. Organizational reward and recognition programs focus on performance, while appreciation is personally meaningful, focusing on who a person is. The typical “one size fits all” reward and recognition program is usually managerially directed and impersonal, often breeding skepticism about the genuineness of the leader’s intentions. The ultimate downside to the reward/recognition approach is the cost involved. Motivating through authentic appreciation carries no financial cost, but is truly priceless!

In what ways can leaders apply appreciative approaches to transform relationships, attitudes, and performance in the workplace?


The American Evaluation Association is celebrating Business, Leadership and Performance (BLP) TIG Week with our colleagues in the BLP AEA Topical Interest Group. The contributions all this week to aea365 come from our BLP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


· · ·

Greetings from the Midwest! We are Jenell Holstead (University of Wisconsin-Green Bay) and Mindy Hightower King (Indiana University). For the past eight years, we have evaluated 21st Century Community Learning Centers after school programs, both at the local level and as statewide initiatives in Indiana and Kentucky. This post discusses best practices for evaluating out-of-school time programs intended to boost academic achievement.

Hot Tips for an Effective Out-of-School Time Program Evaluation

Hot Tip: Importance of “Point of Service” Practices. External evaluations of out-of-school time programs often focus on outcomes such as grades, test scores, and survey data. Such evaluations often place less emphasis on the practices that influence the after school environment and the program activities youth experience—the “point of service” (POS) aspects of after school quality. However, after school program staff can use evaluations of POS practices to systematically review the quality of their efforts and to facilitate discussion on how to enhance them.

Hot Tip: Role of Self-Assessment. In addition to external evaluation, it is important for program staff to engage in ongoing self-assessment processes that include POS program elements. External evaluators can help facilitate this process and provide constructive feedback and technical support throughout. When program staff have ownership of the evaluation and systematically review the quality of their after school program, enhancements to the program are more likely to be implemented.

Hot Tip: Evaluating Academic POS Elements. Often when assessing POS practices, either externally or through the self-assessment process, the focus of the assessment tends to be on youth development principles. Such principles might include interactions among youth and staff, safety, skill-building opportunities, social norms, and program routine or structure (Granger et al., 2007). Although such factors are important to students’ overall development, it is also important to assess program elements that have been found to improve academic achievement, as identified in the IES practice guide (Beckett et al., 2009). Elements identified by Beckett et al. that can be observed at the POS include:

  • Aligning the out-of-school time program academically with the school day
  • Maximizing student participation and attendance
  • Adapting instruction to individual and small-group needs
  • Providing engaging learning experiences

Therefore, evaluators should determine ways to assess these POS practices, both externally and through self-assessment.
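For illustration only, here is a minimal sketch of how a staff self-assessment rubric covering the four Beckett et al. elements might be tallied and flagged for discussion. The element names come from the list above, but the 1-4 rating scale, the data structure, the observer roles, and the 3.0 discussion threshold are assumptions for the example, not anything prescribed by the IES practice guide.

```python
# Hypothetical self-assessment rubric tally for the four academic POS elements.
# Ratings use an assumed 1-4 scale (1 = not evident, 4 = consistently evident).

from statistics import mean

POS_ELEMENTS = [
    "Alignment with the school day",
    "Student participation and attendance",
    "Instruction adapted to individual/small-group needs",
    "Engaging learning experiences",
]

def summarize_ratings(ratings_by_observer):
    """Average each element's ratings across observers and flag low scores.

    ratings_by_observer: dict mapping observer name -> dict of element -> rating (1-4).
    Returns a dict of element -> (mean rating, flag), where the flag marks means below 3.0.
    """
    summary = {}
    for element in POS_ELEMENTS:
        scores = [obs[element] for obs in ratings_by_observer.values() if element in obs]
        avg = mean(scores) if scores else None
        summary[element] = (avg, avg is not None and avg < 3.0)
    return summary

# Example usage with made-up ratings from two staff self-assessments:
ratings = {
    "site_coordinator": dict(zip(POS_ELEMENTS, [3, 2, 4, 3])),
    "lead_teacher": dict(zip(POS_ELEMENTS, [4, 2, 3, 3])),
}
for element, (avg, needs_discussion) in summarize_ratings(ratings).items():
    note = "  <- discuss at next staff meeting" if needs_discussion else ""
    print(f"{element}: mean={avg:.1f}{note}")
```

The point of a tally like this is not the arithmetic; it is giving program staff a concrete, recurring prompt for discussing which POS practices need attention before outcomes are measured.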

By using these strategies, evaluators can help ensure out-of-school time evaluations comprehensively assess program quality. Instead of merely examining outcomes and providing program staff data after the program has ended, evaluators should focus on helping program staff identify POS elements that contribute to overall program quality. In this way, program staff will be able to assess current practices and implement changes prior to the assessment of program outcomes.

Want to learn more about evaluating out-of-school time programs? Consider attending skill building workshop 774 at Evaluation 2011, the American Evaluation Association’s Annual Conference this November. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.


My name is Alexander Manga, and I am a Ph.D. student at Western Michigan University. I served as a scribe at Evaluation 2010 and attended session 356, Differing Perspectives of Quality Throughout an Evaluation. I chose this session because I am very interested in understanding more about how quality evaluations are constructed and carried out.

Lessons Learned: Quality begins at the grant proposal stage

  • The more detailed, accurate, and sensible your grant deliverables are, the greater your chances of having a proposal selected. The key is not to overcommit, yet still hit the deliverables asked for in the RFP. Tailor the proposal to the needs and wants of the agency in a manner that ensures success for both sides.
  • Past work is a predictor of future quality. Past performance with program managers or stakeholders will greatly influence future opportunities. A strong reputation for quality work may increase your chances of grant acceptance.

Lessons Learned: Working with stakeholders and managers is key to success and to building a recurring relationship

  • Communication throughout the entire evaluation process is key. Perspectives and criteria may change during the evaluation. The evaluator can encounter two kinds of situations: predictable and unpredictable paths. Obviously, the more predictable, the better. Communication between evaluator and stakeholder can mitigate unpredictable situations during the evaluation.
  • Assume change will occur. Nothing is static; everything is dynamic. Remain open-minded to continuous change in both planning and practice. Stakeholders may use different criteria to judge quality by the end of the evaluation.
  • Engage in reflection. After each evaluation, team members should reflect on the process from beginning to end and determine positive and negative points.

Lessons Learned: How is quality judged?

  • Reputation
  • Methodological Rigor
  • Cost-Effectiveness
  • Likelihood of meeting expected deliverables
  • Credentialing

Lesson Learned: Increase evaluative inquiry sustainability

  • By incorporating a participatory approach, evaluators can build on current practices and procedures, examining process methods and operational scope to determine efficiency and effectiveness. Sustainability can be strengthened through the participation of stakeholders or constituents. This process then repeats itself through a cycle that includes action, planning, observation, and reflection. Involving participants through the entire cycle can enrich the evaluation process by ensuring communication and understanding from the ground level through completion.

The full description of this session and its presenters may be found here. At AEA’s 2010 Annual Conference, session scribes took notes at over 30 sessions and we’ll be sharing their work throughout the winter on aea365. This week’s scribing posts were done by the students in Western Michigan University’s Interdisciplinary PhD program. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.


I am David J. Bernstein, and I am a Senior Study Director with Westat. We will be celebrating the 20th Anniversary of the Government Evaluation Topical Interest Group at the 2010 AEA Conference, so I have been reflecting on how government evaluation has changed over the last 20 years. One area that has not changed is how we determine the quality of performance measures for government programs.

Hot Tips: Here is my top 10 list of indicators of performance measurement quality:

10.    Resistant to Perverse Behavior. Credit goes to the Governmental Accounting Standards Board (1994) for this phrase, which means performance measures should be objective and not easily manipulated in a misleading way. Measures that are easily manipulated are less likely to be useful.

9.      Relevant. Performance measures need to be relevant to the government program being measured, or they will not be seen as useful by stakeholders.

8.      Cost-Effective/Affordable. Government managers prefer using resources on programs, not “overhead.” Many managers see performance measurement as a “less expensive” substitute for evaluation, which it is not, since you still need evaluation to determine causation. The cost of measurement systems is typically understated, when calculated at all, and systems still need to be affordable.

7.      Accessible, Easy to Capture, Measurable. Measures that are not easy to capture are unlikely to be cost-effective. Evaluation can help identify measures that are linked to program purposes and measurable, and hence useful.

6.      Consistent/Reliable. Performance measures should be consistent, because without consistency, comparisons are not possible, and measures will not be usable for tracking program progress.

5.      Comparable. Consistent performance measures allow comparisons with prior performance, benchmarks set by legislatures or executives, or “best practices” by similar organizations.

4.      Results-Oriented. The biggest change in performance measurement in the last 20 years has been an increased focus on results, and performance measures that are results-oriented are seen as being more useful.

3.      Valid, Verifiable, Accurate. We are evaluators, are we not? Performance measures, like evaluation methods, should be valid, verifiable, and accurate, or else they won’t be seen as trustworthy or useful.

2.      Clear/Comprehensible/Understandable. Some government organizations with complex missions and diverse delivery systems, such as U.S. federal agencies, develop multiple complex metrics that combine levels of service with percentages of results achieved, making it difficult to judge whether programs are really effective. This may make measurement systems technically accurate and politically useful, but the measures themselves may be less useful.

1.      Useful. Performance measurement systems that do not produce useful information will be abandoned. So, with a nod to Michael Quinn Patton, “utilization-focused performance measurement systems” that meet the other quality criteria are more likely to be sustainable and useful in government evaluation and accountability.

The American Evaluation Association is celebrating Government Evaluation Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOVT TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Government-focused evaluation resources. You can also learn more from the GOVT TIG via their many sessions at Evaluation 2010 this November in San Antonio.

· · ·

Hello, my name is Lisa Garbrecht. As a Research Associate at EVALCORP Research & Consulting, I work on numerous projects requiring data collection from youth. As you may know, it is not always easy to obtain high quality data (e.g., sufficient numbers of completed surveys, academic data, etc.) when relying on schools to help facilitate the data collection process. Below are a few tips that have proved useful!

Hot Tip #1:  Take time up front to identify the right data and methods. With the limited time and resources faced by schools and school-based programs today, it is important to collaborate with clients early on to identify priority needs and ensure that data are collected efficiently. Would a brief post-survey suffice instead of a comprehensive pre-post? Be strategic and include only items that really matter. Phrase items clearly and simply to ensure they are easily understood. Show schools you respect their time by only asking for the most vital information to inform the evaluation. 

Hot Tip #2:  Partnerships are key. Working together, evaluators and clients can build mutually beneficial relationships with schools to overcome their resistance to providing data. By showing school personnel and stakeholders how the findings may be of use and providing them with the necessary tools and databases, schools are more willing to collect and provide data in a timely manner. Communicate regularly with clients and schools, providing contact information so that you can answer their questions and offer assistance as needed.

Hot Tip #3:  Look at the data before it is too late. Whenever possible, do not wait until the end of the data collection process to analyze what’s coming in. Running the data early on allows you to identify problems with the tool or data collection process and make changes. Monitor data quality on at least a quarterly basis. This allows you to provide clients and schools with formative information that can serve to strengthen their programs and their motivation for assisting with ongoing data collection.
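As a purely illustrative sketch of what “running the data early” might look like in practice, the snippet below checks how many surveys have come in, item-level completeness, and the share of fully complete surveys. The file name, column names, function name, and the 80% threshold are all assumptions made for the example, not part of the original tip or any EVALCORP tool.

```python
# Illustrative periodic data-quality check on incoming survey data.
# Assumes a CSV export with one row per respondent; column names are hypothetical.

import pandas as pd

REQUIRED_ITEMS = ["q1_attendance", "q2_homework_help", "q3_engagement", "grade_level"]

def quality_report(path: str, completeness_threshold: float = 0.80) -> None:
    df = pd.read_csv(path)
    print(f"Surveys received so far: {len(df)}")

    # Item-level missingness: which questions are respondents skipping?
    for item in REQUIRED_ITEMS:
        pct_complete = df[item].notna().mean()
        flag = "  <- review item wording or administration" if pct_complete < completeness_threshold else ""
        print(f"{item}: {pct_complete:.0%} complete{flag}")

    # Respondent-level completeness: share of surveys with every required item answered.
    fully_complete = df[REQUIRED_ITEMS].notna().all(axis=1).mean()
    print(f"Fully complete surveys: {fully_complete:.0%}")

# Example usage, run quarterly or after each collection wave:
# quality_report("fall_surveys.csv")
```

Even a quick report like this, shared back with clients and schools each quarter, surfaces problem items or sites while there is still time to adjust the instrument or the administration process.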

Hot Tip #4:  A little incentive goes a long way. Use incentives with project staff, school personnel, and/or students as allowed. For instance, EVALCORP rewards site staff members who consistently collect accurate, legible, and complete survey data with a small gift card and a certificate of appreciation. Pizza parties or other youth-friendly activities are other ways of showing clients/schools your thanks. If tangible incentives are not possible, be sure to let those involved know the value of their input and how much you appreciate their time. Oftentimes, a “Thank you and I really appreciate your help” goes a long, long way!

This aea365 Tip-a-Day contribution comes from the American Evaluation Association. If you want to learn more from Lisa, check out the sessions sponsored by the PreK-12 Educational Evaluation TIG on the program for Evaluation 2010, November 10-13 in San Antonio. If you would like to contribute an aea365 Tip, please send a note of interest.

· ·

Greetings from Columbia, SC! My name is Heather Bennett, MSW, and I have experience working in the field of evaluation for both the public and private sector. Currently, I work as a Research Associate in the Office of Program Evaluation (OPE) at the University of South Carolina where I have the opportunity to lead and work collaboratively on state and federally funded education initiatives in South Carolina. One of my primary responsibilities is to lead our qualitative data analysis efforts, including the analysis of video or audio recordings of cognitive labs, focus groups, interviews, and responses to open-ended survey items.

Lesson Learned: For my tip-a-day for aea365 I am going to focus on one vital and fundamental lesson I’ve learned through the analysis of responses to open-ended survey items — the quality of the question asked has the greatest impact on the data analysis process.

As evaluators, I’m sure we have all inherited some projects with the corresponding data collection instruments and noticed some issues with the construction of items…or worse, we have looked back on the open-ended items we’ve developed and asked ourselves: “What was I thinking?” Upon inheriting the evaluation of a program, I was soon reminded of the impact item writing can have on data management. Issues of data utility arose as my team and I reviewed the structure of qualitative items and worked to develop clear coding structures for corresponding data.

Hot Tip: Poorly written items do not always follow the “garbage in, garbage out” scenario. However, it takes more time to take out the trash and get to meaningful data (data cleaning, analysis, coding) when you start with bad items. Below are a few things to keep in mind when developing open-ended items that will support your analysis and coding efforts once the data are collected.

First, you must have a clear understanding of what it is you want to learn about the project before you do anything else. What information do you really hope to gain? What is its utility for the program? This process should be guided by the project scope and involve project stakeholders to ensure the usefulness of the data collected.

Now that you have focused your data collection efforts, use these tips when developing your open-ended item(s):

  1. Ask one question at a time.
  2. Avoid leading questions.
  3. Avoid including personal biases in questions.
  4. Be specific about the topic.
  5. DO NOT ask questions that can be answered with yes/no.
  6. Indicate the number of responses requested from the participant.
  7. Ask clear and concise questions to avoid participant fatigue.

Following these tips will serve to improve your efforts in collecting focused and clear information from program participants.

This aea365 Tip-a-Day contribution comes from the American Evaluation Association. If you want to learn more from Heather, join us in San Antonio this November for Evaluation 2010 and check out her session on the Conference Program. If you would like to contribute an aea365 Tip, please send a note of interest.

· ·

