AEA365 | A Tip-a-Day by and for Evaluators

Hi, we are Tom Archibald (Assistant Professor and Extension Specialist, Department of Agricultural, Leadership, and Community Education at Virginia Tech) and Guy Sharrock (Senior Technical Advisor for Learning with Catholic Relief Services). We believe one way to integrate and sustain learning in an organization is by intentionally promoting “evaluative thinking.”

Evaluative thinking (ET) is an increasingly popular idea within the field of evaluation. A quick overview of ET is provided in a previous post here. Today, we share some principles and practices for instilling ET in organizations and programs, based on our experiences facilitating ET-promoting workshops with development practitioners in Ethiopia and Zambia.

Lesson Learned: From our research and practice, we identified these guiding principles for promoting ET:

  1. Promoters of ET should be opportunistic about engaging learners in ET processes, building on and maximizing intrinsic motivation. Meet people where they are and in what they are doing.
  2. Promoting ET should incorporate incremental experiences, following the developmental process of “scaffolding.” For example, instead of starting by asking people to question their deeply held beliefs, begin with something less threatening, such as critiquing a newspaper article, and then work up to more advanced ET.
  3. High-level ET is not an inborn skill, nor does it depend on any particular educational background; therefore, promoters should offer opportunities for it to be intentionally practiced by all who wish to develop as evaluative thinkers.
  4. Evaluative thinkers must be aware of—and work to overcome—assumptions and belief preservation.
  5. ET should be applied in many settings—program design, monitoring, evaluation, and so on. To learn to think evaluatively, one should apply and practice the skill in multiple contexts and alongside peers and colleagues.
  6. Old habits and practices die hard. It may take time for ET to infuse existing processes and practices. Be patient and persevere!

Lesson Learned: In addition, we learned that:

  • Interest and buy-in must come from both the top down and the bottom up. From the top, in international development, some funders and large organizations (e.g., the US Agency for International Development) are increasingly supportive of learning-centered and complexity-aware approaches, which favors the promotion of ET.
  • Existing levels and structures of evaluation capacity must be considered; ET can and should fit within and augment those structures.
  • Hierarchical power dynamics and cultural norms, especially around giving and receiving constructive criticism without becoming defensive, must be addressed.

Rad Resource: InterAction and the Centre for Learning on Evaluation and Results for Anglophone Africa have undertaken a study of international NGO ET practices in sub-Saharan Africa. Their report provides some great insights on the enabling factors (at a general, organizational, and individual level) that can help ET, and learning, take hold.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Lisa Richardson and I am the internal Improvement Advisor/Evaluator for the UCLA-Duke University National Center for Child Traumatic Stress (NCCTS), which in addition to coordinating the collaborative activities of the National Child Traumatic Stress Network (NCTSN), provides leadership in many aspects of child trauma policy, practice, and training. Online surveys are a favored NCTSN tool, particularly for the collaborative development and evaluation of network products. By last count, over 600 surveys have been done since 2006!

This plethora of surveys has become an unexpected and successful mechanism for enhancing evaluation and organizational learning. In the past two years, our evaluation team has taken on very few surveys ourselves, instead handing the process over to NCCTS staff and NCTSN groups. We made a previously recommended review process mandatory and increased technical assistance to augment capacity.

Approaching every review as an educational opportunity is the cornerstone of this process. The goal is not only to produce a well-designed survey but also to enhance staff members’ ability to create better ones in the future. Coaching builds on staff’s intrinsic passion for working in the child trauma field and for doing collaborative work. Evaluative thinking is reinforced by coaching and shared learning over time.

We have seen the quality of surveys improve tremendously (along with response rates), larger and more complicated surveys are being undertaken, and I now receive more queries from staff about using different tools to answer their questions.

Lessons Learned:

  • Put comments in writing and in context. Be clear about required versus suggested changes.
  • Provide alternatives and let the person or group decide. Walk them through the implications of each choice and its influence on their survey or data, and then get out of the way!
  • Have everyone follow the same rule. My surveys are reviewed, as are those developed with input from renowned treatment developers.
  • Build incrementally and use an individualized approach. A well-done survey is still an opportunity for further development.

Rad Resource: Qualtrics, the online survey solution we use, is user-friendly and sophisticated. When consulting on technical issues, I often link to technical pages on their excellent website. User Groups allow us to share survey templates, questions, messages, and graphics, increasing efficiency and consistency.
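
For readers managing survey portfolios at this scale, Qualtrics also exposes a REST API that can automate housekeeping tasks such as inventorying surveys. Below is a minimal sketch in Python; the datacenter ID and API token are placeholders to replace with your own, and API access depends on your organization’s Qualtrics license.

```python
# Minimal sketch: list the surveys in a Qualtrics account via its v3 REST API.
# "yourdatacenterid" and the token are placeholders; API access must be
# enabled for your account by a Qualtrics administrator.
import requests

API_TOKEN = "YOUR-QUALTRICS-API-TOKEN"  # placeholder
BASE_URL = "https://yourdatacenterid.qualtrics.com/API/v3"

response = requests.get(
    f"{BASE_URL}/surveys",
    headers={"X-API-TOKEN": API_TOKEN},
)
response.raise_for_status()

# The response nests the survey list under result -> elements.
for survey in response.json()["result"]["elements"]:
    print(survey["id"], survey["name"])
```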

Hi, I’m Joe Bauer, the Director of Survey Research & Evaluation in the Statistics & Evaluation Center (SEC) at the American Cancer Society (ACS) in Atlanta, Georgia. I have been working as an internal evaluator at the ACS for almost nine years, in a very challenging, but very rewarding position.

Lesson Learned: Evaluation is always political and you must be aware of those cultural dynamics that are part of every environment. I came to the American Cancer Society to have an impact at a national level. I had envisioned evaluation (and still do) as a means to systematically improve programs to improve the lives of cancer patients.

In the beginning, many were not ‘believers’ in evaluation. The perception was that evaluation could only lead to finding things that were wrong or that were not working – and that this might lead to politically problematic situations. We needed to navigate the cultural minefields, even as we were acting as change agents. Over time, our Center worked hard to build a sense of trust. As internal evaluators, we must always be aware that we are being judged on how nicely we play in the sandbox, even as we strive and push for higher quality, better data, and better study designs. Evaluators ask the tough questions – which at times cause ‘friction’. However, an internal evaluator must be comfortable and confident taking on the role of asking the tough questions, which can be lonely.

Hot Tips: As an internal evaluator, you must be willing to ‘stay the course’ and ‘weather the storms’, and never compromise on your values. This is crucially important – because you always need to do the right thing. This does not mean you will win all these ‘battles’; ultimately, you can be and are overruled on many issues. However, you must keep your integrity – because that is something you need to own throughout your career. That is also what builds trust and credibility.

Rad Resources: The American Evaluation Association’s Guiding Principles for Evaluators (http://www.eval.org/p/cm/ld/fid=51) are intended to guide the professional practice of evaluators and to inform evaluation clients and the general public about the principles they can expect professional evaluators to uphold.

The Official Dilbert Website with Scott Adams (http://www.dilbert.com/) offers many ‘real world’ examples of the cultural dynamics that occur in the world of work and the often absurd scenarios that play themselves out there. As an evaluator, you will not only need a good skill set and hard work to keep your values and integrity – you will also need a sense of humor and a sense of perspective.


My name is Bonnie Richards; I am an analyst at ForeSee and Chair of the Organizational Learning and Evaluation Capacity Building TIG. Welcome to the OL-ECB sponsored AEA365 week!

This week our blog posts will cover a range of experiences, discussing challenges and successes we have had in sustaining learning and evaluation in our work with organizations and programs. Across our members’ varied experiences, you will learn more about their strategies and methods for facilitating learning and the challenges they have encountered.

In my own role working with clients, one of my main goals is to help them understand where to prioritize improvements for their stakeholders. One of the challenges in doing this is navigating the different environments of organizations, companies, and government agencies. Each group is unique. For example, among government agencies, while some similar requirements and processes consistently govern each, the mix of stakeholders who serve as the primary point of contact varies significantly.

A primary contact could be a program analyst, a director of the agency’s strategic planning and evaluation office, a technical director, or even a third-party vendor.

Understanding and acclimating to each client, meeting them at their “level,” and working within their context is key because it helps you learn the best ways of interacting with different stakeholder groups. This sets the stage for a successful relationship.

Lessons Learned: Ask questions.

So, how does one get to the point of successfully meeting stakeholders in the appropriate context? Ask questions:

  • Why are they beginning this process? Were they instrumental in initiating it, or were they tasked with it as part of a directive from a director or committee?
  • How do they intend to use the information? What are their goals? What information will be most useful?

Take some time to ask questions. Stakeholders will appreciate your interest and the opportunity, and it exposes you to the thoughts, concerns, and values that are top of mind for the people you will be working closely with.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello fellow evaluators! I’m Sheila B Robinson, aea365’s Lead Curator and sometimes Saturday contributor, and I have to admit, I get lonely sometimes. After all, I’m the only program evaluator in my organization. Sure, there are other people who collect and analyze data, but no one who can sit down with me over lunch and discuss logic models, debate the merits of using a goal-based or goal-free approach, prattle on about program theory, or compare favorite theorists on the Evaluation Theory Tree. Where’s the eHarmony or Match.com for evaluators?

Thankfully, I have several options for going virtual to enjoy some good evaluation camaraderie. Strictly platonic, of course.

Rad Resources: EvalTalk is the discussion list of the American Evaluation Association. It’s a listserv that has been going since 1995! There are many active members and many, many more readers. Discussions can get quite heavy and theoretical at times, and many contributors write lengthy responses to questions, engaging in spirited debates. On the other hand, many people use the group to pose simpler questions, such as requests for recommendations of instruments, products, or services.

AEA’s LinkedIn group also hosts a number of interesting discussions on various evaluation-related topics. And while you’re on LinkedIn, look for other groups as well. I belong to a number of additional evaluation-related groups: The Evaluators’ Institute, The European Evaluation Society, Monitoring and Evaluation Professionals, Evaluators Group, RealWorld Evaluation, and Research, Methodologies, and Statistics in the Social Sciences. Some AEA Topical Interest Groups (TIGs) also have LinkedIn groups. And of course, some group discussions are more active than others.

All of these discussion groups have featured conversations around topics such as systems thinking, definitions of terms (e.g. outputs, outcomes, indicators, metrics, measures, etc.), how to deal with different types of data (e.g. Likert scales), statistical analysis software, RFPs, research design, capacity building, evaluation approaches, job openings, and much, much more.

Don’t forget to look for AEA, AEA TIGs, and AEA Affiliates on Facebook, and follow them on Twitter for even more evaluation conversation!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Kirk Knestis, CEO of Hezel Associates, back again, following up on a previous post about how evaluators’ work in STEM education settings is being influenced by the Common Guidelines for Education Research and Development introduced by the National Science Foundation (NSF) and the U.S. Department of Education. Hezel Associates studies education innovations, so we regularly support organizations proposing grant-funded R&D projects in science, technology, engineering, and math (STEM) education. Sometimes we’re a research partner (typically providing Design and Development Research, Type #3 in the Guidelines); in other cases we serve as an external evaluator (more accurately, “program evaluator”) assessing the implementation and impact of proposed project activities, including the research.

Lessons Learned – Work with a wide variety of clients (more than 70 proposals so far in 2014!) has left me convinced that an evaluator—or research partner, if your job is framed that way—can do a few specific things to add substantial value to the development of a client’s proposal. Someone in an external evaluator/researcher role can do more than simply “write the evaluation section,” potentially improving the likelihood of proposal success.

Hot Tips –

  1. Help designers explicate the theory of action of their innovation (intervention, program, technology, etc.) being tested and developed. Any research study aligned with the Guidelines (for example, many if not most NSF projects) will be expected to build on a clearly defined theoretical basis. Evaluators ought to be well equipped to facilitate development of a logic model to serve that purpose, illustrating connections between elements or features of the innovation and its intended outcomes.
  2. Define the appropriate “type” of research. The Common Guidelines provide a typology of six purposes for research, ranging from Foundational Research, which contributes to basic understandings of teaching and learning, to Scale-up Research, which examines whether the innovation retains its effectiveness for a variety of stakeholders when implemented in different settings “out in the wild” without substantial developer support. A skilled evaluator can help the client select the appropriate kind of research given the innovation’s level of maturity and other factors.
  3. Help clarify distinctions between “research” and “evaluation” purposes, roles, and functions. Clarity on the type of research required will inform study design, data-collection, analysis, and reporting decisions. A good evaluator should be able to help determine the expertise required for the research, the requirements for external evaluation of that work, and the narrative explaining roles, responsibilities, and work plans required for a proposal.

Rad Resource – If you work with education clients, become familiar with the Common Guidelines for Education Research and Development. Some complex conversations loom, but the Guidelines will be an important consideration in discussions about research and evaluation in education in the coming years.

Hello everyone, I’m Brad Rose of Brad Rose Consulting, Inc., a Massachusetts-based consulting firm that provides program evaluation, applied social research, and organization development consulting services to national and local non-profits, community-based organizations, educational institutions, philanthropies, corporations, and state and federal agencies. I’d like to share my views about the importance of interpersonal skills to successful program evaluation initiatives.

Lesson Learned: We all know that a good evaluator must have the requisite technical/methodological skills; he or she must be able to develop a research design, carry out research in the field, analyze data, and report findings. These technical/methodological skills, although of critical importance, are not the only skills that evaluators need. Effective program evaluations also depend upon a range of interpersonal and relational skills that make effective and responsive interpersonal interaction possible.

I recently posted a query to the American Evaluation Association’s listserv asking members about the importance and role of interpersonal skills in conducting successful evaluations. The central theme of the many responses I received was that successful evaluators employ key interpersonal skills, and that without these, evaluation engagements are unlikely to be successful.

Among the most prominent reasons my AEA colleagues gave for the importance of interpersonal skills were:

1) building strong, candid, and constructive relationships, on which effective data collection depends;

2) establishing trusting and collaborative relationships between evaluators and stakeholders, which helps ensure that evaluation findings will be utilized by clients and stakeholders; and

3) enhancing the probability that clients and stakeholders will share information and provide valuable insights about the program.

As my colleagues confirmed, effective evaluation necessarily entails trusting, open, and amicable relationships that make access to program knowledge, evaluands’ experience, and critical program information possible. Interpersonal skills are a prerequisite for effective program evaluation.

Hot Tips:

  • Build rapport and trust with clients, evaluands, and stakeholders
  • Act with personal integrity
  • Display a genuine curiosity and ask good questions
  • Make yourself vulnerable in order to learn
  • Be empathic
  • Be both socially aware and self-aware—i.e., be aware of, and manage, both your own and others’ emotions (including the features of emotional intelligence, i.e., the capacities to accurately perceive emotions, use emotions to facilitate thinking, understand emotional meanings, and manage emotions).
  • Treat each person with respect
  • Manage conflict and galvanize collaboration
  • Facilitate collective (group) learning

I am Vardhani Ratnala, a monitoring and evaluation professional. In this post, I would like to share my views on the importance of CONTEXT in evaluations.

Recently, at a high tea event, a friend pointed to a dress made from an Indian sari, worn by an expat, and suggested that we should get similar dresses stitched. In response, another friend pointed out that “if an expat wore it, it might be considered fashionable, but if an Indian wore it, people would think we were short of money and were recycling a sari into a dress.”

The conversation got me thinking about the importance of context. What might be considered positive in one context can be considered average or negative in another.

Lessons Learned: One can relate the importance of context to a number of evaluations. For example, in the context of a developed country, a disability programme providing a non-mechanical wheelchair might be considered an average intervention; but in a developing country context, where resources are limited, even the provision of a tricycle can be considered a life-altering intervention.

Prior to this event, I was discussing another evaluation with a friend. Our discussion centered on a programme offering legal assistance to trafficking victims seeking justice in a court of law. Very few victims had utilised the assistance, and only two cases had reached the verdict stage. Normally, the programme would have been considered a failure and its impact almost negligible. However, given the context in which the programme operated, even the small numbers reached were remarkable. The programme was implemented in a region where the police were non-cooperative, intimidation by traffickers was common, court cases dragged on for 10-15 years, and there was stigma associated with being identified as a trafficking victim. Under these circumstances, the programme was considered a success.

Hot Tips for context-based evaluations: Apart from having a brief section on the context at the beginning of an evaluation report, it is essential to have “context” as a specific evaluation criterion, so that programme results can be viewed in the light of their social, cultural, political, legal, or economic context in order to determine the programme’s actual impact.

Since context is often subtle (it is not always easy to articulate or observe, as there is a subtext involved), it is essential for evaluation teams to have a local evaluator on board who can help the team understand the circumstances in which the programme operated and thus determine its impact.

Rad Resource: Check out this weblink for additional info: http://www.iisd.org/casl/caslguide/evalcontext.htm

Hi there! My name is Leigh M. Tolley, and I’m a Research Assistant at Hezel Associates, LLC in Syracuse, NY. As an advanced doctoral student in Instructional Design, Development and Evaluation at Syracuse University, I have been fortunate to present at two annual conferences of the Eastern Evaluation Research Society (EERS), my AEA local affiliate. EERS, the oldest professional society for program evaluators in the United States, welcomes participation by all interested individuals. In my experience, they have been particularly welcoming to students.

Lesson Learned: Local affiliates are a great way to get to know others interested in evaluation in your area. Their smaller conferences are ideal networking opportunities for students and new evaluators, and a chance for you to meet and interact with professionals in the field. For example, I was able to meet evaluators in other states who are now go-to colleagues when I have questions about how they have approached issues that I face in my own work.

Lesson Learned: I submitted my first solo proposal to EERS after co-presenting with my advisor at the AEA conference in 2010. Although it was initially a challenge to figure out how to frame my thoughts, submitting a proposal was a way for me to start developing work I had done in class into a research focus. Preparing a paper presentation one year and a poster the next helped me refine my ideas and determine the best way to disseminate my findings. The more intimate venue of a local affiliate conference was less intimidating, but I was still able to get great feedback on my emerging research from other students, evaluation professionals, and even current and past AEA Presidents! Through presenting at EERS, I felt much more comfortable preparing proposals for and presenting at subsequent AEA conferences.

Lesson Learned: Regional conferences tend to be more informal, and can help you hone your presentation and discussion skills. For me, EERS was also a chance to attend some amazing plenary sessions and hear prominent evaluators share their work—even better, these were the same people who asked me about my research when I presented!

Hot Tip: EERS is currently accepting proposals for its 2015 conference. The theme this year is Let’s Get Real: Evaluation Challenges and Solutions. Undergraduate and graduate students are invited to submit proposals, too! The proposal deadline is Friday, December 12.

Rad Resource: To find your AEA local affiliate, check out the list of organizations here. There are almost 30 organizations at both the state and regional levels. Contact information for each affiliate is given, and many affiliates have their own website, too.

Hello, we are William Faulkner (i2i Institute) and João Martinho (PlanPP), writing here about our own poster design process, which apparently worked well enough to impress some of the judges at AEA 2014. We were guided by a simple principle: understand what the target audience considers relevant and where this overlaps with what we want to communicate.

(Here’s a larger pdf of this poster: Network Analysis on a Shoestring_AEA2014)

We organized the content in three blocks:

  1. Orientation: what are we talking about and for whom is it relevant?
    • Who are you? The target audience, those for whom we thought the content would be useful, because poster content is never relevant to everyone.
    • How do you collect data? We wanted to at least orient the audience to the range of data types that could be fed into this tool.
    • Why would you use this tool? This box attempts to correct two common misconceptions: (a) that network analysis is only useful to map relationships between people, and (b) that producing a network visualization is the end of the process. The latter misconception inspires complaints that network analysts often produce attractive visualizations with little to no interesting interpretations.
  2. Main Message: what are the basic steps of using this tool? This block leads the reader through a tutorial on the main steps of using NodeXL, emphasizing simplicity: in four steps, NodeXL transforms raw data into a visualization (see the sketch after this list for the same workflow in code). The section should convey sufficient information to a solitary reader, but during the poster session itself at AEA, one of the authors was present with a laptop so anyone interested could play with a real dataset themselves, reducing some of the mental entry barriers to starting to use the software.
  3. Examples/inspiration: The final block presents some concrete examples that illustrate the insights network visualization alone (even without the calculation of statistics) can supply.
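
To make those four steps concrete for readers who prefer code to spreadsheets, here is a minimal sketch of the same edge-list-to-visualization workflow in Python using networkx and matplotlib. This is an analogue of the NodeXL process, not NodeXL itself, and the sample edge list is invented for illustration.

```python
# Sketch of the poster's four-step workflow: raw relational data in,
# network visualization out. The edge list below is hypothetical.
import networkx as nx
import matplotlib.pyplot as plt

# Step 1: collect relational data as an edge list (who connects to whom).
edges = [
    ("NGO A", "NGO B"),
    ("NGO A", "Funder X"),
    ("NGO B", "Funder X"),
    ("NGO C", "Funder X"),
]

# Step 2: load the edge list into a graph object.
G = nx.Graph()
G.add_edges_from(edges)

# Step 3: compute a simple statistic to size nodes (optional, but it turns
# a pretty picture into something interpretable).
centrality = nx.degree_centrality(G)

# Step 4: lay out and draw the network.
pos = nx.spring_layout(G, seed=42)  # fixed seed for a reproducible layout
nx.draw(G, pos, with_labels=True,
        node_size=[3000 * centrality[n] for n in G.nodes()])
plt.show()
```

In NodeXL, the equivalent steps happen inside the Excel template: paste the edge list into the Edges worksheet, let the tool compute metrics, and show the graph.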

Hot Tip: Focus on content first. The choice of a design tool should come only after you can clearly articulate what you want to communicate and how this information is relevant to the target audience. Think about the gap you are trying to fill in the reader’s mind, and research how others have communicated similar content. Second, as the design comes together, be strict about following the standard recommendations for visual communication (less text, leave empty space, help the reader with cues about where their eye should go next). Once you have thoroughly thought through these aspects, the design should pretty much draw itself.

Rad Resource: NodeXL, of course! https://nodexl.codeplex.com/
