AEA365 | A Tip-a-Day by and for Evaluators

Category: Organizational Learning and Evaluation Capacity Building

Hi, I’m Joseph E. Bauer, the Strategic Director of Survey Research & Evaluation in the Statistics & Evaluation Center (SEC) for the American Cancer Society (ACS) in Atlanta, Georgia.  I am in my eleventh year as an internal evaluator.  I am the former Chair of the Organizational Learning and Evaluation Capacity Building (OL-ECB) TIG, and am currently on our Leadership Team.

Recently, Roberto Azevedo, the Director-General of the World Trade Organization, cited studies showing that as much as 90% of recently lost U.S. manufacturing jobs were eliminated by new technologies, innovations, or improvements in efficiency.  Many jobs worldwide are being eliminated by automation.  In fact, we are living in a time when the world is experiencing a fourth Industrial Revolution (a quick summary and other information: https://www.youtube.com/watch?v=SCGV1tNBoeU).

The amount and complexity of knowledge and skills that employees (and the organizations they work for) need is increasing dramatically every day due to technology and globalization.  Simple repetitive tasks are being automated, and online technologies are disrupting traditional business on a global scale; big data, technology, and the world of work are being dramatically transformed by the Internet of Things (IoT) (a brief introduction: https://www.wired.com/insights/2014/07/internet-things-will-disrupt-everything/).

Employees who survive automation and disruption will need to know more and do more.  They will need to learn quickly: to acquire new information and new skills, to develop new abilities, and to do so in a way that ensures learning is retained and applied immediately to problem-solving.  Employees, and the organizations they work for, must learn and adapt continuously to survive and succeed.  Over the last 100 years, most organizations have had ‘training cultures,’ but that methodology does not allow adaptation quick enough for the disruptions of a changing environment and changing customer needs.  The work environment needs to support learning all the time, and employees need to be learning all the time; hence the need for learning cultures becomes imperative.  Of course, this brave new world also needs to evolve a culture of evaluation and build the necessary capacity to design, test, and improve processes and outcomes systematically.  Evaluation and an evaluative mindset will play a critical role in supporting and driving organizational learning, employee attitudes and behavior, and learning cultures.  To that end, the OL-ECB TIG is creating an ECB Commons, mentioned earlier this week.

Rad Resources:  The Masie Center (http://masie.com), in Saratoga Springs, New York, is a think tank and a consulting and research center focused on how organizations can support learning and knowledge within the workforce.  It coordinates a Learning Consortium, a coalition of 230 global organizations collaborating on the evolution of learning and collaboration strategies.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Jeff Sheldon, a newly minted Ph.D. from the School of Social Science, Policy, and Evaluation at Claremont Graduate University. Many of the positions to which I’ve applied have the words “strategic learning” and “evaluation” in the job title, for good reason: evaluation is part and parcel of strategic learning, and it is the learning organization that uses evaluation strategically to become more efficient, effective, or accountable.  Given some confusion about how organizations learn, today I offer the 16 widely agreed-upon characteristics of the learning organization, which can be used as a checklist to determine whether your organization is a learning organization (a simple scoring sketch follows the list).  These characteristics are subject to interpretation, but by contextualizing them you can make them relevant to your organization.

  1. Your organization provides continuous learning opportunities;
  2. Your organization uses coordinated efforts and learning to reach its shared goals;
  3. Your organization links individual performance with organizational performance;
  4. Your organization fosters inquiry and dialogue;
  5. Your organization makes it safe for people to openly share and take risks;
  6. Your organization embraces creative tension as a source of energy and renewal, and has a continuous awareness of and interaction with its environment;
  7. Your organization has shared insights or vision;
  8. Your organization has an active commitment to continuous improvement and to the diffusion of best practices throughout the organization;
  9. In your organization learning is based on experience;
  10. In your organization there is a willingness to change mental models;
  11. In your organization there is individual and group motivation;
  12. In your organization learning is done in teams;
  13. In your organization learning is nurtured by new information;
  14. In your organization there is an ability to understand, analyze, and use the dynamic system within which it functions;
  15. In your organization information flows horizontally in networks bringing together expertise as well as external links; and
  16. In your organization there is an ever-increasing learning capacity to reach a state of continuous transformation.
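
If you want to turn the checklist into a quick self-assessment instrument, here is a minimal sketch in Python. Everything in it is my own illustration, not part of the original checklist: the shortened labels, the 0-3 rating scale, and the rule that flags low-rated characteristics as areas to develop.

```python
# Illustrative self-assessment: rate each of the 16 characteristics
# from 0 (absent) to 3 (fully present) and summarize the results.
# The labels, scale, and "weak spot" cutoff are assumptions for this sketch.

CHARACTERISTICS = [
    "Provides continuous learning opportunities",
    "Uses coordinated efforts and learning to reach shared goals",
    "Links individual performance with organizational performance",
    "Fosters inquiry and dialogue",
    "Makes it safe to openly share and take risks",
    "Embraces creative tension; interacts with its environment",
    "Has shared insights or vision",
    "Commits to continuous improvement and diffusing best practices",
    "Bases learning on experience",
    "Is willing to change mental models",
    "Has individual and group motivation",
    "Learns in teams",
    "Nurtures learning with new information",
    "Understands and uses its dynamic system",
    "Moves information horizontally in networks with external links",
    "Grows learning capacity toward continuous transformation",
]

def summarize(ratings: list[int]) -> str:
    """Tally 0-3 ratings for the 16 characteristics and flag weak spots."""
    if len(ratings) != len(CHARACTERISTICS):
        raise ValueError("Provide one rating per characteristic.")
    total = sum(ratings)
    # Treat any characteristic rated 0 or 1 as an area to develop.
    weak = [name for name, r in zip(CHARACTERISTICS, ratings) if r <= 1]
    lines = [f"Score: {total}/{3 * len(CHARACTERISTICS)}"]
    if weak:
        lines.append("Areas to develop: " + "; ".join(weak))
    return "\n".join(lines)

if __name__ == "__main__":
    # Example: a hypothetical organization's self-ratings.
    print(summarize([3, 2, 1, 3, 0, 2, 3, 1, 2, 2, 3, 2, 1, 0, 2, 3]))
```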

Where does evaluative inquiry fit with these characteristics of the learning organization?  First, it is integrated into an organization’s processes and performed primarily by its own members.  Second, evaluative inquiry and the resultant learning are ongoing rather than episodic because they are embedded in the organization’s naturally occurring processes.  Last, the organization is a community of inquirers who regularly use their own inquiry skills for understanding and improving organizational processes and systems.

Taken together these characteristics foster the development of a learning culture in organizations, one that supports the systematic acquisition, transfer, and ongoing use of knowledge and information for adaptation and improvement to enrich and enhance the organization as a whole.

Rad Resource: LinkedIn members can access the full document (with references): https://www.linkedin.com/pulse/jeff-sheldon-learning-organization-part-i-jeff-sheldon-ph-d-?trk=mp-reader-card

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

ICYMI (“in case you missed it”): we present a few summaries of recent ECB literature to help you stay up-to-date on this quickly evolving aspect of evaluation.

Hi, we are Natalie Cook (Graduate Research Assistant) and Tom Archibald (Assistant Professor) from the Agricultural, Leadership, and Community Education department at Virginia Tech.

We both practice and do research on evaluation capacity building (ECB). In recent years, ECB has been one of the fastest growing areas of research on evaluation. Yet with such a quickly growing body of literature, it is hard to keep up. Evaluation practitioners and researchers alike often lament either not having access to the evaluation literature, or not having time to consult it.

Rad Resource: Four Recent ECB Publications

In “Evaluation Capacity Building in the Context of Military Psychological Health: Utilizing Preskill and Boyle’s Multidisciplinary Model,” Lara Hilton and Salvatore Libretto present the need for ECB in the field of military psychological health. Hilton and Libretto apply Preskill and Boyle’s multidisciplinary ECB model, which they found highly applicable to their context. The authors explain, however, that “while there was high utilization of ECB activities by program staff, there was misaligned evaluative thinking, which ultimately truncated sustainable evaluation practice.”

In the most recent volume of Evaluation and Program Planning, Sophie Norton, Andrew Milat, Barry Edwards, and Michael Giffin offer a “narrative review of strategies by organizations for building evaluation capacity.” They sought to: (1) identify ECB strategies implemented by organizations and program developers, and (2) describe successes and lessons learned, finding that successful ECB involves “a tailored strategy based on needs assessment, an organizational commitment to evaluation and ECB, experiential learning, training with a practical element, and some form of ongoing technical support within the workplace.” The authors call for more “rigorous” studies of ECB.

Beverly Parsons (2014 AEA President) and colleagues Chris Lovato, Kylie Hutchinson, and Derek Wilson discuss an ECB model that embeds evaluative thinking and practice in the context of higher education. They describe Communities of Learning, Inquiry, and Practice (CLIPs) as a type of community of practice and discuss how the CLIPs model was implemented in a community college in the U.S. and a medical school in Canada. Dr. Parsons has also reported on this work on aea365 here.

Finally, Audrey Rorrer presents an evaluation capacity building toolkit for principal investigators of undergraduate research experiences. The toolkits, which served to balance the need for standardized assessment with the particulars of individual program contexts, included instructional materials about conducting evaluation, a standardized applicant management tool, and a modulated outcomes measure.  Rorrer indicates that “Lessons learned included the imperative of understanding the evaluation context, engaging stakeholders, and building stakeholder trust.”

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Tom Archibald, Assistant Professor in the Agricultural, Leadership, and Community Education department at Virginia Tech and Chief of Party of the USAID/Education and Research in Agriculture project in Senegal.

I’m also Program Co-Chair for the OL-ECB TIG; as Sally Bond mentioned here on aea365 yesterday, as a TIG we are excited to develop an ECB Commons that will be a publicly available clearinghouse for pertinent and helpful ECB resources. While the Commons is not yet up and running, we’ve begun collecting resources that either (1) directly support ECB practice (e.g., activities to teach logic models, such as Hallie Preskill and Darlene Russ-Eft’s Chocolate Chip Cookie Exercise), or (2) provide clear, accessible help on evaluation issues and as such can also be used in ECB practice.

Below, we share just a few resources that will no doubt be featured prominently in the ECB Commons. We hope anyone who is engaged in ECB will find these resources immediately helpful.

One caveat: As Tom Schwandt has pointed out (and as my colleagues and I have reiterated), the proliferation of evaluation toolkits is great, but such tools can be ineffective or even dangerous in the absence of evaluative thinking. With good ECB facilitation, the resources below can promote evaluative thinking and thus better evaluation.

Rad Resource:

BetterEvaluation is the product of an international collaboration to improve evaluation practice, and is probably the most comprehensive resource and knowledge base on evaluation on the web. In addition to the seven-stage Rainbow Framework for program evaluation, the site includes a growing encyclopedia of approaches to evaluation (e.g., appreciative inquiry, developmental evaluation, realist evaluation), with links to selected resources for each approach, as well as coverage of a variety of special topics.

Rad Resource:

University of Wisconsin Extension’s division of Program Development and Evaluation has a long history of developing resources for Evaluation Capacity Building.  The website is currently under construction, but instructional materials can still be found under the tab for UW-Cooperative Extension Publications.  The Quick Tips tab is also full of excellent resources that non-evaluators can easily understand.

Rad Resource:

The Voluntary Organization of Professional Evaluators (VOPE) Institutional Capacity Toolkit, compiled by EvalPartners, is a collection of curated descriptions, tools, advice, examples, software and toolboxes developed by VOPEs and other organizations working to support non-profit organizations.

Rad Resource:

The Systems Evaluation Protocol, along with its free online companion software, the Netway, was developed by the Cornell Office for Research on Evaluation to offer step-by-step, systems-informed evaluation planning support to ECB facilitators and non-evaluators.

Do you know of other resources not listed here? Please post a comment to let everyone know about them!

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Sally Bond. I am an independent evaluation consultant based in Pittsboro, NC and chair of the Organizational Learning & Evaluation Capacity Building (OL-ECB) TIG.

In the fall of 2012, after 24 years in evaluation and 16 years doing ECB work, I went back to school to learn more about training and development to support my ECB practice. In my intro Human Resource Development class, we read Bob Johansen’s 2009 book, Leaders Make the Future: Ten New Leadership Skills for an Uncertain World.  The tenth skill he describes is Creating Commons: “New commons are shared resources that create platforms for generating wealth and value…Future leaders will be called to create new commons, to grow new places within which collaboration and mutual success can occur.”  It didn’t take me long to connect this idea with the wealth of ECB training resources that we hear about every year at AEA.

Hmm, thinks I: “How might we capture all that great thinking and development in a searchable digital platform to share it with a wider audience?”

Rad Resource:

To answer this question, the OL-ECB TIG is hosting a think tank at the annual meeting in Atlanta this year. The name of our session is “The ECB Commons Project: Designing an Open Source Repository to Advance the Science and Practice of ECB.”

From our 2015 OL-ECB TIG Member Survey, we learned that major challenges faced by our members include tailoring learning strategies to meet clients’ particular needs and designing activities and materials that foster a variety of learning objectives. The purpose of the ECB Commons think tank is to explore the possibility of an online platform for curating and storing OL- and ECB-related curriculum, instructional materials, and assessments that help build the capacity of individuals and organizations to do and use program evaluation.

In addition to providing a central repository for the artifacts of ECB practice, other possible functions of an ECB Commons include:

  • Inspiring ECB practitioners to experiment with new strategies,
  • Stimulating and supporting theory-building to advance the science and practice of ECB, and
  • Cultivating a virtual ECB learning community.

We will use the think tank to seek input about how an ECB Commons might foster these kinds of objectives. As OL and ECB increasingly imbue the work of all evaluators, we encourage participation by a wide variety of AEA members.  We hope you’ll join us on Thursday, October 27, from 4:45-6:15 in Room L402.

ECB Commons Session @AEA2016

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome to the Organizational Learning and Evaluation Capacity Building (OL-ECB) TIG week on AEA365! My name is Erin Bock, Director of Research and Evaluation at The Sherwood Foundation in Omaha, Nebraska and OL-ECB TIG Program Co-Chair.  Before I begin, I want to give a shout-out to my OL-ECB TIG Leadership colleagues: I’ve had a blast working with you guys.

The OL-ECB TIG team has some cool projects we’re pursuing and I’m kicking off this week setting the stage for those projects.  In order to capitalize on the TIG’s momentum, the leadership team conducted a membership survey to learn more about people who choose to affiliate with the OL-ECB TIG, the needs they have within this topic area, and mechanisms they would use to get their needs met. Here’s what we learned…

Lessons Learned:

Lesson #1

The average OL-ECB TIG member has been an evaluator for a long time, but a member of this TIG for a relatively short period of time. It makes one wonder about the journey we undertake as professionals: over time we learn that successful evaluations happen when primary intended users are comfortable with evaluative practice, and that realization brings evaluators to the OL-ECB TIG’s doorstep.

Lesson #2

TIG members are challenged by a wide range of client needs and target populations, limited time and resources, and the difficulty of garnering organizational leadership support for evaluation.

Lesson #3

To meet these needs, members look to the TIG as a source of professional development and as a place to network during the annual conference.

We have a number of initiatives planned in response to the data. Stay tuned this week to learn more.  In the meantime, check out the report on the TIG’s website:  http://comm.eval.org/olecb/home.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, we’re Tosca Bruno-van Vijfeijken (Director of the Syracuse University Transnational NGO Initiative) and Gabrielle Watson (independent evaluator). We engaged a group of practitioners at the 2015 AEA conference to talk about organizational change in International Non-Governmental Organizations (INGOs), and explore a hunch that Developmental Evaluation could help organizations manage change.

Several large INGOs are undergoing significant organizational change. These are complex processes: always disruptive, often painful, and with a high risk of failure. Roughly half of all organizational change processes either implode or fizzle out. A common approach is not to build in learning systems at all, but rather to take an “announce, flounder, learn” approach.

Lesson Learned: Most INGOs support change processes in three main ways: (1) external “expert” reviews; (2) CEO-level exchanges with peer organizations; and (3) staff-level reviews. It is this last category, where change is actually implemented, that is least developed but where support is most needed. Successful organizational change hinges on deep culture and mindset change.

AEA Session participants highlighted key challenges:

  • Headquarters and country staff experience change very differently
  • Decisions are frequently revisited
  • Communication is ineffective, generating uncertainty and anxiety
  • Learning is not well supported at the country or implementation-team level
  • Country teams retain a passive mindset when they should be more assertive
  • The focus is excessively legal and administrative, with not enough attention to culture and mindset

Can organizations do better? Might Developmental Evaluation offer useful approaches and tools?

Hot Tip: Developmental Evaluation seems tailor-made for large-scale organizational change processes. It is designed for innovative interventions in complex environments where the optimal approach and end-state are not known or knowable. It involves stakeholder sense-making, supported by tailored and evolving evaluative inquiry (often participatory), to quickly test iterations, track progress, and guide adaptations. It is designed to evolve along with the intervention itself.

Hot Tips: Session participants share some good practices:

  • Action learning. Exchanges among implementers increased adaptive capacity and made the emotional experience of change easier
  • Pilot initiatives. Time-bound, with frequent reviews and external support
  • “Guerrilla” roll-out. Hand-picked early adopters sparked “viral” spread of new approaches

Lesson Learned: Our review suggests Developmental Evaluation can address many of the challenges of organizational change, including shifting organizational culture. Iterative participatory learning facilitates adaptations that are appropriate and owned by staff. It adds value by building a learning culture – the ultimate driver of large scale organizational change.

We are curious how many organizations are using Developmental Evaluation for their change processes, and what we can learn from this experience. Add your thoughts to the comments, or write to Tosca or Gabrielle if you have an experience to share.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Jennifer Grove, Prevention Outreach Coordinator at the National Sexual Violence Resource Center (NSVRC), a technical assistance provider for anti-sexual violence programs throughout the country.  I’ve worked in this movement for nearly 17 years, but when it comes to evaluation work, I’m a newbie.  Evaluation has been an area of interest for programs for several years now, as many non-profit organizations are tasked with showing funders that sexual violence prevention work is valuable.  But how do you provide resources and training on a subject that you don’t quite understand yourself?  Here are a few of the lessons I’ve learned on my journey so far.

Lesson Learned: An organizational commitment to evaluation is vital.   I’ve seen programs that say they are committed to evaluation hire an evaluator to do the work.  This approach is shortsighted.  When an organization invests all of its time and energy into one person doing all of the work, what happens when that person leaves?  We like to think of evaluation as long-term and integrated into every aspect of an organization.  Here at the NSVRC, we developed a Core Evaluation Team made up of staff who care about or are responsible for evaluation. We contracted with an evaluator to provide training, guide us through hands-on evaluation projects, and provide guidance to the Team over the course of a few years.   We are now two years into the process, and while there have been some staffing changes that have resulted in changes to the Team structure, efforts have continued without interruption.

Lesson Learned: Evaluation capacity-building takes time.     We received training on the various aspects of evaluation and engaged in an internal evaluation project (complete with logic model, interview protocol, coding, and final report).  According to the timeline we developed at the beginning of the process, this should have taken about eight months.  In reality, it took over 12.  The lesson learned here is this:  most organizations do not have the luxury of stopping operations so that staff can spend all of their time training and building their skills for evaluation.  The capacity-building work happens in conjunction with all of the other work the organization is tasked with completing. Flexibility is key.

Hot Tip: Share what you’ve learned.  The most important part of this experience is being able to share what we are learning with others.  As we move through our evaluation trainings, we are capturing our lessons learned and collecting evaluation resources so that we can share them with others in the course of our technical assistance and resource provision.

Rad Resource: Check out an online learning course developed by the NSVRC, Evaluating Sexual Violence Prevention Programs: Steps and strategies for preventionists.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Sharon Wasco, and I am a community psychologist and independent consultant. I describe here a recent shift in my language that underscores, I think, important trends in evaluation:

  • I used to pitch evaluation as a way that organizations could “get ahead of” an increasing demand for evidence-based practice (EBP);
  • Now I sell evaluation as an opportunity for organizations to use practice-based evidence (PBE) to increase impact.

I’d like evaluators to seek a better understanding of EBP and PBE in order to actively span the perceived boundaries of these two approaches.

Most formulations of EBP require researcher-driven activity, such as randomized controlled trials (RCTs), and clinical experts to answer questions like: “Is the right person doing the right thing, at the right time, in the right place in the right way, with the right result?” (credit: Anne Payne)

In an editorial introduction to a volume on PBE, Anne K. Swisher offers this contrast:

“In the concept of practice-based evidence, the real, messy, complicated world is not controlled. Instead, real world practice is documented and measured, just as it occurs, ‘warts’ and all.

“It is the process of measurement and tracking that matters, not controlling how practice is delivered. This allows us to answer a different, but no less important, question than ‘does X cause Y?’ This question is: ‘how does adding X intervention alter the complex personalized system of patient Y before me?’”

Advocates of PBE make a good case that “evidence supporting the utility, value, or worth of an intervention…can emerge from the practices, experiences, and expertise of family members, youth, consumers, professionals, and members of the community.”

Further exploration should convince you that EBP and PBE are complementary, and that evaluators can be transformative in melding the two approaches. Within our field, forces driving the utilization of PBE include more internal evaluators, shared value for culturally competent evaluation, a range of models for participatory evaluation, and interest in collaborative inquiry as a process to support professional learning.

Lessons Learned: How we see “science-practice gaps,” and what we do in those spaces, provide unique opportunities for evaluators to make a difference. Metaphorically, EBP is a bridge and PBE is a Midway.


Further elaboration of this metaphor, and more of what I’ve learned about PBE, can be found in my speaker presentation materials from Penn State’s Third Annual Conference on Child Protection and Well-Being (scroll to the end of the page; I “closed” the event).

Rad Resource: I have used Chris Lysy’s cartoons to encourage others to look beyond the RCT for credible evidence and useful evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

I’m Kathryn Lowerre, an internal evaluator for the Environmental Health Epidemiology Bureau (EHEB) at the New Mexico Department of Health (NMDOH). My background includes work in Health Impact Assessment (HIA) and teaching in the humanities.

Environmental Health Epidemiology looks at the connections between the environment and human health (nmhealth.org/about/erd/eheb). Funding for many EHEB programs comes through the Centers for Disease Control and Prevention (CDC), including Asthma, Environmental Public Health Tracking, and Lead Poisoning Prevention. As an internal evaluator I have to engage a predictably super-busy state health department staff, some of whom work primarily with data (epidemiologists, analysts) and some of whom work primarily with people (program coordinators, health educators, healthcare providers). I am also responsible for engaging stakeholders from community and professional groups.

Somewhere along the continuum of initial responses to having a new evaluator on board, which ranges from “someone who will solve all our problems” to “someone who can’t possibly solve any of our problems,” lies the fruitful middle ground. The combination of quantitative and qualitative skills used in evaluation also applies to connecting with colleagues of very different training, experience, or mindset. From them, learn everything you can about internal and external constraints and program history.

Lesson Learned: In program and evaluation team meetings (as in teaching), smiles and nods are better than frowns and arms folded across the chest, but they don’t necessarily mean that you’ve succeeded in conveying to your audience the evaluation purpose and information you intended.

Whether it’s a big division-wide meeting or a small project-specific group, it’s good to identify one or more people you can touch base with informally, afterwards. This is your reality check. What did they hear, what did they think, and what (if anything) are they planning to do, or do differently? If there’s a specific evaluation component for which they’ll be responsible, make sure both of you agree on the details.

While developing evaluation capacity is always going to be a work in progress, I believe it’s an important part of an internal evaluator’s role to encourage colleagues to think systematically about how we do what we do: how we might not only fulfill the requirements of a particular grant, but use evaluation to improve the planning and implementation of future projects to make the greatest possible positive change.

Rad Resources: Several CDC programs, including the Asthma Control Program, have great evaluation resources and staff support for public health evaluation, including capacity development (www.cdc.gov/asthma/program_eval/default.htm).

Another resource, familiar to attendees of Michele Tarsilla’s AEA presentations and workshops, is the Evaluation Capacity Development Group’s web site (www.ecdg.net). If it’s new to you, check it out.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
