AEA365 | A Tip-a-Day by and for Evaluators

Category: Evaluation Use

We are Erin Bock of The Sherwood Foundation and Nora Murphy of TerraLuna Collaborative. We feel fortunate to have been partners in developmental evaluations for several years now, each of us acting as an important thought partner and sounding board for the other.

We recently partnered on an evaluation for a community-wide initiative. The Adolescent Health Project, led by the Women’s Fund of Omaha, seeks to address a wicked problem–high STI and teen pregnancy rates–using a systems approach.

Project leadership, in the face of incredible urgency (the county’s STI rates are at epidemic levels), knew that there was a need not only to expand services, but to change the way the present system functions. A learning collaborative was created, facilitated by the evaluation team and made up of grantee leaders who had previously been competitors. The learning collaborative is charged with establishing the learning priorities that they, as a group, want to take on. In other words, instead of releasing grant funds and expecting immediate results, the project leaders created space and time for grantees to build trusting relationships.

The foundation and the Women’s Fund of Omaha call their work “squishy” and embrace complexity, but the learning collaborative experience has been an act of faith. It feels risky to create space for trust when there’s no objective or completion date tied to it. It is an honor that nonprofits would enter into this risky space with project leadership, and it is an honor to work with evaluation professionals who can hold us steady through the grey area.

Already we’ve seen the benefits of creating this space. The issue of trauma surfaced during the fourth learning collaborative meeting. There was a sense that something deeper was going on for young people and that, to reduce risky behaviors, we needed to open ourselves up to those difficult experiences…to become culturally and experientially humble.

Hot Tip: Amongst the rush of evaluation deadlines, create intentional space to build trust with your partners.

This space for trust will ensure that we can move beyond the hard boundaries of community organizations and health centers and get real about the issues that drive this problem in our community. Our ability to be real with each other will drive authentic use of the evaluation for real change.

Rad Resource: Not only have service recipients experienced trauma, but so have the professionals working with them. Check out this resource to gauge secondary trauma: http://academy.extensiondlc.net/file.php/1/resources/TMCrisis20CohenSTSScale.pdf

Rad Resource: The upcoming book Developmental Evaluation Exemplars edited by Michael Quinn Patton, Kate McKegg and Nan Wehipeihana has a chapter, written by Nora Murphy, describing the process of convening a learning collaborative.

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Arnold Love from Toronto, the recent host city of the 2015 Pan American and Parapan American Games. Toronto also hosted the first Accessibility Innovation Showcase to mark the 25th Anniversary of the Americans with Disabilities Act and the 10th Anniversary of the Accessibility for Ontarians with Disabilities Act.

My evaluation interests include both sports and accessibility, so I want to share with you a powerful and enjoyable way of increasing evaluation use, called Jane’s Walk. It was a pivotal feature of the Pan Am and Parapan Am Games and the Accessibility Showcase.

Jane’s Walk is named after Jane Jacobs, noted researcher and author of The Death and Life of Great American Cities. Jacobs championed the use of direct observation through “eyes on the street” and direct engagement to understand the “messy and complex systems” that comprise the urban landscape and to mobilize findings into action.

Rad Resource: Jane’s Walk is an informal walking tour. Check out the Jane’s Walk website to find out how walks “get people to tell stories about their communities, explore their cities, and connect with neighbors.”

Hot Tip: Several walks take place at the same time, each on a different theme. Local volunteers organize them based on their interests and expertise. For example, one walk during the Accessibility Innovation Showcase explored ideas to make busy intersections and entry to stores more accessible.

Hot Tip: Invite people of different ages and backgrounds to participate. The informal nature of Jane’s Walk encourages each person to voice their perspectives based on their unique experiences and insights. This energizes the conversations.

Hot Tip: Evaluators need diverse yet balanced views of the discussion topics. Facilitate this by finding two people with different viewpoints to co-lead each walk.

Hot Tip: Taking notes shuts down the trust and free exchange of ideas that are the hallmarks of a Jane’s Walk. Instead, tweet your notes to yourself and encourage the other walkers to tweet their comments and ideas or share them on social media.

Rad Resource: Adding an incentive can greatly increase use of the findings coming from the Jane’s Walk methodology. Check out how Jane’s Walk partnered with Evergreen CityWorks to offer micro-grants to implement the best ideas (http://janeswalk.org/canada/toronto/grants): little money, but big results.

Rad Resource: Turn Jane’s Walk into a game with geocaching. Hide small items (toys, badges, stories) in locations that fit a specific evaluation theme, such as a coffee shop with an accessible ramp. Then log the coordinates and cache description on http://www.geocaching.com. Use the app to find the cache. It’s fun!

Evaluation 2015 Challenge: Organize a few Jane’s Walks for AEA 2015. It’s a great opportunity to experience the methodology firsthand and get to know Chicago and other AEA members better.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Joy Kaufman, Associate Professor at Yale University School of Medicine and Director of Program and Service System Evaluation and Evaluation Research, and Andrew Case, Assistant Professor of Psychology at the University of North Carolina at Charlotte. We are pleased that the Evaluation Use TIG asked us to share work we have done in engaging consumers of mental health services in the evaluation process.

With the primary goal of better understanding consumer perspectives on receiving services at the Connecticut Mental Health Center, four consumer researchers were recruited from the clients served at the Center and trained in all aspects of focus group evaluation. The most salient aspect of this evaluation is that it was developed, implemented, and reported by consumers who receive services within the mental health center. Over the past four years this team has provided feedback regarding many aspects of care at the Center, and their recommendations serve as a blueprint for Center administrators to use in improving the care environment. Perhaps one of the most important outcomes is that this consumer-driven evaluation process is now part of how things are done at the mental health center.

Lessons Learned:

Having consumers of behavioral health services evaluate and report their results to the center where they receive care was profound. In our experience as professional evaluators, leadership and front-line staff, while interested in the results of an evaluation, are often passive recipients of the information. That was not the case in this evaluation: the professionals listened and immediately began reviewing ways to enhance the care experience for consumers.

Having peers lead the evaluation process led service recipients to feel that their voices were heard, a phenomenon that consumers of public behavioral health services do not often experience.

The Center leadership and clinical supervisors reported that the evaluation had added legitimacy and authenticity because of the central role of the consumer researchers.

As evaluators we have learned that while true partnership with service recipients may take more time, the results of the evaluation have increased validity, value and usefulness to the program.

Rad Resources: The Patient-Centered Outcomes Research Institute provides resources, including funding, to further the engagement of consumers in the evaluation of health services.

A first-person account of the evaluation process highlighted above was published in the American Journal of Community Psychology. This paper includes accounts from four stakeholder groups regarding how the project was perceived by stakeholders at the mental health center and the impact of this project on the care environment.

The Focus Group Kit (Morgan & Krueger 1997, Sage Publications) includes a very helpful volume on including community members in focus groups.

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Charmagne Campbell-Patton and I am an independent evaluation consultant. About a year ago, I made the transition from my role as program manager and internal evaluator at an education nonprofit, to an external evaluation consultant. I continued working with my former employer as a client and, in my naiveté, I thought the transition would be relatively straightforward. I figured that since I knew the inner workings of the organization and had strong relationships with most staff members, it would be easy to continue to conduct useful evaluation.

Lessons Learned: My first mistake was failing to recognize and address that as the program manager, I used to be the primary intended user of the evaluation results. When I made the transition to an external consultant, I needed to be much more intentional about designing evaluations that met the needs of the new intended users.

Hot Tip: Be aware of how your position affects use. The personal factor is different in different relationships – internal and external.

Lesson Learned: Process use is different internally and externally. As a staff member, I used to be able to identify opportunities for process use in an ongoing and informal way. As an external consultant, however, I again had to be much more intentional about identifying opportunities and planning for process use.

Hot Tip: External evaluators need to be intentional about seeking opportunities to support evaluative thinking across the organization through more formalized process use.

Cool Trick: One way to engage staff is a reflective practice exercise. Bring staff together to reflect on the question: “What are things you know you should be doing but aren’t?” This question gets people thinking about potential personal barriers to using information. That sets the stage for discussing organizational barriers to evaluation use. Next, identify enabling factors that support and enhance use, and ways to overcome barriers to use.

It’s also worth noting that despite some of the challenges noted above, the transition from internal to external also gave me a new perspective on evaluation use. Once I recognized some of the barriers to use as an external consultant, I was actually able to use my position to promote use more effectively than I did while internal. The added distance gave me some leverage that I lacked as a staff member to call attention to opportunities and challenges to evaluation use across the organization.

Rad Resources: Essentials of Utilization-Focused Evaluation, Michael Quinn Patton, Sage (2012).

Consulting Start-Up and Management, Gail Barrington, Sage (2012).

Using Reflective Practice for Developmental Evaluation, Charmagne Campbell-Patton, AEA365 March 2015.

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


We are Nora Murphy and Keith Miller with TerraLuna Collaborative, an evaluation cooperative in the Twin Cities, Minnesota. We feel fortunate to be the evaluation partners on several large developmental evaluations.

One project we are working on seeks to support the inner wellbeing journey of seasoned social entrepreneurs. On a recent conference call, a project team member asked: “How do you know when to use the data to make a change to the program? Isn’t struggle an important part of the individual’s wellbeing journey? If we react too quickly to data and ‘fix’ everything the participant isn’t comfortable with, aren’t we minimizing their opportunities for growth?”

He’s right. I (Nora) shared my perspective that evaluation data is only one source of information that should be used when making a decision. Also important to consider are: 1) our intuition, 2) our accumulated personal and professional wisdom, and 3) the collective wisdom of the group of people seeking to use the evaluation findings.

Hot Tip: Be reflective and identify the source(s) of wisdom you are drawing on.

Reflecting on that conversation, Keith and I realized that my response was rooted in the guiding principles of a three-year partnership with the Minnesota Humanities Center, Omaha Public Schools, and The Sherwood Foundation. The guiding principles are:

  • Build and strengthen relationships;
  • Recognize the power of story and the danger of absence;
  • Learn from and with multiple voices; and
  • Amplify community solutions for change.

These principles guide how we show up as evaluators and how we do our work. Evaluation use happens when there is a foundation of trust–trust in both the results and the evaluators. We’ve learned to build trust by investing in relationships, intentionally including multiple voices, seeking absent narratives, and amplifying community ideas and solutions.

Hot Tip: Be responsive, not reactive.

Patton (2010) suggests that one role of developmental evaluators is to look for and document “forks in the road that move the program in new directions” (p. 150). As developmental evaluators we can facilitate conversations about whether the data should be used immediately because it indicates a fork in the road, or whether it is something to be aware of and track. During these conversations we can also create space for intuition and wisdom.

Lesson Learned: These guiding principles have helped us shape our role as evaluation partners and increase evaluation use. Our partners trust us to engage them in reflective conversations about what the findings mean and how they might be used.

Rad Resource: Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Michael Quinn Patton, Guilford (2010).

Rad Resource: Nora F. Murphy and Jennifer Tonko – How Do You Understand the Impact of the Humanities?

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Keiko Kuji-Shikatani, the current chair of the Evaluation Use Topical Interest Group (TIG), one of the original AEA TIGs. The Evaluation Use TIG was born of the interest in evaluation utilization in the 1970s, extending into both theoretical and empirical work on Use in the 1980s and 1990s, and to a broader conceptualization of use and influence in the 2000s. The Evaluation Use TIG is committed to understanding and enhancing the use of evaluation in a variety of contexts and to maximizing the positive influence of evaluation through both the evaluation process and the results produced.

Program evaluation began with the desire to seek information that can be utilized to improve the human condition. Use may not be apparent to those who are not internal to an organization since the process of using evaluation requires discussions that may be very sensitive in nature. This week’s AEA365 will examine how Evaluation Use TIG members are striving to support various efforts in diverse and complex contexts.

As for me, as an internal evaluator for the Ontario Ministry of Education, utilization of evaluation is the norm in what I do every day in pursuit of reaching every student. The world in which our students are growing up, and in which they will be leaders and learners throughout their lifetimes, is a complex and quickly changing place. In order to support students so they can be the best that they can be, those in the system need to work smarter and use evaluative thinking to guide every facet of improvement efforts.

Rad Resource: Evaluative thinking is systematic, intentional, and ongoing attention to expected results. It focuses on how results are achieved, what evidence is needed to inform future actions, and how to improve future results. One cannot really discuss Evaluation Use without Michael Quinn Patton – check out http://www.mcf.org/news/giving-forum/making-evaluation-meaningful.

Our work as internal evaluators involves continually communicating the value of evaluative thinking and guiding developmental evaluation (DE) by modeling the use of evidence to understand more precisely the needs of all students and to monitor and evaluate the progress of improvement efforts.

Hot Tips: Check out how evaluation (http://edu.gov.on.ca/eng/teachers/studentsuccess/CCL_SSE_Report.pdf) is used to inform next steps (https://www.edu.gov.on.ca/eng/teachers/studentsuccess/strategy.html) and what that change can look like (http://edu.gov.on.ca/eng/research/EvidenceOfImprovementStudy.pdf).

In our work, the ongoing involvement of evaluators who are intentionally embedded in program and policy development and implementation teams contributes to modeling evaluative thinking and guiding DE that builds system evaluation capacity. The emphasis is on being a learning organization through evidence-informed, focused improvement planning and implementation.

Hot Tips: Check out how evaluative thinking is embedded in professional learning (http://sim.abel.yorku.ca/) or how it is embedded in improvement planning (http://www.edu.gov.on.ca/eng/policyfunding/memos/september2012/ImprovePlanAssessTool.pdf).

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi all!  Liz Zadnik here, aea365 Outreach Coordinator and occasional Saturday Contributor.  I wanted to share some insights and reflections I had as a result of a recent EVALTALK discussion thread.  Last month, someone posed the following request:

I’m searching for a “Why Evaluate” article for parents/community members/stakeholders. An article that explains in clear and plain language why organizations evaluate (particularly schools) and evaluation’s potential benefits. Any suggestions?

Rad Resources: Others were kind enough to share resources, including this slideshare deck that moves through some language and reasoning for program evaluation and assessment, as well as book recommendations.  There is also a very helpful list from PlainLanguage.gov offering possible replacements for commonly used words.  (Even the headings – “Instead of…” and “Try…” – make the shift seem much more manageable.)

Lessons Learned: Making evaluation accessible and understandable requires tapping into an emotional and experiential core.

  • Think about never actually saying “evaluate” or “evaluation.”  It’s OK not to use phrases or terms if they are obstacles for engaging people in the evaluation process.  If “capturing impact,” “painting a picture,” “tracking progress” or any other combination of words works…use it!  It may be helpful to talk with interested or enthusiastic community members about what they think of evaluation and what it means to them.  This helps gain insight into relevant language and framing for future discussions.
  • Have the group brainstorm potential benefits, rather than listing them for them.  Just as you can engage community members in discussing the “how,” you can ask them what they feel is the “why” of evaluation.  I have heard the most amazing and insightful responses when I have done this with organizations and community members.  Ask the group “What can we do with the information we get from this question/item/approach?” and see what happens!
  • Evaluation is about being responsible and accountable.  For me, program evaluation and assessment is about ethical practice and stewardship of resources.  I have found community members and colleagues receptive when I frame evaluation as a way to make sure we are doing what we say we’re doing – that we are being transparent, accountable, and clear on our expectations and use of funds.

We’d love to hear how others in the aea365 readership are engaging communities in accessible conversations about evaluation.  Share your tips and resources in the comments section!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi – I’m Erik Mason, the Curator of Research at the Longmont Museum and Cultural Center, located in Longmont, Colorado, about 35 miles northwest of Downtown Denver. I am not an evaluator – in fact, the word “evaluation” does not appear in my job description.  I have come to believe, however, that evaluation is critical to the success of my work as a museum curator.  Much of that realization is the result of my participation in the Denver Evaluation Network (DEN), a collection of 15 museums across the Denver metro area that have made a commitment to learn about, and do, evaluation on a regular basis.

Only two members of DEN have full-time evaluators on staff. The rest of us are a mix of educators, exhibit developers, administrators, and curators.  Our daily work is filled with school tours, fundraising, label writing, and all the other stuff that goes into making museums fun and interesting places to visit. As a result, evaluation can get short shrift. We fall back on anecdote and what we think we know.

Over the last two years, the members of DEN have been presenting at museum conferences about the work we are doing to bring evaluation to a broader community.  It has been fascinating watching people who always thought evaluation was something scary and hard, and required a large supply of clipboards, realize that it can be done in many ways.

Within my workplace, I have been pleasantly surprised as we have begun incorporating evaluation into more and more of what we do. Data gathered from iPad surveys provides a baseline understanding of our audience demographics and allows us to compare the changes in our audience as our special exhibits change. Evaluation is now a part of the development of all our exhibits. In the course of doing evaluation, I’ve seen attitudes change from “Why are we wasting our time doing this?” to “When are we doing another evaluation?”
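For museums collecting similar survey data, even a short script can turn a raw export into the kind of exhibit-by-exhibit comparison described above. Below is a minimal sketch in Python with pandas; the file name, column names, and exhibit title are hypothetical placeholders rather than DEN's actual setup.

    # Minimal sketch: compare audience demographics across special exhibits.
    # Assumes a hypothetical CSV export ("visitor_surveys.csv") with one row
    # per survey response and columns "exhibit" and "age_group"; adapt the
    # names to whatever your survey tool actually exports.
    import pandas as pd

    responses = pd.read_csv("visitor_surveys.csv")

    # Percentage breakdown of age groups within each exhibit's audience.
    age_mix = (
        pd.crosstab(responses["exhibit"], responses["age_group"],
                    normalize="index")
        .mul(100)
        .round(1)
    )
    print(age_mix)

    # Compare one (hypothetical) special exhibit's audience against the
    # baseline of all other exhibits combined.
    mask = responses["exhibit"] == "Quilts of the Front Range"
    print(responses.loc[mask, "age_group"].value_counts(normalize=True).round(2))
    print(responses.loc[~mask, "age_group"].value_counts(normalize=True).round(2))

Run quarterly, a comparison like this gives the baseline-versus-current view of audience change without requiring a full-time evaluator on staff.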

Rad Resource: Check out this video of testimonials from members of DEN.

Hot Tip for Evaluation 2014 Attendees: Denver really is the “Mile High City,” and you can take home proof of this fact with a short jaunt and a camera. A free shuttle and brief walk away from the Colorado Convention Center is the Colorado State Capitol building, a Neoclassical building that sits at the eastern end of Denver’s Civic Center Park. The Capitol sits exactly one mile above sea level, and the official marker can be found on the 13th step. The building is emerging from a multi-year restoration effort with a shiny new coat of gold on its dome, in honor of Colorado’s mining heritage. Free tours of the Colorado Capitol Building are offered Monday-Friday.

We’re looking forward to October and the Evaluation 2014 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

I am Humberto Reynoso-Vallejo, a private consultant in health services research. A few years ago, I was part of an exploratory study of Latino caregivers in the Boston area caring for a family member suffering from Alzheimer’s disease. The difficulties families face in coping with the disease have prompted the rise of support groups for diverse population groups. Support groups for racially and ethnically diverse caregivers were scarce, and in the case of Latino caregivers in the Boston area, nonexistent. To respond to this need, I tried to develop a support group for Latinos with the assistance of the Alzheimer’s Association. After several unsuccessful attempts, I conducted a focus group with four caregivers to identify barriers to participation. Findings indicated that caregivers faced a number of issues, including: lack of transportation; lack of available time to take off from other responsibilities; the absence of linguistically appropriate support groups; caring for other family members dealing with an array of health problems (multiple caregiving); and other personal and social stressors.

I designed an alternative, pragmatic support group model, which took the form of a radio program. The “radio support group” directly targeted caregivers’ concerns and aimed to:

a) Disseminate culturally relevant information, largely from the point of view of the caregivers themselves, either as guests on the program or as callers; and,

b) Reduce the sense of isolation that many caregivers feel on a daily basis as a result of their caregiving roles.

I facilitated the radio support group with the participation of caregivers, professionals, and service providers. Four programs were aired, exploring topics such as memory problems, identifying signs of dementia, caregiver needs, and access to services. After each radio program aired, I called the 14 participating caregivers to explore their reactions, and found that the majority of them were not able to participate. Since the “live” radio support group was not accomplishing its original purpose of disseminating information and reducing caregivers’ sense of isolation, I decided to distribute edited audiotapes of the four programs to all caregivers. Overall, caregivers found the information useful and many established contact with others.

Lessons Learned:

  • This model of intervention, the radio support group, showed that pairing innovation with culturally relevant material is promising.
  • Research and evaluation should adapt to the particular needs and social context of Latino caregivers of family members with Alzheimer’s disease.
  • There is a need for more culturally appropriate types of interventions that mobilize caregivers’ own strengths, values, and resources.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I am Jindra Cekan, PhD, an independent evaluator with 25 years of international development fieldwork, at www.ValuingVoices.com.

What if we saw our true clients as project participants and wanted the return on investment of projects to be maximally sustained? How would this change how we evaluate, capture, and learn together?

Lesson Learned: Billions of dollars of international development assistance are spent every year, and we do baseline, midterm, and final evaluations on most projects.  We even sometimes evaluate sustainability using the OECD’s DAC Criteria for Evaluating Development Assistance: relevance, effectiveness, efficiency, impact, and sustainability.  This is terrific, but deeply insufficient. We rarely ask communities and local NGOs during or after implementation what they think about our projects, how best to sustain activities themselves, and how to help them do so.

Also, very rarely do we return 3, 5, or 10 years after projects close and ask participants what is “still standing” that they managed to sustain themselves. How often do we engage community members, local NGOs, or national evaluators as the leaders of evaluations of our projects’ long-term self-sustainability? Based on my research, 99% of international aid projects are not evaluated for sustainability or impact after project close by anyone, much less by the communities they are designed to serve.

With $1.52 trillion in US and EU foreign aid being programmed for 2014–2020, our industry desperately needs feedback on what communities feel will be sustainable now and which interventions offer the likelihood of positive impact beyond the performance of the project’s planned (log-framed) activities. Shockingly, this does not exist today.

Further, such learning needs to be transparently captured and shared in open-data format for collective learning, especially at the country and implementer level. Creating feedback loops between project participants, national stakeholders, partners, and donors that foster self-sustainability will foster true impact.

Hot Tip: We can start with current project evaluations. We need to ask these questions of men, women, youth, elders, and the richer and poorer in communities, as well as of local stakeholders. Ideally, we would ask national evaluators to pose (and revise!) questions such as:

  • How valuable have you found the project overall in terms of being able to sustain activities yourselves?
  • How well were project activities transferred to local stakeholders?
    o Who is helping you sustain the project locally once it ends?
  • Which activities do you think you will be least able to maintain yourselves?
    o What should be done to help you?
  • Which activities do you wish the project had supported that would build on your community’s strengths?
  • Was there any result of the project that was surprising or unexpected?
  • What else do we need to learn from you to have greater success in the future?
Rad Resource: OECD DAC Criteria for Evaluating Development Assistance: http://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.