AEA365 | A Tip-a-Day by and for Evaluators


My name is Awab and I work as a Monitoring & Evaluation Specialist for the Tertiary Education Support Project (TESP) at the Higher Education Commission (HEC), Islamabad, Pakistan.

In my experience, the most challenging task in any evaluation is to sell the findings and recommendations to decision makers and make the evaluation usable. Many evaluations stay on the shelf and never go beyond the covers of the report because their findings are not owned and used by management and the implementation team.

After conducting the Level-1 & 2 Evaluation (shared here earlier: https://goo.gl/gyit55), we recently conducted the Level-3 evaluation of the TESP training programs (the full report is available at https://goo.gl/AELJtU). The overall purpose of the evaluation was to determine whether the learning from training had translated into improved performance in the workplace. We also wanted to document the lessons learned from the training and incorporate them into strategies for improving future training programs.

Cool Tricks:

To ensure that the findings and recommendations of the Level-3 Evaluation of the TESP training program would be used, we adopted the following strategies:

  1. Drafted the scope of work for the Level-3 Evaluation and shared it with top management and the implementation team. As a result, they clearly understood the purpose and importance of the Level-3 Evaluation in measuring the effects of training on participants’ performance.
  2. Engaged the implementation team in drafting and finalizing the survey questionnaire. As a result, they eagerly awaited the evaluation results so that they could learn how well their training program had improved participants’ performance.
  3. Presented the results at an overall level first to make them easy to understand, then disaggregated the information and explained the results by training theme and by implementation partner (IP). This way, the implementation team knew the problem areas precisely, avoiding over-generalizations.
  4. Used data visualization techniques and presented the information in the form of attractive graphs with appropriate highlights, as in the figure included in the original post. This made the findings easy to understand. (A minimal illustrative sketch of this kind of highlighting follows this list.)
  5. Adopted a sandwich approach in presenting the findings: highlighted the achievements of the training program before pointing out the gaps, and closed the presentation with a note of appreciation for the implementation team. This helped the implementation team accept the less flattering feedback.
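
For readers who want to reproduce the kind of "graph with appropriate highlights" described in point 4, here is a minimal sketch in Python with matplotlib. The theme names and scores are hypothetical placeholders, not figures from the TESP evaluation; the point is simply to dim most bars and color the weakest area so the problem stands out.

```python
# Minimal sketch: a bar chart that highlights the weakest training theme.
# Theme names and scores below are illustrative placeholders only.
import matplotlib.pyplot as plt

themes = ["Pedagogy", "Research Methods", "ICT Skills", "Leadership"]
scores = [78, 82, 54, 71]  # % of participants reporting improved performance

# Dim every bar except the lowest-scoring theme, which gets the highlight color.
lowest = scores.index(min(scores))
colors = ["lightgray"] * len(themes)
colors[lowest] = "tab:red"

fig, ax = plt.subplots(figsize=(7, 3))
bars = ax.bar(themes, scores, color=colors)
ax.bar_label(bars, fmt="%d%%")  # label each bar with its value
ax.set_ylabel("Improved performance (%)")
ax.set_title("Transfer of learning by training theme")
for side in ("top", "right"):
    ax.spines[side].set_visible(False)  # reduce chart clutter
fig.tight_layout()
fig.savefig("theme_highlights.png", dpi=150)
```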

All of the above tricks helped management acknowledge the findings of the evaluation and adopt its recommendations. Interestingly, at the end of our final presentation, the leader of the training implementation team was the one to lead the applause.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Keith Child on Evaluation Research Quality

I am Keith Child, a Senior Research Advisor to the Committee on Sustainability Assessment.

The debate around appropriate criteria for measuring research quality has taken a new turn as development donors apply collective pressure on development agencies to prove that they can bring about positive change for intended beneficiaries. It is within this research for development (R4D) context that traditional deliberative (e.g., peer review) and analytic (e.g., bibliometric) approaches to evaluating research merit are themselves not measuring up. In part this is because the design and evaluation of research have been the exclusive preserve of scientists, who tend to judge research quality according to science values like internal and external validity, research design and implementation, replicability and so on, rather than on research use, uptake and impact. Within the scientific community these latter criteria are seen largely as “somebody else’s problem”. The message from the donor community, on the other hand, is adamant: science and scientific values “can no longer be considered a largely academic enterprise divorced from societal concerns about social goals”.

Rad Resource: To reconcile these sometimes-conflicting perspectives, the Canadian International Development Research Centre (IDRC) has recently developed an alternative approach, called Research Quality Plus (RQ+).  While the RQ+ framework consists of three core components, worth noting here are the  four dimensions and subdimensions for assessing research quality:

  1. Research Integrity
  2. Research Legitimacy
     2.1 Addressing potentially negative consequences
     2.2 Gender-responsiveness
     2.3 Inclusiveness
     2.4 Engagement with local knowledge
  3. Research Importance
     3.1 Originality
     3.2 Relevance
  4. Positioning for Use
     4.1 Knowledge accessibility and sharing
     4.2 Timeliness and actionability

Dimensions 1 and 3 are typically examined as part of a research quality framework. Dimension 2, with its emphasis on gender, inclusiveness and local knowledge, is less the preserve of scientists, but certainly a core idea in R4D settings. It is the fourth dimension, however, that makes the RQ+ approach so novel for evaluating research quality.

The “positioning for use” dimension attempts to measure the extent to which research has been positioned to increase the probability of its use. Significantly, research influence (e.g., bibliometric or scientometric analysis, reputational studies, etc.) and actual development impact are not part of the assessment criteria. Instead, dimension 4.1 focuses on the extent to which research products are targeted to specific users, conveyed in a manner that is intelligible to intended beneficiaries, and appropriate for the socio-economic conditions of their context. Dimension 4.2 focuses on the intended user setting at a particular time and the extent to which researchers have internalized this in their planning.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is part of a two-week series honoring our living evaluation pioneers in conjunction with Labor Day in the USA (September 5).

My name is Stan Capela and I am the Vice President for Quality Management and Corporate Compliance Officer for HeartShare Human Services of New York.

Why I chose to honor this evaluator: 

I am honoring Michael Q. Patton because he defines what it means to be a mentor. A mentor is someone who tries to help you break into your field. MQP was there to help me early on in my career when I was still an inexperienced evaluator. At the time, I couldn’t understand why no one wanted to deal with me and why evaluation was intimidating to my colleagues. To address this issue, MQP suggested a book entitled Utilization-Focused Evaluation. He said it would offer some suggestions on how to overcome resistance to evaluation and help stakeholders understand its value. With this new approach, stakeholders told me how useful evaluation was to them.

A mentor is someone who inspires you to move forward no matter what. When I was President of the Society for Applied Sociology (SAS), MQP gave the keynote at my conference one month after September 11th. Everyone was canceling their conferences because no one wanted to fly. MQP did not back down. Instead, he carried on to deliver his keynote speech on the relevance of program evaluation to the field of applied sociology.

A mentor is someone who helps you make positive strides in your career. MQP reads EVALTALK and saw a post that I wrote. He asked if he could include it in a revised edition of Utilization-Focused Evaluation. This book was my bible on program evaluation from the very beginning.

A mentor is someone who gives you feedback that helps you produce your best work. MQP took the time to review a PQI Plan that I developed for my $150 million organization. Following that, he suggested that I offer an expert lecture on it at the AEA Conference to help strengthen the field.

A mentor is someone who has made a difference in this world. MQP has devoted his life to strengthening the field and has given me nearly 40 years of impactful evaluation experience that makes me feel like the richest person on the face of this earth.

As my mentor, MQP helped me understand the right questions to ask and how best to provide the information in a way that helps strengthen program performance. In the end, MQP helped me become the evaluator that I am today and to better serve the children, adults and families in HeartShare’s care.

As an evaluator, he has helped me understand the importance of utilization and how to communicate the value of program evaluation in strengthening program performance.

Resources:

Michael Q. Patton Sage Publications Page

Michael Q. Patton Amazon Page

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring Evaluation’s Living Pioneers. The contributions this week are tributes to our living evaluation pioneers who have made important contributions to our field and even positive impacts on our careers as evaluators. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi y’all, Daphne Brydon here. I am a clinical social worker and independent evaluator. In social work, we know that a positive relationship built between the therapist and client is more important than professional training in laying the foundation for change at an individual level. I believe positive engagement is key in effective evaluation as well, since evaluation is designed to facilitate change at the systems level. When we engage our clients in the development of an evaluation plan, we are setting the stage for change…and change can be hard.

The success of an evaluation plan and a client’s capacity to utilize information gained through the evaluation depends a great deal on the evaluator’s ability to meet the client where they are and really understand the client’s needs – as they report them. This work can be tough because our clients are diverse, their needs are not uniform, and they present with a wide range of readiness. So how do we, as evaluators, even begin to meet each member of a client system where they are? How do we roll with client resistance, their questions, and their needs? How do we empower clients to get curious about the work they do and get excited about the potential for learning how to do it better?

Hot Tip #1: Engage your clients according to their Stage of Change (see chart below).

I borrow this model, most notable in substance abuse recovery, to frame engagement because, in all seriousness, it fits. Engagement is not a linear, one-size-fits-all, or step-by-step process. Effective evaluation practice demands we remain flexible amidst the dynamism and complexity our clients bring to the table. Understanding our clients’ readiness for change and tailoring our evaluation accordingly is essential to the development of an effective plan.

Stages of Change for Evaluation

Hot Tip #2: Don’t be a bossypants.

We are experts in evaluation but our clients are the experts in the work they do. Taking a non-expert stance requires a shift in our practice toward asking the “right questions.” Our own agenda, questions, and solutions need to be secondary to helping clients define their own questions, propose their own solutions, and build their capacity for change. Because in the end, our clients are the ones who have to do the hard work of change.

Hot Tip #3: Come to my session at AEA 2015.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Daphne? She’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.

We are Erin Bock of The Sherwood Foundation and Nora Murphy of TerraLuna Collaborative. We feel fortunate to have been partners in developmental evaluations for several years now, each of us acting as an important thought partner and sounding board for the other.

We recently partnered on an evaluation for a community-wide initiative. The Adolescent Health Project, led by the Women’s Fund of Omaha, seeks to address a wicked problem (high STI and teen pregnancy rates) using a systems approach.

Project leadership, in the face of incredible urgency (the county’s STI rates are at epidemic levels), knew that there was a need not only to expand services but also to change the way the present system functions. A learning collaborative was created, facilitated by the evaluation team and made up of grantee leadership who had previously been competitors. The learning collaborative is charged with establishing learning priorities that they, as a group, want to take on. In other words, instead of releasing grant funds and expecting immediate results, the project leaders created space and time for grantees to build trusting relationships.

The foundation and the Women’s Fund of Omaha call their work “squishy” and embrace complexity, but the learning collaborative experience has been an act of faith. It feels risky to create space for trust when there’s no objective or completion date tied to it. It is an honor that nonprofits would enter into this risky space with project leadership, and it is an honor to work with evaluation professionals who can hold us steady through the grey area.

Already we’ve seen the benefits of creating this space. The issue of trauma surfaced during the fourth learning collaborative meeting. There was a sense that something deeper was going on for young people and that, to reduce risky behaviors, we needed to open ourselves up to those difficult experiences…to become culturally and experientially humble.

Hot Tip: Amongst the rush of evaluation deadlines, create intentional space to build trust with your partners.

This space for trust will ensure that we can supersede the hard boundaries of community organizations and health centers and get real about the issues that drive this problem in our community. Our ability to be real with each other will drive authentic use of the evaluation for real change.

Rad Resource: Not only have service recipients experienced trauma, but so have the professionals working with them. Check out this resource to gauge secondary trauma: http://academy.extensiondlc.net/file.php/1/resources/TMCrisis20CohenSTSScale.pdf

Rad Resource: The upcoming book Developmental Evaluation Exemplars edited by Michael Quinn Patton, Kate McKegg and Nan Wehipeihana has a chapter, written by Nora Murphy, describing the process of convening a learning collaborative.

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Arnold Love from Toronto, the recent host city of the 2015 Para- and Pan American Games. Toronto also hosted the first Accessibility Innovation Showcase to mark the 25th Anniversary of the Americans With Disabilities Act and the 10th Anniversary of the Ontarians with Disabilities Act.

My evaluation interests include both sports and accessibility, so I want to share with you a powerful and enjoyable way of increasing evaluation use, called Jane’s Walk. It was a pivotal feature of the Para- and Pan Am Games and the Accessibility Showcase.

Jane’s Walk is named after Jane Jacobs, noted researcher and author of The Death and Life of Great American Cities. Jacobs championed the use of direct observation through “eyes on the street” and direct engagement to understand the “messy and complex systems” that comprise the urban landscape and to mobilize findings into action.

Rad Resource: Jane’s Walk is an informal walking tour. Check out the Jane’s Walk website to find out how walks “get people to tell stories about their communities, explore their cities, and connect with neighbors.”

Hot Tip: Several walks take place at the same time, each on a different theme. Local volunteers organize them based on their interests and expertise. For example, one walk during the Accessibility Innovation Showcase explored ideas to make busy intersections and entry to stores more accessible.

Hot Tip: Invite people of different ages and backgrounds to participate. The informal nature of Jane’s Walk encourages each person to voice their perspectives based on unique experience and insights. This energizes the conversations.

Hot Tip: Evaluators need diverse yet balanced views of the discussion topics. Facilitate this by finding two people with different viewpoints to co-lead each walk.

Hot Tip: Taking notes shuts down the trust and free exchange of ideas that are the hallmark of a Jane’s Walk. Instead, tweet your notes to yourself and encourage the other walkers to tweet their comments and ideas or share them on social media.

Rad Resource: Adding an incentive can greatly increase use of the findings coming from the Jane’s Walk methodology. Check out how Jane’s Walk partnered with Evergreen CityWorks to offer micro-grants to implement the best ideas (http://janeswalk.org/canada/toronto/grants): little money, but big results.

Rad Resource: Change Jane’s Walk into a game by geocaching. Hide small items (toys, badges, stories) in locations that fit a specific evaluation theme, such as a coffee shop with an accessible ramp. Then log the coordinates and cache description on http://www.geocaching.com. Use the app to find the cache. It’s fun!

Evaluation 2015 Challenge: Organize a few Jane’s Walks for AEA 2015. It would be a great opportunity to experience the methodology firsthand and get to know Chicago and other AEA members better.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

We are Joy Kaufman, Associate Professor at Yale University School of Medicine and Director of Program and Service System Evaluation and Evaluation Research, and Andrew Case, Assistant Professor of Psychology at the University of North Carolina at Charlotte. We are pleased that the Evaluation Use TIG asked us to share work we have done in engaging consumers of mental health services in the evaluation process.

With the primary goal of better understanding consumer perspectives on receiving services at the Connecticut Mental Health Center, four consumer researchers were recruited from the clients served at the Center and trained in all aspects of focus group evaluation. The most salient aspect of this evaluation is that it was developed, implemented, and reported by consumers who receive services within the mental health center. Over the past four years, this team has provided feedback regarding many aspects of care at the Center, and their recommendations serve as a blueprint for Center administrators to use in improving the care environment. Perhaps one of the most important outcomes is that this consumer-driven evaluation process is now part of how things are done at the mental health center.

Lessons Learned:

Having consumers of behavioral health services evaluate and report their results to the center where they receive care was profound. In our experience as professional evaluators, leadership and front-line staff, while interested in the results of an evaluation, are often passive recipients of the information. That was not the case in this evaluation: the professionals listened and immediately began reviewing ways to enhance the care experience for consumers.

Having peers lead the evaluation process led service recipients to feel that their voices were heard, a phenomenon that consumers of public behavioral health services do not often experience.

The Center leadership and clinical supervisors reported that the evaluation had added legitimacy and authenticity because of the central role of the consumer researchers.

As evaluators, we have learned that while true partnership with service recipients may take more time, the resulting evaluation has greater validity, value, and usefulness to the program.

Rad Resources: The Patient-Centered Outcomes Research Institute provides resources, including funding, to further the engagement of consumers in the evaluation of health services.

A first-person account of the evaluation process highlighted above was published in the American Journal of Community Psychology. The paper includes accounts from four stakeholder groups regarding how the project was perceived by stakeholders at the mental health center and the impact of this project on the care environment.

The Focus Group Kit (Morgan & Krueger 1997, Sage Publications) includes a very helpful volume on including community members in focus groups.

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Charmagne Campbell-Patton and I am an independent evaluation consultant. About a year ago, I made the transition from my role as program manager and internal evaluator at an education nonprofit to that of an external evaluation consultant. I continued working with my former employer as a client and, in my naiveté, I thought the transition would be relatively straightforward. I figured that since I knew the inner workings of the organization and had strong relationships with most staff members, it would be easy to continue to conduct useful evaluation.

Lessons Learned: My first mistake was failing to recognize and address that as the program manager, I used to be the primary intended user of the evaluation results. When I made the transition to an external consultant, I needed to be much more intentional about designing evaluations that met the needs of the new intended users.

Hot Tip: Be aware of how your position affects use. The personal factor is different in different relationships – internal and external.

Lesson Learned: Process use is different internally and externally. As a staff member, I used to be able to identify opportunities for process use in an ongoing and informal way. As an external consultant, however, I again had to be much more intentional about identifying opportunities and planning for process use.

Hot Tip: External evaluators need to be intentional about seeking opportunities to support evaluative thinking across the organization through more formalized process use.

Cool Trick: One way to engage staff is a reflective practice exercise. Bring staff together to reflect on the question: “What are things you know you should be doing but aren’t?” This question gets people thinking about potential personal barriers to using information. That sets the stage for discussing barriers to evaluation use organizationally. Next, identify enabling factors that support and enhance use, and ways to overcome barriers to use.

It’s also worth noting that despite some of the challenges noted above, the transition from internal to external also gave me a new perspective on evaluation use. Once I recognized some of the barriers to use as an external consultant, I was actually able to use my position to promote use more effectively than I did while internal. The added distance gave me some leverage that I lacked as a staff member to call attention to opportunities and challenges to evaluation use across the organization.

Rad Resources: Essentials of Utilization-Focused Evaluation, Michael Quinn Patton, Sage (2012).

Consulting Start-Up and Management, Gail Barrington, Sage (2012).

Using Reflective Practice for Developmental Evaluation, Charmagne Campbell-Patton, AEA365 March 2015.

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


We are Nora Murphy and Keith Miller with TerraLuna Collaborative, an evaluation cooperative in the Twin Cities, Minnesota. We feel fortunate to be the evaluation partners on several large developmental evaluations.

One project we are working on seeks to support the inner wellbeing journey of seasoned social entrepreneurs. On a recent conference call, a project team member asked: “How do you know when to use the data to make a change to the program? Isn’t struggle an important part of the individual’s wellbeing journey? If we react too quickly to data and ‘fix’ everything the participant isn’t comfortable with, aren’t we minimizing their opportunities for growth?”

He’s right. I (Nora) shared my perspective that evaluation data is only one source of information that should be used when making a decision. Also important to consider are: 1) our intuition, 2) our accumulated personal and professional wisdom, and 3) the collective wisdom of the group of people seeking to use the evaluation findings.

Hot Tip: Be reflective and identify the source(s) of wisdom you are drawing on.

Reflecting on that conversation, Keith and I realized that my response was rooted in the guiding principles of a three-year partnership with the Minnesota Humanities Center, Omaha Public Schools, and The Sherwood Foundation. The guiding principles are:

  • Build and strengthen relationships;
  • Recognize the power of story and the danger of absence;
  • Learn from and with multiple voices; and
  • Amplify community solutions for change.

These principles guide how we show up as evaluators and how we do our work. Evaluation use happens when there is a foundation of trust–trust in both the results and the evaluators. We’ve learned to build trust by investing in relationships, intentionally including multiple voices, seeking absent narratives, and amplifying community ideas and solutions.

Hot Tip: Be responsive, not reactive.

Patton (2010) suggests that one role of developmental evaluators is to look for and document “forks in the road that move the program in new directions” (p. 150). As developmental evaluators, we can facilitate conversations about whether the data should be used immediately because it indicates a fork in the road, or whether it is something to be aware of and track. During these conversations we can also create space for intuition and wisdom.

Lesson Learned: These guiding principles have helped us shape our role as evaluation partners and increase evaluation use. Our partners trust us to engage them in reflective conversations about what the findings mean and how they might be used.

Rad Resource: Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Michael Quinn Patton, Guilford (2010).

Rad Resource: Nora F. Murphy and Jennifer Tonko – How Do You Understand the Impact of the Humanities?

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Keiko Kuji-Shikatani, the current chair of the Evaluation Use Topical Interest Group (TIG), one of the original AEA TIGs. The Evaluation Use TIG was born of the interest in evaluation utilization in the 1970s, extending into both theoretical and empirical work on Use in the 1980s and 1990s, and to a broader conceptualization of use and influence in the 2000s. The Evaluation Use TIG is committed to understanding and enhancing the use of evaluation in a variety of contexts and to maximizing the positive influence of evaluation through both the evaluation process and the results produced.

Program evaluation began with the desire to seek information that can be utilized to improve the human condition. Use may not be apparent to those who are not internal to an organization since the process of using evaluation requires discussions that may be very sensitive in nature. This week’s AEA365 will examine how Evaluation Use TIG members are striving to support various efforts in diverse and complex contexts.

As for me, as an internal evaluator for the Ontario Ministry of Education, utilization of evaluation is the norm in what I do every day in pursuit of reaching every student. The world in which our students are growing up, and in which they will be leaders and learners throughout their lifetimes, is a complex and quickly changing place. To support students so they can be the best that they can be, those in the system need to work smarter and use evaluative thinking to guide every facet of improvement efforts.

Rad Resource: Evaluative thinking is systematic, intentional, and ongoing attention to expected results. It focuses on how results are achieved, what evidence is needed to inform future actions, and how to improve future results. One cannot really discuss Evaluation Use without Michael Quinn Patton – check out http://www.mcf.org/news/giving-forum/making-evaluation-meaningful.

Our work as internal evaluators involves continually communicating the value of evaluative thinking and guiding developmental evaluation (DE) by modeling the use of evidence to understand more precisely the needs of all students and to monitor and evaluate the progress of improvement efforts.

Hot Tips: Check out how evaluation (http://edu.gov.on.ca/eng/teachers/studentsuccess/CCL_SSE_Report.pdf) is used to inform next steps (https://www.edu.gov.on.ca/eng/teachers/studentsuccess/strategy.html) and what that change can look like (http://edu.gov.on.ca/eng/research/EvidenceOfImprovementStudy.pdf).

In our work, the ongoing involvement of evaluators who are intentionally embedded in program and policy development and implementation teams contributes to modeling evaluative thinking and guiding DE that builds system evaluation capacity. The emphasis is on being a learning organization through evidence-informed, focused improvement planning and implementation.

Hot Tips: Check out how evaluative thinking is embedded in professional learning (http://sim.abel.yorku.ca/) or how it is embedded in improvement planning (http://www.edu.gov.on.ca/eng/policyfunding/memos/september2012/ImprovePlanAssessTool.pdf).

The American Evaluation Association is celebrating Evaluation Use (Eval Use) Topical Interest Group Week. The contributions all this week to aea365 come from our Eval Use TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
