AEA365 | A Tip-a-Day by and for Evaluators

CAT | Organizational Learning and Evaluation Capacity Building

My name is Sharon Wasco, and I am a community psychologist and independent consultant. I describe here a recent shift in my language that underscores, I think, important trends in evaluation:

  • I used to pitch evaluation as a way that organizations could “get ahead of” an increasing demand for evidence-based practice (EBP);
  • Now I sell evaluation as an opportunity for organizations to use practice-based evidence (PBE) to increase impact.

I’d like evaluators to seek a better understanding of EBP and PBE in order to actively span the perceived boundaries of these two approaches.

Most formulations of EBP require researcher-driven activity — such as randomized controlled trials (RCTs) — and clinical experts to answer questions like: “Is the right person doing the right thing, at the right time, in the right place, in the right way, with the right result?” (credit: Anne Payne)

In an editorial introduction to a volume on PBE, Anne K. Swisher offers this contrast:

“In the concept of practice-based evidence, the real, messy, complicated world is not controlled. Instead, real world practice is documented and measured, just as it occurs, “warts” and all.

It is the process of measurement and tracking that matters, not controlling how practice is delivered. This allows us to answer a different, but no less important, question than ‘does X cause Y?’ This question is: ‘how does adding X intervention alter the complex personalized system of patient Y before me?’”

Advocates of PBE make a good case that “evidence supporting the utility, value, or worth of an intervention…can emerge from the practices, experiences, and expertise of family members, youth, consumers, professionals and members of the community.”

Further exploration should convince you that EBP and PBE are complementary, and that evaluators can be transformative in melding the two approaches. Within our field, forces driving the utilization of PBE include more internal evaluators, a shared value for culturally competent evaluation, a range of models for participatory evaluation, and interest in collaborative inquiry as a process to support professional learning.

Lessons Learned: How we see “science-practice gaps,” and what we do in those spaces, provide unique opportunities for evaluators to make a difference. Metaphorically, EBP is a bridge and PBE is a Midway.

Further elaboration of this metaphor and more of what I’ve learned about PBE can be found in my speaker presentation materials from Penn State’s Third Annual Conference on Child Protection and Well-Being (scroll to the end of the page — I “closed” the event).

Rad Resource: I have used Chris Lysy’s cartoons to encourage others to look beyond the RCT for credible evidence and useful evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Kathryn Lowerre, an internal evaluator for the Environmental Health Epidemiology Bureau (EHEB) at the New Mexico Department of Health (NMDOH). My background includes work in Health Impact Assessment (HIA) and teaching in the humanities.

Environmental Health Epidemiology looks at the connections between the environment and human health (nmhealth.org/about/erd/eheb). Funding for many EHEB programs comes through the Centers for Disease Control and Prevention (CDC), including Asthma, Environmental Public Health Tracking, and Lead Poisoning Prevention. As an internal evaluator I have to engage a predictably super-busy state health department staff, some of whom work primarily with data (epidemiologists, analysts) and some of whom work primarily with people (program coordinators, health educators, healthcare providers). I am also responsible for engaging stakeholders from community and professional groups.

Somewhere along the continuum of initial responses to having a new evaluator on board, ranging from “someone who will solve all our problems” to “someone who can’t possibly solve any of our problems,” is the fruitful middle ground. The combination of quantitative and qualitative skills used in evaluation also applies to connecting with colleagues of very different training, experience, or mindset. From them, learn everything you can about internal and external constraints and program history.

Lesson Learned: In program and evaluation team meetings (as in teaching), smiles and nods are better than frowns and arms folded across the chest, but they don’t necessarily mean that you’ve succeeded in conveying to your audience the evaluation purpose and information you intended.

Whether it’s a big division-wide meeting or a small project-specific group, it’s good to identify one or more people you can touch base with informally, afterwards. This is your reality check. What did they hear, what did they think, and what (if anything) are they planning to do, or do differently? If there’s a specific evaluation component for which they’ll be responsible, make sure both of you agree on the details.

While developing evaluation capacity is always going to be a work in progress, I believe it’s an important part of an internal evaluator’s role to encourage colleagues to think systematically about how we do what we do: how we might not only fulfill the requirements of a particular grant, but also use evaluation to improve the planning and implementation of future projects to make the greatest possible positive change.

Rad Resources: Several CDC programs, including the Asthma Control Program, have great evaluation resources and staff support for public health evaluation, including capacity development (www.cdc.gov/asthma/program_eval/default.htm).

Another resource, familiar to attendees of Michele Tarsilla’s AEA presentations and workshops, is the Evaluation Capacity Development Group’s web site (www.ecdg.net). If it’s new to you, check it out.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I am Michele Tarsilla, an independent evaluation advisor and evaluation capacity development (ECD) specialist with experience in over thirty countries. I have served as Chair of the International and Cross-Cultural Evaluation TIG at AEA and am currently transitioning into the role of OL-ECB TIG Co-Chair.

The idea for this blog sprang from the realization that decision- and policy-makers participating in a number of ECD initiatives around the world are not always cognizant of the differences between evaluation (to whose purposes and methods they have often been exposed for only a couple of years) and Results-Based Management (RBM), which they have been taught for almost two decades. As a result, evaluation logic is often associated with the practice of developing logical frameworks (a phenomenon I refer to as the “RBM-ization of the evaluation function”), and the potential for cross-pollination between the two fields has not been fully capitalized on. In an effort to fill this knowledge gap, I developed a table on the links between evaluation and RBM (see a snapshot of the actual tool I developed for the OECD/DAC Newsletter and check out Rad Resource #1).

RBM vs. Evaluation (Tarsilla, 2014)

While I used to stress the intrinsic differences between evaluation and RBM (e.g., by presenting evaluation as a radically new and more extensive endeavor than RBM has ever been), I have come to change my position on this topic over the last year.

Lesson Learned: I have come to terms with the fact that, for an ECD programme to be designed and implemented successfully, the language used in evaluation workshops, mentoring programmes and technical assistance sessions needs to build upon the terminology, processes and tools (e.g., RBM-related ones) with which clients and partners in the field are already familiar. This renewed “ECD opportunism” has proved quite effective in my professional practice: trainees and clients exposed to this approach appear to have assimilated, retained and used evaluation concepts and tools more effectively.

Hot Tip #1: I strongly suggest acknowledging the purposes (and related concepts and tools) of RBM and performance management every time you develop evaluation workshop curricula or design evaluation technical assistance programmes, especially those aimed at planning and management specialists.

Hot Tip #2: It is critical to ensure that evaluation methodologies and concepts disseminated as part of an ECD programme fit within the realm of leaders’ existing results-driven management practices. This new strategy, which I call the “normalization of the evaluation function”, appears to help avoid the risk of rejection and apathy.

Rad Resources:

  • For a more exhaustive review of contemporary ECD practices in international development, visit http://www.oecd.org/dac/evaluation/ecdnewsletter.htm
  • For an in-depth analysis of a selected number of ECD-related issues, visit http://www.ecdg.net/projects-partnerships/ecd-global-scan/ecd-weekly-blog-series/
  • For a review of ECD initiatives in Africa funded by international partners, see Tarsilla, M. (2014). Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward. African Journal of Evaluation, 2, 34-58.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, we are Tom Archibald (Assistant Professor and Extension Specialist, Department of Agricultural, Leadership, and Community Education at Virginia Tech) and Guy Sharrock (Senior Technical Advisor for Learning with Catholic Relief Services). We believe one way to integrate and sustain learning in an organization is by intentionally promoting “evaluative thinking.”

Evaluative thinking (ET) is an increasingly popular idea within the field of evaluation. A quick overview of ET is provided in a previous post here. Today, we share some principles and practices for instilling ET in organizations and programs, based on our experiences facilitating ET-promoting workshops with development practitioners in Ethiopia and Zambia.

Lesson Learned: From our research and practice, we identified these guiding principles for promoting ET:

  1. Promoters of ET should be opportunistic about engaging learners in ET processes, building on and maximizing intrinsic motivation. Meet people where they are and in what they are doing.
  2. Promoting ET should incorporate incremental experiences, following the developmental process of “scaffolding.” For example, instead of starting by asking people to question their deeply-held beliefs, begin with something less threatening, such as critiquing a newspaper article, and then work up to more advanced ET.
  3. High-level ET is not an inborn skill, nor does it depend on any particular educational background; therefore, promoters should offer opportunities for it to be intentionally practiced by all who wish to develop as evaluative thinkers.
  4. Evaluative thinkers must be aware of—and work to overcome—assumptions and belief preservation.
  5. ET should be applied in many settings—program design, monitoring, evaluation, and so on. In order to best learn to think evaluatively, the skill should be applied and practiced in multiple contexts and alongside peers and colleagues.
  6. Old habits and practices die hard. It may take time for ET to infuse existing processes and practices. Be patient and persevere!

Lesson Learned: In addition, we learned that:

  • Interest in and buy-in for the effort must come from both the top down and the bottom up. From the top, in international development, some funders and large organizations (e.g., the US Agency for International Development) are increasingly supportive of learning-centered and complexity-aware approaches, favoring the promotion of ET.
  • Existing levels and structures of evaluation capacity must be considered; ET can and should fit within and augment those structures.
  • Hierarchical power dynamics and cultural norms, especially around giving and receiving constructive criticism (without getting defensive), must be addressed.

Rad Resource: InterAction and the Centre for Learning on Evaluation and Results for Anglophone Africa have undertaken a study of international NGO ET practices in sub-Saharan Africa. Their report provides some great insights on the enabling factors (at a general, organizational, and individual level) that can help ET, and learning, take hold.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Lisa Richardson, and I am the internal Improvement Advisor/Evaluator for the UCLA-Duke University National Center for Child Traumatic Stress (NCCTS), which, in addition to coordinating the collaborative activities of the National Child Traumatic Stress Network (NCTSN), provides leadership in many aspects of child trauma policy, practice, and training. Online surveys are a favored NCTSN tool, particularly for the collaborative development and evaluation of network products. By last count, over 600 surveys have been done since 2006!

This plethora of surveys has become an unexpected and successful mechanism for enhancing evaluation and organizational learning. In the past two years, our evaluation team has taken on very few surveys itself, instead handing the process over to NCCTS staff and NCTSN groups. We made a previously recommended review process mandatory and increased technical assistance to build capacity.

Approaching every review as an educational opportunity is the cornerstone of this process. The goal is not only to produce a well-designed survey but also to enhance staff members’ ability to create better ones in the future. Coaching builds on staff’s intrinsic passion for working in the child trauma field and for doing collaborative work. Evaluative thinking is reinforced by coaching and shared learning over time.

We have seen the quality of surveys improve tremendously (along with response rates), larger and more complicated surveys are being undertaken, and I now receive more queries from staff about using different tools to answer their questions.

Lessons Learned:

  • Put comments in writing and in context. Be clear about required versus suggested changes.
  • Provide alternatives and let the person or group decide. Walk them through the implications of each choice and the influence it would have on their survey or data, and then get out of the way!
  • Have everyone follow the same rules. My own surveys are reviewed, as are those developed with input from renowned treatment developers.
  • Build incrementally and use an individualized approach. A well-done survey is still an opportunity for further development.

Rad Resource: Qualtrics, the online survey solution we use, is user-friendly and sophisticated. When consulting on technical issues, I often link to technical pages on their excellent website. User Groups allow us to share survey templates, questions, messages, and graphics, increasing efficiency and consistency.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Joe Bauer, the Director of Survey Research & Evaluation in the Statistics & Evaluation Center (SEC) at the American Cancer Society (ACS) in Atlanta, Georgia. I have been working as an internal evaluator at the ACS for almost nine years, in a very challenging, but very rewarding position.

Lesson Learned: Evaluation is always political, and you must be aware of the cultural dynamics that are part of every environment. I came to the American Cancer Society to have an impact at the national level. I envisioned evaluation (and still do) as a means to systematically improve programs and, in turn, the lives of cancer patients.

In the beginning, many were not ‘believers’ in evaluation. The perception was that evaluation could only lead to finding things that were wrong or not working – and that this might lead to politically problematic situations. We needed to navigate the cultural minefields, even as we were acting as change agents. Over time, our Center worked hard to build a sense of trust. As internal evaluators, we must always be aware that we are being judged on how nicely we play in the sandbox, even as we strive and push for higher quality, better data, and better study designs. Evaluators ask the tough questions – which at times causes ‘friction’. However, an internal evaluator must be comfortable and confident taking on that role of asking the tough questions, which can be lonely.

Hot Tips: As an internal evaluator, you must be willing to ‘stay the course’, ‘weather the storms’, and never compromise on your values. This is crucially important – because you always need to do the right thing. This does not mean you will win all of these ‘battles’; ultimately, you can be and are overruled on many issues. However, you must keep your integrity – because that is something you need to own throughout your career. That is also what builds trust and credibility.

Rad Resources: The American Evaluation Association’s Guiding Principles for Evaluators (http://www.eval.org/p/cm/ld/fid=51) are intended to guide the professional practice of evaluators and to inform evaluation clients and the general public about the principles they can expect professional evaluators to uphold.

The Official Dilbert Website with Scott Adams (http://www.dilbert.com/) offers many ‘real world’ examples of the cultural dynamics that occur in the world of work and the often absurd scenarios that play themselves out. As an evaluator, you will not only need a good skill set and hard work to keep your values and integrity; you will also need a sense of humor and perspective.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Bonnie Richards, and I am an analyst at Foresee and Chair of the Organizational Learning and Evaluation Capacity Building TIG. Welcome to the OL-ECB sponsored aea365 week!

This week our blog posts cover a range of experiences, discussing the challenges and successes we have had in sustaining learning and evaluation in our work with organizations and programs. Across our members’ varied experiences, you will learn more about their strategies and methods for facilitating learning and the challenges they have encountered.

In my own role working with clients, one of my main goals is to help them understand where to prioritize improvements for their stakeholders. One of the challenges in doing this is navigating the different environments of organizations, companies, and government agencies. Each group is unique. For example, among government agencies, while there are some similar requirements or processes that consistently govern each, the mix of involved stakeholders who serve as the primary point of contact actually varies significantly.

A primary contact could be a program analyst, the director of the agency’s strategic planning and evaluation office, a technical director, or even a third-party vendor.

Understanding and acclimating to each client, meeting them at their “level,” and working within their context is key because it helps you learn the best ways to interact with different stakeholder groups. This sets the stage for a successful relationship.

Lessons Learned: Ask questions.

So, how does one get to the point of successfully meeting stakeholders in the appropriate context? Ask:

  • Why are they beginning this process? Were they instrumental in initiating it, or were they tasked with it as part of a directive from a director or committee?
  • How do they intend to use the information? What are their goals? What information will be most useful?

Take some time to ask these questions. Stakeholders will appreciate your interest, and the conversation exposes you to the thoughts, concerns, and values that are top of mind for the people you will be working closely with.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I am Liz Zadnik, Capacity Building Specialist at the New Jersey Coalition Against Sexual Assault. I’m also a new member of the aea365 curating team and a first-time Saturday contributor! Over the past five years I have been working within the anti-sexual violence movement at both the state and national levels to share my enthusiasm for evaluation and to support innovative community-based programs doing tremendous social change work.

In that time, I have been honored to work with talented evaluators and social change agents in the sexual violence prevention movement. A large part of my work has been de-mystifying evaluation and data for community-based organizations and professionals with limited academic evaluation experience.

Rad Resources: Some of my resources have come from the field of domestic and sexual violence intervention and prevention, as well as this blog! I prefer resources that offer practical application guidance and are accessible to a variety of learning styles and comfort levels. A partnership between the Resource Sharing Project and National Sexual Violence Resource Center has resulted in a fabulous toolkit looking at assessing community needs and assets. I’m a big fan of the Community Tool Box and their Evaluating the Initiative Toolkit as it offers step-by-step guidance for community-based organizations. Very similar to this is The Ohio Domestic Violence Network’s Primary Prevention of Sexual and Intimate Partner Violence Empowerment Evaluation Toolkit, which incorporates the values of the anti-sexual violence movement into prevention evaluation efforts.

Lesson Learned: Be yourself! Don’t stifle your passion or enthusiasm for evaluation and data. I made the mistake early in my technical assistance and training career of trying to fit into a role or mold I created in my head. Activists of all interests are needed to bring about social change and community wellness. Once I let my passion for evaluation show – in publications, trainings, and technical assistance – I began to see marked changes in the professionals I was working with (and myself!). I have seen myself grow as an evaluator by leaps and bounds since I made this change – so don’t be afraid to let your love of spreadsheets, interview protocols, theories of change, or anything else show!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Marley Steele-Inama, and I manage Audience Research and Evaluation at Denver Zoo. The Local Arrangements Working Group (LAWG) is excited to share with you the great evaluation work taking place in Colorado, as well as to give you advice for making the most of Evaluation 2014 in Denver. Coloradans are very proud of our state; don’t be shocked to see many locals wearing clothing emblazoned with the state flag’s emblem!

Denver harbors a spirit of collaboration, and this rings true for an initiative of which I’m a part – the Denver-area Evaluation Network (DEN). This network is made up of 15 different museums and cultural institutions, most of which are part of the Scientific and Cultural Facilities District (SCFD), a sales and use tax district that supports cultural facilities throughout the seven-county Denver metropolitan area. DEN’s goal is to increase evaluation capacity building (ECB) among museum professionals through a multidisciplinary model that includes training with national evaluation experts, attendance at workshops and conferences, mentoring and technical assistance, dissemination and meetings, and institutional and pan-institutional studies. Thanks to a grant from the Institute of Museum and Library Services (IMLS), all DEN members will be attending this year’s AEA conference in Denver – a first for most of these participants.

Lessons Learned: Collaboration is core to DEN; however, working together is challenging. We’ve learned that to be successful, we need:

  • Champions to steer the project, and subcommittees to engage members and activate the work.
  • Frequent in-person meetings to stay motivated and connected.
  • Flexibility and a willingness to make adjustments quickly when needed.
  • Leadership involvement in the project at our institutions to sustain such a large and time-consuming ECB effort; buy-in around its value is critical.
  • Two members from each institution as part of the project – institutions with two members in DEN, compared to one, are more successful at transferring ECB back to their institutions.
  • Acceptance that pan-institutional studies don’t always work with such a large and diverse group; we’ve learned that cohort studies often work better.

Hot Tip: Colorado is home to endless adventure, and that includes its exploding addiction to running. Start a training plan now and lace up for the Denver Rock n’ Roll Marathon and Half Marathon, scheduled for Sunday, October 19, one day after the conference ends. Prefer a “hoppy” adventure? Colorado is booming with craft breweries. You won’t have to walk far to taste some of Denver’s finest ales. Taprooms close to your hotel room include Denver Beer Company, Great Divide Brewing Company, Jagged Mountain Brewery, Prost Brewing, Renegade Brewing Company, and the legendary Wynkoop Brewing Company. Of course, feel free to stick around after the conference and sample from more of Colorado’s 230+ craft breweries!

We’re thinking forward to October and the Evaluation 2014 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Registration will soon be open! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

I’m Zhao Min, Deputy Director of the Asia-Pacific Finance and Development Center (AFDC) in Shanghai, China. AFDC is a member of the CLEAR Initiative and hosts the East Asia CLEAR Center. If you follow what is happening in international evaluation capacity building (ECB), you might have heard about CLEAR – the Regional Centers for Learning on Evaluation and Results. Through a network of regional centers, CLEAR provides regionally relevant ECB, promoting practical knowledge-sharing and peer-to-peer learning. Our center provides many types of ECB activities to people in Asia and internationally – on topics such as the basics of M&E, impact evaluation, and performance-based budgeting. In addition to CLEAR, we also receive support from the Independent Evaluation Group of the World Bank, the Independent Evaluation Department and the Strategy and Policy Department of the Asian Development Bank (ADB), and the Ministry of Finance of China. Today I’m writing about performance management systems (PMSs), their uses, and how they intersect with the budgeting process in the government sector. This is an area of my own research and of my agency’s knowledge-sharing efforts within Asia, especially in China.

Lessons Learned: Worldwide, budgets are tight and citizens expect high performance on public sector projects and programs – be they in health, education, transportation or, really, any sector. I was first introduced to PMSs, or results-based management systems, through my work with international financial institutions such as the ADB. Increasingly, I’m seeing the use of electronic PMSs in countries, states and municipalities across the globe.

Some of the many uses of the systems are below.

  • They provide a systematic approach to management and monitoring of the performance of projects and programs. Instead of ad-hoc management, information can be systematically collected and monitored.
  • Access to information increases.
  • They provide structure in the early, design stage of projects, programs or policies. They typically set out important indicators and targets to enable meaningful monitoring.
  • Robust PMSs can be a source of key data for evaluations – and make evaluations stronger.
  • Evidence-based decision-making is more likely, resulting in greater efficiencies and effectiveness.
  • Decisions about budgeting and tying budgeting to performance are made possible by PMSs.

PMSs are likely to become more commonly used. It is an exciting time to learn from and share with one another, across regions, our experiences with PMSs. We can continue to refine such things as which indicators are most useful to track, how best to collect data, and how to communicate information to the public.

Rad Resources:

Learn more about the CLEAR Initiative: http://www.theclearinitiative.org/Clear_about.html

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
