AEA365 | A Tip-a-Day by and for Evaluators


Hello! My name is Michele Tarsilla. I am an independent evaluation advisor and evaluation capacity development (ECD) specialist with experience in over thirty countries. I have served as Chair of AEA’s International and Cross-Cultural Evaluation TIG and am currently transitioning into the role of OL-ECB TIG Co-Chair.

The idea for this blog sprang from the realization that decision- and policy-makers participating in ECD initiatives around the world are not always cognizant of the differences between evaluation (to whose purposes and methods they have often been exposed for only a couple of years) and Results-Based Management (RBM), in which they have been trained for almost two decades. As a result, the evaluation logic is often conflated with the practice of developing logical frameworks (a phenomenon I refer to as the “RBM-ization of the evaluation function”), and the potential for cross-pollination between the two fields has not been fully capitalised on. In an effort to fill this knowledge gap, I developed a table on the links between evaluation and RBM (see the snapshot below of the tool I developed for the OECD/DAC Newsletter, and check out Rad Resource #1).

[Image: RBM vs. Evaluation comparison table, in two parts (Tarsilla, 2014)]

While I used to stress the intrinsic differences between evaluation and RBM (e.g., by presenting evaluation as a radically new and more extensive endeavor than RBM has ever been), I have changed my position on this topic over the last year.

Lesson Learned: I have come to terms with the fact that, for an ECD programme to be designed and implemented successfully, the language used in evaluation workshops, mentoring programmes and technical assistance sessions needs to build upon the terminology, processes and tools (e.g., those related to RBM) with which clients and partners in the field are already familiar. This renewed “ECD opportunism” has proved quite effective in my professional practice: trainees and clients exposed to this approach appear to have assimilated, retained and used evaluation concepts and tools more effectively.

Hot Tip #1: I strongly suggest recognizing the purposes (and related concepts and tools) of RBM and performance management whenever you develop evaluation workshop curricula or design evaluation technical assistance programmes, especially those aimed at planning and management specialists.

Hot Tip #2: It is critical to ensure that the evaluation methodologies and concepts disseminated as part of an ECD programme fit within leaders’ existing results-driven management practices. This new strategy, which I call the “normalization of the evaluation function”, appears to help avoid the risk of rejection and apathy.

Rad Resources:

  • For a more exhaustive review of contemporary ECD practices in international development, visit http://www.oecd.org/dac/evaluation/ecdnewsletter.htm
  • For an in-depth analysis of a selection of ECD-related issues, visit http://www.ecdg.net/projects-partnerships/ecd-global-scan/ecd-weekly-blog-series/
  • For a review of ECD initiatives funded by international partners in Africa, see Tarsilla, M. (2014). Evaluation capacity development in Africa: Current landscape of international partners’ initiatives, lessons learned and the way forward. African Evaluation Journal, 2, 34-58.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, we are Tom Archibald (Assistant Professor and Extension Specialist, Department of Agricultural, Leadership, and Community Education at Virginia Tech) and Guy Sharrock (Senior Technical Advisor for Learning with Catholic Relief Services). We believe one way to integrate and sustain learning in an organization is by intentionally promoting “evaluative thinking.”

Evaluative thinking (ET) is an increasingly popular idea within the field of evaluation. A quick overview of ET is provided in a previous post here. Today, we share some principles and practices for instilling ET in organizations and programs, based on our experiences facilitating ET-promoting workshops with development practitioners in Ethiopia and Zambia.

Lesson Learned: From our research and practice, we identified these guiding principles for promoting ET:

  1. Promoters of ET should be opportunistic about engaging learners in ET processes, building on and maximizing intrinsic motivation. Meet people where they are and in what they are doing.
  2. Promoting ET should incorporate incremental experiences, following the developmental process of “scaffolding.” For example, instead of starting by asking people to question their deeply-held beliefs, begin with something less threatening, such as critiquing a newspaper article, and then work up to more advanced ET.
  3. High-level ET is not an inborn skill, nor does it depend on any particular educational background; therefore, promoters should offer opportunities for it to be intentionally practiced by all who wish to develop as evaluative thinkers.
  4. Evaluative thinkers must be aware of—and work to overcome—assumptions and belief preservation.
  5. ET should be applied in many settings—program design, monitoring, evaluation, and so on. In order to best learn to think evaluatively, the skill should be applied and practiced in multiple contexts and alongside peers and colleagues.
  6. Old habits and practices die hard. It may take time for ET to infuse existing processes and practices. Be patient and persevere!

Lesson Learned: In addition, we learned that:

  • Interest and buy-in must come from both the top down and the bottom up. From the top, in international development, some funders and large organizations (e.g., the US Agency for International Development) are increasingly supportive of learning-centered and complexity-aware approaches, favoring the promotion of ET.
  • Existing levels and structures of evaluation capacity must be considered; ET can and should fit within and augment those structures.
  • Hierarchical power dynamics and cultural norms, especially around giving and receiving constructive criticism (without getting defensive), must be addressed.

Rad Resource: InterAction and the Centre for Learning on Evaluation and Results for Anglophone Africa have undertaken a study of international NGO ET practices in sub-Saharan Africa. Their report provides some great insights on the enabling factors (at a general, organizational, and individual level) that can help ET, and learning, take hold.


My name is Lisa Richardson and I am the internal Improvement Advisor/Evaluator for the UCLA-Duke University National Center for Child Traumatic Stress (NCCTS), which, in addition to coordinating the collaborative activities of the National Child Traumatic Stress Network (NCTSN), provides leadership in many aspects of child trauma policy, practice, and training. Online surveys are a favored NCTSN tool, particularly for the collaborative development and evaluation of network products. By last count, over 600 surveys have been conducted since 2006!

This plethora of surveys has become an unexpected and successful mechanism for enhancing evaluation and organizational learning. In the past two years, our evaluation team has taken on very few surveys itself, instead handing the process over to NCCTS staff and NCTSN groups. We made a previously recommended review process mandatory and increased technical assistance to build capacity.

Approaching every review as an educational opportunity is the cornerstone of this process. The goal is not only to produce a well-designed survey but also to enhance staff members’ ability to create better ones in the future. Coaching builds on staff’s intrinsic passion for working in the child trauma field and for doing collaborative work. Evaluative thinking is reinforced by coaching and shared learning over time.

We have seen the quality of surveys improve tremendously (along with response rates), larger and more complicated surveys are being undertaken, and I now receive more queries about using different tools to answer evaluation questions.

Lessons Learned:

  • Put comments in writing and in context. Be clear about required versus suggested changes.
  • Provide alternatives and let the person or group decide. Walk them through the implications of each choice and the influence it would have on their survey or data, and then get out of the way!
  • Have everyone follow the same rule. My surveys are reviewed, as are those developed with input from renowned treatment developers.
  • Build incrementally and use an individualized approach. A well-done survey is still an opportunity for further development.

Rad Resource: Qualtrics, the online survey solution we use, is user-friendly and sophisticated. When consulting on technical issues, I often link to technical pages on their excellent website. User Groups allow us to share survey templates, questions, messages, and graphics, increasing efficiency and consistency.


Hi, I’m Joe Bauer, the Director of Survey Research & Evaluation in the Statistics & Evaluation Center (SEC) at the American Cancer Society (ACS) in Atlanta, Georgia. I have been working as an internal evaluator at the ACS for almost nine years, in a very challenging but very rewarding position.

Lesson Learned: Evaluation is always political, and you must be aware of the cultural dynamics that are part of every environment. I came to the American Cancer Society to have an impact at a national level. I envisioned evaluation (and still do) as a means to systematically improve programs and, in turn, the lives of cancer patients.

In the beginning, many were not ‘believers’ in evaluation. The perception was that evaluation could only lead to finding things that were wrong or not working – and that this might lead to politically problematic situations. We needed to navigate the cultural minefields, even as we were acting as change agents. Over time, our Center worked hard to build a sense of trust. As internal evaluators, we must always be aware that we are being judged on how nicely we play in the sandbox, even as we strive and push for higher quality, better data, and better study designs. Evaluators ask the tough questions – which at times causes ‘friction’. However, an internal evaluator must be comfortable and confident taking on the role of asking the tough questions, which can be lonely.

Hot Tips: As an internal evaluator, you must be willing to ‘stay the course’, ‘weather the storms’, and never compromise on your values. This is crucially important – because you always need to do the right thing. It does not mean you will win all of these ‘battles’; ultimately, you can be and will be over-ruled on many issues. However, you must keep your integrity – because that is something you need to own throughout your career. That is also what builds trust and credibility.

Rad Resources: The American Evaluation Association’s Guiding Principles for Evaluators (http://www.eval.org/p/cm/ld/fid=51) are intended to guide the professional practice of evaluators and to inform evaluation clients and the general public about the principles they can expect professional evaluators to uphold.

The Official Dilbert Website with Scott Adams (http://www.dilbert.com/) offers many ‘real world’ examples of the cultural dynamics and often absurd scenarios that play out in the world of work. As an evaluator, you will need not only a good skill set and hard work to keep your values and integrity – you will also need a sense of humor and a sense of perspective.



My name is Bonnie Richards. I am an analyst at ForeSee and Chair of the Organizational Learning and Evaluation Capacity Building TIG. Welcome to the OL-ECB-sponsored AEA365 week!

This week our blog posts will cover a range of experiences, discussing the challenges and successes we have had in sustaining learning and evaluation in our work with organizations and programs. Across our members’ varied experiences, you will learn more about their strategies and methods for facilitating learning and the challenges they have encountered.

In my own role working with clients, one of my main goals is to help them understand where to prioritize improvements for their stakeholders. One of the challenges in doing this is navigating the different environments of organizations, companies, and government agencies. Each group is unique. For example, among government agencies, while some similar requirements and processes consistently govern each of them, the mix of involved stakeholders who serve as the primary point of contact varies significantly.

A primary contact could be a program analyst, a director of the agency’s strategic planning and evaluation office, a technical director, or even a third-party vendor.

Understanding and acclimating to each client, meeting them at their “level” and working within their context, is key because it helps you learn the best ways to interact with different stakeholder groups. This sets the stage for a successful relationship.

Lessons Learned: Ask questions.

So, how does one get to the point of successfully meeting stakeholders in the appropriate context? Ask questions:

  • Why are they beginning this process? Were they instrumental in initiating it, or were they tasked with it as part of a directive from a director or committee?
  • How do they intend to use the information? What are their goals? What information will be most useful?

Take some time to ask these questions. Stakeholders will appreciate your interest, and the conversation exposes you to the thoughts, concerns, and values that are top of mind for the people you will be working closely with.


Hello! I am Liz Zadnik, Capacity Building Specialist at the New Jersey Coalition Against Sexual Assault. I’m also a new member of the aea365 curating team and a first-time Saturday contributor! Over the past five years I have been working within the anti-sexual violence movement at both the state and national levels to share my enthusiasm for evaluation and support innovative community-based programs doing tremendous social change work.

In that time, I have been honored to work with talented evaluators and social change agents in the sexual violence prevention movement. A large part of my work has been de-mystifying evaluation and data for community-based organizations and professionals with limited academic evaluation experience.

Rad Resources: Some of my favorite resources come from the field of domestic and sexual violence intervention and prevention, as well as from this blog! I prefer resources that offer practical application guidance and are accessible to a variety of learning styles and comfort levels. A partnership between the Resource Sharing Project and the National Sexual Violence Resource Center has resulted in a fabulous toolkit on assessing community needs and assets. I’m a big fan of the Community Tool Box and its Evaluating the Initiative toolkit, as it offers step-by-step guidance for community-based organizations. Very similar is The Ohio Domestic Violence Network’s Primary Prevention of Sexual and Intimate Partner Violence Empowerment Evaluation Toolkit, which incorporates the values of the anti-sexual violence movement into prevention evaluation efforts.

Lesson Learned: Be yourself! Don’t stifle your passion or enthusiasm for evaluation and data. I made the mistake early in my technical assistance and training career of trying to fit into a role or mold I created in my head. Activists of all interests are needed to bring about social change and community wellness. Once I let my passion for evaluation show – in publications, trainings, and technical assistance – I began to see marked changes in the professionals I was working with (and myself!). I have seen myself grow as an evaluator by leaps and bounds since I made this change – so don’t be afraid to let your love of spreadsheets, interview protocols, theories of change, or anything else show!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Marley Steele-Inama, and I manage Audience Research and Evaluation at Denver Zoo. The Local Arrangements Working Group (LAWG) is excited to share with you the great evaluation work taking place in Colorado, as well as to give you advice for making the most of Evaluation 2014 in Denver. Coloradans are very proud of our state; don’t be surprised to see many locals wearing clothing that bears the state flag’s emblem!

Denver harbors a spirit of collaboration, and this rings true for an initiative of which I’m a part – the Denver-area Evaluation Network (DEN). This network is made up of 15 different museums and cultural institutions, most of which are part of the Scientific and Cultural Facilities District (SCFD), a sales-and-use-tax district that supports cultural facilities throughout the seven-county Denver metropolitan area. DEN’s goal is to increase evaluation capacity building (ECB) among museum professionals through a multidisciplinary model that includes trainings with national evaluation experts, attendance at workshops and conferences, mentoring and technical assistance, dissemination and meetings, and engagement in institutional and pan-institutional studies. Thanks to a grant from the Institute of Museum and Library Services (IMLS), all DEN members will be attending this year’s AEA conference in Denver – a first for most of these participants.

Lessons Learned: Collaboration is core to DEN; however, working together is challenging. We’ve learned that to be successful, we need:

  • Champions to steer the project, and subcommittees to engage members and activate the work.
  • Frequent in-person meetings to stay motivated and connected.
  • Flexibility and the acceptance to make adjustments quickly when needed.
  • Leadership involvement at our institutions to sustain such a large and time-consuming ECB effort. Buy-in to the project’s value is critical.
  • Two members from each institution as part of the project – institutions with two members in DEN, compared to one, are more successful at transferring ECB back to their institutions.
  • To accept that pan-institutional studies don’t always work with such a large and diverse group; we’ve learned that cohort studies often work better.

Hot Tip: Colorado is home to endless adventure, and that includes a fast-growing passion for running. Start a training plan now and lace up for the Denver Rock n’ Roll Marathon and Half Marathon, scheduled for Sunday, October 19, one day after the conference ends. Prefer a “hoppy” adventure? Colorado is booming with craft breweries. You won’t have to walk far to taste some of Denver’s finest ales. Taprooms close to your hotel room include Denver Beer Company, Great Divide Brewing Company, Jagged Mountain Brewery, Prost Brewing, Renegade Brewing Company, and the legendary Wynkoop Brewing Company. Of course, feel free to stick around after the conference and sample from more of Colorado’s 230+ craft breweries!

We’re thinking forward to October and the Evaluation 2014 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Registration will soon be open! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.


I’m Zhao Min, Deputy Director of the Asia-Pacific Finance and Development Center (AFDC) in Shanghai, China. AFDC is a member of the CLEAR Initiative and hosts the East Asia CLEAR Center. If you follow what is happening in international evaluation capacity building (ECB), you might have heard about CLEAR – the Regional Centers for Learning on Evaluation and Results. Through a network of regional centers, CLEAR provides regionally relevant ECB, promoting practical knowledge-sharing and peer-to-peer learning. Our center provides many types of ECB activities to people in Asia and internationally – on topics such as the basics of M&E, impact evaluation, and performance-based budgeting. In addition to CLEAR, we also receive support from the Independent Evaluation Group of the World Bank; the Independent Evaluation Department and the Strategy and Policy Department of the Asian Development Bank (ADB); and the Ministry of Finance of China. Today I’m writing about performance management systems (PMSs), their uses, and how they intersect with the budgeting process in the government sector. This is an area of my own research and of my agency’s knowledge-sharing efforts within Asia, especially in China.

Lessons Learned: Worldwide, budgets are tight and citizens expect high performance on public sector projects and programs – be they in health, education, transportation or, really, any sector. I was first introduced to PMSs, or results-based management systems, through my work with international financial institutions such as the ADB. Increasingly, I’m seeing the use of electronic PMSs in countries, states and municipalities across the globe.

Some of the many uses of these systems are listed below.

  • They provide a systematic approach to managing and monitoring the performance of projects and programs. Instead of relying on ad-hoc management, information is collected and monitored systematically.
  • Access to information increases.
  • They provide structure in the early, design stage of projects, programs or policies. They typically set out important indicators and targets to enable meaningful monitoring.
  • Robust PMSs can be a source of key data for evaluations – and make evaluations stronger.
  • Evidence-based decision-making is more likely, resulting in greater efficiencies and effectiveness.
  • They make it possible to tie budgeting decisions to performance.

PMSs are likely to become more commonly used. It is an exciting time to learn from one another across regions and to share our experiences with PMSs. We can continue to refine such things as which indicators are most useful to track, how best to collect data, and how to communicate information to the public.

Rad Resource: Learn more about the CLEAR Initiative at http://www.theclearinitiative.org/Clear_about.html

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, my name is Jeanne Hubelbank. I am an independent evaluation consultant. Most of my work is in higher education, where, most recently, I have helped faculty evaluate their classes, develop proposals, and evaluate professional development programs offered to public school teachers. Sometimes I am asked to make presentations or conduct workshops on evaluation. When doing this, I find it helpful to know something about the audience’s background. Clickers, hand raising, holding up colored cards, standing up, and clapping are all ways to approach this. A recent aea365 post, Innovative Reporting Part I: The Data Diva’s Chocolate Box, which showed how to present results on candy wrappers, served as the impetus for another way to introduce evaluation and to assess people’s understanding of it.

Instead of results, write evaluation terms such as use, user, and methods on stickers and place them on the bottoms of Hershey’s Kisses®, one word per kiss. Participants arrange their candy in any format that they think represents how one approaches the process of conducting an evaluation. This gives you a quick view of how participants see evaluation, and most people like to eat the candy afterwards.

Hot tips:

  • Use three-quarter-inch dots
  • Hand write or print terms you want your clients to display
  • Besides Hershey’s Kisses®, provide Starbursts® for those who are allergic or averse to chocolate
  • Use different colored kisses for key terms (e.g., use and user in silver, assessment in red) for a quick view of where people place them in the process
  • Wrap each collection of candy terms into a piece of plastic wrap and tie with a curled ribbon
  • Ask people to arrange candy in any format that they think represents how one approaches the process of doing an evaluation
  • You can do this before and after a presentation; if you do it again, remind people to wait to eat.

Rad Resources:

Susan Kistler’s chocolate results

Stephanie Evergreen’s cookie results and her book Presenting Data Effectively: Communicating Your Findings for Maximum Impact.

Hallie Preskill and Darlene Russ-Eft’s book Building Evaluation Capacity: 72 Activities for Teaching and Training.

Michael Quinn Patton’s book Creative Evaluation.



My name is Kylie Hutchinson. I am an independent evaluation consultant with Community Solutions Planning & Evaluation. In addition to evaluation consulting and capacity building, I tweet at @EvaluationMaven and co-host the monthly evaluation podcast Adventures in Evaluation with my colleague @JamesWCoyle.

When I started out in evaluation 26 years ago, I was focused on being a good methodologist and statistician. After deciding to work primarily with NGOs, I learned the importance of being a good program planner. Employing a participatory approach required me to become a competent facilitator and consensus-builder. These days, the increased emphasis on utilization and data visualization is forcing me to upgrade my skills in communications and graphic design. New developments in mobile data collection are making me improve my technical skills. A recent foray into development evaluation has taught me the important role that a knowledge manager plays in evaluation. Finally, we are starting to understand evaluation capacity development as a process rather than a product, so now I need expertise in organizational development, change management, and the behavioral sciences. Whoa.

Don’t get me wrong, I’m not complaining. Every day I wake up and think how lucky I am to have picked such a diverse career as evaluation. But with all these responsibilities on my plate, my toolbox is starting to get full, and it sometimes keeps me awake at night. How can I manage to be effective at all of these things? Should I worry about being a jack of all trades, master of none?

Hot Tip:  You don’t have to do it all.  Determine your strengths and outsource your weaknesses. Pick several areas of specialization and ask for assistance with the others.  This help may come in the form of other colleagues or departments.  For example, if you think you need help with change management, sub-contract an organizational development consultant to your team.  If you work in an organization with a communications or graphic design department, don’t forget to call on their expertise when you need it.

Hot Tip:  Take baby steps.  If you want to practice more innovative reporting, don’t assume you have to become an expert in communication strategies overnight. Select one or two new skills you want to develop annually and pick away at those.

Hot Tip: If you can, strategically select evaluations that will expose you to a new desired area, e.g., mobile data collection or a new software tool.

Rad Resource:  Even if you’re not Canadian, the Canadian Evaluation Society’s Competencies for Canadian Evaluation Practice provide a great basis from which to reflect on your skills.


No tags
