AEA365 | A Tip-a-Day by and for Evaluators

Category: Social Impact Measurement

Hello, my name is Victoria Carlan and I lead the impact measurement group at the Government of Canada’s Impact and Innovation Unit (IIU). The IIU works with departments and agencies across the federal government to advance the integration of outcomes-based approaches in the design and delivery of Government of Canada policies, programs and services. It fulfills this purpose by promoting policy innovation and experimentation, supporting public sector leadership, providing advice and support in the design and implementation of new outcomes-based funding models, and continuously and rigorously examining and sharing its progress and insights.

This past summer, I worked with two talented researchers – Dana Crawhall-Duk and Alana Couvrette – to examine how we can integrate Gender-Based Analysis Plus (GBA+) into our impact measurement practice to further enhance policy outcomes. The “plus” in GBA+ acknowledges that GBA goes beyond biological (sex) and socio-cultural (gender) differences to include other identity factors, such as race, ethnicity, age, and physical or mental disability, that intersect to make us who we are.

Our work is still underway, but here are a few lessons that we have learned so far:

Lessons Learned:

  • Do your own personal and policy homework first. Consideration of gender and other intersectional factors begins at the early stages of the policy design and impact measurement process. Coming prepared to these early planning meetings to initiate specific conversations about gender and other identity factors is critical. It sometimes requires constructing “aha” moments through evidence of the unintended outcomes that arise when we have been blind to gender and other identity factors or, if necessary, pointing to others (e.g., individuals, organizations or policy contexts) that have successfully adopted these inclusive analytical approaches.
  • Be prepared to be uncomfortable. Questioning our mental models of how people behave (or not), challenging the underlying values that guide our decision making, and recognizing our own biases rarely produce immediate feelings of gratification or accomplishment. This work is hard. By its very nature, meaningful co-creation and social innovation generates a discomfort that arises as you begin to challenge the status quo, question the validity of past research or evaluation findings, and explore the roots of your own experiences and beliefs.
  • Be creative when fighting against the demons of scarce resources – in particular the lack of time. In response, I have developed my own “GBA+ toolkit” (yours will be different) to ensure I am asking the right questions (of myself and of others) in an order that leaves time and space for self- and group reflection and the necessary work of identifying, inviting and integrating the voices of those impacted by policy efforts and our measurement practices.

Rad Resource: GBA+ is a tool developed by Status of Women Canada to guide the analysis of how diverse groups of women, men and gender-diverse people may experience policies, programs and initiatives.

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Carter Garber, Director of the Institute for Development, Evaluation, Assistance and Solutions (IDEAS). For 3.5 decades, I have been an evaluator and have held a variety of other roles in what are now called “social impact investments” and their measurement.

I am fortunate to have a 360-degree perspective: as a founder of four for-profit and non-profit financial institutions, as an evaluation consultant to major investors, as an investor, and as a trainer helping investees and evaluators do Social Impact Measurement (SIM).

Lesson Learned: Evaluators, investors, investees and SIM TIG members attempt to evaluate investments that seek financial, social and environmental returns.

The social impact investing field is challenged to select tools that are acceptable to both investors and investees. There is a plethora of resources, tools and hundreds of indicators; however, many are either purely quantitative or purely qualitative, while we evaluators often prefer mixed methods.

Cool Trick: I have seen how five impact assessment tools, two quantitative and three qualitative, were applied by evaluators and by practitioners trained through evaluation capacity building (ECB). Evaluators work with investees in an effort to discover credible outcomes and impacts to be discussed with investors.

Lessons Learned: In my decade of teaching and using the Assessment Tools for Microfinance Practitioners (AIMS-SEEP tools) on three continents, many have found:

  • ECB helps investees develop their own hypotheses and indicators of impact and adjust the tools accordingly. This prevents investees and investors from using standardized tools that do not capture pertinent impact.
  • By modifying mid-range impact assessment tools, investees come close enough to finding impact (association rather than the attribution that some stakeholders require) and to obtaining the information that their organizations need to improve impact.
  • Tools measuring satisfaction are a helpful complement. If clients are not satisfied, they do not stay long enough to obtain impacts.
  • With the assistance of very few professional evaluators, these mixed method tools have been adjusted by hundreds of staff members of investees in Africa, Asia and Latin America. Some evaluations involved 50 staff working in multiple languages, cultures, and religions over a month.
  • Major US & European investor institutions (e.g. Oikocredit) and international donors (e.g. USAID) have found this methodology more credible than the tempting worldwide “one tool fits all” approach.
  • Investees are emboldened to speak truths to those with power over current and future investments. This allows them to negotiate terms more strongly as well as to make practical changes that increase impact for low-income women and other clients.

Rad Resources:

Hot Tip: I will be sharing more about this on Thursday November 1st at 8:00am!


Hi, we’re Debby Nixon Williams, Noel Verrinder and Kagiso Zwane. We are from Genesis Analytics, a South African economics consultancy based in Johannesburg (HQ), Nairobi and London. Our team provides evaluation, monitoring and learning advisory services to donors, foundations, NGOs, the public sector, and private-sector clients such as impact investors across Sub-Saharan Africa.

The Sustainable Development Goals (SDGs) are becoming increasingly important to funders and donors in the measurement of impact. For this reason, impact investment fund managers are beginning to integrate SDG tracking mechanisms into their monitoring systems. Unfortunately, in our experience and that of others with whom we have discussed the issue, they often do this to meet the demands of donors or investors rather than to derive any benefit themselves. We have found that using a theory of change to connect the actions of impact investors to SDG-related indicators helps fund managers own and articulate the social impact of their work.

Lessons Learned: A diagram linking concise indicators to a theory of change aligned with the SDGs is a useful way of showing impact investors the value of those indicators.

  1. A theory of change helps fund managers recognize that, even when SDG tracking is pushed on them, the process of integrating the SDGs into an impact investment fund’s monitoring helps fund managers and beneficiaries better articulate their purpose and goals for impact.
  2. The wide range of impact measurement tools available to fund managers can often lead to “impact wash”. Impact wash, in this instance, means that impact investors try to demonstrate their contribution to impact through so many indicators that the indicators overwhelm the audience and undermine the fund’s impact message. We find that working with fund managers to focus their impact ambitions around the SDGs builds a grounded, shared understanding of their impact aims, focuses the organisation’s monitoring, and creates space to consider innovative social impact measurements that are pertinent to tracking progress toward, and adapting strategies to achieve, the global 2030 goals.
  3. The UN SDG global indicators are a good starting point for linking impact investment activities to donor priorities and for developing pertinent, SMART indicators.

Lesson Learned: Ultimately, in our experience, using a theory of change has proven to be a valuable analytical hub for impact investors and allows them to consolidate their understanding of impact in a clear format.

Rad Resources:


I am Heather Esper of the William Davidson Institute at the University of Michigan. At AEA 2018 I’ll be joined by my co-panelists, Julie Peachey of Innovations for Poverty Action and Scott Graham of FINCA International, to share how poverty data can provide unique insights into a company’s clients or a program’s beneficiaries.

Hot Tip: Collecting poverty data can uncover truths about who is being reached and what they need, presenting opportunities for a company or program to better meet those needs. It is valuable to understand poverty beyond economic data alone; poverty is multi-dimensional and spans many aspects of well-being, from health to the environment. As investors, evaluators, and managers, we can collect poverty data to amplify the voices and realities of our clients and beneficiaries. Poverty data can inform decisions that create mutual value for the company or program, increasing their impact and effectiveness while also enhancing the impact on their clients’ or beneficiaries’ lives.

Hot Tip: During our session at AEA 2018 (Saturday, November 3, 9:15am-10:00am), we will share a range of poverty measurement tools as well as our own experiences collecting and using data to give power to the voices of those living in poverty, including:

  • The Poverty Probability Index® (PPI®) uses a statistical learning model to leverage national survey data, creating an easy-to-use poverty measurement tool. The country-specific 10-question survey places a low burden on both surveyors and respondents. Answers are converted to individual household poverty likelihoods and group poverty rates (a minimal scoring sketch appears after this list). The PPI uses a consumption-based definition of poverty and provides results for national, international and relative poverty lines, making it both objective and standardized and allowing organizations to compare poverty rates within and across countries and sectors. Custom versions of the tool can be made for various contexts.

Rad Resource:  On the PPI blog there are a number of examples of organizations using the PPI to understand and give voice to their poor clients.

  • FINCA’s Mission Monitor uses predictive statistics to assess an organization’s outreach to the poor, as well as its impact in terms of job creation and client empowerment. Data is collected through a 2-minute telephone survey with incoming clients, using customized survey instruments to understand indicators of the clients’ families’ well-being, including educational status, daily consumption and the quality of their health. The digital, cloud-based infrastructure supporting the Mission Monitor includes rigorous data quality controls to eliminate errors and bias in the data.
  • The Indicator Framework for Inclusive Distribution Businesses includes short- and long-term poverty indicators including the PPI, among other indicators such as empowerment, self-efficacy, quality of life for children, social support, aspirations, and nutrition. Lessons learned in using this framework to identify, collect and analyze standardized poverty indicators across three inclusive distribution businesses are available and will be shared.
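For readers less familiar with scorecard-style poverty tools, here is a minimal sketch of the general mechanic described in the PPI bullet above: each answer earns points, the total score is mapped to a poverty likelihood through a lookup table, and a group’s poverty rate is the average of the individual household likelihoods. Every point value and lookup figure below is a hypothetical placeholder, not an actual PPI scorecard value.

```python
# Minimal sketch of scorecard-style poverty scoring (PPI-like).
# All point values and the score-to-likelihood table are HYPOTHETICAL;
# real PPI scorecards and lookup tables are country-specific.

# Points earned by one household's answers to the 10 questions
answer_points = [6, 0, 4, 11, 3, 0, 8, 5, 2, 7]   # hypothetical
score = sum(answer_points)                        # total score, 0-100

# Hypothetical lookup: score band -> likelihood of living below a chosen poverty line
lookup = {
    range(0, 20): 0.82,
    range(20, 40): 0.61,
    range(40, 60): 0.37,
    range(60, 80): 0.15,
    range(80, 101): 0.04,
}
poverty_likelihood = next(p for band, p in lookup.items() if score in band)

# A group's estimated poverty rate is the average of individual likelihoods
household_likelihoods = [0.61, 0.37, 0.82, 0.15, 0.37]   # hypothetical sample
group_poverty_rate = sum(household_likelihoods) / len(household_likelihoods)

print(f"score={score}, household likelihood={poverty_likelihood:.2f}, "
      f"group poverty rate={group_poverty_rate:.2f}")
```

The averaging step is what allows a short individual survey to roll up into a program- or portfolio-level poverty rate.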

Rad Resource: Reach out to hmoehle@umich.edu for surveys we’ve developed that capture these well-being indicators. We hope to see you at AEA 2018!


Hi everyone! We are Sara McGarraugh and Leah Goldstein Moses from The Improve Group, and Ben Fowler from MarketShare Associates. As evaluators who work with mission-driven organizations, we’re excited to talk about how evaluation and impact measurement and management intersect.

Lesson Learned: Both require gathering data within real-time constraints.

The Improve Group is an evaluation partner for a USAID-Senegal-funded effort by Compatible Technology International (CTI) to address market gaps along the post-harvest section of the millet supply chain. We work with local data collectors and an evaluation specialist to ensure cultural competence and on-the-spot data monitoring. This means working across time zones to keep up strong communication about what data we are receiving and what we need.

MarketShare Associates worked with a major foundation in 2018 to understand the impact of an accelerator program and direct investments in social enterprises. We found that it was not always easy to capture information from stakeholders, particularly from the private sector. Social enterprises are busy and fitting time in for a discussion with evaluators can be difficult. Without access, however, it can be nearly impossible to properly conduct an evaluation.

Lesson Learned: We focus on use, including refining programs and demonstrating impact for stakeholders.

Evaluation can play a key role in supporting the refinement of business models to support social impact. One way this happens is through speaking to the full range of stakeholders affected by social enterprises. At MSA, we know this can generate helpful information on how a business model is and is not working. Despite the current focus on customer centricity, businesses do not always have full insight into the needs, preferences, and desires of their customers. For many social enterprises, better understanding their social impact is critical not only to their own operations but also to their stakeholders and investors. Having credible data on their impact can be instrumental in attracting new investment.

The Improve Group’s work with CTI offers an example of refining a program after learning more about beneficiaries/customers. By collecting data on who was buying CTI’s technology and what these customers were doing with the product, CTI learned that its assumptions about whom it primarily serves differed from whom it had hoped to serve. It is now reconsidering what its key social impacts are and what it can hope to achieve.

Lesson Learned: We’re often trying to assess a change.

The Improve Group’s work with CTI is about asking questions to understand the ways in which access to a market technology changes processed grain yield (direct outcome) and income (indirect outcome)—and for whom those benefits and changes accrue. Existing literature already shows the benefits to society when women’s incomes increase; by targeting technologies in a space traditionally held by women—post-harvest processing of millet—CTI is exploring the extent to which this group experiences change.

We’ll talk about these topics and more at our session at AEA 2018—hope to see you in Cleveland!


Hi! We are Jane Reisman, Chair, and Alyna Wyatt, Program Chair, of the Social Impact Measurement TIG.

As impact investing and other market solutions for addressing global goals have been evolving, so have approaches to social impact measurement. Evaluators have been increasingly engaged in efforts to create bridges between the evaluation and impact investing communities and these efforts are paying off.

Lesson Learned: One bridging effort of particular note is the Evaluator Advisory Group, which has been providing input into the development of IRIS+, the update to the IRIS system for the Global Impact Investing Network (GIIN). As would come as no surprise from a group of evaluators, the Evaluator Advisory Group advocated for asking more “why” questions, elevating the voice and values of beneficiaries and clients, and using data to inform decision-making for transformational impact.

Evaluators have also been weighing in on the Impact Management Project – a global effort that has been developing shared norms across sectors pertaining to measuring and managing impact.  Through this effort, five main questions have emerged:  what, how much, who, contribution and risk. These types of questions are highly relevant to evaluative thinking and methods.

While conversations among evaluators over the past few years have revolved around ‘What is this Social Impact Measurement thing’, or ‘What does this mean for me in my work?’, evaluators have been joining many tables that are working on developing concrete and practical actions that cut across sectors and a range of actors. Many of these efforts will be highlighted in the SIM TIG track at Evaluation 2018. 

Cool Trick: Our SIM TIG Business Meeting (Thursday 1 November at 7:30pm) will continue to showcase the bridging of evaluation with the evolving conventions of impact measurement and management in the private sector by hosting a panel of representatives from cross-sectoral organizations who are working on policies to guide social impact measurement. Our participants, Victoria Carlan from the Impact and Innovation Unit of the Government of Canada, Joanna Cohen from the Office of Evaluation at the MacArthur Foundation, and Katsuji Imata from SIMI (Social Impact Measurement Initiative) and CSO Network Japan, will each discuss how they have considered both standards in evaluation and the conventions in impact investing in formulating policies and guidelines.

Rad Resources:

  • Guidance is continuously created and curated through a variety of different efforts. We have been actively updating the SIM webpage with links to these resources which you can access here.
  • As a nascent practice, lessons learned are being routinely documented and shared. These will be highlighted in this week’s aea365 blogs!

We can’t wait for the learning and sharing that will happen at Eval 2018 and we also look forward to welcoming new members to the TIG at Eval 2018.


Hi, we are Ben Fowler, Co-Founder and Principal, and Richard Horne, Managing Consultant, at MarketShare Associates (MSA). We are a global firm that specializes in identifying, implementing and measuring business solutions for positive social impact. Our work focuses on supporting impact investors, foundations and bilateral donors to assess the job creation impact of their investments and adjust their strategies to maximize it.

Lessons Learned: We have learned a number of things about measuring jobs over the years:

  • Rethink your definition of a job. Some of the main resources with indicators for job measurement define jobs in a way that emphasizes full-time employment. However, we’ve found that this omits quite a lot of the jobs being created, particularly for those in rural areas and more marginalized communities. Another important consideration is whether to define jobs as people or as work. Although we often think of jobs as held by people, there are some cases where calculating impact in terms of full time equivalent positions is easier to aggregate across a portfolio.
  • Don’t shy away from more complex job creation measurements. Many investors tend to track and report only on the jobs directly created by their investees (i.e. direct jobs). However, investments typically create a much wider range of job impacts, including new hiring by suppliers and others in the value chain (indirect jobs), as well as the broader increase in jobs generated by increased spending by the company and its new staff (induced jobs). While there is some hesitation because these types of jobs are more difficult to measure or to credibly claim as a result of a particular programme, we have identified and/or developed several straightforward approaches for doing so, as outlined in the rad resource (“Measuring Job Creation in Private Sector Development”) and sketched after this list.
  • There are options available for late evaluations. The most rigorous assessments of job creation typically require a baseline and an endline assessment. However, we are all too often called onto the scene late in the day, after the window for collecting baseline information has closed. Because the measurement of job creation often draws on secondary data, it can still provide ex-post (as well as ex-ante) assessments of job creation impacts if the right data are available.
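As flagged above, here is a minimal arithmetic sketch of how direct, indirect and induced jobs can be combined using employment multipliers. The multiplier values are hypothetical placeholders, not figures from the rad resource; in a real assessment they would be drawn from secondary data such as input-output tables or sector studies.

```python
# Minimal sketch of a multiplier-based job creation estimate.
# Multiplier values are HYPOTHETICAL placeholders for illustration only.

direct_jobs = 120.0          # full-time-equivalent jobs created by investees
indirect_multiplier = 0.4    # hypothetical: supply-chain jobs per direct job
induced_multiplier = 0.25    # hypothetical: jobs from re-spending per (direct + indirect) job

indirect_jobs = direct_jobs * indirect_multiplier
induced_jobs = (direct_jobs + indirect_jobs) * induced_multiplier
total_jobs = direct_jobs + indirect_jobs + induced_jobs

print(f"direct={direct_jobs:.0f}, indirect={indirect_jobs:.0f}, "
      f"induced={induced_jobs:.0f}, total={total_jobs:.0f}")
```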

Rad Resources: Measuring Job Creation in Private Sector Development – This paper provides practical guidance on job measurement from a project perspective. It outlines a range of methodologies, with practical case studies for each.

A practical example in which several of the tools were applied to capture the results of a youth employment program in Nairobi, Kenya is also useful.

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, we’re Rebecca Baylor, Heather Esper and Yaquta Fatehi from the William Davidson Institute at the University of Michigan (WDI). Our team specializes in performance measurement. We use data to improve organizations’ effectiveness, scalability, and sustainability and create more value for their stakeholders in low- and middle-income countries.

We believe combining social and business metrics helps organizations, including impact investors, gain unique insights to solve key business challenges while communicating impact evidence. Through our work, we have encountered a number of social + business metric ‘power couples’ that we’ve seen influence internal decision-making at organizations to improve results.

Hot Tip: Create a few social + business metric power couples. Analyze business and social metrics in parallel. Use data to see how your organization should holistically adapt to better meet its goals and increase intended impacts.

Lessons Learned: Finding the right metrics can be a lot like parenting. Selecting metrics takes time and patience. There isn’t one perfect formula, and it will likely evolve over time. However, organizations, including businesses, use some of the following strategies to make their job easier:

  1. Get Creative – Turns out, what works for one kid (or project) may not work for the next. Innovate. Brainstorm what matters most. To select the right combination of metrics, ask: What insights do we need? What are our key challenges? Use ‘pause and reflect’ sessions to examine data across different teams. RAD RESOURCE: FSG’s Intentional Group Learning Guide
  2. Pretest – Would you buy your child new sneakers without trying them on first? Didn’t think so! Don’t expect spectacular data insights to come from metrics you haven’t tested first. RAD RESOURCE: Conducting a Successful Pretest
  3. Be Strategic – There is a reason why Child A doesn’t raise Child B. If you want business and social metrics that generate lessons and guide decision-making, you have to place parents (senior-level leadership) in the driver’s seat. The Boston Consulting Group’s article, Total Societal Impact: A New Lens for Strategy, offers an enlightening perspective on pursuing social impacts and business benefits simultaneously.
  4. Be Rigorous, Yet Right-sized – At bath time, one scrub behind the ear may not be enough to get the dirt off. Conversely, spending 3 hours soaked in suds is effective but unrealistic. MIT D-Lab’s Lean Research Framework is a great place to look for guiding principles for effective, right-sized research.

What are we missing? For starters, environmental impacts, but that’s another blog for another time. What social + business metric combinations have or haven’t worked for you? We would love to hear about your experiences! Send us your thoughts. The more we share and learn from one another, the more impact we can generate.


Good morning! I’m Brian Beachkofski, lead for data and evaluation at Third Sector Capital Partners, Inc. We are a 501(c)(3) consulting firm that advises governments, community organizations, and funders on how to better spend public funds to move the needle on pressing challenges such as economic mobility and the well-being of our children. Our proven approach is to collaborate with our clients and stakeholders to define impact, draw actionable insights from data, and drive outcomes-oriented government. Since 2011, we have helped over 40 communities implement increasingly effective government. We use Pay for Success (PFS) agreements — sometimes called Social Impact Bonds (SIBs) — and other outcomes-oriented contracts to help governments and service providers improve outcomes for vulnerable members of their communities. Payment for social services in these projects is directly tied to impact as measured by a third-party evaluator.

Lessons Learned: In PFS, evaluation has the potential to contribute beyond measuring impact to determine payment. Data, analysis and evaluation all have an important role starting before a project launches and continuing after it concludes.

Evaluation work occurring before the project feeds into the Retrospective Analysis and Baselining effort by providing the evidence base for an intervention. That prior information can indicate who is most in need, who is not benefiting from current practices, which interventions hold more promise, and how much of an improvement can be expected from the intervention.

In the original PFS concept, a randomized controlled trial determined payment and built the evidence to inform scaling of the particular intervention. In our work, we have learned that evaluation best serves two purposes: measuring impact for “success payments” and quantifying impact to inform policy changes. In Santa Clara County’s Project Welcome Home, we evaluate for payment and for policy separately.
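As a purely illustrative sketch of the mechanic described above, a success payment might be computed from the evaluator’s measured outcomes roughly as follows. The formula, outcome definition, price per outcome, baseline and cap are all hypothetical assumptions for illustration, not Third Sector’s or Project Welcome Home’s actual contract terms.

```python
# Hypothetical sketch of an outcomes-based "success payment" calculation.
# Every figure and the formula itself are illustrative assumptions,
# not the terms of any actual Pay for Success contract.

price_per_outcome = 5_000.00    # hypothetical payment per outcome achieved
baseline_outcomes = 40          # hypothetical counterfactual estimated by the evaluator
observed_outcomes = 65          # hypothetical outcomes measured by the third-party evaluator
payment_cap = 200_000.00        # hypothetical maximum success payment

attributable_outcomes = max(observed_outcomes - baseline_outcomes, 0)
success_payment = min(attributable_outcomes * price_per_outcome, payment_cap)

print(f"attributable outcomes: {attributable_outcomes}, "
      f"success payment: ${success_payment:,.2f}")
```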

Even successful PFS projects eventually end. Evaluation, however, provides a path to ensure that the community continues to make progress by embedding feedback into the way government reviews their services. Projects, such as Project Welcome Home, show how government can create a continual feedback loop to see the impact providers have on the people they serve. Once low-cost impact management is embedded as part of normal performance measurement, government can hold service providers accountable for quantifiable effectiveness while encouraging greater innovation.

Rad Resource: Stay in touch with the Pay for Success community and the role of evaluation in projects on our blog. You can also find more resources on Pay for Success here.


Hi! We are Courtney Bolinson and Muthoni Wachira, Impact Evaluation Manager and Investment Director at Engineers Without Borders Canada (EWB). EWB invests in seed-stage social enterprises to help them scale and drive lasting, transformational impact within communities that need it most. EWB recognised the need for an evaluator to measure and interpret the impact of investments; however, integrating an evaluator into a team unfamiliar with evaluation was challenging, and we learned some useful lessons about bringing our two areas of work together.

Lesson Learned: Get on the same page. During one of the first retreats with the investment team, Courtney gave an introductory presentation on program evaluation, provided definitions, shared standards and guiding principles, and summarized the state of evaluation in the impact investing sector. This was a turning point for our team’s understanding of the evaluator’s role.

Lesson Learned: Work together to identify where evaluation can be used. At first, Courtney was unfamiliar with impact investing, and focused on summative evaluation. Over time and through collaboration, we were able to identify a number of formative and implementation evaluation needs at different stages of an impact investment, such as due diligence.

Hot Tips:

  1. If you’re new to impact investing, have an expert from your team or organization walk you through the steps of an investment process. This will help you identify areas where evaluation can be useful.
  2. Use the full spectrum of an evaluator’s skillset. An evaluator is well-placed to help clarify an investment theory of change, develop an impact thesis, and identify key performance indicators for the investment fund.
  3. Position your evaluator as a key member of the team. For example, ensure they are on weekly calls, strategy meetings, team retreats, etc. This will help them understand the context of their evaluation work better and will allow for the team to identify new areas for evaluation.
  4. Read each other’s literature. For an evaluator, reading impact investing blogs, state of the sector reports, and investor-specific impact reports is critical for providing evaluation context. For an investment director, reading evaluation literature can clarify what is possible regarding evaluation, and what is considered cutting edge.

Rad Resources:

