AEA365 | A Tip-a-Day by and for Evaluators


Hi, we are Ben Fowler, Co-Founder and Principal, and Richard Horne, Managing Consultant, at MarketShare Associates (MSA). We are a global firm that specializes in identifying, implementing, and measuring business solutions for positive social impact. Our work focuses on helping impact investors, foundations, and bilateral donors assess the job creation impact of their investments and adjust their strategies to maximize that impact.

Lessons Learned: We have learned a number of things about measuring jobs over the years:

  • Rethink your definition of a job. Some of the main resources with indicators for job measurement define jobs in a way that emphasizes full-time employment. However, we’ve found that this omits quite a lot of the jobs being created, particularly for those in rural areas and more marginalized communities. Another important consideration is whether to define jobs as people or as work. Although we often think of jobs as held by people, in some cases calculating impact in terms of full-time-equivalent (FTE) positions makes it easier to aggregate across a portfolio.
  • Don’t shy away from more complex job creation measurements. Many investors tend to track and report only on the jobs directly created by their investees (i.e. direct jobs). However, investments typically create a much wider range of job impacts, including new hiring by suppliers and others in the value chain (indirect jobs), as well as the broader increase in jobs generated by increased spending by the company and its new staff (induced jobs). While there is some hesitation because these types of jobs are more difficult to measure or to credibly attribute to a particular programme, we have identified and/or developed several straightforward approaches for doing so, as outlined in the rad resource (“Measuring Job Creation in Private Sector Development”).
  • There are options available for late evaluations. The most rigorous assessments of job creation typically require a baseline and an endline assessment. However, we are all too often called onto the scene late in the day, before all baseline information has been collected. Fortunately, the measurement of job creation often draws on secondary data and hence can support ex-post (as well as ex-ante) assessments of job creation impacts if the right data are available.
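To make the FTE and direct/indirect/induced ideas above concrete, here is a minimal sketch in Python. The investee figures and the multiplier values are entirely hypothetical; in practice, indirect and induced multipliers would come from input-output models or sector studies, not from made-up constants.

```python
# Illustrative sketch: aggregating job creation as full-time equivalents (FTEs)
# and extending direct jobs with indirect and induced estimates via simple
# employment multipliers. All figures below are hypothetical.

FULL_TIME_HOURS_PER_WEEK = 40

def to_fte(weekly_hours_per_worker, num_workers):
    """Convert part-time or seasonal positions into FTE terms."""
    return (weekly_hours_per_worker / FULL_TIME_HOURS_PER_WEEK) * num_workers

# A small portfolio mixing full- and part-time positions.
portfolio = [
    {"investee": "AgriCo", "weekly_hours": 20, "workers": 120},  # part-time rural jobs
    {"investee": "SolarCo", "weekly_hours": 40, "workers": 35},
]

direct_ftes = sum(to_fte(p["weekly_hours"], p["workers"]) for p in portfolio)

# Hypothetical multipliers: each direct job supports 0.5 indirect jobs in the
# value chain and 0.3 induced jobs via increased local spending.
INDIRECT_MULTIPLIER = 0.5
INDUCED_MULTIPLIER = 0.3

indirect_ftes = direct_ftes * INDIRECT_MULTIPLIER
induced_ftes = direct_ftes * INDUCED_MULTIPLIER
total_ftes = direct_ftes + indirect_ftes + induced_ftes

print(f"Direct: {direct_ftes:.1f} FTEs; total incl. indirect/induced: {total_ftes:.1f}")
```

Note how counting people instead of FTEs would report 155 jobs for this portfolio, while the FTE view reports 95 direct positions; the definition chosen materially changes the headline number.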

Rad Resources: Measuring Job Creation in Private Sector Development – This paper provides practical guidance on job measurement from a project perspective. It outlines a range of methodologies, with practical case studies for each.

A practical example in which several of these tools were applied to capture the results of a youth employment program in Nairobi, Kenya is also useful.

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, we’re Rebecca Baylor, Heather Esper and Yaquta Fatehi from the William Davidson Institute at the University of Michigan (WDI). Our team specializes in performance measurement. We use data to improve organizations’ effectiveness, scalability, and sustainability and create more value for their stakeholders in low- and middle-income countries.

We believe combining social and business metrics helps organizations, including impact investors, gain unique insights to solve key business challenges while communicating impact evidence. Through our work, we have encountered a number of social + business metric ‘power couples’ that we’ve seen influence internal decision-making at organizations to improve results.

Hot Tip: Create a few social + business metric power couples. Analyze business and social metrics in parallel. Use data to see how your organization should holistically adapt to better meet its goals and increase intended impacts.

Lessons Learned: Finding the right metrics can be a lot like parenting. Selecting metrics takes time and patience. There isn’t one perfect formula, and it will likely evolve over time. However, organizations, including businesses, use some of the following strategies to make their job easier:

  1. Get Creative – Turns out, what works for one kid (or project) may not work for the next. Innovate. Brainstorm what matters most. To select the right combination of metrics, ask: what insights do we need? What are our key challenges? Use ‘pause and reflect’ sessions to examine data across different teams. RAD RESOURCE: FSG’s Intentional Group Learning Guide
  2. Pretest – Would you buy your child new sneakers without trying them on first? Didn’t think so! Don’t expect spectacular data insights to come from metrics you haven’t tested first. RAD RESOURCE: Conducting a Successful Pretest
  3. Be Strategic – There is a reason why Child A doesn’t raise Child B. If you want business and social metrics that generate lessons and guide decision-making, you have to place the parents (senior-level leadership) in the driver’s seat. The Boston Consulting Group’s article, Total Societal Impact: A New Lens for Strategy, offers an enlightening perspective on pursuing social impacts and business benefits simultaneously.
  4. Be Rigorous, Yet Right-sized – At bath time, one scrub behind the ear may not be enough to get the dirt out. Conversely, spending 3 hours soaked in suds is effective but unrealistic. MIT D-Lab’s Lean Research Framework is a great place to look for guiding principles for effective, right-sized research.

What are we missing? For starters, environmental impacts, but that’s another blog for another time. What social + business metric combinations have or haven’t worked for you? We would love to hear about your experiences! Send us your thoughts. The more we share and learn from one another, the more impact we can generate.



Good morning! I’m Brian Beachkofski, lead for data and evaluation at Third Sector Capital Partners, Inc. We are a 501(c)(3) consulting firm that advises governments, community organizations, and funders on how to better spend public funds to move the needle on pressing challenges such as economic mobility and the well-being of our children. Our proven approach is to collaborate with our clients and stakeholders to define impact, draw actionable insights from data, and drive outcomes-oriented government. Since 2011, we have helped over 40 communities implement increasingly effective government. We use Pay for Success (PFS) agreements — sometimes called Social Impact Bonds (SIBs) — and other outcomes-oriented contracts to help governments and service providers improve outcomes for vulnerable members of their communities. Payment for social services in these projects is tied directly to impact as measured by a third-party evaluator.

Lessons Learned: In PFS, evaluation has the potential to contribute beyond measuring impact to determine payment. Data, analysis and evaluation all have an important role starting before a project launches and continuing after it concludes.

Evaluation work occurring before the project feeds into the retrospective analysis and baselining effort by providing the evidence base for an intervention. That prior information can indicate who is most in need, who is not benefiting from current practices, which interventions hold more promise, and how much improvement can be expected from the intervention.

In the original PFS concept, a randomized controlled trial both determined payment and built the evidence to inform scaling of the intervention. In our work, we have learned that evaluation best serves two purposes: measuring impact for “success payments” and quantifying impact to inform policy changes. In Santa Clara County’s Project Welcome Home, we evaluate for payment and policy separately.

Even successful PFS projects eventually end. Evaluation, however, provides a path to ensure that the community continues to make progress by embedding feedback into the way government reviews their services. Projects, such as Project Welcome Home, show how government can create a continual feedback loop to see the impact providers have on the people they serve. Once low-cost impact management is embedded as part of normal performance measurement, government can hold service providers accountable for quantifiable effectiveness while encouraging greater innovation.

Rad Resource: Stay in touch with the Pay for Success community and the role of evaluation in projects on our blog. You can also find more resources on Pay for Success here.



Hi! We are Courtney Bolinson and Muthoni Wachira, Impact Evaluation Manager and Investment Director at Engineers Without Borders Canada (EWB). EWB invests in seed-stage social enterprises to help them scale and drive lasting, transformational impact within communities that need it most. EWB recognised the need for an evaluator to measure and interpret the impact of investments; however, integrating an evaluator into a team unfamiliar with evaluation was challenging, and we learned some useful lessons about integrating our two areas of work.

Lesson Learned: Get on the same page. During one of the first retreats with the investment team, Courtney gave an introductory presentation on program evaluation, provided definitions, shared standards and guiding principles, and summarized the state of evaluation in the impact investing sector. This was a turning point for our team’s understanding of the evaluator’s role.

Lesson Learned: Work together to identify where evaluation can be used. At first, Courtney was unfamiliar with impact investing, and focused on summative evaluation. Over time and through collaboration, we were able to identify a number of formative and implementation evaluation needs at different stages of an impact investment, such as due diligence.

Hot Tips:

  1. If you’re new to impact investing, have an expert from your team or organization walk you through the steps of an investment process. This will help you identify areas where evaluation can be useful.
  2. Use the full spectrum of an evaluator’s skillset. An evaluator is well-placed to help clarify an investment theory of change, develop an impact thesis, and identify key performance indicators for the investment fund.
  3. Position your evaluator as a key member of the team. For example, ensure they are on weekly calls, strategy meetings, team retreats, etc. This will help them understand the context of their evaluation work better and will allow for the team to identify new areas for evaluation.
  4. Read each other’s literature. For an evaluator, reading impact investing blogs, state of the sector reports, and investor-specific impact reports is critical for providing evaluation context. For an investment director, reading evaluation literature can clarify what is possible regarding evaluation, and what is considered cutting edge.



Hi, my name is Michael Harnar and I am an assistant professor in the Western Michigan University Interdisciplinary PhD in Evaluation program, and a founding board member of Social Value US. I want to introduce you to a cool resource and invite you to participate in its development.

How many times have you had a conversation with someone on a project and eventually realized that your definitions of key words or concepts were different from your colleague’s, and it mattered…a lot!?! Quite a few discussions at the Impact Convergence (Imp-Con) conference in Atlanta in 2016 hit on this issue of language, where, for instance, “your version of assessment and my version of assessment are different because we come from different disciplines”. These “aha” moments launched a discussion around the need for a resource that would crosswalk definitions used by different disciplines in the impact management and measurement space and provide insight into the etymology and common uses of such terms. Helped by financial and technical support from the Impact Management Project, and under the superb leadership of David Pritchard, members of Social Value US developed the Impact Management Glossary (IMG), which launched in June 2017.

Rad Resource: The Impact Management Glossary currently covers approximately 325 terms in common usage related to impact measurement in disciplines working in this space. Input into the first iteration came from accounting, evaluation, finance and impact investing, business and corporate social responsibility, economics, philanthropy, sustainable development, and social enterprise. Detailed information on the genesis of the IMG can be found here.

The glossary is not intended to be a comprehensive repository of terms used in any one discipline, or to replace discipline-specific glossaries (e.g., Kylie Hutchinson’s evaluation glossary, the GIIN’s IRIS glossary). The IMG is the product of a collaborative attempt to clarify the similarities and differences in language usage among a number of disciplines that often interact yet may not fully understand each other because of these differences.

Take a look at the glossary and send us your feedback or volunteer to join the editorial board to ensure its effectiveness and usefulness. Either way, we hope you find it a useful tool and we look forward to hearing from you.



Hi there, we’re Jane Reisman and Alyna Wyatt, Chair and Program Chair of the Social Impact Measurement TIG.

The fields of impact investing and other market solutions are rapidly advancing vehicles for addressing social and environmental goals. The Social Impact Measurement TIG is in its early stages of growth and exploration as we, as evaluation professionals, seek to understand and support this new and emerging field of development-oriented interventions with useful measurement practices, given our expertise!

Developing more rigorous measurement is particularly important in light of the defining characteristic of impact investing, innovative finance, and market-based solutions: demonstrating measurable social or environmental returns in addition to financial returns.

Hot Tip: Evaluators have much to offer during this critical developmental time in the maturation of measurement strategies for impact investing and other market solutions. Reisman and Olazabal (2016) surveyed the landscape of approaches for social impact measurement and posited that the majority of measurement efforts in impact investing are best characterized as measurement standards or performance monitoring. While these approaches are commonplace in finance-related disciplines, they are not sufficient for developing an evidence base for market solutions or for using data to manage for impact. In that landscape report, we advocated for continued growth in measurement practices that incorporate rigorous outcome or impact measurement or market systems analysis.

This week’s blogs will give us an overview of different types of innovative finance and market-based solutions, and how evaluators have sought to improve the measurement of their social, economic, and environmental impacts.

Rad Resources: There are a number of resources out there helping to put the ‘impact’ in impact investing and to measure impact in market system innovations.

  • The Impact Management Project (2018), facilitated by Bridges Ventures, engaged over 700 stakeholders, including evaluators, to more robustly establish dimensions of impact for measuring and managing. Many of the principles that evaluators value are explored, including clarity about outcomes, beneficiaries, contribution, and unanticipated consequences.
  • The Impact Toolkit is an open-source resource hub developed by the GIIN. Launched just one week ago, it aims to address fragmentation in how fit-for-purpose impact measurement and management (IMM) tools and resources are understood and accessed.
  • The Navigating Impact Project initiated by the GIIN (2017) examines academic and field research to develop logic models and their related metrics for specific social and environmental themes.
  • The Beam Exchange is a platform for knowledge exchange and learning about the role of markets in poverty reduction in developing countries. Materials explain the market systems approach and provide practical guidance for practitioners on monitoring and measuring market system changes. Loads of examples help to understand the practical realities of these approaches and tools.



I’m Andrea Nelson Trice, President of Trice & Associates, an evaluation and consulting firm. This case came from my research for a book project on the human dimensions of social enterprise success.

Tony, a successful entrepreneur, visited several emerging markets and determined that dependency on fire for light is unacceptable in the 21st century. The health and safety dangers alone make an alternative essential. He knew how to design low-cost solar lights, so he quit his job and began building a social enterprise to address this problem.

He received a grant to give away thousands of his lights with the goal of priming the pump in multiple markets. Now, two years later, we’re brought in to evaluate the enterprise’s impact. The problem is, the company is far from breaking even. “I don’t know how many more things I can try,” Tony says. “People just aren’t buying our lights.”

As evaluators, do we simply pull out a standard template to evaluate the work, or do we risk asking deeper, more difficult questions around assumptions that are driving the enterprise? In interviews with Tony and emerging market social entrepreneurs, I’ve heard very different perspectives about “the problem.”

Rad Resources:

  • Perhaps one of the most important contributions we can make as program evaluators is helping to identify faulty assumptions that guide the work. Here is my website, which includes more on this.
  • Increase your understanding of cultural differences. One of my favorite resources is from Professor Geert Hofstede, whose research team highlights national cultural differences.

Hot Tips:

  • As a do-it-yourself culture, we often assume we can make sense of cultural differences on our own. That’s rarely the case. Expats who have lived in a culture for years can be great resources.
  • Consider the Amish. It may seem futile to market solar lights to people who have no problem with their current light sources. But how often do we unintentionally overlay our values onto another culture as we work to solve a “pressing need?”



This is Heather Esper, senior program manager, and Yaquta Fatehi, senior research associate, from the Performance Measurement Initiative at the William Davidson Institute at the University of Michigan. Our team specializes in performance measurement to improve organizations’ effectiveness, scalability, and sustainability and to create more value for their stakeholders in emerging economies.

Our contribution to social impact measurement (SIM) focuses on assessing poverty outcomes in a multi-dimensional manner. But what do we mean by multi-dimensional? For us, this refers to three things. First, it means speaking to all local stakeholders when assessing change created by a program or market-based approach in the community. This includes not only stakeholders that interact directly with the organization, such as customers or distributors from low-income households, but also those that do not engage with the venture, like farmers who do not sell their product to the venture, or non-customers. Second, it requires moving beyond measuring only economic outcome indicators to studying changes in the capability and relationship well-being of local stakeholders. Capability refers to constructs such as an individual’s health, agency, self-efficacy, and self-esteem. Relationship well-being refers to changes in the individual’s role in the family and community, and in the quality of the local physical environment. Third, assessing multi-dimensional outcomes means capturing negative as well as positive changes for stakeholders and for the local physical and cultural environment.

We believe assessing multi-dimensional outcomes better informs internal decision-making. For example, we conducted an impact assessment with a last-mile distribution venture and focused on understanding the relationship between business and social outcomes. We found a relationship between self-efficacy and sales, and between self-efficacy and turnover, meaning that if the venture followed our recommendation to improve sellers’ self-efficacy through trainings, it would also likely see an increase in sales and retention.

Rad Resources:

  1. Webinar with the Grameen Foundation on the value of capturing multi-dimensional poverty outcomes
  2. Webinar with SolarAid on qualitative methods to capture multi-dimensional poverty outcomes
  3. Webinar with Danone Ecosystem Fund on quantitative methods to capture multi-dimensional poverty outcomes

Hot Tips: Key survey development best practices:

  1. Start with existing questions developed and tested by other researchers when possible and modify as necessary with a pretest.
  2. Pretest using cognitive interviewing methodology to ensure a context-specific survey and informed consent. We tend to use a sample size of at least 12.
  3. For all relevant questions, test reliability and variability using the data gathered from the pilot. We tend to use a sample size of at least 25 to conduct analyses such as Cronbach’s alpha for multi-item scale questions.
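The reliability check in tip 3 can be sketched with only the standard library. The pilot responses below are hypothetical and far smaller than the n ≥ 25 recommended above; they are just to show the mechanics of Cronbach’s alpha.

```python
# Minimal sketch of a pilot-data reliability check: Cronbach's alpha for a
# multi-item scale. alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
from statistics import variance  # sample variance

def cronbach_alpha(items):
    """items: one list of responses per scale item (same respondent order)."""
    k = len(items)
    respondents = list(zip(*items))              # rows = respondents
    total_scores = [sum(row) for row in respondents]
    sum_item_variances = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_variances / variance(total_scores))

# Hypothetical pilot: three-item scale, five respondents (1-5 Likert).
item_responses = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
alpha = cronbach_alpha(item_responses)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Values of alpha around 0.7 or higher are commonly treated as acceptable internal consistency, though the threshold depends on the stakes of the decision the scale informs.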


 


Hello! We are Brian Beachkofski and Jeannie Friedman, Pay for Success (PFS) advisors at Third Sector Capital Partners. We spend most of our time assessing feasibility and designing social sector programs with rigorous evaluations and evidence-based interventions embedded into their contracting structure.

PFS is an innovative contracting model (shown in the figure below) that drives government resources toward high-performing social programs. The PFS model is designed to merge performance measurement using administrative data and rigorous evaluation of long-term outcomes into the contracting structure. This helps ensure that funding is directed toward programs that succeed in measurably improving the lives of people most in need.

 

Hot Tips:

  • Balance Factors in Evaluation Design: A randomized controlled trial (RCT) was once considered necessary for PFS evaluation, but it is now generally recognized that there is no one-size-fits-all answer. Factors such as operational complexity, sample size, observation windows, budget constraints, and service providers’ capacity limitations should be balanced against each other.
  • Focus on Outcomes: Aligning incentives around outcomes is a good first step. Providing interim insight into how the project is progressing against those metrics allows the team to make improvements and act on those incentives. This feedback loop is fueled by interim outcome metrics and real-time program delivery modifications. Consistently keeping outcomes in mind maximizes final results for those in need. Salt Lake County’s Homes Not Jail illustrates how different evaluation techniques apply to PFS.
  • Separate Payment from Policy: An evaluation intended to inform a payment decision is different from one evaluating a policy decision. A PFS project needs to clarify whether the evaluation is to inform payment, where quantifiable impact matters, or future policy, where causation is paramount. Santa Clara County’s Project Welcome Home is a good example of making that distinction.
  • Engage Stakeholders Early: Most PFS projects serve populations with complex, multi-faceted needs that cross multiple government agencies and community partners, which makes defining measurable and meaningful outcomes challenging. Collaboratively refining these goals into defined metrics can gain stakeholder buy-in from all partners.
  • Use a Pilot Period: Operationalizing data sharing, referral pathways, and randomization protocols requires new skills for many projects. A PFS project is often a government’s first time releasing administrative data to outside organizations. Protection requirements and prior practices can make data sharing feel uncomfortable. A pilot period builds trust and experience in a collaborative shared-data project, easing the full project’s operations.

Rad Resources: Learn more about PFS and projects:

  • Introduction to Pay for Success to learn more about how the model works
  • Third Sector’s blog, with the latest news and thoughts on PFS
  • PFS Resource page, with links to more resources



Hello, we are Mishkah Jakoet and Amreen Choda from Genesis Analytics.

Social Impact Measurement (SIM) is important for the legitimacy, advancement, and management of impact investing. SIM can also help align incentives among stakeholders and improve communication. While innovative finance has matured over the past decade, similar advancement in SIM is complicated by diverse approaches, methods, and tools responding to various stakeholders. Unfortunately, much of SIM focuses on outputs, uses limited evaluative thinking, and doesn’t consider how change happens.

Lessons Learned:

To best capitalize on the currency of SIM, investors and development practitioners/evaluators need to bridge the gap between their practices. At the 8th African Evaluation Association Conference in Uganda last month, participants agreed that the evaluation profession has much to offer in overcoming the challenges inherent in SIM. With support from The Rockefeller Foundation, Genesis Analytics curated the Innovations in Evaluation strand to start building this bridge by facilitating dialogue between investors and evaluators. Key outputs from that discussion follow:


  • For many years, the evaluation profession emphasized attribution of impact, but there is now a greater focus on contribution, which matters to investors looking to enhance the impact of their funds.
  • Investors use impact measurements for different objectives at different stages of the investment cycle. Evaluators must be flexible and responsive to meet these needs.
  • Some investors have been reluctant to embrace SIM because they think a randomized controlled trial (RCT) is the only option, yet they worry about the ethics of randomly assigning a treatment group. Evaluators and investors should share knowledge, particularly to explore the value of options beyond an RCT, and jointly develop a contextualized definition of impact and a SIM technique based on that definition.



 

