I am Sachi Shenoy, the Co-Founder and Chief Impact Officer of Upaya Social Ventures. Upaya’s mission is to create dignified jobs for the poorest of the poor by investing in and consulting with small business entrepreneurs anchored in marginalized communities. To date, our 16 investee companies have cumulatively created more than 12,000 jobs.
In the decade-plus I’ve been doing this work, I’ve often heard mission-driven organizations talk about the importance of scale. While broad outreach is a commendable goal, reporting it should not preclude measuring the depth of impact. It’s just as important to ask whether a program benefited its target audience in the intended way. If we have created new jobs, for example, have jobholders’ incomes increased, especially compared with their old jobs?
Mission-driven organizations should consider surveying a small sample of their beneficiaries. With each company Upaya invests in, we send third-party enumerators to visit a random sample of jobholders for a 10-minute survey. We try to learn how the program benefited its target audience and what program modifications could optimize effectiveness.
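For organizations wanting to try this, the core sampling step is simple to operationalize. The sketch below draws a simple random sample of jobholder IDs from a roster; the roster format, sample size, and ID scheme are illustrative assumptions, not Upaya's actual parameters.

```python
import random

def draw_survey_sample(jobholder_ids, sample_size, seed=None):
    """Draw a simple random sample of jobholders to visit for a survey.

    jobholder_ids: list of unique IDs (e.g., from a payroll roster) -- illustrative
    sample_size: number of jobholders to survey
    seed: optional seed so the draw can be reproduced and audited later
    """
    rng = random.Random(seed)
    return rng.sample(jobholder_ids, sample_size)

# Illustrative example: pick 30 of 400 jobholders for one survey round
roster = [f"JH-{i:04d}" for i in range(1, 401)]
sample = draw_survey_sample(roster, 30, seed=2024)
```

Fixing a seed is worth the extra parameter: it lets a third-party enumerator or auditor reproduce exactly which jobholders were selected, which guards against cherry-picking respondents.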
We spend on average $100,000 annually, or 10 percent of our budget, on these activities. We have always believed the benefits outweigh the costs, and that the same holds true for other organizations. Creative methods—such as employing student interns, leveraging free mobile survey tools like KoBoToolbox, and pulling it all together with systems like Impact Cloud by SoPact—can make this process straightforward and cost-effective.
The nearly 1,200 surveys we have collected over the years have provided invaluable insights. For example, at one investee company, jobholders reported a 200 percent increase in their household incomes and were spending considerably more on food and consumer goods. But one year into these jobs, despite the uptick in income, there was little to no movement on longer-term indicators like improvements in housing conditions. Midline surveys revealed that jobholders had no safe way to save—they were not getting approved for bank accounts, and the proverbial “stuffing the mattress” method of saving was prone to theft. It was no wonder, then, that short-term indicators were highly positive while longer-term indicators stayed flat. This prompted our entrepreneur to devise savings schemes, such as automatic payroll deductions, for his employees. Early pilot testing showed good promise, and jobholders viewed this as an additional benefit of staying in the job.
For any organization, surveys can unearth valuable customer insights that improve program effectiveness. The purpose of this information is not to “prove impact”; other, more rigorous and resource-intensive methods like randomized controlled trials (RCTs) are what establish causal links between program activities and outcomes. We disagree, however, with the binary notion that one must either invest in RCTs or do nothing at all. We believe there is a middle ground—one that involves gathering feedback from our target audience, in their own words, and constantly refining our programs to best serve their needs.
The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.