Hello! I am Samantha Langan, a Senior Data Intelligence Analyst with VentureWell’s Data Intelligence team. VentureWell is a national nonprofit headquartered in Hadley, Massachusetts, that specializes in funding, training, and cultivating a pipeline of science and technology inventors, innovators, and entrepreneurs. Together with our partners, we are driven to solve the world’s biggest challenges and create positive social and environmental impact.
At VentureWell, our programming exists within the dynamic world of STEM innovation and entrepreneurship (I&E). Here are some evaluation lessons we have learned that help focus both our partners' work and our own in this often choppy, but fun, space:
Lessons Learned
- Provide an anchoring framework or two.
Those who teach I&E tend to be innovators and entrepreneurs themselves. They often think, act, and adapt quickly, and as evaluators, we try to bring intentionality, research, and rigor into their fast-paced work. One way we have done this is by sharing I&E ecosystem frameworks with our partners, such as the Triple Helix or Quintuple Helix models. We may also provide them with more detailed frameworks, such as those by Ianioglo (2022), that compare and contrast innovation and entrepreneurship ecosystems.
Once we identify where our partners’ programs fit within an I&E ecosystem framework, we then have an easier time helping them think through the logic and goals of their services, including if and how they may be collaborating with partners from academia, government, or industry.
- Operationalize I&E, including identifying specific I&E competencies.
After we have a sense of how our partners’ work is anchored by a larger framework, we then find it helpful to collaborate with them to operationalize what I&E means in their context. We have found that while many programs focus generally on I&E, they tend to lean one way or the other. We help our partners remember that, while very much related, I&E are two distinct constructs that each have deep literature bases and competencies.
For instance, with our entrepreneurial-leaning programs, we help our partners clarify the ways in which they hope their participants will become entrepreneurially competent. This may include increasing their participants' knowledge about commercialization, strengthening their intentions to become business owners, or positively affecting their participants' entrepreneurial mindsets and self-efficacy. Similarly, for innovation-leaning programs, we help our partners identify specific inventor traits and dispositions they are hoping to influence (e.g., calculated risk-taking, empathy, creativity). Ultimately, this level of specificity helps us design effective evaluation plans and develop instruments grounded in evidence-based literature.
- Conduct longitudinal research, when feasible.
Even though I&E programs are typically short in duration (often lasting from a few weeks to less than a year), we find it beneficial, when feasible, to follow up with participants to examine the types of longitudinal outcomes they may be experiencing. This information serves two purposes: (1) the data we gather help us adjust and improve our in-house instruments and future evaluation designs, and (2) sharing longitudinal findings with our forward-thinking partners helps them pause to reflect on their programs' strengths and opportunities.
An example of this was a 2024 longitudinal study we conducted with our 2017-2022 cohorts of faculty grantees. We learned that the majority of grantees had sustained their I&E course or program past the duration of the grant, but that others faced systemic obstacles that interfered with their ability to sustain their grant-funded work long-term (e.g., a lack of key I&E resources and infrastructure at their institution, or a lack of support from leadership). These findings helped inform the development of a new grant program that intentionally targets the limitations we discovered during the longitudinal follow-up.
Please check out our post tomorrow on using Lean Startup Methodologies in evaluation!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.