Hello! I’m Pierce Gordon, recent Ph.D. grad and independent innovation and evaluation consultant. I’ve been researching the evaluation approaches for the Botswana Innovation System for two years, and I thought my methods and insights would be uniquely useful to the AEA community.
Recently, innovation has become the topic du jour for development practice, and Botswana’s policymakers have built institutions over the past several years that reflect this passion: from public universities to a top-of-the-line science and technology park to applied research and development facilities. During this transitional period towards a knowledge-based society, my ethnography unpacks the approaches these flagship innovation actors use to track and define success.
For instance, the national innovation body, the Department for Research, Science, and Technology, draws on the methodology of the Oslo Manual, the international standard for comparing countries’ perception of, influence on, and focus on innovation practice. The evaluation’s main purpose? To compare Botswana’s innovation inputs and outcomes with those of other countries, and to develop comparable policies for the national departments. The main innovation incubator, the Botswana Innovation Hub, focuses on ensuring that entrepreneurs are ready to bring new high-tech products to market by ranking the technical feasibility, social impact, affordability, originality, and marketability of their ideas.
Alternatively, an MIT-based organization called the International Development Innovation Network (IDIN) developed a systematic, cohesive, and useful evaluation system to support its spread of development-centered co-design. Because the organization runs summits worldwide to support development-focused technological co-creation, it assesses the networks of entrepreneurs and the perceived and actual outcomes of the workshops, while also providing evaluation tools and training to the designers during the summits. The organization is also open to adapting its tools to the needs and purpose of different summits; it once measured the amount of human waste produced during a sustainability-focused summit to determine how the waste could be minimized or used in the participants’ technological designs.
What have we learned by comparing these approaches? These innovation actors treat innovation mainly as a commercial endeavor, rather than applying it to education, health, or government.
How are they misaligned? There is a real gap in how international systems measure innovation in the informal sector. Fortunately, international policymakers are currently working on methods to systematically evaluate innovation activities across countries.
What can they learn from each other? The national organizations can learn how cohesive and useful evaluation systems, like IDIN’s, catalyze learning, assessment, and evidence-based communication. Moreover, the local organizations must better institutionalize the role of intellectual property protection and transfer; innovation must not only be sustained, useful, and problem-solving, but also protected, to deter international exploitation.
The research revealed much more than these insights about this institutional and cultural shift and about the influence of innovation on the country. If you’re interested, check out the dissertation at bit.ly/PiercePhD. If you want to know more about the complexities of design, evaluation, and development, contact me at firstname.lastname@example.org.
Hope to hear from you soon!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.