Greetings! I am Hilary Leav. I am the Director of MERL Learning at Compassion International, and I have a master’s degree in Measurement and Evaluation from American University. After nearly a decade of conducting evaluations in both the non-profit and for-profit sectors, I have shifted my focus to learning and to providing clear, actionable results for stakeholders.
The longer I work in the field of Monitoring, Evaluation, and Research, the more convinced I am that we can be our own worst enemy when it comes to data utilization.
Why? Because in our enthusiasm for the work, we miss key realities in the world of evaluation: namely, the importance of making our data accessible to all stakeholders, and the parallel effort required to build a learning culture that values both the acquisition and the use of data.
Why does a learning culture matter? Many organizations and stakeholders now instinctively understand that they need to collect quantitative and qualitative data. They design rigorous efforts to do so. The first data arrives with celebration. More data follows, and still more. Eventually, collecting data loses its appeal, and our efforts wind up on the proverbial shelf.
Yet learning, which I have come to think of as “the thoughtful, intentional pause to understand what the data is telling us and what we need to do because of it,” means just that: pulling data off the shelf and asking questions. It means understanding and utilizing information to inform decisions, whether they are programmatic, operational, or strategic, and it requires meaningful interaction with data and a willingness to ask tough questions. To build a culture that considers and welcomes data before making decisions, data needs to be consumable, understandable, and usable at all levels of an organization.
Building a learning culture is a daunting task. However, I have come to realize that evaluation professionals need to build the bridge from results to learning.
Hot Tips:
- Build the “why”: Evaluators need to write thoughtful, easily understood reports. But we also need to help decision-makers understand why data matters and how it can benefit them. In this way, building a learning culture includes understanding what people think is important about data, and how people experience change as it relates to our findings.
- Know your audience: If most of your stakeholders will not read a long report, then do not write one for them. With the diversity of options available to us – from detailed technical reports to executive summaries to cutting-edge visualizations and dashboards – we should never default to only one format, particularly if it is not working!
- Find a champion: When someone in the organization embraces learning, highlight it and make a point of mentioning it. Senior-level managers can set the tone for the whole group by recognizing and appreciating data and learning.
Change will not happen overnight; like any other behavior-focused outcome we track, it takes time to see results, and the more dispersed or decentralized an organization, the longer the change will take. But it is important to get started!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi Hilary,
I couldn’t agree more about the need for accessible data. As a data analyst, I’ve seen how inaccessible data, paired with little consideration for how data is presented, can dampen stakeholders’ interest in it. Being able to tell a story with data to engage the audience is a great way to contribute to a learning culture, which ties into your tips of building the “why” and knowing your audience. In cases like this, data visualizations can be extremely powerful. The best way to create a learning culture of collecting and utilizing data for data-driven decision-making is to get stakeholders excited about data (and not let that excitement die down).
Your post made me think about iterative design, specifically user-centric design. By keeping the user (in this case, stakeholders) at the heart of evaluation, each stage of the evaluation process can be iterated on to collect and use the data most valuable and relevant to stakeholders. This links to what you said about “understanding what people think is important about data” so that they can “understand why data matters, and how it can benefit them.” As for learning culture, iteration itself embodies the idea of learning and improving from previous cycles.
I appreciate your thoughts about finding a champion. This seems like a great strategy to get people behind the data in a program evaluation. Having an influential figure in the organization with some level of seniority can further promote the idea of a learning culture. I wonder if there would also be value in having a champion at a junior level who is passionate about the evaluation being conducted. While they may lack the influence that comes with seniority, I could see their genuine passion being read as authentic interest by other stakeholders.
Thank you for sharing such valuable thoughts on the importance of a learning culture!
Hi Hilary,
You raise some excellent questions about evaluation use in your post. Thank you for sharing. I think the creation of a “learning culture” is becoming more apparent not just in the evaluation space; there seems to be a general shift in that direction among larger, innovative organizations, where we increasingly see the development of internal Learning and Development teams.
I completely agree with your points on evaluators’ role in encouraging a learning culture where there isn’t one, particularly in industries and fields where politics and “red tape” make change difficult (e.g., non-profit, healthcare). Your points speak to the ongoing debate about the evaluator’s role in evaluation use, and how much influence the recommendations provided by an evaluation should have.
As I’m sure you’re aware, the Weiss-Patton debate, summarized by Shulha and Cousins (1997), was foundational to the question of whether evaluators should be held accountable for use. Like you, I’d tend to argue that evaluators do have a responsibility to encourage responsible and effective use, and your recommendations are great.
Most of all, I love your emphasis on creating reports that are “consumable, understandable, and usable at all levels of an organization.” Not only does this adhere to the U5 (Report Clarity) standard from The Joint Committee on Standards for Educational Evaluation, among other professional standards, but, reflecting on the way research has been communicated throughout the pandemic, it has become increasingly clear that report clarity is key to broad understanding. As a technologist, I love your point about using multiple tools for presenting information. There are definitely plenty of effective options out there!
Thanks very much for sharing!
Hi Hilary, this was a fantastic article, thank you for sharing. I definitely agree with your point about considering stakeholders’ needs and abilities concerning data. Too often, grandiose ideas fall flat because those conceiving them do not consider the harsh realities of implementation and the associated roadblocks and obstacles. A learning culture is important for organizational success and transformation, and many of the largest and most successful firms in the world boast a vibrant learning culture. However, as you said, a learning culture is not necessarily innate, and many organizations do not ask questions or interact with data in such a manner. In such cases, I am curious to what extent, if any, you believe evaluators are responsible for fostering a learning culture. While I definitely see how evaluators can push an organization in that direction, it does not seem like an easy task, and it may not be one that is appreciated, either.
Hi Hilary,
I am a senior at Texas A&M University–Central Texas, currently enrolled in a program evaluation class. I think that making data understandable for others is very important. As evaluators, we are taught what these numbers mean and how to decipher them. Stakeholders might not have the same knowledge, so it is important to explain these numbers in terms that make sense to the audience. Data is important because it allows us to see whether or not a program is working. Stakeholders need to understand that and ask questions to improve their programs. I also think that presenting data in different ways can help engage stakeholders. Seeing data in graphs, charts, and visual presentations might be more engaging than reading a lengthy report. This way, stakeholders can ask questions and learn what the numbers mean because they have a visual aid.
Thank you, Hilary, I’m learning from your article. Let’s continue building a strong data-driven decision culture in C.I.