Posted by Sheila Robinson in Organizational Learning and Evaluation Capacity Building
Hello. I am Karen Widmer, a fourth-year doctoral student in the Evaluation program at Claremont Graduate University. I’ve been developing and evaluating performance systems in business, education, healthcare, and nonprofit settings for a long time. I think organizations are a lot like organisms. While each organization is unique, certain conditions help them all grow. I get enthusiastic about designing evaluations that optimize those conditions!
Theme: My master’s research project looked at evaluation-related activities shared by high-performing organizations. For these organizations, evaluation was tied to decision making. Evaluation activity pulled together knowledge about organizational impact, direction, processes, and developments, and this fed the decisions. The challenge for evaluation is to pool the streams of organizational knowledge most relevant for each decision.
- Evaluative thinking identifies the flow of organizational knowledge, giving decision makers a point of reference for quality decisions.
- In technical language, Knowledge Flow may mediate or moderate the relationship between evaluative thinking and decision quality. Moreover, the quality of the decision could be measured by the performance outcomes resulting from the decision!
- Design your evaluation to follow the flow of knowledge throughout the evaluand lifecycle.
- Document what was learned when tacit knowledge was elicited; when knowledge was discovered, captured, shared, or applied; and when knowledge regarding the status quo was challenged. (To explore further, look to the work of M. Polanyi, I. Becerra-Fernandez, C. Argyris, and D. Schön.)
- For the organizations I looked at, these knowledge activities contained the evaluative feedback desired by decision makers. The knowledge generated at these points told decision makers what was going on.
- For example, tacit perceptions could be drawn out through peer mentoring or a survey; knowledge might be captured on a flipchart or by software; or a team might “discover” knowledge new to the group or challenge knowledge previously undisputed.
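The mediation idea in the bullets above can be sketched numerically. The following is a minimal, hypothetical simulation, not an analysis from the study: variable names, effect sizes, and the simple-linear-model assumption are all invented for illustration. It shows how Knowledge Flow (M) could carry part of the effect of evaluative thinking (X) on decision quality (Y), and how the total effect decomposes into indirect (a×b) and direct paths.

```python
# Hypothetical mediation sketch: Knowledge Flow (M) as mediator between
# evaluative thinking (X) and decision quality (Y). All names and effect
# sizes are illustrative assumptions, not findings from the research.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                       # evaluative thinking
m = 0.6 * x + rng.normal(scale=0.5, size=n)  # knowledge flow (mediator)
y = 0.7 * m + 0.2 * x + rng.normal(scale=0.5, size=n)  # decision quality

def slope(design, target):
    """Least-squares slope coefficients for target ~ design (with intercept)."""
    X = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef[1:]

a = slope(x, m)[0]                               # path X -> M
b, c_prime = slope(np.column_stack([m, x]), y)   # paths M -> Y, direct X -> Y
total = slope(x, y)[0]                           # total effect of X on Y

print(f"indirect (a*b) = {a*b:.2f}, direct = {c_prime:.2f}, total = {total:.2f}")
```

For ordinary least squares with an intercept, the total effect equals the indirect plus direct effects exactly, which makes the decomposition easy to explain to decision makers.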
Conclusion: Whether through an ongoing design or a single snapshot, evaluative thinking can capture the flow of knowledge critical to decisions about outcomes. Knowledge Flow offers a framework for connecting evaluation with the insights decision makers want for reflection and adaptive response. Let’s talk about it!
Rad Resource: The Criteria for Performance Excellence is a great government publication that links evaluative thinking so closely with decisions about outcomes that you can’t pry them apart.
Rad Resource: A neat quote from Nielsen, Lemire, and Skov (American Journal of Evaluation, 2011) defines evaluation capacity as “…an organization’s ability to bring about, align, and sustain its objectives, structure, processes, culture, human capital, and technology to produce evaluative knowledge [emphasis added] that informs on-going practices and decision-making to improve organizational effectiveness.”
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Posted by Sheila Robinson in Nonprofits and Foundations Evaluation, Organizational Learning and Evaluation Capacity Building, Qualitative Methods, Quantitative Methods: Theory and Design, Research on Evaluation
Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.
In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012 we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)(3) nonprofit organizations nationwide responded to our 2012 survey.
Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:
1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibit what we feel are promising capacities and behaviors to meaningfully engage in evaluation.
2. The use of qualitative practices (e.g. case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g. compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.
3. 18% of nonprofits had a full-time employee dedicated to evaluation.
4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.
5. 100% of organizations that engaged in evaluation used their findings.
6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.
7. 82% of nonprofits believe that discussing evaluation results with funders is useful.
8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.
9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.
10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.
Rad Resource—The State of Evaluation 2010 and 2012 reports are available online for your reading pleasure.
Posted by jgothberg in Disabilities and Other Vulnerable Populations, Indigenous Peoples in Evaluation, International and Cross-cultural Evaluation, Organizational Learning and Evaluation Capacity Building
I’m Karen Anderson, AEA’s Diversity Coordinator Intern, and in this role I support AEA’s diversity programs, TIGs, and the AEA Public Statement on Cultural Competence in Evaluation Dissemination Working Group.
The baton has been passed from the Cultural Competence in Evaluation Task Force, the Statement developers, to the Cultural Competence in Evaluation Dissemination Working Group to translate the Statement from paper to practice. One strategy for its broader dissemination and use is integrating the Statement into the policies and procedures of organizations that conduct and commission evaluations.
The AEA Public Statement on Cultural Competence in Evaluation has several core concepts, including the implications culture has for all phases of evaluation, from staffing evaluation efforts to ensuring that members of the evaluation team collectively demonstrate cultural competence in the context of each evaluation. How does an evaluation practitioner or commissioner begin to do this?
- The Organizational Cultural and Linguistic Competency Assessment Tool can be used to assess where organizations and individuals fall along the cultural competence spectrum, and to serve as a guide to identify training needs and areas to improve upon.
- Share the Statement and supplemental resources like Building Culturally Competent Organizations, Key Components to a Culturally Competent System, and It All Starts At The Front Desk with human resources and decision makers in organizations. Recommend the development of a cultural competence committee to monitor and make recommendations for policy revision, program development, and evaluation.
- Include cultural competence language in the development of, and responses to, requests for proposals (RFPs). Check out the post How to Spot a Lip Service Approach to Culturally Responsive Evaluation from Patricia Rogers and Jane Davidson’s Genuine Evaluation blog for tips on pointing out when a client may not be walking the walk in relation to culture and program development, theory, and evaluation.
- If you or other employees at your organization belong to an AEA affiliate, organize an event at your office around the theory or practical applications of the Statement. The Atlanta Area AEA affiliate group hosted one recently, Taking a Stance Toward Culture: Cultural Competence in Evaluation. Reflections from the event can be found in the AEA Newsletter diversity article.
- Set up a series of lunch and learns to open a dialogue with colleagues, increase awareness, and encourage relationship building; or start a book club discussion using the Statement and branch out to other reading material to light the spark for cultural competence in evaluation.
The American Evaluation Association will be celebrating Cultural Competence Week. The contributions all this week come from the Cultural Competence committee.
Posted by jgothberg in Organizational Learning and Evaluation Capacity Building
I’m Sarah Griffiths, Senior Partner, Wholonomy Consulting, in Tucson, AZ. I’m going to share some highlights of the recent AZENet conference which focused on Building Capacity for Organizational Effectiveness. The learning goals for participants were to:
- Understand what research shows about organizational effectiveness in three key areas: Adaptive Capacity, Leadership, and Management Practices.
- Explore the connection between organizational effectiveness and sustainability.
- Learn about elements of successful capacity building.
- Think about application in the social sector in Arizona.
Co-sponsors from multiple local organizations joined AZENet in this effort, including the Young Nonprofit Professionals Network, Phoenix Chapter; the Grant Professionals Association, Arizona Chapter; the Community Foundation of Southern Arizona; the ASU Lodestar Center for Philanthropy and Nonprofit Innovation; and the Arizona Alliance of Nonprofits. Participants reported a deeply engaging experience, brought on by both the content and the opportunity to share learning and reflection with others from different parts of the social sector. This type of cross-sector involvement promotes community learning, a goal of the conference.
Keynote speaker Peter York from the TCC Group shared his current work, which builds on the TCC-authored publication The Sustainability Formula: How Nonprofit Organizations Can Thrive in the Emerging Economy. From it, we learned the following formula:
Effective Leadership + Adaptability + Program Capacity = Sustainability.
To summarize, effective organizations exhibit the following behaviors:
- Leadership (capable of timely strategic decision making);
- Adaptability (the ability to constantly learn and change, both financially and programmatically); and
- Program Capacity (the resources to run effective programs).
A key element of sustainability is strategic learning. Peter shared some guiding principles of strategic learning, which began with “listen to the client,” followed by these seven steps:
1. Determine the “real” outcomes,
2. Find or create the metrics,
3. Gather data, quantitative then qualitative,
4. Make meaning out of patterns,
5. Create/modify program models,
6. Re-design programs, and
7. Repeat steps 2-6.
Peter’s introduction was followed by small-group discussions led by conference participants, who shared examples of strategic learning in their work. Hosts included funders, consultants, university center staff, state agency staff, and YNPN Phoenix Board members.
Hot Tip: Partner with young leaders near you – find them through the Young Nonprofit Professional Network.
Rad Resource: The Sustainability Formula.
Rad Resources: To find nonprofit associations in your area to collaborate on learning activities, go to the Council of Nonprofits.
The American Evaluation Association is celebrating Arizona Evaluation Network (AZENet) Affiliate Week with our colleagues in the AZENet AEA Affiliate. The contributions all this week to aea365 come from our AZENet members.
Posted by John LaVelle in Organizational Learning and Evaluation Capacity Building
My name is Jeff Sheldon and I am a doctoral student at the School of Behavioral and Organizational Sciences at Claremont Graduate University, concentrating in evaluation and applied research methods. A challenge I’ve encountered in my own practice is organizations that simply aren’t ready to engage in the evaluation process.
Rad Resource: Readiness for Organizational Learning and Evaluation (ROLE). The ROLE (Preskill & Torres, 2000) was designed to help us determine an organization’s level of readiness for implementing organizational learning, evaluation practices, and supporting processes. I’ve implemented the ROLE with schools and other types of organizations, using the results to: 1) identify the existence of learning organization characteristics; 2) diagnose interest in conducting evaluation that facilitates organizational learning; 3) identify areas of strength to leverage evaluative inquiry processes; and 4) identify areas in need of organizational change and development.

ROLE items reflect the research on organizational learning, evaluative inquiry processes, and evaluation practices, and suggest that an organization must have certain internal elements in place if it is to support and encourage organizational learning. Evaluation research indicates that an organization’s culture and context influence the extent to which evaluative inquiry occurs in support of learning and decision making.

The ROLE consists of 78 items grouped into six major constructs: 1) Culture, 2) Leadership, 3) Systems and Structures, 4) Communication, 5) Teams, and 6) Evaluation. Four of these – Culture, Leadership, Systems and Structures, and Communication – are further divided into subconstructs. Individual responses to ROLE items create a composite picture of an organization’s internal context, indicating the extent to which organizational learning, evaluation practices, and supporting systems are present.
The ROLE instrument is available as an appendix in Russ-Eft, D. & Preskill, H. (2001) Evaluation in organizations: A systematic approach to enhancing learning, performance, and change. New York, NY: Basic Books.
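To make the “composite perspective” idea concrete, here is a minimal sketch of how responses to a construct-grouped instrument like the ROLE could be rolled up into a profile. The item identifiers, ratings, and scoring scheme are invented for illustration; the actual 78 ROLE items and their scoring are in the appendix cited above.

```python
# Hypothetical sketch of scoring a readiness instrument like the ROLE:
# average Likert responses (1-5) within each construct to get a composite
# profile. Construct names follow the post; items and ratings are invented.
from statistics import mean

responses = {  # item id -> respondent's 1-5 rating (illustrative values)
    "culture_1": 4, "culture_2": 3,
    "leadership_1": 5, "leadership_2": 4,
    "systems_1": 2, "systems_2": 3,
    "communication_1": 4,
    "teams_1": 3,
    "evaluation_1": 2, "evaluation_2": 2,
}

def composite_profile(responses):
    """Group items by construct prefix and average the ratings."""
    by_construct = {}
    for item, rating in responses.items():
        construct = item.rsplit("_", 1)[0]
        by_construct.setdefault(construct, []).append(rating)
    return {c: round(mean(r), 2) for c, r in by_construct.items()}

profile = composite_profile(responses)
print(profile)  # low-scoring constructs flag areas needing development
```

A profile like this makes the diagnostic use described above tangible: high-scoring constructs are strengths to leverage, low-scoring ones are areas for organizational change and development.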
Posted by John LaVelle in Organizational Learning and Evaluation Capacity Building
My name is Stephen J. Gill. I’m author of Developing a Learning Culture in Nonprofit Organizations (SAGE, 2010)*. This might not sound like a book for evaluators, but it is. The book is about how the process of collecting, reflecting on, and learning from evaluative information can help nonprofits become more successful organizations. Nineteen evaluation tools from the book are available for downloading at http://www.sagepub.com/learnculstudy/chapters.htm.
Rad Resource: One of these tools that you can put to use immediately with any organization is the Organizational Learning Readiness Worksheet. This tool asks your clients to indicate to what extent seven principles are characteristic of their organizations. Do they observe each principle “not at all; never see it,” “occasionally see evidence,” or is “evidence all around us”? Then, most importantly, you should discuss the meaning of the ratings with your clients. Help them decide what they need and want to do to become more ready for organizational learning and change.
The seven statements are:
- We integrate and align our organization’s mission, people, processes, resources, structures, and culture.
- Each of our organization’s activities is an element of a process that is continuously improved through knowledge enhancement.
- We don’t rely on quick fixes to our performance deficits.
- Learning is continuous over the long term in order to achieve meaningful results.
- Learning is leveraged so that relatively small interventions result in long-term major changes for the organization.
- Each of our employees and volunteers is responsible for the system in which he or she works.
- The collective learning of all employees and volunteers is an essential aspect of capacity building.
You can download a pre-formatted PDF worksheet with the above seven items here: http://bit.ly/orglearningreadiness
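The seven-item worksheet and its three response options lend themselves to a simple tally that surfaces which principles to discuss with clients. This sketch is hypothetical: the 3-point scale wording comes from the post, but the numeric scores, threshold, and shortened principle labels are invented.

```python
# Hypothetical sketch of tallying the seven-item readiness worksheet.
# The three response options come from the worksheet; the numeric scores
# and threshold logic are invented to surface items for client discussion.
SCALE = {"not at all; never see it": 0,
         "occasionally see evidence": 1,
         "evidence all around us": 2}

ratings = {  # shortened principle -> client's response (illustrative)
    "integrate and align mission, people, processes": "evidence all around us",
    "every activity continuously improved": "occasionally see evidence",
    "no quick fixes to performance deficits": "not at all; never see it",
    "learning is continuous over the long term": "occasionally see evidence",
    "small interventions leveraged into major change": "not at all; never see it",
    "everyone responsible for the system they work in": "occasionally see evidence",
    "collective learning builds capacity": "evidence all around us",
}

def discussion_items(ratings, threshold=1):
    """Return principles rated at or below the threshold, lowest first."""
    scored = [(SCALE[resp], principle) for principle, resp in ratings.items()]
    return [p for score, p in sorted(scored) if score <= threshold]

for principle in discussion_items(ratings):
    print("discuss:", principle)
```

As the tool's author notes, the numbers matter less than the conversation: the point of sorting the low-rated principles to the top is to focus the follow-up discussion with clients.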
*Remember that as an AEA member you receive 20% off this title if you order directly from SAGE – sign into the AEA website with your username and password to look up the passcodes.
This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to firstname.lastname@example.org.