AEA365 | A Tip-a-Day by and for Evaluators

TAG | organizational change

Hi, we’re Tosca Bruno-van Vijfeijken (Director of the Syracuse University Transnational NGO Initiative) and Gabrielle Watson (independent evaluator). We engaged a group of practitioners at the 2015 AEA conference to talk about organizational change in International Non-Governmental Organizations (INGOs) and to explore a hunch that Developmental Evaluation could help organizations manage change.

Several large INGOs are undergoing significant organizational change. These are complex processes – they’re always disruptive and often painful. The risk of failure is high. Roughly half of all organizational change processes either implode or fizzle out. A common approach is not to build in learning systems at all, but rather to take an “announce, flounder, learn” approach.

Lesson Learned: Most INGOs support change processes in three main ways: (1) external “expert” reviews; (2) CEO-level exchanges with peer organizations; (3) staff-level reviews. It is this last category – where change is actually implemented – that is least developed but where support is most needed. Successful organizational change hinges on deep culture and mindset change.

AEA Session participants highlighted key challenges:

  • Headquarters and country staff experience change very differently
  • Frequent revisiting of decisions
  • Ineffective communication, which generates uncertainty and anxiety
  • Learning not well supported at country or implementation team level
  • Country teams retain a passive mindset when they should be more assertive
  • Excessive focus on legal and administrative matters; not enough on culture and mindset

Can organizations do better? Might Developmental Evaluation offer useful approaches and tools?

Hot Tip: Developmental Evaluation seems tailor-made for large-scale organizational change processes. It is designed for innovative interventions in complex environments where the optimal approach and end-state are not known or knowable. It involves stakeholder sense-making supported by tailored and evolving evaluative inquiry (often also participatory) to quickly test iterations, track progress, and guide adaptations. It’s designed to evolve along with the intervention itself.

Hot Tips: Session participants shared some good practices:

  • Action learning. Exchanges among implementers increased adaptive capacity and eased the emotional experience of change
  • Pilot initiatives. Time-bound, with frequent reviews and external support
  • “Guerrilla” roll-out. Hand-picked early adopters sparked “viral” spread of new approaches

Lesson Learned: Our review suggests Developmental Evaluation can address many of the challenges of organizational change, including shifting organizational culture. Iterative participatory learning facilitates adaptations that are appropriate and owned by staff. It adds value by building a learning culture – the ultimate driver of large-scale organizational change.

We are curious how many organizations are using Developmental Evaluation for their change processes, and what we can learn from this experience. Add your thoughts to the comments, or write to Tosca or Gabrielle if you have an experience to share.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Rebecca Stewart, Chief Practice Officer, and Samantha Hagel, Chief Administrative Officer, both with The Improve Group, a firm based in St. Paul, Minnesota, with a mission to help mission-driven organizations make the most of information, navigate complexity, and ensure their investments of time and money lead to meaningful, sustained impact.

This year, we decided to consider a beautiful question: What is The Improve Group’s role in creating a diverse, inclusive field of evaluation? The question emerged as we were thinking about the 2015 Year of Evaluation and its focus on equity, about AEA’s cultural competency statement, and about our own desire to promote social justice. As we’ve pondered this question, we’ve realized part of our role is to share lessons learned and success stories with others in the field of evaluation. So, here goes!

Hot tip: One way we support diversity and inclusion in our practice is to use a competency model. Our competency model asks all of our staff to maintain a constant awareness of cultural competence, keep learning about it, and apply it. It is not a one-time thing; we want to see team members thinking about this all the time, unprompted. We support them with several opportunities to reflect and learn from each other, including an organization-wide conversation on unconscious bias.

Lesson Learned: Develop strategies to diversify the pipeline of people entering the field of evaluation and applying for open positions. Over the years, the vast majority of our candidates have come from a single graduate program. This year, we are experimenting with connecting with non-traditional audiences. For example, we gave presentations in undergraduate programs, and to VISTA and AmeriCorps members at Public Allies and the PSEI VISTA Program, to raise awareness of evaluation as a profession. We also changed our internship program from a purely graduate-level program to a summer internship for undergraduates and a school-year partnership with the GEDI program.

Hot tip: Be expansive, curious, and collaborative in seeking out community partners. We got to know an organization, Partnership Resources, through an evaluation of how the Americans with Disabilities Act has affected employment for people with disabilities. This interaction challenged us to figure out how to be a supported employment site. We looked at our workplace in a new way and found a role for a new employee, matched to us by Partnership Resources.

Rad resource: We partnered with New Sector Alliance for our new summer internship program. We have a shared interest in broadening the scope of those entering the social sector, and they have helped us reach potential new evaluation professionals.

Interested in further conversation? Join us at the conference: http://bit.ly/1PbkqOV



Hi, this is Bill Fear, independent evaluator and freelancer. Over the years my interest in evaluation has spanned a number of disciplines and sectors. The common theme that continually emerges is change management; more recently, I have narrowed this down to organizational change. This focus helps organize the wide range of ideas, theories, and methods, and so provides a common language for senior stakeholders.

All evaluations are concerned with change, and all social interventions – at whatever level – involve some sort of organization.  All organizations have some degree of ‘management’.  Thus, learning the related language and assumptions about the change process and outcomes provides a means to frame any evaluation.

Hot Tip: Learn at least the basics of change management/organizational change. This will give you a vocabulary and a set of constructs that can be used with a wide range of international stakeholders. It also provides a frame for any evaluation.

I also find it helpful to stay in touch with the day-to-day resources that senior managers will access, as these resources tend to present a credible view of social interventions. For example, searching for ‘evaluation’ on the Financial Times website returns, among others: evaluations relating to aid for Haiti; calls for cost-benefit evaluations in the NHS (UK); evaluation by the International Rescue Committee; and the setting up of an Independent Evaluation unit in the Bank of England.

Hot Tip: Regularly scan the websites of news magazines and newspapers such as Forbes, The Economist, and the Financial Times.

Many disciplines contribute to, and practice, evaluation, and I have also found it helpful to peruse the websites of the relevant membership organizations. Examples range from the Academy of Management, through the American Psychological Association, to the American Society of Anesthesiologists.

Hot Tip:  Find professional organization websites and look for ‘resources’, ‘toolkits’, ‘publications’, ‘news’, and search for ‘evaluation’ on the site.

Hot Tip:  Remember, evaluation is practiced within all disciplines and all disciplines practice in the field.  Don’t assume that as evaluators we can just fly in, apply a method, analyse the data, and fly out with total objectivity and impunity.  We are not alone.

As I said earlier, I find it useful to frame these seemingly disparate approaches using either change management or organizational change.

Rad Resource: Burnes, B. (2014). Managing Change. Pearson Education; Tolbert, P. S., & Hall, R. H. (2008). Organizations: Structures, Processes and Outcomes. Pearson.

Rad Resource: For a UK slant http://www.businessballs.com/changemanagement.htm and for a more USA slant http://managementhelp.org/organizationalchange/.

Final Hot Tip: The problem with having an open mind is that people keep on wanting to put things into it.



My name is Summer N. Jackson, Project Director at Bay Area Blacks in Philanthropy, a regional nonprofit membership organization that focuses on advancing the interests of African Americans in philanthropy and addressing the impact of racial disparity within philanthropic institutions and African American communities in the San Francisco Bay Area.

I had the opportunity to serve as a session scribe at Evaluation 2010, and one of the sessions I attended was Session 304: Insights into Foundation Evaluation. I chose this session because I am interested in strategies to develop an organizational learning culture in a foundation.

Lessons Learned – Evaluation should be shared throughout the organization
Traditionally, programs are responsible for carrying out the activities of a given evaluation. In the pan-Canadian system, new federal requirements dictate an organization-level approach, which creates an institutional culture of learning rather than the top-down approach produced when programs hold the responsibility.

Lessons Learned – Create shared values around evaluation
When planning to introduce evaluation into an organization, create opportunities for discussion through facilitated workgroups rather than imposing it as a mandate. An approach that seeks to develop shared values around the benefit of inquiry will increase buy-in from participants.

Hot Tips – Implementing a New Tool:

  • Start with a prototype
  • Identify challenges and work to enhance quality of data received year by year
  • Complete an informal feasibility study to gradually introduce processes that are more rigorous
  • Develop an actionable plan
  • Work with senior management to increase buy-in and to provide directives to staff
  • Consider using a facilitator to provide evaluation education and training
  • Be explicit about organizational goals and try to help staff understand how their work fits into them

Hot Tip – Internal Champions, Open Doors, and Meet & Adjust: When implementing a new evaluative tool or framework in an organization, identify an internal champion who will help promote the tool. Maintain an open-door policy after the initial training and offer additional technical assistance (TA) opportunities that are intimate in nature. Lastly, schedule a quarterly or monthly meeting to review data and challenges and readjust when necessary. This will enhance trust and communication with program staff, as well as the quality of the data you receive in the end.

This post is a modified version of a previously published aea365 post in an occasional series, “Best of aea365.”


Greetings! We are Shep Zeldin, a professor in the Civil Society and Community Research Graduate Program at the University of Wisconsin-Madison, and Jane Powers of the Bronfenbrenner Center for Translational Research at Cornell University.

We are engaged in a long-term project to promote and support youth-adult partnerships (Y-AP) in organizational and community change efforts. The multiple resources that we have developed can be found at http://fyi.uwex.edu/youthadultpartnership/ and www.actforyouth.net. The notion that youth and adults can work collaboratively, as authentic partners, to design and implement programs remains an innovative idea in the United States. There are, however, many exemplary models of Y-AP, and a strong research base has emerged to support its practice. Y-AP can be a confusing term. It overlaps with other concepts such as youth participation, youth engagement, and youth empowerment, but is different in key ways.

Hot Tip: We have learned that the first step toward Y-AP is definition, discussion, and reflection. Organizations must come to consensus on why they want to involve young people in program design and evaluation, as well as identify their goals: where within the organization they most want Y-AP to exist. Once this consensus has been built, progress and momentum follow.

Rad Resource:  We have developed two tools to help organizations “make the case” for Y-AP, and to plan and design evaluation efforts:

Although many organizations are motivated to engage in assessment and evaluation, they often do not know where to start; few have staff experienced in conducting these activities. A further challenge is youth participation. Organizations want to involve youth as partners in assessment and evaluation, but are often unsure how best to collaborate with youth throughout the process.

Rad Resource: We have developed a tool kit that guides organizations through all aspects of the organizational assessment and evaluation process, from conceptualization to forming questions, to collecting and analyzing data, to reporting on findings and recommendations. Youth Adult Leaders for Program Excellence (YALPE) is especially appropriate for programs that actively seek to strengthen program quality, improve their services, and include youth as key partners in that process.

The American Evaluation Association is celebrating Youth Focused Evaluation (YFE) TIG Week with our colleagues in the YFE AEA Topical Interest Group. The contributions all this week to aea365 come from our YFE TIG members.


My name is Stephen Axelrad. I am a Human Capital consultant at Booz Allen Hamilton. For almost a decade, in both my doctoral training and professional life, I have had one foot in the evaluation world and one foot in the industrial-organizational (I-O) psychology world. Both disciplines use behavioral and social science research to solve real-world problems, improve organizational effectiveness, and enhance quality of life. I am struck by how little collaboration, or even awareness, the I-O and evaluation disciplines have with each other. Both disciplines have so much to offer each other.

Hot Tip: Evaluating HR initiatives. If you want to make HR practitioners squirm, ask them to demonstrate the merit or worth of their programs and services. Sophisticated HR professionals could show you dashboards and balanced scorecards linking their initiatives to bottom-line indicators of mission impact or profit. Most of the time, though, the only thing you see that resembles evaluation is training evaluations that follow the Kirkpatrick Level I-V framework. Most I-O interventions lack a robust component to evaluate how effectively the intervention was implemented. When senior leaders ask “So what?” to justify the investments of time and resources, I-O professionals are not doing enough to help HR professionals answer that question. Evaluators can play a key role in helping I-O professionals design and conduct independent, practical, useful, and rigorous evaluations to accompany organizational change and improvement interventions.

Hot Tip: Making individual-level evaluation more robust. The evaluation field spends a lot of time at the organizational and programmatic levels. Many evaluations that assess the effectiveness of programs, policies, and initiatives at the individual level rely heavily on self-reported, subjective measures (e.g., attitude surveys and focus groups). Evaluators can expand their individual-level toolkits with I-O psychological methods to gain a more objective and comprehensive view of how individual actions contribute to program and organizational effectiveness. Competency models identify the underlying set of knowledge, skills, abilities, and attitudes that organizations can use to understand what makes an effective or talented employee. Competencies are a missing element in many logic models. Another missing piece in many evaluation efforts is individual-level performance measures. Many I-O professionals gather performance feedback from supervisors, peers, direct reports, and customers/constituents (multi-source feedback, or MSF) to understand how well employees are executing desired behaviors toward key stakeholders within and outside their organizations. Using MSF in evaluations can provide insight into the quality of relationships among members of a given organization or system, and pinpoint some best and worst practices.
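
To make the MSF idea concrete, here is a minimal sketch in Python of how multi-source ratings might be rolled up by rater group before they inform an evaluation. The rater groups and scores are invented for illustration; this is not any standard I-O instrument.

```python
# Illustrative sketch only: summarizing multi-source feedback (MSF)
# by rater group. Groups and ratings are invented for the example.
from collections import defaultdict
from statistics import mean

ratings = [
    ("supervisor", 4.0),
    ("peer", 3.5),
    ("peer", 4.5),
    ("direct_report", 3.0),
    ("customer", 4.0),
]

by_group = defaultdict(list)
for group, score in ratings:
    by_group[group].append(score)

# Reporting per-group means preserves each perspective
# instead of collapsing everything into one blended number.
for group, scores in sorted(by_group.items()):
    print(f"{group}: mean {mean(scores):.2f} (n={len(scores)})")
```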

Rad Resource: I encourage you to read the book that got me started thinking about this topic.

Russ-Eft, D., & Preskill, H. Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change.

The American Evaluation Association is celebrating Society for Industrial & Organizational Psychology (SIOP) Week with our SIOP colleagues. The contributions all this week to aea365 come from our SIOP members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting SIOP resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.


Hello. My name is Dale S. Rose. I am the President of 3D Group, a firm that has done program evaluation, 360-degree feedback, and employee surveys since 1994. One of the thorns in the side of Organization Development has always been the nagging lack of a cohesive theory about how to successfully create change.

Lately, I’ve noticed a surge in the use of logic models in program evaluation. Just about every evaluation RFP 3D Group responds to these days (wisely) requires us to start with a logic model. Some of these clients don’t even want data – they just want a good logic model. For the uninitiated, “logic models” are used in evaluation to identify linkages between program goals, activities, and outcomes. Researchers would recognize them as hypotheses, trainers would call them curriculum plans, HR professionals would say this is how we show the value of what we do, and business leaders probably like them because they show how organizational initiatives and measurement systems align with strategy.

Hot Tip: Usually, logic models are very precisely tailored to each program with all its eccentricities, but it may be possible to find common logic models for similar types of programs. For example, a shared logic model popular in the last few weeks might look something like this: “Weight loss (goal) of at least 15 lbs (outcome) will result when I exercise 3 times per week, sleep 8 hours per night, and comb my hair in the morning for 3 weeks (activities).” This example points to the intrinsic value of creating logic models – often they expose flaws in our rationale and help us be more realistic about our activities.
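
For readers who like to tinker, here is that same toy model written down as a small data structure, a minimal sketch in Python. The class and field names are illustrative only, not a standard logic-model format.

```python
# Illustrative only: the toy logic model above as a small data structure.
# Field names are hypothetical, not a standard logic-model schema.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    goal: str
    activities: list[str] = field(default_factory=list)
    outcomes: list[str] = field(default_factory=list)

weight_loss = LogicModel(
    goal="Weight loss",
    activities=[
        "Exercise 3 times per week",
        "Sleep 8 hours per night",
        "Comb hair each morning",  # writing it down exposes the weak link
    ],
    outcomes=["Lose at least 15 lbs in 3 weeks"],
)

# Listing each activity -> outcome linkage invites the question:
# is this link actually plausible?
for activity in weight_loss.activities:
    print(f"{activity} -> {weight_loss.outcomes[0]}?")
```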

I suspect that OD professionals could benefit from logic models, in much the same way evaluators have.  Logic models became popular because evaluators got tired of looking at results they couldn’t explain (usually because the program was poorly designed to achieve the “hoped for” results).

Hot Tip: Changing organizations could be a lot more effective if we take the time to deeply think through the linkages between activities and expected (“hoped for?”) outcomes.  In effect, we could be thinking about an Organization Development intervention as a “program” being evaluated and follow the methods evaluators use every day to clearly articulate our logic model.  Over time, we may be able to aggregate a number of logic models to derive solid OD theory.   I know I’m going out on a limb here for OD, but if we had some actual data on enough models, we might even be able to show that our “programs” can be replicated.



My name is Jane Davidson and I run an evaluation consulting business called Real Evaluation Ltd. I also run a fun blog with Patricia Rogers at GenuineEvaluation.com. My doctoral training was in organizational psychology (with some industrial psych at the Master’s level), which is the career I was pursuing when I stumbled across evaluation. I was stunned by the synergies between the two disciplines. Why would an evaluator want to connect with SIOP, or an I/O psychologist with AEA? Lots of reasons! Here are some that I draw on regularly for ideas and inspiration…

Hot Tip (for evaluators): Don’t ever underestimate the value of looking at other areas of evaluation! The ‘industrial’ side of I/O psychology includes some really advanced thinking and methodologies in the area of personnel evaluation, covering assessment for selection, succession planning, and promotion as well as performance appraisal.

In personnel selection, the main task is ranking candidates, so there are methodologies here that have incredibly useful applications for evaluations that require any form of comparison, e.g. deciding which of several pilot programs should be rolled out more widely.
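
As a toy illustration of what that comparison could look like, here is a minimal weighted-criteria ranking in Python, borrowed in spirit from personnel selection. The programs, criteria, and weights are all made up for the example.

```python
# Illustrative only: ranking pilot programs with weighted criteria,
# borrowed in spirit from personnel selection. All names, criteria,
# and weights below are hypothetical.
criteria_weights = {"reach": 0.3, "outcomes": 0.5, "cost_effectiveness": 0.2}

pilot_scores = {
    "Pilot A": {"reach": 7, "outcomes": 8, "cost_effectiveness": 6},
    "Pilot B": {"reach": 9, "outcomes": 6, "cost_effectiveness": 9},
    "Pilot C": {"reach": 6, "outcomes": 9, "cost_effectiveness": 7},
}

def weighted_total(scores):
    # Combine criterion scores using the agreed weights (weights sum to 1).
    return sum(scores[c] * w for c, w in criteria_weights.items())

# Highest weighted score first: the candidate order for wider roll-out.
for name in sorted(pilot_scores, key=lambda n: -weighted_total(pilot_scores[n])):
    print(f"{name}: {weighted_total(pilot_scores[name]):.2f}")
```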

The real goldmine in the performance appraisal literature is the work on the psychology of it all – how it motivates and demotivates, and what kinds of systems and approaches really work to drive a culture of excellence rather than fear. The exact same psychological forces are in play for the evaluation of programs, policies, projects, and other evaluands – and we have much to learn!

Hot Tip (for I/O psychologists): Have you ever felt that just about all of I/O psych was ‘done’, that it’s hard to make a real contribution except in the tiniest of trivial niches? Wrong! There is an enormous need for the development of cutting-edge approaches to evaluating organizational change in all its forms. By far the biggest traction for evaluation has been in evaluating training programs (Jack Phillips’ ROI work and Kirkpatrick’s four-level training evaluation model come to mind).

We have also seen some contributions over the years on the evaluation of organizational change, personnel selection, and performance appraisal. Unfortunately, most of these have been pretty weak when viewed through an evaluation lens. Many assume that it’s just a case of tracking a few metrics; that it’s impossible to say whether particular outcomes are more valuable than others; and that strategic, big-picture questions can’t be answered in direct terms that make straight-talking sense to organizational leaders.

The good news is that the growing discipline of evaluation has some insightful answers for how to get this right – and to guide truly powerful utilization of evaluation at all levels of the organization.


