AEA365 | A Tip-a-Day by and for Evaluators

I’m Michael Quinn Patton, an independent evaluation scientist.  Last year for the Memorial Day holiday in the USA, AEA365 featured a series remembering and celebrating the contributions of distinguished evaluators no longer with us. This year we thought we’d memorialize obsolete and problematic evaluation terminology. I invited colleagues to consider memorializing outdated concepts, dubious terms, and approaches that are alleged to have died, are being threatened with death, are in a zombie state, need to be resurrected or revitalized, or otherwise eulogized or appreciated. Over the next week, you’ll be treated to several such ruminations, stimulated by our current political environment and accompanying linguistic turmoil.

Oxford Dictionaries selects a Word of the Year every year and, as you may have seen in the news, its word for 2016 was “post-truth,” as in “we live in a post-truth era.” Thus, we might memorialize the demise of TRUTH: tongue-in-cheek, or seriously, or sarcastically, or with whatever tone would strike you as appropriate in eulogizing truth.

In the same terminological funeral we might eulogize facts, which have succumbed to an onslaught of “alternative facts.” Or news – superseded by “fake news.”

I facilitated a Think Tank session at AEA two years ago on evaluation terminology and invited participants to identify and discuss terms or phrases they thought should be retired from the profession. Candidates included: “Gold Standard” design and “Best Practices.”

Language is dynamic. Terms come and go, as do ways of describing who we are and what we do.

Defending science against anti-science rhetoric and politics

On April 22, 2017, thousands marched for science in 600 cities worldwide. The American Evaluation Association was one of 270 partner organizations that supported the March for Science (https://www.marchforscience.com/partners/).  The questions of the day were, naturally enough: Are you a scientist? What kind? If not, what’s your connection to science? In that context, marching in support of science and combatting anti-science political rhetoric, I tried on a new identity. “I’m an evaluation scientist,” I said. “I do evaluation science.”

At first the phrase felt strange, awkward, even alien. And, of course, I had to explain what evaluation science is, which I got better at as the March progressed.  Indeed, I reveled in explaining evaluation science.  So, this is an invitation to consider coming out as an evaluation scientist.  Try on the cloak of science. See how it feels.  Feel how it wears. Say the words out loud, first to yourself, then to others. “I am an evaluation scientist. I do evaluation science.” In that way, we make common cause with other scientists.

Rad Resource:  AEA eStudy on Evaluation Science, June 14, 15, 20, 22, 2017.

The American Evaluation Association is celebrating Memorial Week in Evaluation. The contributions this week are remembrances of evaluation concepts, terms, or approaches. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings! I’m Sheila B. Robinson, aea365’s Lead Curator and sometimes Saturday contributor. After ushering in the weekend with a full day of data analysis and evaluation report writing, I’m ready for a break, so I’m here to share a little about Memorial Day, celebrated in the US on Monday, May 29, 2017. On Memorial Day, we honor those who have given their lives in service to their country and reflect on the freedoms they fought for that we now enjoy. For some, Memorial Day heralds the start of summer, and in my part of the country we take a break from evaluation work to plant tomatoes and other summer vegetables between outdoor barbecue parties with family and friends.

A couple of years ago, Michael Quinn Patton and I chatted about featuring some themed weeks in the aea365 calendar, and in 2016 we launched our first Memorial Day week, featuring a series of memorials remembering and honoring some of our esteemed evaluation pioneers. That special two-week series began with Michael’s introduction on May 21 and continued through June 4. This year, we’ll keep it to one week, but will feature a unique take on the idea with memorials to concepts, terms, and approaches. In 2018, we plan to take yet another innovative approach to the idea of memorials, but no spoiler here. You’ll have to stay tuned for that!

For today, however, I urge you to take a little break with me and learn about this important holiday! I have to admit, part of my interest in learning more about it is that although about 20 other places have laid claim to the title, the congressionally recognized “birthplace of Memorial Day” is practically in my backyard! The Memorial Day holiday (first called “Decoration Day”) began in May 1866 in the small village of Waterloo, NY, about 40 miles from my home (I live just outside of Rochester, NY). However, it’s important to note that honoring our fallen citizens is hardly a recent or even uniquely American custom. In fact, one of the first known public tributes to war dead came in 431 B.C., when the Athenian general and statesman Pericles delivered a funeral oration praising the sacrifice and valor of those killed in the Peloponnesian War.

Rad Resources: Want to learn more about Memorial Day? Try these sites and articles:

My name is Samuel Pratsch, and I am proud to say I have been working at the University of Wisconsin-Extension for the past 7 years in program development and evaluation. I was fortunate to work with Ellen Taylor-Powell, and I am honored to carry on her legacy of work with logic models. Over the years, the University of Wisconsin-Extension has been the go-to source for logic model resources; however, for various reasons, our internal use and development of logic models has failed to live up to our reputation. In her 2015 article, “Connecting the Dots: Improving Extension Program Planning with Program Umbrella Models,” Mary Arnold provides a well-reasoned explanation of ways the University of Wisconsin-Extension can improve our logic model capacity-building work. I agree with many of her ideas.

Lesson Learned:

In my own experience supporting extension educators and specialists, I have noticed that while my colleagues have a solid understanding of how to fill in the parts of a logic model, there is an opportunity to increase their awareness of how those parts connect to one another.

Cool Trick:

In an effort to put the “logic” back into logic models, I have developed an innovative capacity-building approach to guide colleagues in making explicit the “pathways of change” for their programs. The approach focuses on increasing knowledge and use of “program logic” and “outcome chains” through a number of hands-on activities. I begin each session with an icebreaker that helps participants think about if/then relationships: we stand in a circle, and the first person says an “if” statement and then passes a ball of yarn to someone else while holding on to the string. The next person answers the “if” statement with a “then” statement, which encourages thinking about causal relationships.

Next, I divide the group into pairs and have them work through a “pathways of change” process. I ask them to write about a change they would like to see happen in the next three years as a result of their program. Then, using “forward casting” and/or “backward casting” through a series of if/then statements, I have them sketch out the causal relationships between their program’s activities and intended outcomes. They then review those relationships and look for assumptions and biases in their logic. To date, I have used this approach with a number of different groups at the University of Wisconsin-Extension and in a variety of ways. I have conducted two separate workshops focused on individual programming. I have also used the approach with programmatic teams who wanted to learn more about the theory of change behind their collective programming and to develop shared measures for their work. I see a lot of potential in this new approach and look forward to building upon it in the future.
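
To make the “outcome chain” idea concrete, here is a minimal sketch, in Python, of how a chain of if/then statements might be written down, read forward, and checked for gaps. The example program and statements are hypothetical, invented purely for illustration; this is not a tool from the workshops described above.

    # An "outcome chain": an ordered list of if/then links running from
    # program activities toward intended outcomes. The example content is
    # hypothetical and used only to illustrate the structure.
    outcome_chain = [
        ("we offer hands-on gardening workshops",
         "families attend and learn basic gardening techniques"),
        ("families attend and learn basic gardening techniques",
         "families plant vegetable gardens at home"),
        ("families plant vegetable gardens at home",
         "households eat more fresh vegetables"),
    ]

    def read_forward(chain):
        # Forward casting: walk the chain from activities toward outcomes.
        for condition, result in chain:
            print(f"IF {condition}, THEN {result}.")

    def check_links(chain):
        # Flag breaks in the logic: each "then" should feed the next "if".
        for (_, result), (next_condition, _) in zip(chain, chain[1:]):
            if result != next_condition:
                print(f"Gap in logic between '{result}' and '{next_condition}'")

    read_forward(outcome_chain)
    check_links(outcome_chain)

Backward casting is the same walk in reverse: start from the three-year change and ask, for each “then,” what must be true for it to follow.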

The American Evaluation Association is celebrating The Wisconsin Idea in Action Week coordinated by the LEAD Center. The LEAD (Learning through Evaluation, Adaptation, and Dissemination) Center is housed within the Wisconsin Center for Education Research (WCER) at the School of Education, University of Wisconsin-Madison, and advances the quality of teaching and learning by evaluating the effectiveness and impact of educational innovations, policies, and practices within higher education. The contributions all this week to aea365 come from student and adult evaluators living in and practicing evaluation in the state of WI. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Steve Kimball, a researcher and evaluator with the Wisconsin Evaluation Collaborative at the Wisconsin Center for Education Research, UW-Madison. We have recently embraced a Networked Improvement Community (NIC) approach to learn from and with schools using varying approaches to personalized learning. I have been intrigued by the implications of the NIC concept as a form of participatory and utilization-focused evaluation.

The Personalization in Practice Networked Improvement Community (PiPNIC) works with five schools to develop or refine student conferring protocols that help teachers and students engage in productive learning conversations. Each school team includes 4-5 teachers and school leaders. The teams meet with our research group over four Saturday sessions during a 90-day cycle. Between sessions, the teams have reflected on current student conferring practices and developed and refined conferring protocols. They are now testing their protocols using scripts, taking notes on brief reflection forms, and capturing the student-teacher discussions on video.

A UW-Madison research team led by Professor Richard Halverson facilitates the NIC. The work is part of a larger partnership with the Wisconsin Department of Public Instruction, funded by the U.S. Department of Education Institute of Education Sciences, to develop resources supporting Wisconsin’s state longitudinal data system.

Lessons Learned:

  • On the research side, it is time-intensive to recruit, orient, and support participants in the NIC process. Extensive preparation preceded the actual work. The evaluation team recruits and convenes participants, facilitates problem discovery and networking meetings, and helps participants with data collection and analysis. Practitioners, too, must commit time and personnel resources to participate in the NIC and to develop and test the protocols.
  • Learning by doing involves risk and unpredictable results. For this first project, it was important to recruit practitioners already engaged in cutting-edge personalized learning practices. These educators were willing to take on new challenges because the problems were anchored in their practice and addressed an immediate need.
  • Benefits of this approach include collaboration with peers within and across schools and the potential for deep ownership of the process and results. Teachers and leaders can immediately see the results and put them to use for improvement. Participants have said the work represents high-quality professional learning. Collaboration within and across schools created a mutually supportive venture into the unknown.

The NIC model provides a great structure for participatory evaluation. We are eager to explore the approach with others, and engage in the next 90-day cycle with participating schools and districts.

Rad Resources:

Explore the virtual honeycomb from Cooperative Educational Service Agency 1 for a summary of personalized education.

Carnegie 90-Day Cycle Handbook

The main text that the hub facilitators shared with their school teams is from:

Getting Ideas into Action – the Network Improvement Community Model for Professional Learning.

Also see, Carnegie Foundation’s Learning to Improve

What’s up! We are Gwendolyn Baxley and Larry D. Brown Jr., doctoral students at the University of Wisconsin-Madison and evaluators with the Wisconsin Evaluation Collaborative Clinic. The Clinic responds to small-scale evaluation needs by matching trained graduate students at the University of Wisconsin-Madison with schools and education-focused community organizations in Dane County.

As graduate students, we see many benefits in engaging as professional evaluators, including obtaining applied, practical “research” experience in the field. The Clinic, and evaluation experience in general, gives us an opportunity to connect our methodological and content expertise as trained academic scholars to serve the evaluation needs of local schools and community organizations. Beyond solely publishing on or about organizations, we partner with them to provide real-time and annual feedback and technical assistance to better understand, improve, or transform their programs.

While conducting evaluations in the Clinic, we have learned two major lessons:

Lesson Learned: Teamwork and collaboration are key.

You cannot do this work alone. It is not only important to leverage the perspectives and expertise of colleagues, but also imperative to work in partnership with stakeholders in the programs we are evaluating, including youth, parents, program staff, and community members. With their local knowledge and lived experience, these stakeholders offer distinct sources of expertise that enhance evaluation design, implementation, and use.

Lesson Learned: Critical reflection is integral to evaluation.

It is important to reflect constantly on our identities (culture, race, class, gender, educational level, sexual orientation, social status, etc.) and on the sociopolitical contexts in which we do our work. Aspects of society and of our own backgrounds may shape the evaluation design and process in both intended and unintended ways. Critical reflexivity, particularly regarding issues of race, racism, and marginality, helps evaluators understand how sociopolitical contexts and their own identities shape the ways they interact with evaluation “participants,” view and interpret data, and frame and report evaluation findings.

As scholar-evaluators, graduate student evaluators gain valuable skills and experience that are rarely offered in a traditional academic program. The Clinic provides comprehensive training that prepares students for immediate hands-on opportunities to apply their academic knowledge in the field. Exposure to evaluation gives students insight into potential non-tenure-track career options. Moreover, graduate student evaluators build networks, connect with and learn from the community in meaningful ways, and can engage in a culturally responsive manner.


Warm wishes from wonderful Wisconsin! I’m Kimberly Kile, Project Manager for the LEAD Center, housed within WCER at the University of Wisconsin. The LEAD Center comprises professional staff who conduct program evaluation within and about higher education, both locally and nationally. I had the opportunity to take a leading role in developing our center as a host site for an intern through the American Evaluation Association’s Graduate Education Diversity Internship (AEA GEDI) program. This post shares some hot tips and lessons learned about becoming a host site.

The host site information on the AEA GEDI website identifies the site’s responsibilities as well as the role of the intern’s mentor. Once we had reviewed these materials and knew we could meet these expectations, we moved forward with the application.

Concurrently, we identified a potential project for the intern to work on. It was important to us to have a project that could be started and completed within the internship timeline (September through June). We also wanted the intern to see the entire process of an evaluation project, from the planning stages through the end product.

Hot Tip:

Consider finding a partner or project to share the cost of hosting an intern. In our case, our center paid the GEDI’s salary and benefits, while the project paid for the GEDI’s professional development expenses. Be sure to work closely with your financial folks to work out all the payment details.

Hot Tip:

In our case, because of the tight timeline, we included a note in the application that funding for the position was still pending. There is no financial obligation unless you select an intern.

Lesson Learned:

AEA reviews the applications and then forwards potential GEDI applicants to each host site. Because travel can be a significant financial burden for graduate students, we offered interviews both in person and via Skype.

Lesson Learned:

The interview window is set by the GEDI program, so sites have little flexibility in the interview schedule. We blocked a couple of half-days within the window to be sure all interviewers could participate; the window falls in summer, when vacations can conflict with interviews. If you partner with someone to share the cost (as we did), be sure to invite the partner to participate in the interviews. We also blocked an hour or so after all the interviews so that the interviewers could discuss the applicants and make a decision together.

The LEAD Center had a delightful experience as an AEA GEDI host site. The GEDI at our site brought fresh ideas to our staff. We would highly recommend others consider hosting an AEA GEDI!

I am Amy Hilgendorf, Associate Director for Engaged Research at the Center for Community and Nonprofit Studies of the University of Wisconsin-Madison. For the past three years, I have had the pleasure of convening a community of practice of evaluation practitioners here on our campus. What started with two graduate students who wanted to get to know evaluators on campus and create a space for learning together has grown into much more.

Hot Tip:

A lot of good can come from just a little bit of effort! While there’s a lot you can read out there about communities of practice and how to support them, ours runs with just a few simple practices:

  • Come together regularly (typically once a month)
  • Highlight the work of a member and/or a topic of shared interest for evaluators
  • Provide food and reserve time for networking
  • Help members communicate with one another (like through an email list or with social media)

With just these few practices, we have grown to a network of more than 80 evaluation practitioners, students, and appreciators, and have started to incorporate members not affiliated with the university. I am especially proud of the exchanges we have had in our monthly gatherings, including thoughtful conversations around evaluation ethics and enhancing social justice through evaluation practice.

Lesson Learned:

When you build a network of smart and passionate people, valuable developments arise organically. Through our community of practice, I have learned about approaches and methods I knew little about before, such as Ripple Effect Mapping and critical cartography. We have also gained the inside scoop on AEA’s Graduate Education Diversity Internship (GEDI) program from our university’s resident intern, and some of us are working together to coordinate applications to host GEDIs next year. And this summer, we plan to use some of our time together to develop ideas for how we can build and support a pipeline at our university for more evaluators from diverse and underserved backgrounds.

Cool Trick:

Building relationships, including professional ones, often comes down to getting to know each other and having fun together. So make sure to reserve time and space for networking and fun in a community of practice. One of our best-attended sessions has been our summer happy hour at the student union by the lake, with a cold Wisconsin brew in hand.

From our community of practice to yours, thank you.

Koolamalsi njoos (Hello Colleagues/Friends). I’m Nicole Bowman (Mohican/Lunaape), a culturally responsive (CR) and Indigenous evaluator (CRIE) at the Wisconsin Center for Education Research (WEC and LEAD Center) and President/Evaluator at Bowman Performance Consulting, all located in Wisconsin.

In 1905, the President of UW, Charles Van Hise, provided the foundation for what has become fundamental to how I practice evaluation – The Wisconsin Idea:

“The university is an institution devoted to the advancement and dissemination of knowledge…in service and the improvement of the social and economic conditions of the masses…until the beneficent influence of the University reaches every family of the state” (p.1 and p.5).

My work as an Indigenous and culturally responsive evaluator exemplifies the WI Idea in action. Through valuing, supporting, and resourcing culturally responsive and Indigenous theories, methods, and activities, I am able not only to build organizational and UW capacity to “keep pace” (p. 3) in these areas, but also to be “in service” to others: not “in the interest of or for the professors” (i.e., self-serving), but rather as a “tool in service to the state…so the university is better fit to serve the state and nation” (p. 4 and p. 5). My culturally responsive and Indigenous evaluation, policy, and governance expertise has brought university and Tribal governments together through contracted training and technical assistance evaluation work; has developed new partnerships with state, national, and Tribal agencies (public, private, and nonprofit) that are subject matter leaders in CR research and evaluation; and has extended our collaborative CR and CRIE work through AJE and NDE publications, AEA and CREA pre-conference trainings and in-conference presentations, and representation nationally and internationally via EvalPartners (EvalIndigenous). We are not only living the WI Idea…we are extending it beyond mental, philosophical, and geographic borders to include the original Indigenous community members, as we work at the community level by and for some of the most underrepresented voices on the planet.

During this week, you will read about how others practice the WI Idea. As evaluators, we play an integral role in working within and throughout local communities and statewide agencies. Daily, we influence policies, programs, and practices that can impact the most vulnerable populations and communities. Practicing the WI Idea demands responsibility, humility, and humanity. We need to be constant and vigilant teachers and learners.

Hi!  I’m Kathy Newcomer, president of the American Evaluation Association.  I would like to learn and share what the AEA 2017 Conference theme “From Learning to Action” means to evaluators! To help share the theme across as wide an audience as possible, we’re inviting folks to submit brief videos that explore just this.   Links to winning videos will be posted on the AEA web page, and the top video will be featured during the opening plenary.   Up to five winners will also get great prizes!

Hot Tip: Your video doesn’t have to have high-tech special effects or big-name Hollywood stars. By June 16th, submit a brief 45- to 60-second video addressing one or more of these questions:

  • What does the Evaluation 2017 theme From Learning to Action mean to you personally or professionally?
  • What could the theme mean to other evaluators?
  • What can the theme mean for the evaluation profession?

See more on how to enter your video here.

Hot Tip:  Consider the conference sub themes:

Learning to Enhance Evaluation Practices: Evaluation theory and practice have been developing dynamically, with innovative and expanding approaches. What are new developments in practicing and teaching evaluation that may advance our contribution to the generation of knowledge about effective human action?

Learning What Works and Why: Evaluation studies have been providing evidence about the effectiveness, efficiency, and utility of public programs and policies. We have been learning about mechanisms that contribute to the successes or failures of interventions. What have we learned about what works and why in different sectors and contexts that could be useful to policy practitioners in improving public policies?

Learning from Others:  New communities such as behavioral insight teams, social labs, big data analysts, and design thinkers offer new insights to inform effective programs and policies. What can we learn from other communities, including evaluation communities outside of the US, to advance evaluation practice and knowledge about promising tools and approaches?

Learning About Evaluation Users and Uses: For years, evaluators have struggled to increase meaningful use of evaluation by stakeholders. What have we learned about users of our work, their ways of acquiring and using knowledge, and useful ways to support them in applying evaluation findings to improve practice?

Need more ideas? See more on the conference theme here.

Rad Resource: Not sure how to get started making your brief video?

Hello! I am Yaw Agyeman, Program Manager at the Lawrence Berkeley National Laboratory. I am joined by my writing partner Kezia Dinelt, Presidential Management Fellow at the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy (EERE), to share how EERE developed and institutionalized a rigorous evaluation practice to quantify the impacts of EERE programs and investments.

Here’s the premise: Imagine you are brought into a federal agency with multiple energy programs, each of them with multiple portfolios encompassing investments in research, development, demonstration, and deployment (RDD&D) projects. Now you’re tasked with developing a rigorous evaluation process. What would you do?

We developed a holistic framework for program evaluation—a systemic approach that borrows from organizational psychology, institutional change, and principles of persuasion. Elements of the framework include:

  1. Developing resources—guidance and tools for conducting and reviewing evaluation studies, including a guide on program evaluation management, a peer review method guide, a uniform method for evaluating realized impacts of EERE R&D programs, a non-RD&D evaluation method guide, and a quality assurance protocol to guide evaluation practice.
  2. Providing program evaluation training for organizational staff.
  3. Developing institutional links with the organization’s technology offices, budget office, communications team, stakeholder engagement team, project management office, and others.
  4. Developing data collection protocols for ongoing tracking of routine evaluation data.
  5. Developing an impact results repository and reporting tool for use across the organization.
  6. Partnering with the technology offices to plan and conduct evaluations involving third party experts, feed the results back into program improvement, and communicate findings to target stakeholders.

Lessons Learned: Seeding these pillars of evaluation practice within the federal organization has involved varying applications of the principles of organizational change, which scientists at the Lawrence Berkeley National Laboratory have distilled into a dynamic interaction among the “roles, rules, and tools” for behavioral change within an institution. Implementation has been nonlinear, proceeding in fits and starts over more than eight years. But EERE’s evaluation team successfully built evaluation capacity within EERE by tapping into the vast pool of evaluation expertise across the nation to help frame and mold this institutional change.

Over time, the victories have piled up: (1) nearly one-third of all R&D portfolio investments across EERE have been evaluated, revealing spectacular returns on investments; (2) program staff are increasingly conversant in the language of evaluation, and there is an active and abiding interest in commissioning evaluations and using results; (3) the organization has established a set of core evaluation metrics and measures that are adaptable for use by most program investments; (4) the guides and tools developed for evaluation are being used; and (5) a growing culture of evaluation through the guides and tools is leading to innovations in evaluation practice, such as the “Framework for Evaluating R&D Impacts and Supply Chain Dynamics Early in a Product Life Cycle,” which is the first of its kind anywhere across the federal government. It can be done.

The American Evaluation Association is celebrating Research, Technology and Development (RTD) TIG Week with our colleagues in the Research, Technology and Development Topical Interest Group. The contributions all this week to aea365 come from our RTD TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
