AEA365 | A Tip-a-Day by and for Evaluators


Greetings and welcome from the Disabilities and Underrepresented Populations TIG week. We are June Gothberg, Chair, and Caitlyn Bukaty, Program Chair. This week we have a strong lineup of great resources, tips, and lessons learned for engaging typically underrepresented populations in evaluation efforts.

You might have noticed that we changed our name from Disabilities and Other Vulnerable Populations to Disabilities and Underrepresented Populations and may be wondering why. It came to our attention during 2016 that several of our members felt our previous name was inappropriate and had the potential to be offensive. Historically, a little under 50% of our TIG’s presentations have represented people with disabilities; the rest address a diverse group ranging from migrants to teen parents. The following Wordle shows the categories represented in our TIG’s presentations.

Categories represented by the Disabilities and Underrepresented Populations presentations from 1989-2016

TIG members felt that the use of vulnerable in our name attached a negative and, in some cases, offensive label to the populations we represent. Thus, after discussion, communication, and coming to consensus, we proposed to the AEA board that our name be changed to Disabilities and Underrepresented Populations.

Lessons Learned:

  • Words are important! Labels are even more important!
  • Words can hurt or empower; it’s up to you.
  • Language affects attitudes and attitudes affect actions.

Hot Tips:

  • If we are to be effective evaluators, we need to pay attention to the words we use in written and verbal communication.
  • Always put people first, labels last. For example, student with a disability, man with autism, woman with dyslexia.

The nearly yearlong name-change process reminded us of the lengthy campaign to rid federal policy and documents of the R-word. If you happened to miss the Spread the Word to End the Word campaign, there are several great videos and other resources at r-word.org.

YouTube Video – Spread the Word to End the Word (high school video):
https://www.youtube.com/watch?v=kTGo_dp_S-k&feature=youtu.be

Bill S. 2781, signed into federal law as Rosa’s Law, takes its name and inspiration from 9-year-old Rosa Marcellino. It removes the terms “mental retardation” and “mentally retarded” from federal health, education, and labor policy and replaces them with the people-first language “individual with an intellectual disability” and “intellectual disability.” The signing of Rosa’s Law is a significant milestone in establishing dignity, inclusion, and respect for all people with intellectual disabilities.

So, what’s in a name?  Maybe more than you think!


· · · · · · ·

We are Wanda Casillas and Heather Evanson, and we are part of Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE). Many of our team members and colleagues are privileged to work with a variety of federal agencies on program evaluation and performance measurement and, throughout this week, will share some of their lessons learned and ideas about potential opportunities to help federal agencies expand the value of evaluations.

This week members of our team will share lessons learned about working remotely on federal evaluations, the use of qualitative methods in federal programs that don’t always appreciate the value of mixed methods, the potential for federal programs to be more “selfish” in program planning, the value of conducting evaluation and performance measurement for federal programs, and making the most out of data commonly collected in federal programs. In the coming weeks, readers will find an additional article on scaling up federal evaluations.

Lesson Learned: Many federal clients use performance measurement, monitoring, evaluation, assessment, and other similar terms interchangeably; however, evaluators and clients don’t always have the same definitions, and therefore expectations, in mind for what these terms mean. It’s important to learn as much as possible about your federal client’s experiences and history with evaluation through research and conversations with relevant stakeholders in order to make sure you can deliver on a given agency’s needs.

Lesson Learned: Clients sometimes see evaluation or performance measurement as a requirement rather than an opportunity to understand how to improve upon or expand an existing program. As evaluation consultants, we sometimes have to work with clients to help them understand how evaluation can benefit them even after responding to a request for proposals.

Rad Resource: Alfred Ho provides some intriguing insights on the effects of the Government Performance and Results Act of 1993, which led to much of the performance measurement and evaluation activity we see today, in GPRA after a Decade: Lessons from the Government Performance and Results Act and Related Federal Reforms.

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · ·

We’re Sarah Brewer, Elise Garvey, Ted Kniker, and Krystal Tomlin, the current leadership of AEA’s Government Evaluation Topical Interest Group (TIG). To finish out Government Evaluation Week on AEA365, we decided to offer a glimpse into the future of government evaluation.

Since 2015 was the 25th anniversary of the Government Evaluation TIG, we wanted to forecast into the next 25 years, so we sponsored a Birds of a Feather session at AEA 2015 on Predicting the Future of Evaluation and identifying innovations we could make. Using an abbreviated scenario planning exercise, we set the context that scenario planning is about “stories” that illuminate the drivers of change. We asked the group to brainstorm about what government evaluation could look like in 25 years: What innovations will emerge? What are the drivers of change in government evaluation? What future do they imagine? A very positive shared vision emerged.

Lessons Learned:

  1. Performance metrics/evaluation findings presented through one-page infographics/dashboards. Using improved data visualization, government evaluation can communicate more effectively.
  2. Increased use of open data and crowd sourcing for data to support evaluation. Government evaluation can lead the way to democratize data to understand how interventions succeed and can be used by more people.
  3. Diffusion of Evaluation capability to more government personnel – not concentrated in one Performance/Evaluation office. Organizational capacity building, organizational learning, and teaching of evaluation.
  4. Data and Evaluations are integrated across levels of government and across agency. More collaboration and networking of evaluation.
  5. The US would have a federal evaluation policy and/or more evaluations would be written into program authorizing legislation, with AEA taking the lead.
  6. Improved technology for the capture, structuring, and analysis of qualitative data (e.g., voice recording). How can we take what’s been learned from shared, portable music technology and apply it to data collection, analysis, and reporting?
  7. Increased demand for evaluation capacity at all levels of government – especially at the county and city level. The more we innovate on the first six ideas, the more we can influence this one. The demand will increase.

Get Involved: The Government Evaluation TIG is taking these ideas, cross-walking them to our strategic planning goals to turn these possibilities into probabilities. Join us!

Rad Resources: Information and examples of scenario planning can be found in a multitude of resources, including the U.S. Fish and Wildlife Service’s guide (for use with natural resources), “Living in the Futures” by Angela Wilkinson and Roland Kupers in the May 2013 issue of Harvard Business Review, and “Scenario Planning: A Tool for Strategic Thinking” by Paul J. H. Schoemaker in the Winter 1995 issue of MIT Sloan Management Review.

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello Everyone! I’m Ted Kniker, Senior Vice President of Enlighteneering, and Chair of AEA’s Government Evaluation Topical Interest Group (TIG). During our 25th anniversary year, the TIG sponsored a lively reflection on What is “Government” Evaluation from a multi-cultural perspective? The term “government evaluation” can mean so many different things.

Lessons Learned:

  • For example, does it mean federal, state, or local? The TIG was originally started 25 years ago as a state and local government group and has expanded to include evaluators from the federal government, evaluation contractors, non-profit evaluators affected by government policies and practices, as well as managers in various organizations responsible for issues of organizational performance.
  • Does it mean funded, sponsored, or conducted?  The think tank session attendees agreed that a government evaluation focuses on a program that is either funded by or administered by a public sector entity. However, we struggled with whether a definition like that is still too limiting or even needed. When the ideas of policy and usage are introduced, government evaluation quickly includes a much larger universe of projects and evaluators.
  • What does it mean internationally?  As part of the discussion we learned from our friends from Japan that government evaluation means evaluating the government, and looking particularly for its inefficiencies. While many of us see government as context, others define it as the evaluand. We were reminded of the broadness of the term.
  • What does the definition mean for the populations being evaluated? Does it carry connotations that affect credibility, validity, and participation? The group agreed that government evaluation requires the same standards of excellence in practice as any evaluation. But one population that seems to go unexamined is ourselves. A question that generated a lot of reflection was: when we conduct an evaluation in a government context, do we consider ourselves government evaluators? While members of other methodological and contextual groupings often refer to themselves in those terms (e.g., qualitative evaluation has qualitative evaluators), why not government?

Lesson Learned: Government Evaluation is inclusive.  The attendees agreed that evaluators may have very narrow definitions of what government evaluation is and whether it applies to them, but that in reality it is far more expansive, has greater reach, and can include multiple contexts, evaluands, and methodologies. Far more evaluations can influence or be influenced by the government evaluation context. Therefore, government evaluation is a larger contextual group than might initially be thought. Have you worked in a government evaluation context but haven’t participated in the Government Evaluation TIG or attended the Government Evaluation TIG sponsored sessions? If so, we’d like to hear from you, or better yet, come join us! Here is our LinkedIn link: https://www.linkedin.com/grps/AEA-Government-Evaluation-TIG-6945047/about

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Lauren Supplee and I work in the Office of Planning, Research and Evaluation at the Administration for Children and Families. Recent media and academic attention to transparency, replication, and trust in science, and to the failure to replicate findings in medical research and psychology, raises issues for evaluation, as seen in articles in Nature Medicine and The Guardian. While evaluators can debate the concept of replication, one of its core issues is trust in the evidence evaluation generates as a condition of whether it is used in policy or practice. As an evaluator, I know that the perceived utility of my work to policy and practice is only as strong as the user’s trust in my findings.

While the evaluation field can’t address all of the aspects involved in the public’s trust in research and evaluation, we can proactively address building confidence and trust in design, analysis and interpretation of findings.

Hot Tips: Registering studies: A colleague and I recently wrote a commentary on the Society for Prevention Research’s revised evidence standards for prevention science. In the commentary we noted our disappointment that the new standards did not take transparency and trust head on. We stated that the field needs to seriously consider engaging in practices such as pre-registering studies, pre-specifying analytic plans, and sharing data with other evaluators to allow for replication of findings by independent analysts. There are multiple registries, including the Open Science Framework, which allows for publicly sharing multiple aspects of project design and analysis; for clinical trials, new registries have been created by the American Economic Association, the Registry of Clinical Trials on the What Works Clearinghouse, and clinicaltrials.gov.

Issues related to analysis: While pre-registering analysis plans may not always be appropriate for every study, failing to adjust for multiple comparisons or to pre-specify primary versus secondary outcome variables does not increase the public’s and policy-makers’ trust in our findings. Another factor in the lack of replication is under-powered studies. A recent article in American Psychologist discusses this aspect and proposes that the field consider statistical techniques such as Bayesian methods.
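
To make two of these safeguards concrete, here is a minimal Python sketch of adjusting a family of outcome comparisons with the Holm method and of checking statistical power up front. The outcome names, p-values, and effect size are hypothetical, chosen only for illustration, and the use of statsmodels is our assumption, not a tool named in this post.

```python
# Minimal sketch of two analysis safeguards discussed above,
# using statsmodels. All numbers below are hypothetical.
from statsmodels.stats.multitest import multipletests
from statsmodels.stats.power import TTestIndPower

# 1) Adjust for multiple comparisons across a family of outcomes.
#    Holm's step-down method controls the family-wise error rate.
outcomes = ["reading", "math", "attendance", "behavior", "graduation"]
p_values = [0.004, 0.021, 0.048, 0.130, 0.410]  # hypothetical raw p-values

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for name, p_raw, p_adj, sig in zip(outcomes, p_values, p_adjusted, reject):
    verdict = "significant" if sig else "not significant"
    print(f"{name:>10}: raw p={p_raw:.3f}  Holm-adjusted p={p_adj:.3f}  ({verdict})")

# 2) Check statistical power before fielding the study: sample size
#    per group needed to detect a modest effect (d = 0.3) with 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Required n per group: {n_per_group:.0f}")  # roughly 175 per group
```

Note how the Holm adjustment flips some raw p-values below 0.05 to "not significant" once the whole family of outcomes is considered, which is exactly the kind of pre-specified discipline that builds trust in reported findings.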

Interpretation of findings: My colleague who does work in tribal communities emphasizes the importance of having the community’s input in the interpretation of findings. In community-based participatory work, the partnership is embedded from the start and can naturally include this step. In some high-stakes policy evaluations, a firewall has been built between the evaluator and the evaluated to preserve the independence of the findings.

Get Involved: How can we broaden the conversation to the larger community? What other ways can we build trust in evaluation findings, and ensure clear guidance on how to benefit from participant interpretation while still maintaining trust in the findings?

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Diana Harbison and I’m the Director of the Program Monitoring and Evaluation Office at the U.S. Trade and Development Agency, which links U.S. businesses to global infrastructure opportunities. According to a recent survey, USTDA has some of the most engaged employees in the United States government. There are countless articles – and entire consulting businesses – built around the concept of “employee engagement,” but I think the reason USTDA is successful is, in part, that our employees are engaged in evaluation.

Image: USTDA “Catalyzing US Expertise to Power Africa” infographic

My office, as well as the rest of the Agency’s staff, collects feedback from our partners – over 2,000 last year – to evaluate the commercial and development results of the activities we have funded. We utilize this data to inform our daily, project-specific decisions. We also gather as a group once a year to review our results and discuss where we should focus our resources. This allows us to prioritize the countries and sectors where we work, and to identify new approaches for collaborating with our stakeholders – including our most important customers, the American people. We often employ data to communicate how our partners have benefited, or could benefit, from our programs.

We also love to tell stories, like the time a South African pilot stood up and told an audience that she had been unsure about her career path but after participating in an aviation workshop we hosted, knew what she wanted to do next and was excited about the future. Or the time a small business owner told me that his first USTDA contract helped him expand his business in just three years, and he now has hundreds of millions of dollars in business, working with new clients. We have so many stories about our accomplishments that we have begun sharing them publicly on our website as staff commentaries.

My colleagues are committed to our mission and engaged in their work every day. Instead of simply doing what is required, they utilize our results to go beyond and do what is possible. So when I’m asked how USTDA continuously drives performance results and maintains such an engaged staff, I say it’s because everyone values – and evaluates – their work.

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Kathy Newcomer, Director of the Trachtenberg School of Public Policy and Administration at George Washington University and President-Elect of AEA, and Nick Hart, a PhD candidate at GWU and Board Member of Washington Evaluators. We both have extensive experience working with Federal agencies to implement evaluation and performance measurement initiatives, providing insights about lessons learned over the past 15 years as well as lessons that could have been learned, but were not.


The George W. Bush and Barack Obama Administrations both advocated for the generation and use of evidence to guide and improve government management. The two presidents brought very different experiences, views and advisors to the Federal bureaucracy, yet their management agendas established similar expectations and initiatives. For example, each administration focused both on delivering better results for the American public and improving accountability. But while the Bush evaluation and performance management agenda relied on the use of central oversight offices to establish ambitious goals and to coordinate implementation, the Obama Administration’s approach provided agencies flexibility and focused on decentralized institutionalization.

Lessons Learned: Below, we highlight eight lessons that were learned and/or re-learned in implementing the Bush and Obama initiatives. Each of these lessons can inform future efforts to improve government performance, organizational learning, and accountability.

#1: The role of central oversight offices in the Federal government must be calibrated to meet agency needs, providing sufficient oversight with an appropriate level of ownership among agencies.

#2: Establishing and sustaining an audience for the performance measurement and evaluation initiatives is challenging, but critical.

#3: Multi-agency management initiatives can be effectively implemented, with appropriate collaboration.

#4: Development of case studies to highlight success stories can help articulate the usefulness of performance initiatives.

#5: Sufficient evaluation capacity is necessary to support initiatives over the long-term.

#6: Additional emphasis is needed on creating and institutionalizing synergies between performance measurement and evaluation offices and staff within agencies.

#7: Training new political appointees and senior managers about their role in leading evaluation and performance measurement initiatives will help improve the institutional support needed to effectively implement management agendas.

#8: More consultation with intended users of the initiatives’ products will help better align the information provided by agencies to the actual needs of policy-makers.

Rad Resource:  Interested in learning more about the development of the Bush and Obama initiatives and the lessons described above? We had an “Evaluation in the Federal Government: Lessons Learned and Lessons Unlearned” panel session at Evaluation 2015 in Chicago, IL.

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Elise Garvey, Management Auditor with the King County Auditor’s Office in Seattle, Washington, and I serve as the co-chair of the Government Evaluation Topical Interest Group (TIG). In 2015, the Government Evaluation TIG is celebrating its 25th anniversary, and there is nothing like an anniversary to motivate a time of reflection and inspire a look to the future. At this year’s conference, the Government TIG hosted a session called “Defining Government Evaluation: What Is ‘Government’ Evaluation from a Multi-Cultural Perspective?” One of our posts later this week will provide a recap of that think tank, but this post is intended to introduce you to a type of government evaluation that could expand your professional network and resources: performance auditing.

Lessons Learned: The term “auditing” generally conjures up images of finances and taxes, but there is a branch called performance auditing that is fundamentally similar to evaluation. Our guiding document, the Yellow Book, defines performance auditing as “audits that provide findings or conclusions based on an evaluation of sufficient, appropriate evidence against criteria.” Performance audits cover a wide range of topics, including housing and homelessness, libraries, climate action, capital projects and infrastructure, and emergency medical services, among many others.

Rad Resources: The Association of Local Government Auditors (ALGA) is one of several professional organizations in the auditing world. Check out the ALGA website to learn more about performance auditing and to connect with people working in local governments across the U.S. and Canada, with a growing presence from countries across the world. If your evaluation will involve working with local government, there may be a performance auditor you can reach out to for helpful information or resources!

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are David J. Bernstein, a Senior Study Director with Westat, founding chair of the AEA Government Evaluation Topical Interest Group, and President-elect of Washington Evaluators, the DC area affiliate of AEA, and Kathy Newcomer, Director of the Trachtenberg School of Public Policy and Administration at George Washington University, a former AEA Board Member, and a Past President of Washington Evaluators. We both have a long-standing interest in ways to improve the way the U.S. Federal Government contracts for evaluation services.

Problem: The vast majority of United States Federal government evaluations are conducted by contractors, but effective contracting is rarely examined. Government evaluation is not rocket science, but it is complicated.

A. Procurement regulations are detailed and may be outdated.

B. Agency practices differ across the Federal government.

C. There appears to be a lack of research focused on contracting for Federal evaluation work (although there are GAO and other studies on Federal contracting).

Solution: At the 2014 AEA Conference, a panel of government evaluators, contractors, and academics addressed 5 questions related to evaluation contracting and how it can be done more effectively. At a July 2015 Washington Evaluators Brown Bag, we presented a summary of the AEA session and asked the audience for opinions and examples on the 5 questions:

  • Name one legal and/or regulatory obstacle that can affect the quality of contracted evaluations. Potential solutions?
  • Do Requests for Expressions of Interest and question and answer processes improve the quality of evaluation Requests for Proposals (RFPs)?
  • How do government estimates of level of effort (or lack thereof) and time frames influence evaluation budgets and the conduct of evaluations?
  • How do contractors decide to bid or not? Do certain practices discourage bidding?
  • What are the pros and cons of performance-based contracting? Is it possible or desirable for contracting evaluation services?

Hot Tip: Stage evaluation study timelines. During Year 1, focus on evaluation planning: conduct an evaluability assessment; create an evaluation plan; develop data collection forms; and prepare for/conduct OMB review. In the outyears, collect and analyze data; build in time for process or implementation reviews to assess program context; and allow the time needed for interventions to produce/demonstrate intended outcomes.

Rad Resource: The PowerPoint Presentation summarizing the AEA session is available on the Washington Evaluators website.

Interested in government evaluation contracting? Look for a session on Exemplary Practices in Contracting for Government Evaluation at Evaluation 2015 in Chicago, IL.

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Brian Yoder, Director of Assessment, Evaluation and Institutional Research at the American Society for Engineering Education, a professional association located in Washington, D.C. I also serve as President-Elect of the Washington Evaluators, a local affiliate of AEA.

I’ve lived and worked in D.C. for the past seven years as a contractor, in government, and at a professional society, and I believe government processes can be helped through the use and application of evaluation. As the saying goes, there are no problems, only opportunities, and I’ve seen plenty of opportunities both to improve government processes and to improve the use of evaluation to assess government programs.

Lessons Learned: Traditionally, I think evaluators have tried to keep their role separate from implementation and the policy-making processes. But, based on my work in D.C., I’ve come to believe that policy makers and program implementers would be well served by evaluators being involved more closely and directly in policy making and program implementation. When you work in an environment where the answers to important questions were needed yesterday, and the questions that need to be answered keep changing, the traditional approach of formative evaluation leading to summative evaluation becomes too slow and irrelevant.

That’s why I helped to spearhead the Evaluators Visit Capitol Hill (EVCH) Initiative, a joint effort between the Washington Evaluators and AEA’s Evaluation Policy Task Force (EPTF). EVCH coordinates attendees at the American Evaluation Association conference in Washington, D.C. this fall to meet with someone in the office of their congressperson, discuss the importance of evaluation, and share EPTF materials.

My hope is that this initiative can accomplish three things:

  1. Make more policy makers aware of AEA and the work of EPTF.
  2. Expand the reach of EPTF by creating new connections for EPTF.
  3. Give evaluators the opportunity to be part of the early policy-making process by providing materials on evaluation to policy makers prior to the policy being made.

The deadline to sign up to participate has passed, but if you would like to learn more about the initiative, click here: http://washingtonevaluators.roundtablelive.org/EVCH

Hot Tip: For those of you participating, please remember to pick up your packet of materials at the Local Affiliates Working Group table located close to AEA conference registration.

Rad Resource: If you would like to know more about the Evaluation Policy Task Force, click here: http://www.eval.org/p/cm/ld/fid=129

Rad Resource: If you would like to learn more about the Washington Evaluators, click here: http://www.washeval.org/

This is the last of three weeks this year sponsored by our Local Arrangements Working Group (LAWG) for Evaluation 2013, the American Evaluation Association Annual Conference coming up next month in Washington, DC. They’re sharing not only evaluation expertise from in and around our nation’s capital, but also tips for enjoying your time in DC. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.

