AEA365 | A Tip-a-Day by and for Evaluators


Hello, you may know me, Cheryl Oros, best from the Policy Watch columns in the AEA Newsletter, as I have been the consultant supporting the Evaluation Policy Task Force over the past six years. I have also directed federal evaluation offices, served at the executive level overseeing broad programmatic efforts, and taught many evaluation courses.

Hot Tip: 

Metrics for both evaluation studies and performance management can be developed from a conceptual (logic) model of a program. The important questions about a program (related to inputs, outputs, outcomes, and impact) are developed from the model, and the metrics are designed to answer these questions via appropriate analyses.
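To make the mapping concrete, here is a minimal sketch in Python; the program, questions, and metric names are all invented for illustration, not taken from any real evaluation:

# A minimal sketch (hypothetical program; metric names are invented)
# of deriving questions and metrics from a logic model's components.
logic_model = {
    "inputs":   {"question": "Were the planned resources in place?",
                 "metrics": ["budget_spent_pct", "staff_fte"]},
    "outputs":  {"question": "Did the program deliver its services?",
                 "metrics": ["participants_served", "sessions_delivered"]},
    "outcomes": {"question": "Did participants change as intended?",
                 "metrics": ["skill_gain_score", "employment_rate_6mo"]},
    "impact":   {"question": "Did the program cause those changes?",
                 "metrics": ["effect_vs_comparison_group"]},
}

for component, spec in logic_model.items():
    print(f"{component}: {spec['question']} -> {', '.join(spec['metrics'])}")

Structured this way, each metric has a traceable rationale: it exists to answer a specific question raised by the model.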

Cool Trick: 

You can blend learning from evaluation studies with performance metrics for decision makers to assist them in policy making and program adjustments.  Evaluation can also inform whether the targets chosen for performance metrics are reasonable.

Lessons Learned:

  • Evaluation studies are needed to determine the impact of programs and to understand why results occur (or not). When these studies also explore program processes, they can shed light on the features of the program over which managers have control, allowing them to influence program success.
  • Performance metrics are usually process oriented, addressing the inner workings of programs that can influence desired impact. Metrics addressing impact should be used for performance management only if evaluation has validated the link between the program and that impact.
  • Combining evaluation and performance monitoring enables managers to make policy decisions based on an in-depth understanding of the program as well as the ability to monitor and analyze program functioning via performance metrics, possibly in real time.

The American Evaluation Association is celebrating Washington Evaluators (WE) Affiliate Week. The contributions all this week to aea365 come from WE Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Demetra Smith Nightingale, currently at the Urban Institute and previously at the US Department of Labor. I want to take the opportunity to briefly describe how responding to information the government publishes in the Federal Register can be useful.

There are many different types of notices posted for public comment in the Federal Register, such as notices of proposed rules or termination of rules, proposed information collection requests for program reporting requirements, draft data collection instruments and data collection requests for evaluation projects, statistical survey information requests, or broader requests for information (RFIs) asking for input on specific or general policy issues.  Comments on RFIs, and not just on study-specific notices, provide an important mechanism for evaluators and researchers to provide input into issues on which the Federal government may be considering action.

A recent notice from the Office of Information and Regulatory Affairs at the Office of Management and Budget (OMB) is an example of an RFI with direct implications for the evaluation community.  The notice regards combining data sets for statistical and research purposes, and requests comments on: “(1) Current and emerging techniques for linking and analyzing combined data; (2) on-going research on methods to describe the quality of statistical products that result from these techniques; (3) computational frameworks and systems for conducting such work; (4) privacy or confidentiality issues that may arise from combining such data; and (5) suggestions for additional research in those or related areas.”

This is a case where the request stems from efforts by the Chief Statistician of the United States to establish priorities and coordinate research efforts across the Federal Statistical System to focus on improving federal statistics, including a priority to use new techniques and methodologies based on combining data from multiple sources. Future decisions the Federal government makes will have direct implications for data that evaluators might want to utilize for their projects. AEA provided formal comments and feedback to the RFI on behalf of the membership.

Hot Tips: Evaluator comments to this or any other relevant notice will be most useful to Federal agencies if a few key points are kept in mind:

  • Comments should directly address the topic at hand. Comments unrelated to the question under consideration will not be considered – this is not an opportunity to comment on unrelated matters (though many people do!).
  • Comments should be as clear and concise as possible. Federal staff often have very limited time to review and consider comments, so try to make your point clearly and concretely.
  • Comments are most helpful when you can provide specific examples or evidence of the effects that a proposed rule, grant notice, or data collection will have. It is more difficult for agencies to consider comments that are based only on your opinions or theoretical outcomes.
  • Be judicious when deciding whether to comment. Provide comment when you have something worth saying. That is, don’t become that person who comments on anything and everything just because you can.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Stephanie Shipman, a founding member of AEA’s Evaluation Policy Task Force. I recently retired from the U.S. Government Accountability Office (GAO) where I found AEA’s Evaluation Roadmap extremely useful when consulting with U.S. and foreign agencies on how to organize an effective evaluation office.

Rad Resource: The Task Force’s “An Evaluation Roadmap for a More Effective Government”  responded to former President Barack Obama’s call to increase the use of evidence in government management and policymaking. This policy paper describes the essential role that evaluation can play in assessing the strengths and weaknesses of programs, policies, and organizations to improve their effectiveness, efficiency, and worth. As the public demands more accountability from the government, evaluation has become an increasingly important support for government programs and policies.

The Roadmap provides a framework to help agencies develop an evaluation program to support organizational learning. It also recommends ways the Congress can help institutionalize evaluation in government. Key principles of the framework include:

  • Support independent evaluation offices with adequate resources and skilled staff,
  • Ensure all programs and policies are subject to evaluation,
  • Select appropriate evaluation approaches from a broad range of methods,
  • Establish and publish evaluation policies and quality standards,
  • Plan a body of strategic evaluation work in consultation with stakeholders,
  • Disseminate evaluation results widely and follow up on their recommendations.

Several U.S. federal agencies used this framework in developing their own evaluation policies to ensure they provide credible, useful feedback for managers. For example, the Departments of Labor and State, the Administration for Children and Families, and the Centers for Disease Control and Prevention each have policies that reflect the Roadmap.

Looking back a decade since we first drafted the Roadmap, the Task Force is considering ways to update the Roadmap to ensure its continued relevance to current discussions of evaluation policy. For example, in 2017, the U.S. Commission on Evidence-Based Policymaking recommended that agencies formalize an evaluation function and establish chief evaluation officers and multiyear research and evaluation plans, as well as improve researchers’ access to administrative data, with appropriate privacy protections, for program evaluation.

The Task Force welcomes insight from AEA members about the usefulness of the Roadmap and suggestions for how it might be improved as a communication tool going forward. Please send your comments and suggestions to the Task Force at: evaluationpolicy@eval.org.

The American Evaluation Association is celebrating AEA’s Evaluation Policy Task Force (EPTF) week. The contributions all this week to aea365 come from members of AEA’s EPTF. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Happy Labor Day Week! I am Calista H. Smith, President of C H Smith & Associates, a project management consulting and evaluation firm in Ohio.  C H Smith & Associates has done multiple evaluation projects for the Ohio Department of Education and designed evaluations for other clients related to public policy. In this work, it has been important to understand policymakers and the legislative decision-making process.

Lessons Learned:

  • Legislative processes may influence your evaluation design and timeline. Publicly sponsored projects may have reporting deadlines written into legislation or their funding streams may be subject to annual budgeting reviews.  Projects sponsored by private philanthropy may also be influenced by the legislative cycle as findings may be helpful to craft or change public policy.
  • Policymakers may get data and information from a variety of sources. In our experience, it was common for a policymaker to have visited a program site or talked extensively with program champions. Program critics may also be vocal to policymakers. External criticism may be based on program perceptions (rooted in experience or in ideology) or on a sense of competition for resources. Your evaluation data will need to be clear and easily accessible to cut through what may be noise.
  • You may need various reports of the same analysis. For one evaluation, we produced a one-pager of highlights for quick reference by high-level administrators and officials, a six-page summary of lessons to insert in a public annual report, and a full technical report with a more detailed explanation of methodology and data for staffers and stakeholders.

Hot Tips (or Cool Tricks):

  • Spend time refining research questions related to what legislative decision-makers want to or should know regarding the project and related policies.
  • Regardless of the scope of your program evaluation, identify what policies and funding streams impact the program. This understanding helps you to gain clarity on who the stakeholders are and their interests and constraints.
  • In your evaluation design, consider legislative timelines. Think about what data you may be able to reasonably collect, analyze, and report to provide insights to legislators in line with the legislative decision-making process.
  • Encourage your client to think, independently of your evaluation, about productive courses of action they might take if findings are less favorable than expected. Consider building in extra review time for analysis so that the client can process data and determine how to make lessons actionable, or identify questions that may emerge from policymakers about the results or the evaluation approach.

Rad Resources: 

  • The National Conference of State Legislatures has a Program Evaluation Society for its state policy staff members. It is helpful to see what materials policy staff members may reference when they would like to implement or review an evaluation.
  • You may map out stakeholder interests, including policymakers’ interests, in your evaluations in a “power/interest matrix,” as in the sketch below:
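As a minimal sketch, assuming invented stakeholders and 0–10 scores (none of this comes from the post itself), here is one way to sort stakeholders into the four classic power/interest quadrants in Python:

# A minimal sketch (stakeholders and scores are invented) of a
# power/interest matrix: score each stakeholder on power and interest,
# then place each in one of the four classic quadrants.
stakeholders = {
    "State legislator":   {"power": 9, "interest": 4},
    "Program director":   {"power": 7, "interest": 9},
    "Participant family": {"power": 2, "interest": 8},
    "General public":     {"power": 2, "interest": 2},
}

def quadrant(power, interest, cutoff=5):
    if power >= cutoff and interest >= cutoff:
        return "manage closely"
    if power >= cutoff:
        return "keep satisfied"
    if interest >= cutoff:
        return "keep informed"
    return "monitor"

for name, score in stakeholders.items():
    print(f"{name}: {quadrant(score['power'], score['interest'])}")

The scores and the cutoff of 5 are arbitrary; the value of the exercise is in making those judgments explicit and discussable with your client.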

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring the WORK of evaluation. The contributions this week are tributes to the behind-the-scenes and often underappreciated work evaluators do. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings and welcome from the Disabilities and Underrepresented Populations TIG week. We are June Gothberg, Chair, and Caitlyn Bukaty, Program Chair. This week we have a strong lineup of great resources, tips, and lessons learned for engaging typically underrepresented populations in evaluation efforts.

You might have noticed that we changed our name from Disabilities and Other Vulnerable Populations to Disabilities and Underrepresented Populations and may be wondering why. It came to our attention during 2016 that several of our members felt our previous name was inappropriate and had the potential to be offensive. Historically, a little under 50% of our TIG’s presentations have represented people with disabilities; the rest address a diverse group ranging from migrants to teen parents. The following Wordle shows the categories represented across our TIG’s presentations.

[Wordle image: Categories represented by the Disabilities and Underrepresented Populations presentations from 1989-2016]

TIG members felt that the use of vulnerable in our name set up a negative and in some cases offensive label for the populations we represent. Thus, after discussion, communication, and coming to consensus, we proposed to the AEA board that our name be changed to Disabilities and Underrepresented Populations.

Lessons Learned:

  • Words are important! Labels are even more important!
  • Words can hurt or empower; it’s up to you.
  • Language affects attitudes and attitudes affect actions.

Hot Tips:

  • If we are to be effective evaluators, we need to pay attention to the words we use in written and verbal communication.
  • Always put people first, labels last. For example, student with a disability, man with autism, woman with dyslexia.

The nearly yearlong name change process reminded us of the lengthy campaign to rid federal policy and documents of the R-word. If you happened to miss the Spread the Word to End the Word campaign, there are several great videos and other resources at r-word.org.

Videos: a high school YouTube video, and Spread the Word to End the Word – https://www.youtube.com/watch?v=kTGo_dp_S-k&feature=youtu.be

Bill S. 2781, enacted as Rosa’s Law, takes its name and inspiration from 9-year-old Rosa Marcellino. It removes the terms “mental retardation” and “mentally retarded” from federal health, education, and labor policy and replaces them with the people-first language “individual with an intellectual disability” and “intellectual disability.” The signing of Rosa’s Law is a significant milestone in establishing dignity, inclusion, and respect for all people with intellectual disabilities.

So, what’s in a name?  Maybe more than you think!

 


We are Wanda Casillas and Heather Evanson, and we are part of Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE). Many of our team members and colleagues are privileged to work with a variety of federal agencies on program evaluation and performance measurement and, throughout this week, will share some of their lessons learned and ideas about potential opportunities to help federal agencies expand the value of evaluations.

This week members of our team will share lessons learned about working remotely on federal evaluations, the use of qualitative methods in federal programs that don’t always appreciate the value of mixed methods, the potential for federal programs to be more “selfish” in program planning, the value of conducting evaluation and performance measurement for federal programs, and making the most out of data commonly collected in federal programs. In the coming weeks, readers will find an additional article on scaling up federal evaluations.

Lesson Learned: Many federal clients use performance measurement, monitoring, evaluation, assessment, and other similar terms interchangeably; however, evaluators and clients don’t always have the same definitions, and therefore expectations, in mind for what these terms mean. It’s important to learn as much as possible about your federal client’s experiences and history with evaluation through research and conversations with relevant stakeholders in order to make sure you can deliver on a given agency’s needs.

Lesson Learned: Clients sometimes see evaluation or performance measurement as a requirement rather than an opportunity to understand how to improve upon or expand an existing program. As evaluation consultants, we sometimes have to work with clients to help them understand how evaluation can benefit them even after responding to a request for proposals.

Rad Resource: Alfred Ho provides some intriguing insights on the effects of the Government Performance and Results Act of 1993, which gave rise to much of the performance measurement and evaluation activity we see today, in GPRA after a Decade: Lessons from the Government Performance and Results Act and Related Federal Reforms.

The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We’re Sarah Brewer, Elise Garvey, Ted Kniker, and Krystal Tomlin, the current leadership of AEA’s Government Evaluation Topical Interest Group (TIG). To finish out Government Evaluation Week on AEA365, we decided to offer a glimpse into the future of government evaluation.

Since 2015 was the 25th anniversary of the Government Evaluation TIG, we wanted to forecast into the next 25 years, so we sponsored a Birds of a Feather session at AEA 2015 on Predicting the Future of Evaluation and identifying innovations we can make. Using an abbreviated scenario planning exercise, we set the context that scenario planning is about “stories” that illuminate the drivers of change. We asked the group to brainstorm about what government evaluation could look like in 25 years: What innovations will emerge? What are the drivers of change in government evaluation? What future do they imagine? A very positive shared vision emerged.

Lessons Learned:

  1. Performance metrics/evaluation findings presented through one-page infographics/dashboards. Using improved data visualization, government evaluation can communicate more effectively.
  2. Increased use of open data and crowd sourcing for data to support evaluation. Government evaluation can lead the way to democratize data to understand how interventions succeed and can be used by more people.
  3. Diffusion of Evaluation capability to more government personnel – not concentrated in one Performance/Evaluation office. Organizational capacity building, organizational learning, and teaching of evaluation.
  4. Data and Evaluations are integrated across levels of government and across agency. More collaboration and networking of evaluation.
  5. The US would have a federal evaluation policy and/or more evaluations would be written into program authorizing legislation. AEA taking the lead.
  6. Improved technology for capture, structure, and analysis of qualitative data (e.g., voice recording). How can we take what’s been learned from shared, portable music and apply it to data collection, analysis, and reporting?
  7. Increased demand for evaluation capacity at all levels of government – especially at the county and city level. The more we innovate on the first six ideas, the more we can influence this one. The demand will increase.

Get Involved: The Government Evaluation TIG is taking these ideas, cross-walking them to our strategic planning goals to turn these possibilities into probabilities. Join us!

Rad Resources: Information and examples of scenario planning can be found in a multitude of resources, including the U.S. Fish and Wildlife Service’s guide (for use with natural resources), “Living in the Futures” by Angela Wilkinson and Roland Kupers in the May 2013 issue of Harvard Business Review, and “Scenario Planning: A Tool for Strategic Thinking” by Paul J. H. Schoemaker in the Winter 1995 issue of MIT Sloan Management Review.

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello Everyone! I’m Ted Kniker, Senior Vice President of Enlighteneering, and Chair of AEA’s Government Evaluation Topical Interest Group (TIG). During our 25th anniversary year, the TIG sponsored a lively reflection on What is “Government” Evaluation from a multi-cultural perspective? The term “government evaluation” can mean so many different things.

Lessons Learned:

  • For example, does it mean “federal, state, or local”? The TIG was originally started 25 years ago as a state and local government group and has expanded to include evaluators from the federal government, evaluation contractors, non-profit evaluators affected by government policies and practices, as well as managers in various organizations responsible for issues of organizational performance.
  • Does it mean funded, sponsored, or conducted?  The think tank session attendees agreed that a government evaluation focuses on a program that is either funded by or administered by a public sector entity. However, we struggled with whether a definition like that is still too limiting or even needed. When the ideas of policy and usage are introduced, government evaluation quickly includes a much larger universe of projects and evaluators.
  • What does it mean internationally? As part of the discussion we learned from our friends from Japan that government evaluation means evaluating the government, looking particularly for its inefficiencies. While many of us see government as context, others define it as the evaluand. We were reminded of the broadness of the term.
  • What does the definition mean for the populations being evaluated? Does it carry connotations that affect credibility, validity, and participation? The group agreed that government evaluation requires the same standards of excellence in practice as any evaluation. But one population that seems to go unexamined is ourselves. A question that generated a lot of reflection was: when we conduct an evaluation in a government context, do we consider ourselves government evaluators? While members of other methodological and contextual groupings often refer to themselves in those terms (e.g., qualitative evaluation has qualitative evaluators), why not government?

Lesson Learned: Government Evaluation is inclusive.  The attendees agreed that evaluators may have very narrow definitions of what government evaluation is and whether it applies to them, but that in reality it is far more expansive, has greater reach, and can include multiple contexts, evaluands, and methodologies. Far more evaluations can influence or be influenced by the government evaluation context. Therefore, government evaluation is a larger contextual group than might initially be thought. Have you worked in a government evaluation context but haven’t participated in the Government Evaluation TIG or attended the Government Evaluation TIG sponsored sessions? If so, we’d like to hear from you, or better yet, come join us! Here is our LinkedIn link: https://www.linkedin.com/grps/AEA-Government-Evaluation-TIG-6945047/about

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


My name is Lauren Supplee and I work in the Office of Planning, Research and Evaluation at the Administration for Children and Families. Recent media and academic attention to transparency, replication, trust in science, and the lack of replication of findings in medical research and psychology raises issues for evaluation, as seen in articles in Nature Medicine and The Guardian. While evaluators can debate the concept of replication, one of its core issues is trust in the evidence evaluation generates, a condition of whether that evidence is used in policy or practice. As an evaluator, I know that the perceived utility of my work to policy and practice is only as strong as the user’s trust in my findings.

While the evaluation field can’t address all of the aspects involved in the public’s trust in research and evaluation, we can proactively address building confidence and trust in design, analysis and interpretation of findings.

Hot Tips: Registering studies: A colleague and I recently wrote a commentary on the Society for Prevention Research’s revised evidence standards for prevention science. In the commentary we noted our disappointment that the new standards did not take transparency and trust head on. We stated that the field needs to seriously consider engaging in practices such as pre-registering studies, pre-specifying analytic plans, and sharing data with other evaluators to allow for replication of findings by independent analysts. There are multiple registries, including the Open Science Framework, which allows for publicly sharing multiple aspects of project design and analysis; for clinical trials, new registries have been created by the American Economic Association, the Registry of Clinical Trials on the What Works Clearinghouse, and clinicaltrials.gov.

Issues related to analysis: While pre-registering analysis plans may not always be appropriate for every study, failing to adjust for multiple comparisons or to pre-specify primary versus secondary outcome variables does not increase the public’s and policymakers’ trust in our findings. Another factor in the lack of replication is under-powered studies. A recent article in American Psychologist discusses this aspect and proposes that the field consider statistical techniques such as Bayesian methods.
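As one concrete example of such an adjustment, here is a minimal sketch, with invented p-values, of the Holm step-down correction for multiple comparisons using the statsmodels library in Python:

# A minimal sketch (p-values are invented) of a multiple-comparisons
# adjustment using the Holm step-down procedure from statsmodels.
from statsmodels.stats.multitest import multipletests

# Unadjusted p-values from, say, five outcome measures in one study.
pvals = [0.004, 0.022, 0.041, 0.130, 0.450]

reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")

for p_raw, p_adj, significant in zip(pvals, p_adjusted, reject):
    print(f"raw p={p_raw:.3f}  adjusted p={p_adj:.3f}  reject null: {significant}")

Holm controls the familywise error rate while retaining more power than a plain Bonferroni correction, which is why it is often a reasonable default when several outcomes are tested together.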

Interpretation of findings: My colleague who does work in tribal communities emphasizes the importance of having the community’s input in the interpretation of findings. In community-based participatory work, the partnership is embedded from the start and can naturally include this step. In some “high-stakes” policy-evaluation, a firewall has been built between the evaluator and the evaluated to gain independence of the findings.

Get Involved: How can we broaden the conversation to the larger community? What other ways can we build trust in evaluation findings, and ensure clear guidance on how to benefit from participant interpretation while still maintaining trust in the findings?

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Diana Harbison and I’m the Director of the Program Monitoring and Evaluation Office at the U.S. Trade and Development Agency, which links U.S. businesses to global infrastructure opportunities. According to a recent survey, USTDA has some of the most engaged employees in the United States government. There are countless articles – and entire consulting businesses – built around the concept of “employee engagement,” but I think the reason USTDA is successful is, in part, that our employees are engaged in evaluation.

[Image: USTDA “Catalyzing US Expertise to Power Africa” infographic]

My office, as well as the rest of the Agency’s staff, collects feedback from our partners – over 2,000 last year – to evaluate the commercial and development results of the activities we have funded. We utilize this data to inform our daily, project-specific decisions. We also gather as a group once a year to review our results and discuss where we should focus our resources. This allows us to prioritize the countries and sectors where we work, and to identify new approaches for collaborating with our stakeholders – including our most important customers, the American people. We often employ data to communicate how our partners have or could benefit from our programs.

We also love to tell stories, like the time a South African pilot stood up and told an audience that she had been unsure about her career path but after participating in an aviation workshop we hosted, knew what she wanted to do next and was excited about the future. Or the time a small business owner told me that his first USTDA contract helped him expand his business in just three years, and he now has hundreds of millions of dollars in business, working with new clients. We have so many stories about our accomplishments that we have begun sharing them publicly on our website as staff commentaries.

My colleagues are committed to our mission and engaged in their work every day. Instead of simply doing what is required, they utilize our results to go beyond and do what is possible. So when I’m asked how USTDA continuously drives performance results and maintains such an engaged staff, I say it’s because everyone values – and evaluates – their work.

The American Evaluation Association is celebrating Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

