AEA365 | A Tip-a-Day by and for Evaluators


I am Brian Yoder, Director of Assessment, Evaluation and Institutional Research at the American Society for Engineering Education, a professional association located in Washington, D.C. I also serve as President-Elect of the Washington Evaluators, a local affiliate of AEA.

I’ve lived and worked in D.C. for the past seven years as a contractor, in government, and at a professional society, and I believe government processes can be improved through the use and application of evaluation. As the saying goes, there are no problems, only opportunities, and I’ve seen plenty of opportunities to improve government processes and to make better use of evaluation in assessing government programs.

Lessons Learned: Traditionally, I think evaluators have tried to keep their role separate from implementation and policy-making processes. But, based on my work in D.C., I’ve come to believe that policy makers and program implementers would be well served by evaluators being involved more closely and directly in policy making and program implementation. When you work in an environment where answers to important questions were needed yesterday, and the questions themselves keep changing, the traditional sequence of formative evaluation leading to summative evaluation becomes too slow to stay relevant.

That’s why I helped to spearhead the Evaluators Visit Capitol Hill (EVCH) Initiative, a joint effort between the Washington Evaluators and AEA’s Evaluation Policy Task Force (EPTF). EVCH coordinates attendees of this fall’s American Evaluation Association conference in Washington, D.C. to meet with staff in their congressperson’s office, discuss the importance of evaluation, and share EPTF materials.

My hope is that this initiative can accomplish three things:

  1. Make more policy makers aware of AEA and the work of EPTF.
  2. Expand the reach of EPTF by creating new connections for its work.
  3. Give evaluators the opportunity to be part of the early policy-making process by providing materials on evaluation to policy makers prior to the policy being made.

The deadline to sign up to participate has passed, but if you would like to learn more about the initiative, visit http://washingtonevaluators.roundtablelive.org/EVCH

Hot Tip: For those of you participating, please remember to pick up your packet of materials at the Local Affiliates Working Group table located close to AEA conference registration.

Rad Resource: If you would like to know more about the Evaluation Policy Task Force, click here http://www.eval.org/p/cm/ld/fid=129

Rad Resource: If you would like to learn more about the Washington Evaluators, click here http://www.washeval.org/

This is the last of three weeks this year sponsored by our Local Arrangements Working Group (LAWG) for Evaluation 2013, the American Evaluation Association Annual Conference coming up next month in Washington, DC. They’re sharing not only evaluation expertise from in and around our nation’s capital, but also tips for enjoying your time in DC. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.


This post comes to you from Valerie Caracelli, Senior Social Science Analyst at the U.S. Government Accountability Office (GAO). I’m serving with David Bernstein as co-chair of the Local Arrangements Working Group (LAWG) for the 2013 American Evaluation Association Conference in Washington, D.C. October 14-19.

Lessons Learned—Do federal managers use performance information? The conference theme, The State of Evaluation Practice in the Early 21st Century, lends itself to the question of whether and how federal managers use performance information to manage for results. GAO’s most recent periodic survey of federal managers was released as Managing for Results: 2013 Federal Managers Survey on Organizational Performance and Management Issues, GAO-13-519SP.

Lessons Learned—Do evaluations get used? The conference theme also raises the question of whether and how evaluations can make a difference! Stephanie Shipman spearheaded a companion GAO report that focuses on questions from the Federal Managers Survey addressing how evaluations contribute to improving program management and policy making. Through case examples, GAO explored barriers that impede use and strategies agencies employ to get evaluation results used. See Program Evaluation: Strategies to Facilitate Agencies’ Use of Evaluation in Program Management and Policy Making [GAO-13-570]. One finding is that evaluators rely on a body of evidence, rather than a single study, which allows them to respond to a broad array of evaluation questions of interest to decision makers. You can download both reports from www.gao.gov.

Hot Tips—Visiting D.C. in 2013: The perspective you gain from a body of evidence led me to think about your visit to our nation’s capital and where you might find the best vantage points for viewing the city and its surroundings:

The view of the National Mall from the Capitol Dome. Image credit: http://dclikealocal.com/dclikealocal/2009/3/23/best-view-of-dc-the-capitol-dome.html

  • Naturally, the Washington Monument comes to mind but that will be closed owing to repairs from the 2011 earthquake!
  • At just over half the height of the monument, the Clock Tower at the Old Post Office Pavilion offers a 360-degree panorama of the same views. Donald Trump recently purchased the building, but the National Park Service still controls the Clock Tower, which will remain open during renovations.
  • In the evenings, the W Hotel’s 11th-floor P.O.V. Roof Terrace provides panoramic views of the monuments and D.C. landmarks, which are illuminated after dark.
  • Staying awhile? The Washington National Cathedral’s Pilgrim Observation Gallery offers a 360-degree view of the city of Washington and its environs.

For more resources on these and other activities, we encourage each of you to visit the Washington Evaluators Local Affiliate website at www.washeval.org. Washington Evaluators are looking forward to the conference and your arrival!

We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Registration is now open! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.


I am Andy Blum, Vice President for Program Management and Evaluation at the U.S. Institute of Peace (USIP), an independent organization that helps communities around the world prevent, manage, or recover from violent conflict. I recently spoke at a brown bag for the Washington Evaluators about improving my organization’s learning and evaluation processes in general, and creating an organizational evaluation policy in particular. In this post, I’ll share a few takeaways that could be applicable in other contexts.

Here are three, hopefully generalizable, lessons from the process of crafting the evaluation policy at USIP.

Lesson Learned: Conducting a baseline assessment was extremely helpful. When we asked staff about their greatest hopes and fears regarding evaluation, themes emerged, and the findings proved useful both as an assessment of where we stood on learning and evaluation and for developing an action plan to improve evaluation.

Lesson Learned: When talking about changing how evaluations are done and used in organizations, you need to manage messaging and communications almost fanatically. The phrase “demystify evaluation” had real resonance. I found myself becoming almost folksy when discussing evaluation. Instead of saying theory of change, I asked, “Why do you think this is going to work?” Instead of saying indicator or metric, I would ask, “What are you watching to see if the program is going well?” Especially at the beginning of an effort to improve evaluation, you do not want to alienate staff through the use of technical language.

Lesson Learned: There is a tension between supporting your evaluation champions and creating organizational “standards.” Your evaluation champions have likely created effective boutique solutions to their evaluation challenges. These can be undermined as you try to standardize processes throughout the organization. To the extent possible, standardization should build on existing solutions.

Rad Resource: The best change management book I’ve seen: Switch: How to Change Things When Change Is Hard, by Chip Heath and Dan Heath.

Hot Tips—Insider’s advice for Evaluation 2013 in DC: The Passenger is DC’s most famous cocktail spot, but if the weather is good you can’t beat Room 11 in Columbia Heights as a place to sit outside and drink real cocktails.

We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Registration is now open! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.


Welcome to the Evaluation 2013 Conference Local Arrangements Working Group (LAWG) week on aea365. Howdy! My name is Jennifer Hamilton, and I am a Senior Study Director at Westat and a Board member of the Eastern Evaluation Research Society, an AEA affiliate. I am also a statistician and methodologist who sometimes tweets about evaluation, in addition to other things too embarrassing and geeky to mention here.

Lessons Learned:

We have known for a while that the evaluation pendulum has been swinging toward randomized designs, largely due to the influence of the Institute of Education Sciences (IES) at the U.S. Department of Education (DoE). IES has done this largely by leveraging its $200 million budget to prioritize evaluations that allow impact estimates to be causally attributed to a program or policy.

Some evaluators have welcomed this shift toward experimental designs, while others have railed against it. Love it or hate it, I think the Randomized Controlled Trial (RCT) is here to stay. I say this with some conviction, based on my own experiences working with DoE and the fact that other federal agencies seem to be moving in the same direction. A case in point is last year’s memo from the Office of Management and Budget (cleverly dubbed the OMG OMB memo). It asks the entire Executive Branch to implement strategies to support evaluations using randomized designs. For example, when applying for grants, districts could be required to submit schools in pairs, so that one could be randomly assigned to the treatment and the other to a control condition, as in the sketch below.
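Here is a minimal sketch, in Python, of what such pairwise random assignment could look like. The school names and the fixed seed are invented for illustration; nothing here comes from the OMB memo itself:

```python
import random

# Hypothetical matched pairs of schools submitted with a grant application.
pairs = [
    ("Adams Elementary", "Baker Elementary"),
    ("Cedar Middle", "Dogwood Middle"),
    ("Eastside High", "Westside High"),
]

random.seed(42)  # a fixed seed keeps the assignment reproducible and auditable

for school_a, school_b in pairs:
    # Within each pair, one school goes to treatment, the other to control.
    treatment, control = random.sample([school_a, school_b], 2)
    print(f"Treatment: {treatment:<18} Control: {control}")
```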

Even though I believe the field is benefiting from the increased focus on experimental designs, the bottom line is that they are still not appropriate in all (or even most) situations. A program in its early stages of development, asking formative questions, should not be evaluated with an experimental design. Moreover, it is often costly and difficult to implement a high-quality RCT (and don’t even talk to me about trying to recruit for them). Lastly, experimental methodology focuses on obtaining a high degree of internal validity, which often limits how far you can generalize your results, reducing external validity.

Rad Resource:

  • If you decide to utilize an experimental design, familiarize yourself with the What Works Clearinghouse (WWC) standards and procedures. Although getting their Good Housekeeping stamp of approval may not be your goal, the WWC has had a lot of *really* smart people thinking about methodology for a long time. If you follow their guidelines, you reap the benefit of their brain trust.

Hot Tips—Insider’s advice for Evaluation 2013 in DC:

We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). AEA is accepting proposals to present at Evaluation 2013 through March 15 via the conference website. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.


I am Melanie Hwalek, CEO of SPEC Associates and a member of AEA’s Cultural Competence Statement Dissemination Core Workgroup. My focus within the Workgroup is to help identify ways to disseminate the Statement and integrate its contents into evaluation policy. AEA’s Think Tank: Adoption of the AEA Public Statement on Cultural Competence in Evaluation: Moving From Policy to Practice and Practice to Policy gave me three big ideas for doing this.

Lesson Learned: Cultural competence can live in big “P” policy and small “p” policy. Dissemination of the Cultural Competence Statement doesn’t have to start with federal- or state-level, big “P” policy change. Small policies – like setting criteria for acceptable evaluation plans, for assuring that evaluation methods take culture into consideration, and for ensuring culturally sensitive evaluation products – can go just as far, or further, in assuring that all evaluations validate the importance of culture in their design, analysis, interpretation, and reporting.

Hot Tip: Start where there is a path of least resistance. Agencies that exist to represent or protect minority interests are, themselves, culturally sensitive. These agencies should readily understand the importance of ensuring that evaluations of their programs include cultural competence. If you are passionate about infusing cultural competence into municipal, state, or federal policy, start with these types of agencies. Keep in mind, though, that just because an organization “says” it values cultural competence doesn’t mean they really know how to be and act in a culturally competent way.

Hot Tip: Try to go viral. Infusing cultural competence into policy means that we need to be open to all kinds and levels of policy, much of which is identified only through practice. The lesson here is to start promoting cultural competence to anyone, anywhere evaluation planning, methods, analysis, and reporting are discussed. In this networked world, the more people who think and talk about cultural competence in evaluation, the more likely it will find its way into evaluation practice and evaluation policy.

Rad resource: William Trochim wrote an informative article on evaluation policy and practice.

This week, we’re diving into issues of Cultural Competence in Evaluation with AEA’s Statement on Cultural Competence in Evaluation Dissemination Working Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am David J. Bernstein, and I am a Senior Study Director with Westat, an employee-owned research and evaluation company in Rockville, Maryland. I was an inaugural member of AEA, and was the founder and first Chair of the Government Evaluation Topical Interest Group.

Westat was hired by the U.S. Department of Education’s Rehabilitation Services Administration (RSA) to conduct an evaluation of the Helen Keller National Center for Deaf-Blind Youths and Adults (HKNC). Founded by an Act of Congress in 1967, HKNC is a national rehabilitation program serving youth and adults who are deaf-blind; it operates under a grant from RSA, which is HKNC’s largest funding source.

The Westat evaluation was the first evaluation of HKNC in over 20 years, although HKNC submits performance measures and annual reports to RSA. RSA wanted to make sure that the evaluation included interviews with Deaf-Blind individuals who had taken vocational rehabilitation and independent living courses on the HKNC campus in Sands Point, New York. After meeting with HKNC management and teaching staff, it became clear that communication would be a challenge given the myriad ways that Deaf-Blind individuals communicate. Westat and RSA agreed that in-person interviews would keep the process simple and intuitive and ensure that this critical stakeholder group was comfortable and willing to participate.

Hot Tips:

  • Make use of gatekeepers and experts-in-residence. Principle Three encourages simple and intuitive design of materials to address users’ level of experience and language skills. For the HKNC evaluation, interview guides went through multiple reviews, including review by experts in Deaf-Blind communication not associated with HKNC. Ultimately, it was HKNC staff who provided the critical final review to simplify the instruments, since they were familiar with the wide variety of communication skills of their former students.
  • Plan ahead with regard to location and communication. Principle Seven calls for appropriate space to make anyone involved in data collection comfortable, including transportation accessibility and provision of interpreters, if needed. For the HKNC evaluation, interview participants were randomly selected from among those within a reasonable distance of HKNC regional offices. Westat worked with HKNC partners and HKNC regional representatives with whom interviewees were familiar. In the Los Angeles area, we brought the interviews to the interviewees, selecting locations as close as possible to where former HKNC students lived. Most importantly, Westat worked with HKNC to identify the Deaf-Blind individuals’ communication abilities and preferences, and had two interpreters on site for interviews. In one case, we used a participant’s iPad with large print enabled to communicate interview questions.

Resource:

The American Evaluation Association is celebrating the Disabilities and Other Vulnerable Populations TIG (DOVP) Week. The contributions all week come from DOVP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

I’m Donna Campbell, Director of Professional Development Capacity Building at the Arizona Department of Education (ADE). The Professional Development Leadership Academy (PDLA) is a three-year curriculum of training and back-home application for school and district teams, based on the research-derived Learning Forward Professional Learning Standards.

Lessons Learned:

  • Legislation supports evaluation. I’ve learned it’s easier to train school teams to conduct Guskey Level 3 evaluations of organizational support than to scale this evaluation step to a state level.  The advent of the Common Core Standards (CCS) is raising awareness of the need for ADE to gather Level 3 data.  We are seizing this golden opportunity.
  • Understand significant shifts.  The CCS instructional shifts seem to be a catalyst for education leaders to challenge their assumption that if teachers just attend training sessions their instructional practice will change.
  • Building capacity is often top-down. An ADE cross-divisional team is designing processes to build school leaders’ capacity to provide organizational support to teachers, including opportunities for collaboration, time to practice new skills, follow-up, and feedback. Our challenge: apply lessons learned from PDLA to every school and district in Arizona.
  • Teams set the stage. Teams’ attention to strengthening cultures of collegial support sets the stage for monitoring transfer of knowledge to the classroom, Guskey’s Level 4. If complex and large-scale instructional change is to be implemented and sustained, organizational support is essential.  Level 3 has been the missing link in previous standards-based reform efforts.

Hot Tips:

  • Teams develop their capacity to design, implement, and evaluate results-driven professional development (PD) to improve student learning. After focusing the first year on data analyses, goal-setting, theories of action, and planning PD to achieve a well-defined instructional change, teams are introduced to Guskey’s five-level evaluation model in year two.
  • School teams tend to focus Level 3 data gathering on school-level data. For instance, we invite teams to annually administer two surveys: Learning Forward’s Standards Assessment Inventory (SAI) for teachers, and Education for the Future’s perception surveys for teachers, students, and parents. Teams analyze teacher survey data to assess perceived collegial and principal support over time. They also compare the amount of time designated at their school for professional learning from their start to finish of PDLA. Some routinely review written records of various teams at their school, checking for shared focus and follow-through. Results show examples of Level 3 progress through markers of increased candor and openness among faculty members or increased teacher participation in the PDLA team work.

Rad Resources:

The American Evaluation Association is celebrating Professional Development Community of Practice (PD CoP) Week. The contributions all week come from PD CoP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

I’m Regan Grandy, and I’ve worked as an evaluator for Spectrum Research Evaluation and Development for six years. My work is primarily evaluating U.S. Department of Education-funded grant projects with school districts across the nation.

Lessons Learned – Like some of you, I’ve found it difficult, at times, to gain access to extant data from school districts. Administrators often cite the Family Educational Rights and Privacy Act (FERPA) as the reason for not providing access to such data. While FERPA requires that written consent be obtained before personally identifiable educational records can be released, I have learned that FERPA was recently amended to include exceptions that speak directly to educational evaluators working on behalf of State or local education agencies.

Hot Tip – In December 2011, the U.S. Department of Education amended regulations governing FERPA. The changes include “several exceptions that permit the disclosure of personally identifiable information from education records without consent.” One exception is the audit or evaluation exception (34 CFR Part 99.35). Regarding this exception, the U.S. Department of Education states:

“The audit or evaluation exception allows for the disclosure of personally identifiable information from education records without consent to authorized representatives … of the State or local educational authorities (FERPA-permitted entities). Under this exception, personally identifiable information from education records must be used to audit or evaluate a Federal- or State-supported education program, or to enforce or comply with Federal legal requirements that relate to those education programs.” (FERPA Guidance for Reasonable Methods and Written Agreements)

The rationale for this FERPA amendment was provided as follows: “…State or local educational agencies must have the ability to disclose student data to evaluate the effectiveness of publicly-funded education programs … to ensure that our limited public resources are invested wisely.” (Dec 2011 – Revised FERPA Regulations: An Overview For SEAs and LEAs)

Hot Tip – If you are an educational evaluator, be sure to:

  • know and follow the FERPA regulations (see 34 CFR Part 99).
  • secure a quality agreement with the education agency, specific to FERPA (see Guidance).
  • have a legitimate reason to access data.
  • agree to not redisclose.
  • access only data that is needed for the evaluation.
  • have stewardship for the data you receive.
  • secure data.
  • properly destroy personally identifiable information when no longer needed.

Rad Resource – The Family Policy Compliance Office (FPCO) of the U.S. Department of Education is responsible for implementing the FERPA regulations, and it has a wealth of resources on its website. You can also view the entire set of FERPA regulations here. The sections of most interest to educational evaluators are 34 CFR Part 99.31 and 99.35.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · ·

Greetings from Boise, the city of trees! We are Rakesh Mohan (director) and Margaret Campbell (administrative coordinator) of Idaho’s legislative Office of Performance Evaluations (OPE). Margaret reviews drafts of our reports from a nonevaluator’s perspective, as well as copyedits and desktop publishes each report. In this post, we share our thoughts on the importance of writing evaluation reports with users in mind. Some of our users are legislators, the governor, agency officials, program managers, the public, and the press.

Lessons Learned: Writing effective reports for busy policymakers involves several criteria, such as logic, organization, and message. But in our experience, if your writing lacks clarity, the report will not be used. Clear writing takes time and can be difficult to accomplish. We have examined some reasons why reports may not be written clearly and declare these reasons to be myths:

Myth 1: I have to dumb down the report to write simply. Policymakers are generally sharp individuals with a multitude of issues on their minds and competing time demands. If we want their attention, we cannot rely on the academic writing style. Instead, we write clear and concise reports so that policymakers can glean the main message in a few minutes.

Myth 2: Complex or technical issues can’t be easily explained. When evaluators thoroughly understand the issue and write in active sentences from a broad perspective, they can explain complex and technical issues clearly.

Myth 3: Some edits are only cosmetic changes. Evaluators who seek excellence will welcome feedback on their draft reports. Seemingly minor changes can improve the rhythm of the text, which increases readability and clarity.

Our goal is to write concise, easy-to-understand reports so that end users can make good use of our evaluation work. We put our reports through a collaborative edit process (see our flowchart) to ensure we meet this goal. Two recent reports are products of our efforts:

Equity in Higher Education Funding

Reducing Barriers to Postsecondary Education

Hot Tips

  1. Have a nonevaluator review your draft report.
  2. Use a brief executive summary highlighting the report’s main message.
  3. Use simple active verbs.
  4. Avoid long strings of prepositional phrases.
  5. Pay attention to the rhythm of sentences.
  6. Vary your sentence length, avoiding long sentences.
  7. Write your key points first and follow with need-to-know details.
  8. Put technical details and other nonessential supporting information in appendices.
  9. Minimize jargon and acronyms.
  10. Use numbered and bulleted lists.
  11. Use headings and subheadings to guide the reader.
  12. Use sidebars to highlight key points.

Rad Resources

  • Revising Prose by Richard A. Lanham
  • Copyediting.com
  • Lapsing Into a Comma by Bill Walsh

We’re celebrating Data Visualization and Reporting Week with our colleagues in the DVR AEA Topical Interest Group. The contributions all this week to aea365 come from our DVR members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting DVR resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

· · · · ·

Hi, I’m Gary Huang, a Technical Director and Fellow at ICF Macro, Inc. in Calverton, Maryland. My colleagues, Sophia Zanakos, Erika Gordon, Gary McQuown, Rich Mantovani, and I are presenting at AEA’s upcoming conference on improper payment (IP) studies. We conduct research and evaluation relating to benefit eligibility and payment errors under the rubric of IP. This kind of research, required by law (IPERA 2010, formerly IPIA 2002), is becoming increasingly important for improving government accountability and financial integrity.

Lessons Learned: To define benefit eligibility error and to decide which data sources and methods to use to generate IP estimates, we must prioritize stakeholders’ different interests. This includes meeting the standards of technical and statistical rigor required by the Office of Management and Budget (OMB), understanding the intricacies of program concerns at federal agencies, dealing with reluctance to cooperate among local agencies, and facing the logistical challenges of surveying program participants. Two types of data sources are used in IP studies: program administrative records and survey data.

Hot Tip: A comprehensive IP study of the assisted-housing programs at HUD involves a stratified sample survey and administrative data collection to generate nationally representative estimates of 1) the extent of erroneous rental determinations, 2) the extent of billing error associated with the owner-administered program, and 3) the extent of error associated with tenant underreporting of income. The extensive data collection effort requires coordination and data quality control to ensure accuracy in tenant file abstraction, in-person CAPI interviewing, third-party information, and data matching with the Social Security and National Directory of New Hires databases. (A toy sketch of the stratified estimation step appears below.)
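For readers who like to see the arithmetic, here is a minimal sketch of the stratified estimation idea: each stratum’s sample error rate is weighted by that stratum’s share of the population. The strata, counts, and rates below are entirely invented, not HUD’s actual figures or methodology:

```python
# Toy stratified estimate of a national improper payment rate.
# All strata, population counts, sample sizes, and error counts are invented.
strata = [
    # (stratum, households in population, households sampled, errors found)
    ("public housing",      1_200_000, 400, 36),
    ("housing vouchers",    2_100_000, 600, 42),
    ("owner-administered",    700_000, 300, 33),
]

total_pop = sum(pop for _, pop, _, _ in strata)

# Weight each stratum's sample error rate by its population share.
national_rate = sum(
    (pop / total_pop) * (errors / sampled)
    for _, pop, sampled, errors in strata
)

print(f"Estimated national improper payment rate: {national_rate:.1%}")
```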

Hot Tip: Some agencies conduct nationally representative surveys of individuals served and of entities paid to provide services. In some cases, these surveys bear close similarities to audits and may be overt or covert, with the data collector posing as a customer. The Food and Nutrition Service (FNS) is increasingly emphasizing the use of administrative data to update estimates obtained from surveys. However, administrative data are usually biased and therefore must be adjusted. Statistical modeling for updating improper payment estimates seems a possible and efficient alternative in IP studies.

Hot Tip: To help the Centers for Medicare & Medicaid Services (CMS) identify probable fraudulent claims and the resulting improper payments to health care providers, computer programs were developed to examine four years of Medicaid administrative claims data for all U.S. states and territories, applying a variety of algorithms and statistical processes. Both individual health care providers and related institutions were reviewed. For such large administrative data analyses, evaluators struggle to understand issues from technical, managerial, and political perspectives. (A toy example of one simple screening approach appears below.)
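To give a flavor of such screening algorithms, here is a toy Python example that flags providers whose billings sit far above those of their specialty peers; each benchmark is computed with the provider’s own billings excluded, so an extreme value cannot inflate the benchmark and mask itself. The data are fabricated, and real CMS analyses are far more elaborate than this sketch:

```python
import statistics
from collections import defaultdict

# Fabricated claims data: (provider_id, specialty, total_billed).
claims = [
    ("P001", "cardiology", 180_000), ("P002", "cardiology", 175_000),
    ("P003", "cardiology", 690_000), ("P004", "cardiology", 182_000),
    ("P005", "podiatry",    90_000), ("P006", "podiatry",    93_000),
    ("P007", "podiatry",    88_000), ("P008", "podiatry",    91_000),
]

# Group billings by specialty so each provider is compared to its peers.
by_specialty = defaultdict(list)
for provider, specialty, billed in claims:
    by_specialty[specialty].append((provider, billed))

for specialty, providers in by_specialty.items():
    for provider, billed in providers:
        # Leave-one-out benchmark: exclude this provider's own billings.
        peers = [amt for p, amt in providers if p != provider]
        mean, stdev = statistics.mean(peers), statistics.stdev(peers)
        if stdev and billed > mean + 3 * stdev:
            print(f"Flag {provider} ({specialty}): billed {billed:,} "
                  f"vs. peer mean {mean:,.0f}")
```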

Rad Resources: Check OMB’s implementing guidance to all federal agencies (http://fedcfo.blogspot.com/search/label/IPIA) on IP measurement and policy and technical requirements for IP studies.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Gary? Gary and his colleagues will be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.
