AEA365 | A Tip-a-Day by and for Evaluators

I am Andy Blum, Vice President for Program Management and Evaluation at the U.S. Institute of Peace (USIP), an independent organization that helps communities around the world prevent, manage, or recover from violent conflict. I recently spoke at a brown bag for the Washington Evaluators about improving my organization's learning and evaluation processes in general, and creating an organizational evaluation policy in particular. In this post, I'll share a few takeaways that could be applicable in other contexts.

Here are three, hopefully generalizable, lessons from the process of crafting the evaluation policy at USIP.

Lesson Learned: Conducting a baseline assessment was extremely helpful. By asking staff about their greatest hopes and fears regarding evaluation, we surfaced common themes. The findings proved useful both as an assessment of where we stood on learning and evaluation and as the basis for an action plan to improve it.

Lesson Learned: When talking about changing how evaluations are done and used in organizations, you need to manage messaging and communications almost fanatically. The phrase “demystify evaluation” had real resonance. I found myself becoming almost folksy when discussing evaluation. Instead of saying theory of change, I asked, “Why do you think this is going to work?” Instead of saying indicator or metric, I would ask, “What are you watching to see if the program is going well?” Especially at the beginning of an effort to improve evaluation, you do not want to alienate staff through the use of technical language.

Lesson Learned: There is a tension between supporting your evaluation champions and creating organizational “standards.” Your evaluation champions have likely created effective boutique solutions to their evaluation challenges, and these can be undermined as you try to standardize processes throughout the organization. To the extent possible, standardization should build on existing solutions.

Rad Resource: The best change management book I’ve seen: Switch: How to Change Things When Change Is Hard, by Chip Heath and Dan Heath.

Hot Tips—Insider’s advice for Evaluation 2013 in DC: The Passenger is DC’s most famous cocktail spot, but if the weather is good you can’t beat Room 11 in Columbia Heights as a place to sit outside and drink real cocktails.

We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Registration is now open! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

Welcome to the Evaluation 2013 Conference Local Arrangements Working Group (LAWG) week on aea365. Howdy! My name is Jennifer Hamilton, and I am a Senior Study Director at Westat and a Board member of the Eastern Evaluation Research Society, an AEA affiliate. I am also a statistician and methodologist who sometimes tweets about evaluation, in addition to other things too embarrassing and geeky to mention here.

Lessons Learned:

We have known for a while that the evaluation pendulum has been swinging toward randomized designs, largely due to the influence of the Institute of Education Sciences (IES) at the U.S. Department of Education (DoE). IES has done this largely by leveraging its $200 million budget to prioritize evaluations that allow impact estimates to be causally attributed to a program or policy.

Some evaluators have welcomed this shift toward experimental designs, while others have railed against it. Love it or hate it, I think the Randomized Controlled Trial (RCT) is here to stay. I say this with some conviction, based on my own experience working with DoE and the fact that other federal agencies seem to be moving in the same direction. A case in point is last year’s memo from the Office of Management and Budget (cleverly dubbed the OMG OMB memo). It asks the entire Executive Branch to implement strategies that support evaluations using randomized designs. For example, when applying for grants, districts could be required to submit schools in pairs, so that one school in each pair could be randomly assigned to the treatment condition and the other to the control condition.
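
To make the pair-matching idea concrete, here is a minimal sketch of how one school in each submitted pair might be randomly assigned to treatment and the other to control. The school names, the fixed seed, and the function name are hypothetical illustrations of mine, not an OMB or IES procedure.

```python
import random

def assign_pairs(school_pairs, seed=2013):
    """Within each matched pair, randomly assign one school to treatment
    and the other to control. Illustrative sketch only."""
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible and auditable
    assignments = []
    for school_a, school_b in school_pairs:
        treated = rng.choice([school_a, school_b])
        control = school_b if treated == school_a else school_a
        assignments.append({"treatment": treated, "control": control})
    return assignments

# Hypothetical district submission: schools matched on size and prior achievement
pairs = [("Adams Elementary", "Baker Elementary"),
         ("Custer Middle", "Douglas Middle")]

for a in assign_pairs(pairs):
    print(f"treatment: {a['treatment']:<20} control: {a['control']}")
```

In a real trial, the assignment log would be documented and retained so the evaluation can later verify that randomization was followed.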

Even though I believe the field is benefiting from the increased focus on experimental designs, the bottom line is that they are still not appropriate in all (or even most) situations. A program in the early stages of development that is asking formative questions should not be evaluated with an experimental design. Moreover, it is often costly and difficult to implement a high-quality RCT (and don’t even talk to me about trying to recruit for them). Lastly, experimental methodology focuses on obtaining a high degree of internal validity, which often limits external validity and the degree to which you can generalize your results.

Rad Resource:

  • If you decide to utilize an experimental design, familiarize yourself with the What Works Clearinghouse (WWC) standards and procedures. Although getting their Good Housekeeping stamp of approval may not be your goal, the WWC has had a lot of *really* smart people thinking about methodology for a long time. If you follow their guidelines, you reap the benefit of their brain trust.

Hot Tips—Insider’s advice for Evaluation 2013 in DC:

We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). AEA is accepting proposals to present at Evaluation 2013 through March 15 via the conference website. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

I am Melanie Hwalek, CEO of SPEC Associates and a member of AEA’s Cultural Competence Statement Dissemination Core Workgroup. My focus within the Workgroup is to help identify ways to disseminate the Statement and integrate its contents into evaluation policy. AEA’s Think Tank: Adoption of the AEA Public Statement on Cultural Competence in Evaluation: Moving From Policy to Practice and Practice to Policy gave me three big ideas for doing this.

Lesson Learned: Cultural competence can live in both big “P” policy and small “p” policy. Dissemination of the Cultural Competence Statement doesn’t have to start with federal- or state-level, big “P” policy change. Small policies, such as setting criteria for acceptable evaluation plans, for ensuring that evaluation methods take culture into consideration, and for ensuring culturally sensitive evaluation products, can go just as far, or further, in ensuring that all evaluations validate the importance of culture in their design, analysis, interpretation, and reporting.

Hot Tip: Start where there is a path of least resistance. Agencies that exist to represent or protect minority interests are, themselves, culturally sensitive. These agencies should readily understand the importance of ensuring that evaluations of their programs include cultural competence. If you are passionate about infusing cultural competence into municipal, state, or federal policy, start with these types of agencies, since they are likely to understand the importance of culturally sensitive evaluations. Keep in mind, though, that just because an organization says it values cultural competence doesn’t mean it really knows how to be and act in a culturally competent way.

Hot Tip: Try to go viral. Infusing cultural competence into policy means that we need to be open to all kinds and levels of policy, much of which is identified only through practice. The lesson here is to start promoting cultural competence to anyone, anywhere evaluation planning, methods, analysis, and reporting are discussed. In this networked world, the more people who think and talk about cultural competence in evaluation, the more likely it is to find its way into evaluation practice and evaluation policy.

Rad resource: William Trochim wrote an informative article on evaluation policy and practice.

This week, we’re diving into issues of Cultural Competence in Evaluation with AEA’s Statement on Cultural Competence in Evaluation Dissemination Working Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am David J. Bernstein, and I am a Senior Study Director with Westat, an employee-owned research and evaluation company in Rockville, Maryland. I was an inaugural member of AEA, and was the founder and first Chair of the Government Evaluation Topical Interest Group.

Westat was hired by the U.S. Department of Education’s Rehabilitation Services Administration (RSA) to conduct an evaluation of the Helen Keller National Center for Deaf-Blind Youths and Adults (HKNC). HKNC, founded by an Act of Congress in 1967, is a national rehabilitation program serving youth and adults who are deaf-blind. It operates under a grant from RSA, which is HKNC’s largest funding source.

The Westat evaluation was the first evaluation of HKNC in over 20 years, although HKNC submits performance measures and annual reports to RSA. RSA wanted to make sure that the evaluation included interviews with Deaf-Blind individuals who had taken vocational rehabilitation and independent living courses on the HKNC campus in Sands Point, New York. After we met with HKNC management and teaching staff, it became clear that communication would be a challenge, given the myriad ways that Deaf-Blind individuals communicate. Westat and RSA agreed that in-person interviews would keep the interviews simple and intuitive and would ensure that this critical stakeholder group was comfortable and willing to participate.

Hot Tips:

  • Make use of gatekeepers and experts-in-residence. Principle Three encourages simple and intuitive design of materials that addresses users’ level of experience and language skills. For the HKNC evaluation, interview guides went through multiple reviews, including review by experts in Deaf-Blind communication not associated with HKNC. Ultimately, it was HKNC staff who provided the critical final review to simplify the instruments, since they were familiar with the wide variety of communication skills of their former students.
  • Plan ahead with regard to location and communication. Principle Seven calls for appropriate space to make anyone involved in data collection comfortable, including transportation accessibility and provision of interpreters if needed. For the HKNC evaluation, interview participants were randomly selected from among former students within a reasonable distance of HKNC regional offices (a minimal sketch of this distance-screened sampling step follows this list). Westat worked with HKNC partners and HKNC regional representatives with whom interviewees were familiar. In the Los Angeles area, we brought the interviews to the interviewees, selecting locations as close as possible to where former HKNC students lived. Most importantly, Westat worked with HKNC to identify the Deaf-Blind individuals’ communication abilities and preferences, and had two interpreters on site for interviews. In one case, we used a participant’s iPad with large print enabled to communicate interview questions.
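
As referenced in the list above, here is a minimal sketch of a distance-screened random sampling step. The distance field, threshold, and sample size are hypothetical placeholders, not the actual HKNC study parameters.

```python
import random

def sample_nearby(candidates, max_miles, n, seed=42):
    """Keep candidates within a travel-distance threshold of a regional
    office, then draw a simple random sample. Illustrative only; the real
    study's eligibility and recruitment rules were richer."""
    rng = random.Random(seed)
    eligible = [c for c in candidates if c["miles_to_office"] <= max_miles]
    return rng.sample(eligible, min(n, len(eligible)))

# Hypothetical sampling frame: an id and distance (in miles) to the nearest regional office
frame_rng = random.Random(0)
frame = [{"id": i, "miles_to_office": frame_rng.uniform(1, 120)} for i in range(200)]

selected = sample_nearby(frame, max_miles=50, n=12)
print(sorted(round(s["miles_to_office"], 1) for s in selected))
```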

Resource:

The American Evaluation Association is celebrating the Disabilities and Other Vulnerable Populations TIG (DOVP) Week. The contributions all week come from DOVP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

I’m Donna Campbell, Director of Professional Development Capacity Building at the Arizona Department of Education (ADE). The Professional Development Leadership Academy (PDLA) is a three-year curriculum of training and back-home application for school and district teams, based on the Learning Forward Professional Learning Standards, which are derived from research.

Lesson Learned:

  • Legislation supports evaluation. I’ve learned it’s easier to train school teams to conduct Guskey Level 3 evaluations of organizational support than to scale this evaluation step to a state level.  The advent of the Common Core Standards (CCS) is raising awareness of the need for ADE to gather Level 3 data.  We are seizing this golden opportunity.
  • Understand significant shifts.  The CCS instructional shifts seem to be a catalyst for education leaders to challenge their assumption that if teachers just attend training sessions their instructional practice will change.
  • Building capacity is often top-down.  An ADE cross-divisional team is designing processes to build school leaders’ capacity to provide organizational support to teachers, including opportunities for collaboration, time to practice new skills, follow-up, and feedback.  Our challenge: apply lessons learned from PDLA to every school and district in Arizona.
  • Teams set the stage. Teams’ attention to strengthening cultures of collegial support sets the stage for monitoring transfer of knowledge to the classroom, Guskey’s Level 4. If complex and large-scale instructional change is to be implemented and sustained, organizational support is essential.  Level 3 has been the missing link in previous standards-based reform efforts.

Hot Tips:

  • Teams develop their capacity to design, implement, and evaluate results-driven professional development (PD) to improve student learning. After focusing the first year on data analyses, goal-setting, theories of action, and planning PD to achieve a well-defined instructional change, teams are introduced to Guskey’s five-level evaluation model in year two.
  • School teams tend to focus Level 3 data gathering on school-level data.  For instance, we invite teams to annually administer two surveys: Learning Forward’s Standards Assessment Inventory (SAI) for teachers, and Education for the Future’s perception surveys for teachers, students, and parents. Teams analyze teacher survey data to assess perceived collegial and principal support over time (a minimal before-and-after sketch follows this list). They also compare the amount of time designated at their school for professional learning from their start to their finish of PDLA. Some routinely review written records of various teams at their school, checking for shared focus and follow-through. Results show examples of Level 3 progress through markers such as increased candor and openness among faculty members or increased teacher participation in the PDLA team work.
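
As noted in the list above, here is a minimal before-and-after sketch of the kind of over-time comparison teams make with teacher survey data. The scale name, response values, and 1-to-5 scoring are invented for illustration; the SAI's actual items and reporting differ.

```python
from statistics import mean

# Hypothetical teacher responses (1-5 agreement scale) on a "collegial support" scale
year1_responses = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]   # start of PDLA
year3_responses = [3.8, 3.5, 4.1, 3.6, 3.9, 3.7]   # finish of PDLA

change = mean(year3_responses) - mean(year1_responses)
print(f"Collegial support mean: {mean(year1_responses):.2f} -> "
      f"{mean(year3_responses):.2f} (change {change:+.2f} points)")
```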

Rad Resources:

The American Evaluation Association is celebrating Professional Development Community of Practice (PD CoP) Week. The contributions all week come from PD CoP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

I’m Regan Grandy, and I’ve worked as an evaluator for Spectrum Research Evaluation and Development for six years. My work is primarily evaluating U.S. Department of Education-funded grant projects with school districts across the nation.

Lessons Learned – Like some of you, I’ve found it difficult at times to gain access to extant data from school districts. Administrators often cite the Family Educational Rights and Privacy Act (FERPA) as the reason for not providing access to such data. While FERPA requires that written consent be obtained before personally identifiable educational records can be released, I have learned that FERPA was recently amended to include exceptions that speak directly to educational evaluators working on behalf of State or local education agencies.

Hot Tip – In December 2011, the U.S. Department of Education amended the regulations governing FERPA. The changes include “several exceptions that permit the disclosure of personally identifiable information from education records without consent.” One exception is the audit or evaluation exception (34 CFR § 99.35). Regarding this exception, the U.S. Department of Education states:

“The audit or evaluation exception allows for the disclosure of personally identifiable information from education records without consent to authorized representatives … of the State or local educational authorities (FERPA-permitted entities). Under this exception, personally identifiable information from education records must be used to audit or evaluate a Federal- or State-supported education program, or to enforce or comply with Federal legal requirements that relate to those education programs.” (FERPA Guidance for Reasonable Methods and Written Agreements)

The rationale for this FERPA amendment was provided as follows: “…State or local educational agencies must have the ability to disclose student data to evaluate the effectiveness of publicly-funded education programs … to ensure that our limited public resources are invested wisely.” (Dec 2011 – Revised FERPA Regulations: An Overview For SEAs and LEAs)

Hot Tip – If you are an educational evaluator, be sure to:

  • know and follow the FERPA regulations (see 34 CFR Part 99).
  • secure a quality agreement with the education agency, specific to FERPA (see Guidance).
  • have a legitimate reason to access data.
  • agree to not redisclose.
  • access only data that is needed for the evaluation.
  • have stewardship for the data you receive.
  • secure data.
  • properly destroy personally identifiable information when it is no longer needed (a minimal data-handling sketch follows this list).
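
As a small illustration of the “access only the data that is needed” and “properly destroy personally identifiable information” points above, here is a hedged sketch of stripping direct identifiers from a district extract before analysis. The file and field names are hypothetical; your written agreement with the education agency, not this code, determines what you may receive, retain, and must destroy.

```python
import csv

# Hypothetical fields the evaluation actually needs; direct identifiers
# (name, student ID, birth date) are simply never copied into the working file.
NEEDED_FIELDS = ["grade", "attendance_rate", "state_test_scale_score"]

def strip_identifiers(in_path, out_path):
    """Write a de-identified working file containing only the needed fields.
    Illustrative only; follow your FERPA written agreement and the agency's
    procedures for securing and destroying the original extract."""
    with open(in_path, newline="") as f_in, open(out_path, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out, fieldnames=NEEDED_FIELDS)
        writer.writeheader()
        for row in reader:
            writer.writerow({field: row[field] for field in NEEDED_FIELDS})

# Example call (paths are hypothetical):
# strip_identifiers("district_extract.csv", "deidentified_working_file.csv")
```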

Rad Resource – The Family Policy Compliance Office (FPCO) of the U.S. Department of Education is responsible for implementing the FERPA regulations, and it offers a wealth of resources on its website. You can also view the entire FERPA law here. The sections of most interest to educational evaluators are 34 CFR §§ 99.31 and 99.35.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · · ·

Greetings from Boise, the City of Trees! We are Rakesh Mohan (director) and Margaret Campbell (administrative coordinator) of Idaho’s legislative Office of Performance Evaluations (OPE). Margaret reviews drafts of our reports from a nonevaluator’s perspective and also copyedits and desktop publishes each report. In this post, we share our thoughts on the importance of writing evaluation reports with users in mind. Our users include legislators, the governor, agency officials, program managers, the public, and the press.

Lessons Learned: Writing effective reports for busy policymakers involves several criteria, such as logic, organization, and message. But in our experience, if your writing lacks clarity, the report will not be used. Clear writing takes time and can be difficult to accomplish. We have examined some of the reasons reports may not be written clearly, and we declare these reasons to be myths:

Myth 1: I have to dumb down the report to write simply. Policymakers are generally sharp individuals with a multitude of issues on their minds and competing demands on their time. If we want their attention, we cannot rely on an academic writing style. Instead, we write clear, concise reports so that policymakers can glean the main message in a few minutes.

Myth 2: Complex or technical issues can’t be easily explained. When evaluators thoroughly understand the issue and write in active sentences from a broad perspective, they can explain complex and technical issues clearly.

Myth 3: Some edits are only cosmetic changes. Evaluators who seek excellence will welcome feedback on their draft reports. Seemingly minor changes can improve the rhythm of the text, which increases readability and clarity.

Our goal is to write concise, easy-to-understand reports so that end users can make good use of our evaluation work. We put our reports through a collaborative edit process (see our flowchart) to ensure we meet this goal. Two recent reports are products of our efforts:

Equity in Higher Education Funding

Reducing Barriers to Postsecondary Education

Hot Tips

  1. Have a nonevaluator review your draft report.
  2. Use a brief executive summary highlighting the report’s main message.
  3. Use simple active verbs.
  4. Avoid long strings of prepositional phrases.
  5. Pay attention to the rhythm of sentences.
  6. Vary your sentence length, avoiding long sentences.
  7. Write your key points first and follow with need-to-know details.
  8. Put technical details and other nonessential supporting information in appendices.
  9. Minimize jargon and acronyms.
  10. Use numbered and bulleted lists.
  11. Use headings and subheadings to guide the reader.
  12. Use sidebars to highlight key points.

Rad Resources

  • Revising Prose by Richard A. Lanham
  • Copyediting.com
  • Lapsing Into a Comma by Bill Walsh

We’re celebrating Data Visualization and Reporting Week with our colleagues in the DVR AEA Topical Interest Group. The contributions all this week to aea365 come from our DVR members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting DVR resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

·

Hi, I’m Gary Huang, a Technical Director and Fellow at ICF Macro, Inc. in Calverton, Maryland. My colleagues Sophia Zanakos, Erika Gordon, Gary McQuown, Rich Mantovani, and I are presenting on improper payment (IP) studies at AEA’s upcoming conference. We conduct research and evaluation relating to benefit eligibility and payment errors under the rubric of IP. This kind of research, required by law (IPERA 2010, formerly IPIA 2002), is becoming increasingly important for improving government accountability and financial integrity.

Lessons Learned: To define benefit eligibility error and to decide which data sources and methods to use to generate IP estimates, we must prioritize stakeholders’ different interests. This includes meeting the technical and statistical rigor required by the Office of Management and Budget (OMB), understanding the intricacies of federal agencies’ program concerns, dealing with local agencies’ reluctance to cooperate, and facing the logistical challenges of surveying program participants. Two types of data sources are used in IP studies: program administrative records and survey data.

Hot Tip: A comprehensive IP study of HUD’s assisted-housing programs involves a stratified sample survey and administrative data collection to generate nationally representative estimates of (1) the extent of erroneous rental determinations, (2) the extent of billing error associated with the owner-administered program, and (3) the extent of error associated with tenant underreporting of income. The extensive data collection effort requires coordination and data quality control to ensure accuracy in tenant file abstraction, in-person CAPI interviewing, third-party information, and data matching with the Social Security and National Directory of New Hires databases.
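
For readers newer to survey estimation, here is a minimal sketch of how a stratified, weighted sample can produce a national estimate of erroneous payments. The weights, dollar amounts, and error indicator are invented; the actual HUD study uses a far more elaborate design, with replicate weights and formal variance estimation.

```python
# Each sampled case carries a design weight (how many population units it represents)
# and the dollar error found for that case during file review. Values are hypothetical.
sample = [
    {"weight": 120.0, "error_dollars": 0.0},
    {"weight": 120.0, "error_dollars": 35.0},
    {"weight": 80.0,  "error_dollars": 0.0},
    {"weight": 80.0,  "error_dollars": 210.0},
    {"weight": 45.5,  "error_dollars": 12.5},
]

total_weight = sum(case["weight"] for case in sample)
est_total_error = sum(case["weight"] * case["error_dollars"] for case in sample)
est_error_share = sum(case["weight"] for case in sample if case["error_dollars"] > 0) / total_weight

print(f"Estimated total erroneous payments: ${est_total_error:,.0f}")
print(f"Estimated share of cases with an error: {est_error_share:.1%}")
```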

Hot Tip: Some agencies conduct nationally representative surveys of individuals served and of entities paid to provide services. In some cases, these surveys bear close similarities to audits and are either overt or covert, with the data collector posing as a customer. The Food and Nutrition Service (FNS) is increasingly emphasizing the use of administrative data to update estimates obtained from surveys. However, administrative data are usually biased and therefore must be adjusted. Statistical modeling to update improper payment estimates appears to be a feasible and efficient alternative in IP studies.

Hot Tip: To help the Centers for Medicare & Medicaid Services (CMS) identify probable fraudulent claims and the resulting improper payments to health care providers, computer programs were developed to examine four years of Medicaid administrative claims data for all U.S. states and territories, applying a variety of algorithms and statistical processes. Both individual health care providers and related institutions were reviewed. For such large administrative data analyses, evaluators must grapple with technical, managerial, and political issues.
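
To give a flavor of the kinds of algorithms involved, here is a hedged sketch of a simple peer-comparison flag: providers whose billing totals sit far above their peer group are marked for human review. The provider names, dollar figures, and z-score threshold are invented and are not CMS’s actual methodology; a flag marks a pattern for review, it does not establish fraud.

```python
from statistics import mean, stdev

# Hypothetical provider-level totals of paid claims over the review period
provider_totals = {
    "provider_a": 48_000, "provider_b": 51_500, "provider_c": 47_200,
    "provider_d": 49_800, "provider_e": 212_000, "provider_f": 50_300,
}

values = list(provider_totals.values())
mu, sigma = mean(values), stdev(values)

# Flag providers more than two standard deviations above the peer mean
flagged = {name: round((total - mu) / sigma, 1)
           for name, total in provider_totals.items()
           if (total - mu) / sigma > 2}
print(flagged)   # e.g. {'provider_e': 2.0}
```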

Rad Resources: Check OMB’s implementing guidance to all federal agencies (http://fedcfo.blogspot.com/search/label/IPIA) on IP measurement and policy and technical requirements for IP studies.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Gary? Gary and his colleagues will be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.

Greetings from beautiful Boise!  We are Rakesh Mohan and Maureen Brewer from the Idaho legislature’s Office of Performance Evaluations. Our post complements previous posts by Dawn Smart (8-30-11) and Tessie Catsambas (9-18-11).

Last year, we were asked to evaluate the governance of EMS agencies in Idaho because there were concerns about the duplication of and gaps in emergency medical services and a lack of clarity about the jurisdiction of EMS agencies.  To address these concerns, we offered a framework for the legislature to begin a policy debate that will help establish an effective system of EMS governance that places patient care as the top priority.

This project challenged us to step out of our familiar state agency-level evaluation environment and try to understand the divergent needs and values of stakeholders at the local government level and how local interests aligned with state interests.  Stakeholders in the study included the legislative and executive branches of state government; associations of cities, counties, fire chiefs, hospitals, fire commissioners, volunteer fire, and professional firefighters; several county and city governments; and many local EMS agencies.

Lessons Learned: The saying “all politics is local” was truly evident in this study.  We had to devote considerable time, more than we usually spend on evaluations involving only state-level stakeholders, to understanding the issues and associated politics specific to each stakeholder.  The local level is where citizens feel the impact of a policy directly, and they are never far from their city halls and county offices should they need to express their dissatisfaction.  The fact that the state’s role and authority are limited at the local level added further complexity to our study.  We had to clearly understand what the state can and cannot do and what would or would not be well received at the local level.

Hot Tips

  1. Evaluators competent in evaluation design and analytical methods still need cooperation and buy-in from all stakeholders to successfully manage politics without participating in it.
  2. Evaluators should remain transparent by apprising stakeholders of the evaluation plan and methods and assuring them that there will not be any surprises.
  3. Instead of making prescriptive recommendations that may get lost in a lengthy political turf battle, evaluators can sometimes add value to the public policy process by simply offering a framework for decision makers to begin a meaningful policy debate.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more on this topic? Attend their session “Whom Does an Evaluation Serve? Aligning Divergent Evaluation Needs and Values” at Evaluation 2011.  aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Greetings from Maryland! I am Tessie Catsambas of EnCompass LLC, an evaluation, leadership, and organizational development firm. How easy it is to get lost in a client’s maze of politics, anxiety, stress, and complexity! And how easy it is to step on toes you did not even know were there! This post discusses strategies for managing competing priorities and sensitivities and, paradoxically, for staying independent by stepping constructively into the winds of controversy and using that controversy to do better evaluation.

Lesson Learned: Being “independent” does not mean being without a point of view. “Independence” means transparency about your ethics, assumptions, and professional boundaries, and a commitment to honesty.

Hot Tip – be “appreciative”: A client is typically made up of different points of view and agendas, and that is fine! Help everyone appreciate each separate perspective and understand its origins. Others’ interpretations of what is going on and what things mean will make you and the whole group smarter. As they talk, they are already benefiting from the evaluation process you have created.

Hot Tip – appropriate process: There are many tools for creating appropriate participation in evaluation. I like to use Appreciative Inquiry, described in detail in the book Hallie Preskill and I co-authored, but there are many others: the success case method, empowerment evaluation tools, structured dialogue, and many creative exercises. (See the FAQs on the application of Appreciative Inquiry in evaluation in this PDF file.) Do not get cornered fighting other people’s fights; through good processes and tools, first get the issues articulated, and then get out of the way so your client(s) can talk things through and work out differences.

Hot Tip – stay open: You, the visiting evaluator, know very little. Before you rush to create categories and analyze, stay open and use some of the Soft Systems tools, such as those described by Bob Williams on his webpage, to question assumptions. Open yourself up to different ways of seeing. Develop good, effective questions, because by asking them you will enable others to perceive more expansively and to generate more creative recommendations than you could alone.

Hot Tip – care: You can fake a lot of things, but you cannot fake caring, even if you use very sophisticated tools. People know when you care, and they engage with you and the evaluation at a deeper level, in a more trusting and productive way.

Hot Tip – be respectfully honest: It is hard to report on unpleasant findings, but if you do so respectfully, with data and context information, appreciating efforts made, and not blaming, you can provide a useful evaluation report that echoes the voices of diverse agendas and common ground, and helps to forge a constructive way forward.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more from Tessie and colleagues? Attend their session “Whom Does an Evaluation Serve? Aligning Divergent Evaluation Needs and Values” at Evaluation 2011.  aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·
