AEA365 | A Tip-a-Day by and for Evaluators

My name is Scott Chaplowe and I currently work as the Director of Evidence, Measurement and Evaluation for climate at the Children’s Investment Fund Foundation (CIFF). As an evaluation professional, much of my work is not simply doing evaluation, but building the capacity of others to practice, manage, support and/or use evaluation.

Hot Tips: Beyond the 5 considerations shared in part 1 of this post, here are the other 5, based on an expert lecture I gave on this topic at AE2017:

  1. Ensure your ECB strategy is practical and realistic given organizational capacities. ECB should be realistic given the available time, budget, expertise, and other resources. This underscores the importance of initial analysis and local stakeholder engagement to set up ECB for success.
  2. Identify & capitalize on existing sources for ECB. There are a multiplicity of resources for and approaches to ECB, ranging from face-to-face delivery to webinars, communities of practice, discussion boards, self-paced reading, and blogs like this. These resources can be used in solo or blended as part of a capacity building program that forts different learning styles and needs. Indeed, it is important not to ‘reinvent the wheel’ if it can be ‘recycled.’ However, do not fall into the trap of adopting just because it is available—ensure that ECB resources are relevant for the desired capacity building objectives, or can be modified accordingly.
  3. Design and deliver learning grounded in adult learning principles. Adults are self-directed learners who bring to training past experiences, values, opinions, expectations and priorities that shape why and how they learn. Principles for adult learning stress a learner-centered approach that is applied, experiential, participatory and builds upon prior experience. You can read more about this here.
  4. Uphold professional standards, principles and ethics. An essential aspect of capacity building is to instill an understanding of and appreciation for ethical conduct and other standards for good practice. Specific guidelines and principles will vary according to context – sometimes specific to the organization itself, other times adopted from industry standards, such as the AEA’s Guiding Principles for Evaluators and Statement on Cultural Competence in Evaluation, and the JCSEE’s Program Evaluation Standards.
  5. Monitor and evaluate your ECB efforts to learn and adapt. Practice what we preach: track and assess ECB efforts to adapt, improve, and be accountable to our ECB objectives. This begins at the design stage, when identifying those capacities that will be assessed.

Ancillary Consideration. The above top 10 list is far from exhaustive, and as it is about human organizations and behavior, it is not absolute.

Rad Resources – Read more about this top 10 list here, and you can view the AEA365 presentation. Also, check out the book, Monitoring and Evaluation Training: A Systematic Approach, and this webpage has an assortment of resources to support evaluation learning and capacity building.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, my name is Jayne Corso and I am the community manager for the American Evaluation Association.

Social media offers a great way to have conversations with like-minded individuals. But what if those like-minded individuals don’t know you have a Facebook, Twitter, or LinkedIn page? I am sharing just a few easy tips for getting the word out about your social media channels.

Hot Tip: Have Social Media Prominently Displayed on Your Website

A great way to show that you are on social media channels is to display social media icons at the top of your website. Some organizations put these at the bottom of their website where they usually get lost—when was the last time you scrolled all the way to the bottom of a website?

Moving your icons to the top of your website is also helpful for mobile devices. More and more people are using their cell phones instead of desktops to browse websites. With the icons above the “fold,” or at the top of your page, they are easy to find no matter what device you are using.

Hot Tip: Reference Social Media in Emails

You are already sending emails to your followers or database, so why not tell them about your social media channels? You can do this in a very simple way, by adding the icons to your email template, or you can call out your social channels in your emails. Try doing a dedicated email promoting your social channels. Social media is the most direct way to communicate with your followers or database, so showcase this benefit to your fans!

Hot Tip: Continue the Conversation on Social Media

Moving conversations to your social media pages can add longevity to your discussion and invite more people to participate. If you have written an email about an interesting topic, invite your database to continue the conversation on Twitter. You can create a hashtag for your topic so all posts can be easily searched. You can also do this on Facebook and encourage a conversation in the comments of a post.

I hope these tips were helpful. Follow AEA on Facebook and Twitter!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



Greetings, I am Ramesh Tuladhar, focal point and coordinator for Disaster Risk Reduction (DRR) Thematic Committee of Community of Evaluators in Nepal (COE-Nepal). I am a professional geologist with experience in disaster risk management, monitoring, and evaluation. I am currently engaged as the monitoring and evaluation consultant of the Pilot Project on Climate Resilience (PPCR) implemented by the Department of Hydrology and Meteorology, Government of Nepal.

Lessons Learned: Eighty-seven out of 192 (45%) United Nations member states responded to the Sendai Framework Data Readiness Review in 2017. This proportion suggests that more stakeholders from member states, and also non-member states, may consider learning about and contributing to the Sendai Framework, which includes four priorities for action, to help improve effectiveness and sustainability of DRR interventions.

Hot Tip:

Rad Resources: To learn about the progress of DRR in Nepal, please visit:

The American Evaluation Association is celebrating Disaster and Emergency Management Evaluation (DEME) Topical Interest Group (TIG) Week. The contributions all this week to aea365 come from our DEME TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! My name is Nnenia Campbell, and I am a postdoctoral research associate at the University of Colorado Natural Hazards Center as well as an independent evaluation consultant specializing in disaster-related programming. As an alumna of AEA’s Graduate Education Diversity Internship (GEDI) program, I frequently consider how I can engage in culturally responsive evaluation or call attention to the role of cultural context in my work. These concepts are particularly important in disaster and emergency management evaluation because extreme events can affect diverse populations with vastly disparate impacts.

Scholars and practitioners alike have observed that initiatives designed to alleviate the burden of disaster losses often fail to meet their goals, particularly within underserved communities. Moreover, although the concept of social vulnerability has begun to feature prominently in emergency management discourse, common issues and oversights can inadvertently reinforce inequality and undermine the interests of those who suffer the most crippling disaster impacts. Opaque or exclusionary decision-making practices, discounting of local knowledge, and imposition of locally inappropriate “solutions” are common complaints about programs intended to help communities prepare for or respond to hazard events.

In evaluating disaster resilience and recovery initiatives, it is important to pay attention to which stakeholders are at the table and how that compares to the broader populations they serve. Which interests are being represented? What histories may inform how a program is perceived? Alternatively, what factors may influence how program implementers engage clients and characterize their needs? Culturally responsive evaluation provides a powerful lens for answering such questions and for clarifying why they are important to ask in the first place.

Hot Tip:

  • Do your homework. Culturally responsive evaluation literature emphasizes the importance of capturing the cultural context of the program under study. Ignoring factors such as the history of a program and its stakeholders, the relationships and power dynamics among them, or the values and assumptions that shape their actions can lead to grave errors in interpretation.
  • Seek out cultural brokers. In order to adequately address the concerns of diverse stakeholders, evaluators must establish trust and respect. Working with cultural brokers, or trusted liaisons who can help to communicate concerns and advocate on behalf of a group, can foster greater understanding and encourage meaningful engagement.

Rad Resources:

The American Evaluation Association is celebrating Disaster and Emergency Management Evaluation (DEME) Topical Interest Group (TIG) Week. The contributions all this week to aea365 come from our DEME TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Alicia Stachowski and I am an Associate Professor of Psychology at the University of Wisconsin-Stout.  I am working with Sue Ann Corell Sarpy on a Citizen Science Program sponsored by the National Academies of Sciences.  We would like to share some preliminary findings from this research.

Lessons Learned:

Why Use Citizen Scientists?

In the aftermath of a disaster, communities often lack information about environmental contamination that could be used to guide prevention and recovery activities.    Community-led citizen science, where lay individuals or non-experts lead or participate in data collection and research activities, offers great promise for promoting equitable, cross-boundary collaborations, fostering scientific literacy, and empowering community-based actions around environmental risks.

Building a Network of Citizen Scientists

The Citizen Science Training Program was designed to build organizational capacity and enhance community health and well-being through promotion of citizen science in coastal Louisiana communities.  The training program was aimed at developing a network of citizen scientists for environmental contamination monitoring, creating avenues for communication and dissemination of project activities to stakeholders, and strengthening collaborative partnerships to enable sustainable networks for knowledge, skills, and resources.  Our evaluation includes a social network analysis of the existing and developing relationships among participants.

How Does a Citizen Scientist Network Develop?

The project is designed to create and support Citizen Science networks.  We used Social Network Analysis to examine the emergence of these networks. Our project is ongoing, but the following figures show an example of our preliminary findings:

We asked participants to indicate who they know, who they share information and resources with, who they discuss community issues with, who they go to for advice, and who they collaborate with. Our preliminary results illustrate an increase in ties, or connections among participants (i.e., network density). For example, respondents indicated which other participants they discussed community issues with before and after training. In the pre-training survey, network density was 4% (see figure below).

Pre-training ties among participants, coded by parish, regarding with whom they discussed community issues.

In the post-training survey, network density increased to 45%.

Post-training ties among participants, coded by parish, regarding with whom they discussed community issues.
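For readers less familiar with the density measure these figures report: network density is simply the number of observed ties divided by the number of possible ties, so for a directed network of n participants with L ties, density = L / (n × (n − 1)). Below is a minimal sketch, using the networkx Python library with hypothetical participants and edge lists (not the project's actual data), of how such a pre/post comparison could be computed.

```python
# Minimal sketch (not the authors' actual analysis) of a pre/post
# network density comparison. Edge lists below are hypothetical.
import networkx as nx

participants = ["A", "B", "C", "D", "E"]

# Directed "discusses community issues with" ties (hypothetical).
pre_edges = [("A", "B"), ("C", "D")]
post_edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "A")]

def density(edges, nodes):
    """Share of possible directed ties present: L / (n * (n - 1))."""
    g = nx.DiGraph()
    g.add_nodes_from(nodes)   # include isolates so the denominator is correct
    g.add_edges_from(edges)
    return nx.density(g)

print(f"Pre-training density:  {density(pre_edges, participants):.0%}")
print(f"Post-training density: {density(post_edges, participants):.0%}")
```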

 

Lessons Learned:

Although our project is still in progress, we have found critical factors that lead to success in building and enhancing Citizen Scientists’ Networks:

Diversity Among Trainees.  We included a diverse group of participants.  They varied in age, gender, race/ethnicity, and occupations.

Small Group Activities.  The training included small group activities that encouraged information and resource sharing among participants.

Hands on Activities/Exercises.  The training included hands-on activities and exercises in using the monitoring and testing equipment.  These activities/exercises encouraged active participation and interaction among trainees.

Large and Small Group Discussion.  The small group activities and hands-on exercises were followed by discussion among participants that allowed for exchange of different points of view.

Follow-up Field Research.  The training culminated with participants identifying a community-based need that they are currently addressing using the knowledge, resources, and community capacity that was enhanced by the training.

The American Evaluation Association is celebrating Disaster and Emergency Management Evaluation (DEME) Topical Interest Group (TIG) Week. The contributions all this week to aea365 come from our DEME TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello!  We are Phung Pham, doctoral student at Claremont Graduate University, and Jordan Freeman, new alumna of The George Washington University.  As novices of evaluation and research in the contexts of disasters and emergencies, we would like to share what we have found helpful in getting acquainted with disaster and emergency management evaluation and research.

Rad Resources:

  • Brief overview of the disaster and emergency management cycle, which includes mitigation, preparedness, response, and recovery.
  • Issue 126 (2010) of New Directions for Evaluation is a collection of chapters illustrating evaluation, research, policy, and practices in disaster and emergency management.
  • World Association for Disaster and Emergency Medicine (WADEM) offers frameworks for disaster research and evaluation.
  • United Nations Children’s Fund (UNICEF) has a database of evaluation reports, including ones focused on emergencies.
  • Active Learning Network for Accountability and Performance (ALNAP) is a global network of organizations dedicated to improving the knowledge and use of evidence in humanitarian responses, and has an extensive library of resources.

Get Involved!  Here are some trainings and events for your consideration:

We hope these resources are helpful to those of you who are new to or curious about evaluation and research in the contexts of disasters and emergencies.  There is a lot to learn and great work to be continued!

The American Evaluation Association is celebrating Disaster and Emergency Management Evaluation (DEME) Topical Interest Group (TIG) Week. The contributions all this week to aea365 come from our DEME TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Sue Ann Corell Sarpy, Principal of Sarpy and Associates, LLC and Program Chair of the Disaster and Emergency Management Evaluation (DEME) TIG.  This week, contributions represent research and practice in DEME encompassing a national and international scope.  This week starts with an evaluation study I conducted concerning resiliency training for workers and volunteers responding to large-scale disasters.

The National Institute of Environmental Health Sciences (NIEHS) Worker Training Program, with the Substance Abuse and Mental Health Services Administration (SAMHSA), identified a need to create trainings that promoted mental health and resiliency for workers and volunteers in disaster-impacted communities.  We used a developmental evaluation approach that spanned two distinct communities (e.g., Gulf South region; New York/New Jersey region), disasters (e.g., Gulf of Mexico Oil Spill; Hurricane Sandy), and worker populations (e.g., disaster workers/volunteers; disaster supervisors) to identify effective principles and address challenges associated with these dynamic and complex training needs.  We used an iterative evaluation process to enhance development and delivery of the training, such that project stakeholders provided active support and participation in the evaluation and discussion of findings, and the effective principles (best practices/lessons learned) were incorporated into the next iteration of training.

Evaluation results supported the usefulness of this type of developmental evaluation approach for designing and delivering disaster worker trainings.  The training’s effectiveness was demonstrated in different geographic regions responding to different disaster events with different target audiences.  This graphic depicts the percentage of ratings of agreement with how well the training met the needs of the various target audiences.  We found that ratings increased as we continued to integrate the best principles into the training, starting with the disaster worker training in the Gulf South region and ending with the final phase of the project, the disaster supervisor training in the New York/New Jersey region.

Note: DWRT = Disaster Worker Resiliency Training; DSRT = Disaster Supervisor Resiliency Training; Responses ranged from “SOMEWHAT Agree” to “STRONGLY Agree”

Lessons Learned: Several key factors were critical for success in evaluating resiliency training for workers and volunteers responding to large-scale disasters:

Major stakeholders actively involved in development, implementation, and evaluation of trainings.  We included information from workers, supervisors, community-based organizations, and subject matter experts in the evaluation data and discussion of findings.

Evaluation conducted early on in the training design and feedback of effective principles used in each iteration.  Evaluators were brought in as a key stakeholder early in the process and we were integral in revising and refining training products as the project progressed.

Build relationships and trust with various stakeholders to gather information and refine curriculum.  The inclusion of stakeholder feedback, so that everyone gets a voice in the system, built trust and buy-in in the evaluation process and was key to its success.

Creating balance between standardized training and flexibility to tailor training to meet needs (adaptability).  The best principles that emerged across worker populations, communities, and disasters/emergencies provided a framework that allowed for a structure for the trainings but afforded flexibility/adaptability needed to meet specific training needs.

Rad Resources: Visit the NIEHS Responder and Community Resiliency website for training resources related to this project.    

The American Evaluation Association is celebrating Disaster and Emergency Management Evaluation (DEME) Topical Interest Group (TIG) Week. The contributions all this week to aea365 come from our DEME TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Tosca Bruno-van Vijfeijken, and I direct the Transnational NGO Initiative at Syracuse University, USA. The Initiative has assisted a number of major international non-governmental organizations (NGOs) to review their leadership and management practices related to large-scale organizational change. My colleagues – Steve Lux, Shreeya Neupane, and Ramesh Singh – and I recently completed an external assessment of Amnesty International’s Global Transition Program (GTP). The assessment objectives included, among others, the way in which the GTP had affected Amnesty’s human rights advocacy outcomes. It also assessed the efficacy of Amnesty’s change leadership and management. One of the fundamental difficulties with this assessment was the limitation on time and resources. As such, it was not possible to develop objective measures that directly represented either assessment objective. Instead, the assessment process primarily triangulated the perceptions of staff at various levels and from different identity groups within the organization as an approximate measure of the effect of the GTP on human rights advocacy goal achievement.

Lessons Learned:

  1. The change process was controversial within Amnesty and generated high emotions – both for and against. To protect the credibility of the assessment, we gathered multiple data sources and triangulated staff views through careful sampling for surveying, interviewing and focus group use. A survey with external peers and partners added independent perspectives. Workshops to validate draft findings with audiences that had both legitimacy and diversity of views were critical as well.
  2. Evaluating human rights advocacy outcomes is complex. Process and proxy indicators were essential in our assessment.
  3. It is equally difficult to attribute human rights advocacy outcomes to Amnesty’s change process, due to the lack of comparative baseline information or counterfactuals.
  4. Amnesty is a complex, democratic, membership-based NGO. Given the controversy around the ‘direction of travel’ under GTP, Amnesty promised accountability towards its members by requesting this External Assessment barely four years after the change process had been announced. Statements about the extent of correlation between the GTP and human rights advocacy outcomes thus had to be all the more qualified.

In high-profile, high-emotion evaluations like this one, which depend largely on staff perspectives, counting the number of ‘mentions’ and/or recurrent staff views was one obvious indicator. However, as evaluators we also need, in a defensible way, to judge the strength of the points made or issues raised, considering not just their frequency but also the gravity of their expression.

  5. Evaluators need to be acutely aware of where power is situated in organizations if they want to produce actionable, utilization-focused evaluations.

  6. In high-profile evaluations such as this, the ability both to understand senior leadership contexts, perspectives and world views and to speak truth to power is important.

Rad Resources: The frameworks by Bolman and Deal (Reframing Organizations: Artistry, Choice and Leadership, 2017) and William and Susan Bridges (Managing Transitions, 2017)  offer consistent value in evaluating organizational change processes in INGOs.

Continue the conversation with us!  Tosca tmbruno@maxwell.syr.edu

 

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Johanna Morariu and Katie Fox of Innovation Network and Marti Frank of Efficiency for Everyone. Since 2015 our team has worked with the Center for Community Change (CCC) to develop case studies of economic justice campaigns in Seattle; Washington, DC; and Minneapolis/St. Paul.

In each case study, our goal was to lift up the factors that contributed to the success of the local advocacy campaigns to deepen learning for staff within the organization about what it takes to run effective campaigns. After completing two case studies that shared a number of success factors, we realized an additional benefit of the studies: to provide CCC with additional criteria for future campaign selection. Our case study approach was deeply qualitative and allowed success factors to emerge from the stories and perspectives of the campaign’s participants. Using in-depth interviews to elicit perspectives on how and why change happened, we constructed an understanding of campaign timelines and the factors that influenced their success.

Lessons Learned: This form of inquiry produced two categories of success factors:

(1) contextual factors that are specific to the history, culture, or geography of place; and

(2) controllable factors that may be replicable given sufficient funding, time, and willingness on the part of partners in the field.

These factors broaden the more traditional campaign selection criteria, particularly by emphasizing the importance of local context.

Traditional campaign selection criteria often focus on considerations like “winnability,” elite and voter interests, and having an existing base of public support. While important, these factors do not go deep enough in understanding the local context of a campaign and the unique dynamics and assets of a place that may impact success.

Take for example one of the contextual factors we identified: The localities’ decision makers and/or political processes are accessible to diverse viewpoints and populations. In each of the case studies, the local pathways of influence were relatively accessible to advocates and community members. If this factor is in the mix, a funder making a decision about which campaigns to support may ask different questions and may even come to a different decision. In addition to asking about a campaign’s existing level of support and the political alignment of the locality, the funder would also need to know how decisions are made and who has the ability to influence them.

Lesson Learned: Our analysis produced five other contextual factors that influenced success, including: high levels of public awareness and support for the campaign issue; a progressive population (the campaigns focused on economic justice issues); an existing network of leaders and organizations with long-standing relationships; the existence of anchor organizations and/or labor unions with deep roots in the local community; and the small relative size of the cities.

Hot Tip: The factors provided a useful distinction between assets that were in existence or not (contextual) and factors that, if not already present, could potentially be developed by a new campaign (controllable). The factors also highlight the need to attend to place-based characteristics to understand the success of campaigns.

 

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! We are Carlisle Levine with BLE Solutions in the United States and Toyin Akpan with Auricle Services in Nigeria. We served as evaluation partner to Champions for Change Nigeria, an initiative that builds Nigerian NGOs’ evaluation capacities so that they can more effectively advocate for policies and programs that support women’s and children’s health. Through this experience, we learned important lessons about international partnerships and their value for advocacy evaluation.

Lesson Learned: Why is building an international team important for advocacy evaluation?

Much advocacy measurement relies on access, trust and accurately interpreting information provided.

  • Assessing advocacy capacity: Many advocacy capacity assessment processes rely on advocates’ self-reporting, often validated by organizational materials. For advocates to answer capacity assessment questions honestly, trust is required. That trust is more easily built with evaluators from the advocates’ context.
  • Assessing advocacy impact: Timelines and document reviews can identify correlations between advocates’ actions and progress toward policy change. However, reducing uncertainty about the contribution of an initiative to observed results often requires triangulating interview sources, including relevant policymakers. An evaluator from a specific policy context is more likely to gain access to policymakers and accurately interpret the responses they provide.

In advocacy evaluation, an evaluation teammate from a specific policy context ideally:

  • Understands the context;
  • Is culturally sensitive;
  • Has relationships that give her access to key stakeholders, such as policymakers;
  • Knows local languages;
  • Can build trust more quickly with evaluation participants;
  • Knows appropriate data collection approaches; and
  • Can correctly interpret data collected.

An evaluation teammate from outside a specific policy context ideally helps ensure that:

  • An evaluation is informed by other contexts;
  • Additional critical questions are raised; and
  • Additional alternative perspectives are considered.

Rad Resources: How did we find each other?

We did not know each other before this partnership. We found each other through networking, and then interviewed each other and checked each other’s past work.

There are a number of other resources we could have used to find each other:

Hot Tips: How did we make it work?

  • We communicated frequently to get to know each other. Building trust was critical to our partnership’s success.
  • We stayed in touch using Skype, phone, WhatsApp and email.
  • We were open to each other’s ideas and input.
  • We were sensitive to our cross-cultural communication.
  • We learned about our complementary evaluation skills: Carlisle wrote succinctly, while Toyin collected and analyzed data in the Nigerian context. Over time, our expectations of each other and the speed with which we worked improved.
  • We made our partnership a learning experience, seeking opportunities to strengthen our skills and to present our findings.

Building our international evaluation team took effort. As a result of our investment, we provided our client with more nuanced and accurate insights to inform initiative improvement, and we grew as evaluators.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

