AEA365 | A Tip-a-Day by and for Evaluators


Hi! We are Sara Vaca (EvalQuality.com) and Pablo Vidueira (professor at the Universidad Politécnica de Madrid). Today we are going to talk about the benefits of using the latest advances in data visualization to improve Systems Thinking (ST) tools.

ST is the new paradigm in evaluation: it represents a significant mind-set shift and offers powerful tools for tackling complex environments. It refers to the adoption of concepts, methodologies, and tools from the systems field.

Lesson Learned: ST tools already use data visualization

Among the wealth of tools, concepts, and approaches within the systems field, there are hard and soft systems approaches. Among soft systems, rich pictures and the soft systems methodology are widely used. On the hard systems side, system dynamics (SD) is one of the best-known systems approaches.

And all of these tools already use data visualization: they depict ideas, relationships, and concepts, relying on shapes and figures more than on textual explanation.

Rad Resource: Knowing how graphical perception works

For many years, vision researchers have been investigating how the human visual system analyzes images. An important early discovery was the identification of a limited set of visual features that are detected very rapidly and accurately by low-level, fast-acting visual processes. These features were initially called preattentive, since their detection seemed to precede focused attention, occurring within the brief period of a single fixation. Attention plays a critical role in what we see, even at this early stage of vision. The most relevant preattentive visual features are orientation, length, width, closure, size, curvature, density, contrast, number, estimation, and color.

Cool Trick: Using graphical perception principles to improve ST tools


We are studying the symbol conventions of ST tools and working on variations that broaden their expressive range using simple visual features. For example, in the typical standard scheme for a stock and flow diagram, we are playing with the width of the arrows to represent the relevance of each variable: thicker arrows (flow variable 2 and auxiliary A) would indicate greater influence than thinner arrows (flow variable 3 and auxiliary B).

 

Another example would be replacing the +/- symbols in causal loop diagrams with colors (green = positive, red = negative), to make the causal relationships between variables easier to interpret.
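
To give a sense of how easy these variations are to prototype, here is a minimal sketch that draws a tiny causal loop diagram with the graphviz Python package, using edge color for polarity and edge width for relative influence. The variables, weights, and choice of package are illustrative assumptions, not conventions taken from any existing ST tool.

```python
# Minimal sketch: a causal loop diagram where edge color encodes polarity
# and edge width encodes relative influence. Variables and weights are
# illustrative placeholders, not data from a real model.
from graphviz import Digraph

diagram = Digraph("causal_loop", format="png")
diagram.attr(rankdir="LR")

# Each tuple: (cause, effect, polarity, relative influence from 0 to 1)
links = [
    ("Births", "Population", "+", 0.9),
    ("Population", "Births", "+", 0.6),
    ("Population", "Deaths", "+", 0.5),
    ("Deaths", "Population", "-", 0.8),
]

for cause, effect, polarity, weight in links:
    diagram.edge(
        cause,
        effect,
        color="green" if polarity == "+" else "red",  # color replaces +/- labels
        penwidth=str(1 + 4 * weight),                  # width conveys relevance
    )

diagram.render("causal_loop_demo", cleanup=True)  # writes causal_loop_demo.png
```

The same trick (setting the width attribute per arrow) applies to the flows of a stock and flow diagram in any tool that exposes edge attributes.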

We think these improvements would make these tools more informative for those using them and more attractive for those new to them.

We welcome your reactions and hope to share an upcoming paper on this topic with you in Chicago!


The American Evaluation Association is celebrating Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to aea365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! This is Miriam Jacobson, a Doctoral Student, and Tarek Azzam, an Associate Professor, at Claremont Graduate University. We are talking today about how to use online crowdsourcing to conduct RoE (Research on Evaluation). Online crowdsourcing is a process of recruiting people online to do specific tasks, such as completing a survey, categorizing information, or translating text. We are currently exploring how access to the “crowd” can contribute to the development of new methods and approaches to evaluation.

Lesson Learned:

  • Crowdsourcing allows you to quickly and inexpensively recruit participants for RoE. For example, using Amazon’s MTurk (one of the largest crowdsourcing services), you can post a survey and receive responses from hundreds of participants within a few days, for about $0.50–$2.00 per survey.
  • Crowdsourcing also allows you to engage populations that are otherwise difficult to access for RoE studies, such as public constituents.

Hot Tips:

  • Consider whether your research is a good fit with participants on MTurk (also commonly called “MTurkers”), who tend to be younger and more educated than the overall public. To further understand who is participating in your study, remember to ask about relevant individual characteristics.
  • When recruiting participants, be clear about what the task involves, the time required to complete it, and if applicable, any inclusion criteria for participants.
  • Make sure instructions are clear for a range of people— if you aren’t sure, first pilot test the instructions.
  • Treat MTurkers fairly—respond to email questions and promptly pay people for completing tasks.
  • To increase quality, you can limit who completes your task using specific criteria, such as a minimum approval rating of 95% (a measure of requesters’ satisfaction with a worker’s previous work), successful completion of 500+ tasks, and geographic location (currently you can select only countries and US states); see the sketch below.
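
If you create HITs through the MTurk API rather than the web interface, the quality criteria above can be attached as qualification requirements. Below is a minimal sketch using the boto3 MTurk client; the survey URL, reward, participant count, and thresholds are placeholder assumptions, and the original tip does not prescribe any particular tooling.

```python
# Minimal sketch: posting a survey HIT with worker qualification criteria
# via boto3's MTurk client. URL, reward, and counts are placeholders.
import boto3

# For dry runs, add endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com"
mturk = boto3.client("mturk", region_name="us-east-1")

survey_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-roe-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

qualifications = [
    {   # Approval rating of at least 95% on previous work
        "QualificationTypeId": "000000000000000000L0",  # PercentAssignmentsApproved
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {   # At least 500 previously approved tasks
        "QualificationTypeId": "00000000000000000040",  # NumberHITsApproved
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [500],
    },
    {   # Restrict by geographic location
        "QualificationTypeId": "00000000000000000071",  # Locale
        "Comparator": "In",
        "LocaleValues": [{"Country": "US"}],
    },
]

hit = mturk.create_hit(
    Title="Short survey about program evaluation (about 10 minutes)",
    Description="Answer a brief survey for a research-on-evaluation study.",
    Reward="1.00",                      # USD per completed survey
    MaxAssignments=200,                 # number of participants sought
    LifetimeInSeconds=3 * 24 * 3600,    # keep the HIT open for three days
    AssignmentDurationInSeconds=30 * 60,
    Question=survey_question,
    QualificationRequirements=qualifications,
)
print("HIT created:", hit["HIT"]["HITId"])
```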

Cool Tricks:

  • Use crowdsourcing to:
    • Pilot test a survey before administering it to a non-crowdsourced population (e.g., evaluation stakeholders).
    • Study the effectiveness of different types of report language or data presentation formats. For example, you can post multiple versions of a report and see which best communicates the intended information.
    • Involve MTurkers to operationalize evaluation-related concepts in a way that is understandable and relevant to a broad range of people.
    • Engage large groups of people to code qualitative RoE data (e.g., open-ended survey responses, documents or videos) to quickly classify information and get an outside perspective on the data.

The American Evaluation Association is celebrating Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to aea365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Who We Are: We are a team of three scholars working cross-continentally: Jori N. Hall, University of Georgia, Leanne M. Kallemeyn and Cliff McReynolds, Loyola University Chicago, and Nanna Friche, Danish Institute for Local and Regional Government Research, Copenhagen. In 2011 we started a conversation on differences and similarities in evaluation practice across North America and Europe. This conversation turned into a dialogue and a study on this topic.

The purpose of our study was to explore how evaluation practice is conceived as reflected in articles published in the American Journal of Evaluation (AJE) and Evaluation, a journal supported by the European Evaluation Society. To explore evaluation practice across different contexts we found it useful to draw on the evaluation theory tree typology as articulated by Marvin C. Alkin and Christina A. Christie. This typology reflects the following three components of evaluation practice: (a) methods, (b) use, and (c) valuing.

Lessons Learned: What we learned from this international comparison is that evaluation practice (as reflected in AJE and Evaluation) emphasizes methods over use and valuing. Drawing on Peter Dahler-Larsen’s discussion of evaluation societies, we conclude that the “audit society” (i.e., the spread of auditing practices in society beyond financial institutions) might account for the trend toward a methods-centric evaluation practice across continents.

Based on this lesson we would like to invite evaluators and other interested stakeholders to engage in a global dialogue. We offer the following questions:

(1) What methods are emphasized in evaluation practice in different contexts across the globe? What are the similarities and differences in how evaluators conceptualize the role of methods in evaluation practice?

(2) What are the implications of maintaining the current emphasis on methods-dominant evaluation practice in local and global contexts?

(3) Do we have a responsibility, as evaluators, to uphold other approaches to evaluation practice (e.g., evaluation for use, evaluation for contextual and cultural understanding)? If so, how might we go about enacting these understandings?

(4) What, if any, additional understandings of evaluation practice do we want to maintain and uphold across continents, and how can we do so?

The American Evaluation Association is celebrating Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to aea365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Chris Coryn and Lyssa Wilson from the Interdisciplinary Ph.D. in Evaluation program at Western Michigan University. In the last decade, research on evaluation theories, methods, and practices has increased considerably. Even so, little is known about how frequently published findings from research on evaluation are read and whether such findings influence evaluators’ thinking about evaluation or their evaluation practice. To address these questions, and others, we (including our colleagues Satoshi Ozeki, Gregory Greenman II, Daniela Schröter, Kristin Hobson, Tarek Azzam, and Anne Vo) recently completed a study using a random sample of AEA members and a purposive sample of prominent evaluation theorists and scholars.

Lessons Learned:

  • Nearly all (96.94% ±38%) AEA members and all (100%) theorists and scholars consider research on evaluation important
  • A majority of AEA members (80.95% ±60%) and theorists and scholars (84.21%) regularly read research on evaluation
  • A majority of those sampled indicate that research on evaluation has influenced their thinking about evaluation and their evaluation practice (97.00% ±38% and 94.00% ±4.79% [for AEA members] and 100% and 100% [for prominent theorists and scholars], respectively)
  • The American Journal of Evaluation and New Directions for Evaluation are, overall, the most frequently read journals by a majority of AEA members (70.35% ±76% and 51.18% ±7.44%, respectively)
  • In addition to the American Journal of Evaluation and New Directions for Evaluation, prominent theorists and scholars tend to also read other journals semi-regularly or regularly (e.g., Evaluation: The International Journal of Theory, Research and Practice, Journal of MultiDisciplinary Evaluation)
  • AEA members most often read articles on evaluation methods (92.85% ±64%), reflections on evaluation practice (87.80% ±6.15%), or research on evaluation (80.95% ±7.60%), whereas theorists and scholars most often read articles on evaluation theory (94.73%), evaluation methods (89.47%), research on evaluation (84.21%), and evaluation ethics (84.21%)
  • For AEA members, research on evaluation has significantly influenced their thinking about evaluation and their evaluation practice (97.00% ±38% and 94.00% ±4.79%, respectively)
  • Research on evaluation has influenced all theorists and scholars’ thinking about evaluation as well as their evaluation practice (100% and 100%, respectively)
  • AEA members and prominent theorists and scholars believe that findings from research on evaluation contribute to ‘improving, informing, and guiding evaluation practice’ (40.59% and 50.00%, respectively)

Rad Resources:

Christie’s article ‘Advancing empirical scholarship to further develop evaluation theory and practice’ in the Canadian Journal of Program Evaluation (2011)

Henry and Mark’s article ‘Toward an agenda for research on evaluation’ in New Directions for Evaluation (2003)

Szanyi, Azzam, and Galen’s article ‘Research on evaluation: A needs assessment’ in the Canadian Journal of Program Evaluation (2012)

The American Evaluation Association is celebrating Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to aea365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

No tags

Hi there! I’m Anne Vo, Assistant Professor of Clinical Medical Education and Associate Director of Evaluation at the Keck School of Medicine of USC. I’m also Program Chair of the Research on Evaluation TIG. I’ll share a bit about what we have learned about evaluation use within the education sector.

Evaluation & Knowledge Use

The evaluation field’s knowledge base on use can be traced to the 1970s—a period that Mel Mark referred to as the “golden age of evaluation,” when research on evaluation use was particularly prevalent. The development of our knowledge base on evaluation use is connected to the thinking and research that had been done on knowledge use.

Rad Resource:

To learn more about this history, consider the following resource as a starting point:

  • Rich, R. (1977). Uses of social science information by federal bureaucrats: Knowledge for action vs. knowledge for understanding. In C.H. Weiss (Ed.), Using social research in public policy making. Lexington, MA: Lexington.

Research on Decision-Making in the Education Sector

Cynthia Coburn and colleagues conducted a series of studies on decision-making in elementary schools and urban school districts while the State of California was in the process of implementing new reading instruction policies. They learned that:

  • Teachers in the study relied on their professional experiences and mental models to make choices about classroom practice in response to new reading policies. Going about decision-making in this manner seemed particularly prevalent when a robust, school-wide collaborative culture; explicit connections between policy and classroom practice; and the space for exploring differences in worldviews were not available.
  • School and district administrators’ interpretive processes—informed by experience and previously held beliefs—had greater influence on their decision-making than actual data did. This was attributed to a lack of relevant information and to varied use of the same information within an organization. Further, administrators’ choices to use or not use available information were contingent on what was organizationally and politically feasible at the time a decision needed to be made.

Rad Resource:

To learn more about decision-making in educational settings and to locate leads for further reading, consider the following resource:

Evaluation use will continue to be an issue of interest to the evaluation community. For the latest perspectives on the use of evaluation for decision-making, consider the following edited volume. It includes contributions from some of the field’s leading scholars and practitioners on use and decision-making as related to internal evaluation, evaluation influence, cultural responsiveness, and misuse:

The American Evaluation Association is celebrating Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to aea365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings! I’m Leslie Fierro, Assistant Clinical Professor of Evaluation at Claremont Graduate University and AEA Research on Evaluation TIG Chair. This week, aea365 is focusing on Research on Evaluation (RoE). As an avid fan of this topic, I’ll offer a working definition for RoE and provide some thoughts on where future fruitful research may emerge in our field.

Lessons Learned:

  1. People don’t always know what we are talking about! If there is one thing I’ve learned as an evaluation capacity builder, evaluator, and professor engaging in RoE, it’s that the first question people ask about this topic is, “What is RoE?” To date, we do not have a central definition – although scholars are busily working on creating definitions as you read this entry! As a frame of reference, I’ll offer a definition I developed to orient my students to this topic: “A research investigation that generates findings with the intended purpose of creating a stronger evidence base and infrastructure for the applied practice of evaluation.”
  2. We are too insular – let’s leverage information from other disciplines to stimulate RoE. When students embark on RoE, they are almost invariably stunned by how little research in evaluation there is to build upon. Although it is often refreshing to learn that the “world is our oyster,” that isn’t always so comforting when the goal is to do something of interest, add to the literature, and, well…move on. All hope is not lost, though: why not pursue studies that integrate decades of research from other disciplines (e.g., cognitive psychology, adult learning theory) when creating new RoE studies?

Rad Resources:

Interested in doing RoE, but not sure where to start? Here are some examples of what we might call “Integrative Evaluation Science” to stimulate creative research ideas that build upon established work in other fields and have great potential to benefit our growing field!

The American Evaluation Association is celebrating Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to aea365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I am Jindra Cekan, PhD, an independent evaluator with 25 years of international development fieldwork, at www.ValuingVoices.com.

What if we saw project participants as our true clients and wanted the return on investment of projects to be maximally sustained? How would this change how we evaluate, capture, and learn together?

Lesson Learned: Billions of dollars of international development assistance are spent every year, and we do baseline, midterm, and final evaluations on most projects. We even sometimes evaluate sustainability using the OECD’s DAC Criteria for Evaluating Development Assistance: relevance, effectiveness, efficiency, impact, and sustainability. This is terrific, but deeply insufficient. We rarely ask communities and local NGOs, during or after implementation, what they think about our projects, how best to sustain activities themselves, and how we can help them do so.

Also, we very rarely return 3, 5, or 10 years after projects close to ask participants what is “still standing” that they managed to sustain themselves. How often do we put community members, local NGOs, or national evaluators in the lead of evaluations of the long-term self-sustainability of our projects? Based on my research, 99% of international aid projects are not evaluated for sustainability or impact after project close by anyone, much less by the communities they are designed to serve.

With $1.52 trillion in US and EU foreign aid programmed for 2014–2020, our industry desperately needs feedback on what communities feel will be sustainable now and on which interventions offer the likelihood of positive impact beyond the performance of a project’s planned (log-framed) activities. Shockingly, this feedback does not exist today.

Further, such learning needs to be transparently captured and shared in open-data format for collective learning, especially at the country and implementer level. Creating feedback loops between project participants, national stakeholders, partners, and donors that foster self-sustainability will foster true impact.

Hot Tip: We can start with current project evaluations. We need to ask these questions of men, women, youth, elders, and the richer and poorer members of communities, as well as of local stakeholders. Ideally, we would ask national evaluators to pose (and revise!) questions such as:

  • How valuable have you found the project overall in terms of being able to sustain activities yourselves?
  • How well were project activities transferred to local stakeholders?
    • Who is helping you sustain the project locally once it ends?
  • Which activities do you think you will be least able to maintain yourselves?
    • What should be done to help you?
  • Which activities do you wish the project had supported that build on your community’s strengths?
  • Was there any result of the project that was surprising or unexpected?
  • What else do we need to learn from you to have greater success in the future?

Rad Resource: OECD DAC Criteria for Evaluating Development Assistance: http://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.

In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012 we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)3 nonprofit organizations nationwide responded to our 2012 survey.

Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:

1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibit what we feel are promising capacities and behaviors to meaningfully engage in evaluation.

2. The use of qualitative practices (e.g. case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g. compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.

3. 18% of nonprofits had a full-time employee dedicated to evaluation.


4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.

5. 100% of organizations that engaged in evaluation used their findings.


6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.

7. 82% of nonprofits believe that discussing evaluation results with funders is useful.

8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.

9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.

10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.

Rad Resource—The State of Evaluation 2010 and 2012 reports are available online for your reading pleasure.

Rad Resource—What are evaluators saying about the State of Evaluation 2012 data? Look no further! You can see examples here by Matt Forti and Tom Kelly.

Rad Resource—Measuring evaluation in the social sector: Check out the Center for Effective Philanthropy’s 2012 Room for Improvement and New Philanthropy Capital’s 2012 Making an Impact.

Hot Tip—Want to discuss the State of Evaluation? Leave a comment below, or tweet us (@InnoNet_Eval) using #SOE2012!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I am Maxine Gilling, Research Associate for Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP). I recently completed my dissertation entitled How Politics, Economics, and Technology Influence Evaluation Requirements for Federally Funded Projects: A Historical Study of the Elementary and Secondary Education Act from 1965 to 2005. In this study, I examined the interaction of national political, economic, and technological factors as they influenced the concurrent evolution of federally mandated evaluation requirements.

Lessons Learned:

  • Program evaluation does not take place in a vacuum. The field and profession of program evaluation have grown and expanded over the last four decades and eight administrations due to political, economic, and technological factors.
  • Legislation drives evaluation policy. The Elementary and Secondary Education Act (ESEA) of 1965 established policies to provide “financial assistance to local educational agencies serving areas with concentrations of children from low-income families to expand and improve their educational program” (Public Law 89-10—Apr. 11, 1965). This legislation also had another consequence: it helped drive the establishment of educational program evaluation and the field of evaluation as a profession.
  • Economics influences evaluation policy and practice. For instance, in the 1980s evaluation took a downturn due to stringent economic policies, and program evaluators turned to capturing lessons learned by writing journal articles and books.
  • Technology influences evaluation policy and practice. The rapid emergence of new technologies contributed to changing the goals, standards, methods, and values underlying program evaluation.

Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Regan Grandy, and I’ve worked as an evaluator for Spectrum Research Evaluation and Development for six years. My work is primarily evaluating U.S. Department of Education-funded grant projects with school districts across the nation.

Lessons Learned – Like some of you, I’ve found it difficult at times to gain access to extant data from school districts. Administrators often cite the Family Educational Rights and Privacy Act (FERPA) as the reason for not providing access to such data. While FERPA requires that written consent be obtained before personally identifiable educational records can be released, I have learned that FERPA was recently amended to include exceptions that speak directly to educational evaluators of State or local education agencies.

Hot Tip – In December 2011, the U.S. Department of Education amended regulations governing FERPA. The changes include “several exceptions that permit the disclosure of personally identifiable information from education records without consent.” One exception is the audit or evaluation exception (34 CFR Part 99.35). Regarding this exception, the U.S. Department of Education states:

“The audit or evaluation exception allows for the disclosure of personally identifiable information from education records without consent to authorized representatives … of the State or local educational authorities (FERPA-permitted entities). Under this exception, personally identifiable information from education records must be used to audit or evaluate a Federal- or State-supported education program, or to enforce or comply with Federal legal requirements that relate to those education programs.” (FERPA Guidance for Reasonable Methods and Written Agreements)

The rationale for this FERPA amendment was provided as follows: “…State or local educational agencies must have the ability to disclose student data to evaluate the effectiveness of publicly-funded education programs … to ensure that our limited public resources are invested wisely.” (Dec 2011 – Revised FERPA Regulations: An Overview For SEAs and LEAs)

Hot Tip – If you are an educational evaluator, be sure to:

  • know and follow the FERPA regulations (see 34 CFR Part 99).
  • secure a quality agreement with the education agency, specific to FERPA (see Guidance).
  • have a legitimate reason to access data.
  • agree to not redisclose.
  • access only data that is needed for the evaluation.
  • have stewardship for the data you receive.
  • secure data.
  • properly destroy personally identifiable information when no longer needed.

Rad Resource – The Family Policy Compliance Office (FPCO) of the U.S. Department of Education is responsible for implementing the FERPA regulations, and its website offers a wealth of resources about them. You can also view the entire FERPA law here. The regulations of most interest to educational evaluators are 34 CFR Part 99.31 and 99.35.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

