AEA365 | A Tip-a-Day by and for Evaluators

We are Sonia Worcel, Vice President of Strategy and Research, and Kim Leonard, Senior Evaluation Officer, at The Oregon Community Foundation (OCF). The research team at OCF conducts internal evaluations and oversees contractors for some external evaluations. We use Evaluation Advisory Groups (EAGs) for our internal evaluations and currently have EAGs for three varied evaluation projects: a developmental evaluation, a policy/systems change evaluation, and a more traditional process and outcomes evaluation.

Hot Tip: Evaluation Advisory Groups have great potential to enrich the design and implementation of evaluations, and can support and enhance the use of evaluation results. They also provide external, impartial oversight of the work.

Our advisory groups serve as a sounding board, providing valuable feedback on ideas and plans for evaluation design and methodology. For example, one EAG has given detailed feedback on the design of a comparison group component of one evaluation, and another weighed in on the development of several key data collection tools, including a survey and a photovoice component.

Hot Tip: Select EAG members to represent a depth and breadth of backgrounds and expertise, including content-area experts, evaluation methodology experts, and practitioners. This positions the EAG to offer a variety of perspectives on the work and to complement and build upon the expertise of the evaluation team.

Lesson Learned: Our EAGs have connected us with resources and experts that in turn have allowed us to further refine the evaluations and contribute to the larger field. Some EAG members have spent additional time to share related work or to review tools or results in greater detail. One of the EAGs has come to feel like a professional community of practice, as many members are now working together on other related evaluation and research efforts – building shared measurement and design in one particular content area.

As each evaluation progresses, we anticipate that the EAGs will play a growing role in supporting evaluation dissemination and use. One EAG recently held an extended mini-retreat to review early findings in detail alongside the Research team, resulting in rich discussions about how best to share results with grantees and valuable feedback about what felt worthy of further analysis and broader dissemination.

Rad Resource: The Winter 2012 Issue of New Directions for Evaluation featured articles about Evaluation Advisory Groups. We found the advice included in those articles useful as we designed and continue to adapt our EAGs, and this advice is reflected in the tips and lessons learned above.

The American Evaluation Association is celebrating Oregon Community Foundation (OCF) week. The contributions all this week to aea365 come from OCF team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Kim Leonard, Senior Evaluation Officer at The Oregon Community Foundation. Today I want to share lessons learned from a developmental evaluation we’re undertaking for our five-year arts education grantmaking initiative – Studio to School.

The Initiative is developmental in nature and rooted in the arts: creativity, adaptation, and risk-taking are supported. The first phase of the evaluation is focused on understanding and supporting the arts education programming being developed by the project teams funded through Studio to School. We are now approaching the mid-point of the Studio to School Initiative evaluation and have learned a lot about the benefits and challenges of implementing a developmental evaluation.

Lesson Learned: Taking a developmental evaluation approach has allowed the Research team to adapt the evaluation in response to the evolution of the Initiative. It took us a little while to get used to this approach! We’ve summarized our evaluation on this handout, and find ourselves coming back to it repeatedly to keep us grounded as we plan new evaluation activities.

Lesson Learned: The Research team has worked continuously to develop rigorous evaluation activities that collect and feed back useful information. Robust reflection is built into the process: debrief meetings are held following each major learning community and evaluation activity to share and document learnings, and these often turn into planning sessions for future evaluation and learning community efforts. In addition, the project teams are journaling electronically – their quarterly reflections on what they are learning, written in response to prompts, have been one of the most valuable data sources to date. Prompts (like this example) are developed one or two at a time so that they are as timely and relevant as possible.

Lesson Learned: A key element of the evaluation, and a goal of the Initiative, is to surface and articulate principles of high-quality, sustainable arts education programming. We began developing principles based on the first year’s evaluation findings and asked project teams to reflect and provide feedback on draft principles at a recent gathering. We were thrilled with how engaged the teams were in this effort. The photo below shows a project team member reviewing feedback provided on sticky notes. Attendees also placed red dots (as seen in the photo) next to the principles that most resonated with their experience. Doing this as a larger group allowed project teams to discuss their feedback and attendees to react to and comment on one another’s feedback.

Rad Resources: In addition to the excellent Developmental Evaluation Exemplars (Patton, McKegg, and Wehipeihana, 2016), we have found the Developmental Evaluation Primer and DE 201: A Practitioner’s Guide to Developmental Evaluation from the McConnell Foundation especially helpful. Additional resources are listed at http://betterevaluation.org/plan/approach/developmental_evaluation.


I am Holly Kipp, Researcher, from The Oregon Community Foundation (OCF). Today’s post shares some of what we’re learning through our efforts to measure social-emotional learning (SEL) in youth in the context of our K-12 Student Success Initiative.

The Initiative, funded in partnership with The Ford Family Foundation, aims to help close the achievement gap among students in Oregon by supporting expansion and improvement of out-of-school time programs for middle school students.

Through our evaluation of the Initiative, we are collecting information about program design and improvement, students and their participation, and student and parent perspectives. One of our key data sources is a student survey about SEL.

Rad Resources: There are a number of places where you can learn more about SEL and its measurement. Some key resources include:

  • The Collaborative for Academic, Social, and Emotional Learning (CASEL)
  • The University of Chicago Consortium on School Research, in particular their Students & Learning page

In selecting a survey tool, we wanted to ensure the information collected would be useful both for our evaluation and for our grantees. By engaging grantee staff in the tool selection process, we gave them a direct stake in it and, we hoped, built their buy-in to using the chosen tool – not only for our evaluation efforts but for their ongoing program improvement processes.

Hot Tip: Engage grantee staff directly in vetting and adapting a tool.

We first mined grantee logic models for their outcomes of interest, reviewed survey tools already in use by grantees, and talked with grantees about what they wanted and needed to learn. We then talked with grantees about the frameworks and tools we were exploring in order to get their feedback.

We ultimately selected and adapted The Youth Skills and Beliefs Survey developed by the Youth Development Executives of King County (YDEKC) with support from American Institutes for Research.

Rad Resource: YDEKC has made available lots of information about their survey, the constructs it measures, and how they developed the tool.

Rad Resource: There are several other well-established tools worth exploring, such as the DESSA (or DESSA-mini) and DAP and related surveys, especially if cost is not a critical factor.

Hot Tip: Student surveys aren’t the only way to measure SEL! Consider more qualitative and participatory approaches to understanding student social-emotional learning.

We are also working with our grantees to engage students in photovoice projects that explore concepts of identity and belonging – elements that are more challenging to measure well with a survey.

Rad Resource: AEA’s Youth Focused TIG is a great resource for youth-focused and participatory methods.


 


I am David Keyes, Researcher, from The Oregon Community Foundation (OCF). The Research team at OCF is currently conducting program evaluations of several OCF grantmaking initiatives, including an out-of-school-time initiative intended to help close the achievement gap by supporting quality improvement and expansion of out-of-school-time programming. Twenty-one grantee organizations with a variety of program models are currently participating in the Initiative.

To learn about program quality and support program improvement, we are using the Youth Program Quality Assessment (YPQA) framework, tools, and training because it provides a low-stakes but research-based means of capturing program quality information that can be used to improve programming regardless of program model. The YPQA is a validated tool for assessing program quality. The assessment covers a range of topics and is completed through observation of programming, analysis of documents, and interviews with staff.

Lessons Learned: The assessment process is rigorous. In the first year, initiative grantees completed online training provided by the Weikart Center before conducting an initial round of self-assessments. Grantees then received a detailed report, which they used to analyze their strengths and weaknesses and develop improvement plans. In the second year, the self-assessment process was repeated, and an external assessment was added. Several grantee staff members also received additional training and conducted assessments of other grantees.

The integration of the YPQA has also required intentional trust-building among the grantees, the evaluation team, and the grantmaking team to ensure that grantees engage in the YPQA process honestly and openly.

OCF has noted several benefits from using the YPQA in this initiative:

  • Grantees now have a shared language about what quality programming looks like, regardless of population focus or program model;
  • Grantee organizations are learning about research-based or promising practices for the work they are doing, as well as how to implement related program improvements; and
  • Grantees’ evaluation capacity is growing as they engage deeply in a relevant, systematic data collection and improvement process.

Hot Tip: Program quality assessment can be a valuable alternative to assessing fidelity to program models. The YPQA has enabled the research team at OCF to focus on spurring improvement in a context in which models of implementation vary across grantees. Instead of assessing fidelity to a single model (which doesn’t exist in the initiative we are evaluating), we have focused on quality, no matter the programming model. This has allowed us to capture information across our grantee group in a way that serves our purposes while also providing grantees data they can use to continually improve their programming.


We are Caitlin Ruffenach, Researcher, and Kim Leonard, Senior Evaluation Officer, from The Oregon Community Foundation (OCF). Among other things, we are working on an evaluation of the Studio to School Initiative at OCF, which focuses on the development of sustainable arts education programs through partnerships between arts organizations and schools.

This past summer, in collaboration with the Oregon Arts Commission, we conducted a survey of arts organizations in Oregon in an effort to learn about the arts education programming they provide, often in concert with what is available more directly through the school system.

The purpose of this survey was to help the Foundation understand how the grantees of its Studio to School Initiative fit into the broader arts education landscape in Oregon. We hope the survey results will also serve as a resource for grantees, funders, and other stakeholders to understand and identify programs delivering arts education throughout the state.

Lesson Learned: To ensure we would have the most useful information possible, our survey design process included several noteworthy steps:

  1. We started with existing data: by gathering information about organizations that had received arts education funding in Oregon in the past, we were able to target our efforts to recruit respondents;
  2. We consulted with others who have done similar surveys to learn from their successes and challenges;
  3. We paid close attention to survey question wording to keep the focus as tightly as possible on what a survey can measure well; and
  4. We vetted our early findings with arts education stakeholders.

Hot Tip: A collaborative, inclusive survey design process can result in better survey tools. We used a small, informal advisory group throughout the process that included members who had conducted similar surveys and representatives of our target respondent group. They helped with question wording, as well as with identifying a small survey pilot.

Hot Tip: Vetting preliminary findings with stakeholders is fun and helps support evaluation use. We took advantage of an existing gathering of arts stakeholders in Oregon to share and workshop our initial findings. We used a data placemat, complete with reusable stickers, to slowly reveal the findings. We then engaged the attendees in discussions about how the findings did or didn’t resonate with their experiences. What we learned during this gathering is reflected in our final report.

Rad Resources: We are not the first to try a more inclusive process, both in developing our survey tool and in vetting and interpreting the results! Check out the previous aea365 post about participatory data analysis, and the Innovation Network’s slide deck on Data Placemats for more information about that particular tool.



I am Sonia Worcel, Vice President of Strategy and Research at The Oregon Community Foundation (OCF). This week AEA365 features a series of posts from OCF’s research team highlighting lessons learned through several of our ongoing evaluations. Today I start by sharing a general overview of our evaluation efforts. At OCF we are committed to documenting the outcomes of our investments and continually improving our programming. The Research Department oversees evaluation of our programs and initiatives, conducting some evaluations in-house and contracting with universities and research firms for others.

OCF’s approach to program evaluation is utilization-focused, collaborative and geared toward continual program improvement. The purpose of this work is to promote learning among staff, grantees and partners; to inform strategic decision making, the larger philanthropic field and the fields in which we invest (education, arts and culture, etc.); and ultimately, to maximize positive impact for Oregon.

Hot Tip: Balancing foundation and grantee needs results in more meaningful evaluation efforts. To ensure our evaluations are useful, they are designed and conducted collaboratively with Foundation program staff and our grantees. Each evaluation’s research questions reflect what both the Foundation and the grantees want to learn; the evaluation methods and tools are appropriate for the research questions and do not place undue burden on grantees; and research findings are shared and vetted with grantees, program staff, and other partners.

Lesson Learned: We focus program evaluation resources on board priorities, which tend to be large, multi-year investments with a defined group of grantees working toward common goals. As a result, we do not evaluate all grantmaking programs. This has allowed us to prioritize our efforts and to ensure that each evaluation is the “right size” for the program.

When we determine that a program evaluation is appropriate, our research and program departments work collaboratively to create an evaluation plan, including identifying key evaluation questions, determining the appropriate design, and deciding whether to conduct the evaluation in-house or through a contractor. Foundation program staff are active participants in the implementation of our evaluations and engage with the research team in interpreting and learning from results.

In addition to program staff within the Foundation, grantees are key partners in our evaluations. We share data with grantees, provide opportunities for them to interpret findings, and give them tools to help them make meaning and use of evaluation data for program planning and improvement.

Rad Resource: Grantmakers for Effective Organizations is a key resource for foundations that want to learn from and improve their work. Check out their Learn for Improvement page for more information.



Happy Saturday, readers! Liz Zadnik here with a post on one of my FAVORITE topics (other than evaluation, of course): typography and font types. I believe an appreciation of fonts and some intentional steps in selecting a font or fonts can really “up our game” with proposals, reports, blog design, and even email!

I probably spend way too much time looking at fonts when I’m creating proposals, web graphics, and reports. I love how each font brings its own energy and tone, and how each artist creates something that conveys a distinct personality. The right font can tell your reader who you are before they even take in one word. Want a fashionable header font with a chic edge? Try Moon. Looking for a more contemporary and geometric look for your blog navigation? Quicksand could be a great choice. A well-placed handwriting font like Shadows Into Light can provide a light, personable feel for pull-out quotes. Many fonts evolve and change over time, and this evolution is fascinating to watch.

Rad Resource: Google Fonts is a one-stop shop for fonts. You can search according to font characteristics such as thickness, whether a font works well as a heading, and whether it is serif or sans serif. Once you’ve selected your font(s), you can download them to install on your computer or add them to your website using the HTML and CSS snippets listed. Google Fonts offers some background on each font, statistics on its usage across the world, pairing suggestions, and links to Typecast for folks working across platforms and devices.
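To make that concrete, here is a minimal sketch of what embedding one of the fonts mentioned above (Quicksand) on a website can look like. The exact snippet Google Fonts generates for you may differ, and the nav selector below is just a hypothetical place to apply the font:

<!-- In the page's <head>: load Quicksand from Google Fonts -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;700&display=swap" rel="stylesheet">

<!-- In your stylesheet: apply the font to navigation text, with a generic fallback -->
<style>
  nav { font-family: 'Quicksand', sans-serif; }
</style>

Listing a generic fallback (sans-serif) keeps the page readable if the web font fails to load.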

I find the font pairing suggestions incredibly helpful when I am creating a blog, report, or website style guide. Branding is so important in our digital world. Having a thoughtful and consistent look boosts our credibility and helps visitors, readers, collaborators, and project partners engage with data and findings in meaningful ways.

Hot Tip: This may not be news to some, but keep your font types to two or three (at most). Too many fonts can be distracting and actually keep people from concentrating on or absorbing your information.

Image: Liz’s Brilliant Evaluation Report

There’s so much more to explore when it comes to fonts.  What are your favorites?  Have you been inspired to try something new?  What are you looking for in a font?

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Katherine Dawes from the U.S. Environmental Protection Agency. I’m currently on a year-long assignment as a Visiting Scholar at The George Washington University Trachtenberg School of Public Policy and Public Administration (find me at kdawes [at] gwu [dot] edu).

The Earth Day 2016 theme is “Trees for the Earth. Let’s get planting.” Everyone knows that trees changing with the seasons are a perfect metaphor for transitions. Every four to eight years, as spring trees start blooming, evaluators in the United States’ federal sector start contemplating our major upcoming seasonal change – the transition to a new Presidential Administration. We wonder: What will our new federal evaluation goals and policies be? How will we change (or continue) our work to meet the needs and expectations of a new, energetic Administration?

Aside from tree leaves, what On Earth can an evaluator read to learn what the next Administration cares about (or is hearing from national experts) concerning evaluation, management, accountability, data… any issue that will directly or indirectly influence my work?

To understand the forest…err…big picture of U.S. presidential transitions and to learn what prospective federal leaders are considering planting, veteran transition watchers have many Rad Resources. Some of my favorites for evaluation-relevant info:

  • The White House Transition Project provides information to prospective federal leaders to help “[streamline] the process of transition from one administration to the next.” The Project coordinates with government agencies and non-government groups like the Partnership for Public Service and National Academy of Public Administration.
  • The National Academy of Public Administration’s Transition 2016 publishes articles and papers intended “to inform incoming national leaders about the policy and management challenges facing the nation.”
  • The Partnership for Public Service established the Center for Presidential Transition supporting the “Ready to Govern®” initiative. It has a repository for documentation from previous transitions and “shares management recommendations for the new administration to address government’s talent and operational challenges…”
  • As part of Ready to Govern, the IBM Center for the Business of Government joined with the Partnership in launching the Management Roadmap. The Roadmap presents “a set of management recommendations for the next administration – enhancing the capacity of government to deliver key outcomes for citizens.”

Daily news organizations and social networks with a federal focus supply fantastic transition information in short, readable bites – check out Government Executive and GovLoop. In addition to daily reporting, Federal News Radio co-sponsors long-form interviews that are available as podcasts. A recent interview with Professor Martha Kumar, director of the White House Transition Project, shares the rich history of U.S. presidential transitions. (You can also find fascinating interviews focused on program evaluation.)

Share your Rad Resources for government transitions. Let’s get reading!

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Rupu Gupta, Co-Chair of AEA’s Environmental Program Evaluation Topical Interest Group and Researcher at NewKnowledge.org, a research and evaluation think tank. Earth Week is an ideal time to critically consider the potential of environmental education programs to create opportunities for professionals to protect the planet for a living!

Rad Resource: I have been leading the evaluation of The Nature Conservancy’s LEAF (Leaders in Environmental Action for the Future) Program.  This national conservation internship program aims to create skilled and empowered environmental stewards. Across different cohorts, we have consistently found that high school youth are motivated and keenly interested in pursuing higher education in environment-related majors and joining environmental careers.

With the most recent cohort of this program, we expanded our evaluation framework to study how the interns conceptualized the activities involved in, and the skills necessary for, environmental careers. Our multi-phase evaluation revealed striking shifts in how the youth perceived environmental careers.

Before they participated in the internship, teens had vague understandings of environmental jobs. After the program, they recognized that persistence and impacts on ecosystems, animals, and people were the defining features of these careers. They could also connect these attributes to different disciplines (e.g., law, environmental education) and activities (e.g., conducting research, participating in activism). Moreover, five months after the internship, their perceptions had grown sharper, so that they could think about the skills, activities, and recipients in discrete ways.

Lessons Learned:

  • Emphasize the multiple disciplinary pathways to environmental careers – programs need to create greater awareness of the diverse educational backgrounds that can contribute to an environmental career.
  • Character traits are critical in an environmental career – a determined, action-oriented personal attribute was perceived by the youth to be a key component of careers aiming to protect the environment.
  • The environmental workforce may be a psychologically resilient group – if persistence, a character-based trait, is a necessary aspect of the job, its implications for being adaptive and responsive to changing circumstances are worth studying.
  • Pathways to environmental careers for youth need to be extended – beyond broadening career horizons, conservation programs need to create access to these careers for youth, especially those from racial and ethnic minority groups.
  • Every day can be Earth Day! A more diverse environmental workforce means greater, sustained engagement in environmental protection by a larger segment of the population, and more professionals working, in their own unique ways, to save the world!

Rad Resources: Read more about Diversity in the Environmental Workforce to discover the possibilities of environmentally minded careers!



Hi! I’m Shari Grossarth from the Environmental Protection Agency’s (EPA) Evaluation Support Division. I’m part of a team at EPA that’s trying out a knowledge synthesis approach to inform our work to advance sustainable purchasing in the federal government and other large institutions. The project is still underway, but I’ll share a bit about it and some of the resources we’re using to guide our approach.

Our focus is on understanding outcomes and lessons learned from servicizing approaches. Servicizing is the procurement of a function of a product rather than the product itself – like bike-sharing instead of bike ownership, or purchasing a floor covering system instead of carpet. Servicizing should lead to more sustainable outcomes, such as fewer products being produced, products being made to last, and customers using only what they need, but we want to see what the existing knowledge base tells us about outcomes.

We’re conducting a broad search for existing evaluative knowledge relevant to outcomes and lessons learned from servicizing approaches, and synthesizing the key findings to inform EPA’s efforts and share with others. Our general approach is depicted in the graphic below. We’ll search a broad range of sources, including grey literature, published literature, evaluations, and discussions with experts. We’ll synthesize the knowledge we find and create an online collection to share the key findings and knowledge sources. We hope that the collection will ultimately become a collaborative space where others add and share additional related knowledge. We’re working with IssueLab, a knowledge sharing platform, and collaborating with the Environmental Evaluators Network (EEN) on their efforts to develop an Architecture for Environmental Evaluation, or ArchEE.

Graphic: overview of our knowledge synthesis approach

Rad Resources: We’re truly trying this out, borrowing from other related endeavors and learning as we go. These are some of the resources we’ve referenced to shape our efforts.

Image courtesy of Shari Grossarth


