AEA365 | A Tip-a-Day by and for Evaluators


My name is Diego Escobar and I am Director of Institutional Development at Jalisco’s Ministry of Culture. We recently hosted an event that was part of CLEAR-LA’s Evaluation Week Mexico 2016. We were very excited to be part of the program.

We took the opportunity to ignite discussion and bring together experts from fields that sometimes appear isolated from one another, such as arts promotion, public administration, and evaluation.

The event produced consensus around the following issues (among others):

  • Cultural programs deal with issues that are specific to the field and pose endemic “evaluation challenges.” For example, grant programs may eventually need to establish and compare the artistic merit of different projects funded in any given year, which will hardly be achieved without debate.
  • Nonetheless, much can be learned from evaluation in other fields and government sectors. For this to happen, it is crucial to identify cultural programs whose interventions resemble those of programs pursuing non-artistic goals (e.g., anything from improving employment to promoting hygiene habits). Translating knowledge is vital.

Lesson Learned: Methodology! In Latin America (and beyond) the arts and the artistic components of community programs have extraordinary powers. This raises the stakes for evaluation. How do we know whether there is a relationship between arts engagement in youth and criminal behavior in adulthood? A report by Giving Evidence and UCL’s Institute of Education explores the relationship between the short-term and long-term outcomes produced by outdoor adventure programs in the UK. The arts sector can use this approach to research and strengthen its claims about the relationship between programs that are effective in the short term and their likely long-term outcomes.

Hot Tip: Arts organizations don’t usually have the budget to send professionals or staff to evaluation conferences or seminars. Although their participation in such events is growing, it is worth finding ways for people and organizations to engage easily and without travel expenses. Using Facebook’s live streaming, we were able to boost our audience and reach people who made interesting contributions to the discussion.

Rad Resource: If you are in charge of evaluating (or managing!) a program it is useful to see if other organizations have commissioned evaluations for programs similar to the one you are interested in. Although it is often said that there is very little going on regarding arts and evaluation, I’ve found it very useful to check the International Federation of Arts Councils and Culture Agencies’ publications registry.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Sarah Cohn, Association Manager for the Visitor Studies Association (VSA). VSA and I are part of the Building Informal Science Education (BISE) project. In this post, I’ll share information about VSA and how the Association supported the BISE project.

Lessons Learned: VSA served as a platform for the BISE project to engage with a community of evaluators and researchers throughout the field of informal learning, united around a set of common goals:

  • to ensure the project’s work was grounded in the experiences of these audiences,
  • to ensure that the resulting resources were useful and relevant to the wide range of evaluators that might use them.

At VSA annual meetings, the BISE project team sought input regarding:

  • the development of the BISE Coding Framework,
  • the direction of the project, and
  • the findings from the synthesis papers.

This sort of member-checking is important in qualitative research to ensure that the findings are truly reflective of the audience’s perspectives or information. Evaluators can be just as tricky and diverse in their ideas, needs, and opinions as any other audience! So what did we learn from this process?

  • When conducting research on evaluation, provide multiple venues and points at which evaluators can reflect on the study’s data, ideas, and findings. Find different check-in points over the course of a year or the life of the project, and offer different modes of engagement, be they digital, in-person, or asynchronous.
  • Be as specific as possible in your requests for feedback. We are reflective, by nature, so your fellow evaluators will provide feedback on every aspect of a project if you let them!

Rad Resources: The Visitor Studies Association is a global network of professionals dedicated to understanding and enhancing learning experiences in informal settings wherever they may occur—in museums, zoos, parks, visitor centers, historic sites, and the natural world—through research, evaluation and dialogue. VSA’s membership and governance encompass those who design, develop, facilitate, and study learning experiences. We offer a number of resources for evaluators to learn more about evaluation in informal settings.

  • An annual summer conference that brings more than 200 professionals together to talk about new advances in the field, current projects, and major issues they are facing.
  • Visitor Studies, a biannual, peer-reviewed journal that publishes high-quality articles focusing on research and evaluation in informal learning environments, reflections on the field, research methodologies, and theoretical perspectives. The journal covers subjects related to museums and learning in out-of-school settings.
  • Online webinars, produced in partnership with other museum-related associations, such as the Association for Science-Technology Centers.
  • Regional meet-ups, workshops, and an active listserv.

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Rebecca Teasdale, a doctoral student in Educational Psychology specializing in evaluation methodology at the University of Illinois at Urbana-Champaign. I’m also a librarian and have served as an administrator and science librarian in public libraries. My current work focuses on the evaluation of interest-driven learning related to science, technology, engineering and math (STEM) that takes place in public libraries and other informal learning settings.

I first became involved with the Building Informal Science Education (BISE) project as an intern at the Science Museum of Minnesota while I was pursuing a certificate in evaluation studies at the University of Minnesota. (See my blog post, “Measuring behavioral outcomes using follow-up methods,” to learn more). Now, I’m using the BISE database to support my research agenda at Illinois by identifying methods for evaluating the outcomes of public library STEM programming.

Evaluation practice is just getting started in the public library context, so few librarians are familiar with evaluation methods measuring mid- and long-term outcomes of informal science education (ISE) projects. I used the BISE reports to provide a window into understanding (a) the types of outcomes that ISE evaluators study, (b) the designs, methods and tools that they use, and (c) the implications for evaluating the outcomes of STEM programs in public libraries.

Lessons Learned:

  • I’ve found little standardization among the evaluation reports in the BISE database. Therefore, rather than provide a single model for libraries to replicate or adapt, the BISE database offers a rich assortment of study designs and data collection methods to consider.
  • Just 17% of the reports in the BISE database included the follow-up data collection necessary to examine mid- and long-term outcomes. Library evaluators should therefore make sure we design studies that examine these longer-term effects as well as more immediate outcomes.
  • Collecting follow-up data can be challenging in informal learning settings because participation is voluntary, participants are frequently anonymous, and engagement is often short-term or irregular. The reports in the BISE database offer a number of strategies that library evaluators can employ to collect follow-up data.
  • All five impact categories from the National Science Foundation-funded Framework for Evaluating Impacts of Informal Science Education Projects are represented in the BISE database. I’m currently working to identify some of the methods and designs for each impact category that may be adapted for the library context. The five impact categories are:
    • awareness, knowledge or understanding
    • engagement or interest
    • attitude
    • behavior
    • skills

Rad Resource:

  • I encourage you to check out the BISE project to inform evaluation practice in your area of focus and to learn from the wide variety of designs, methods, and measures used in ISE evaluation.

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! This is Amy Grack Nelson, Evaluation and Research Manager, and Zdanna King, Assistant Evaluation and Research Manager, from the Science Museum of Minnesota. If you are like us, you may have shared an evaluation report with other evaluators on websites such as AEA’s eLibrary or informalscience.org. Even though opportunities to share reports online are increasing, the evaluation field lacks guidance on what to include in evaluation reports meant for an evaluator audience. If the evaluation field wants to learn from evaluation reports posted to online repositories, how can evaluators help to ensure the reports they share are useful to this audience? As part of the Building Informal Science Education (BISE) project, we explored this question through the analysis of 520 evaluation reports uploaded to informalscience.org. The BISE team created an extensive coding framework to align with features of evaluation reports and evaluators’ needs. We then used the framework to identify how often elements were included or lacking in evaluation reports.
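
To make that last step concrete, here is a minimal sketch, in Python with pandas, of how one might tally how often report elements appear once reports have been coded against a framework like ours. The file name and column names are hypothetical placeholders for illustration; they are not the BISE project’s actual files or codes.

```python
# Minimal sketch: tallying how often report elements appear in a set of coded reports.
# Assumes a hypothetical spreadsheet ("coded_reports.csv") with one row per report and
# one 0/1 column per coding-framework element -- not the BISE project's actual file.
import pandas as pd

coded = pd.read_csv("coded_reports.csv")

element_columns = [
    "describes_setting",
    "states_evaluation_type",
    "lists_evaluation_questions",
    "describes_methods",
    "includes_instruments",
    "reports_sample_size",
]

# Share of reports (as a percentage) that include each element, lowest first,
# which highlights where reports are most often lacking.
inclusion_rates = coded[element_columns].mean().mul(100).round(1)
print(inclusion_rates.sort_values())
```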

Lessons Learned: Our analysis brought to light where reports in the field of informal education evaluation may already meet the needs of an evaluator audience and where reports are lacking. To help maximize learning and use across the evaluation community, we developed guiding questions evaluators can ask themselves as they prepare a report to share with other evaluators.

  1. Have I described the project setting in a way that others will be able to clearly understand the context of the project being evaluated?
  2. Is the subject area of the project or evaluand adequately described?
  3. Have I identified the type of evaluation (formative, summative, etc.)?
  4. Is the purpose of the evaluation clear?
  5. If I used evaluation questions as part of my evaluation process, have I included them in the report?
  6. Have I described the data collection methods?
  7. If possible, have I included the data collection instruments in the report?
  8. Do I provide sufficient information about the sample characteristics? If I used general terms such as “visitors,” “general public,” or “users,” do I define what ages of individuals are included in that sample?
  9. Have I reported sample size for each of my data collection methods?
  10. If I report statistically significant findings, have I noted the statistical test(s) used? Do I only use the word “significant” if referring to statistically significant findings?
  11. If I provided recommendations to the client, did I include them in the report?

Rad Resources:

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Carey Tisdal, Director of Tisdal Consulting in St. Louis, Missouri. I work with people who develop informal learning experiences for museum exhibitions, museum programs, documentary films, and media-based projects. Many of my projects include websites as one element of a learning system. I used the Building Informal Science Education (BISE) project as an opportunity to develop a framework to focus studies involving websites. This experience helped me improve my own practice by analyzing other evaluators’ work as well as connecting to key concepts in the website evaluation literature. I hope you find it useful, too!

I developed my website evaluation framework by analyzing 22 reports from the BISE database that were coded as “website” evaluands (i.e. the entity being evaluated). The overarching method I used to analyze the reports was Glaser & Strauss’ Grounded Theory. I then connected concepts in the program theory to literature about website evaluation. The resulting website evaluation framework uses high-level program theory to guide the identification of focus areas and questions to structure website evaluations. As illustrated in the graphic below, I organized seven of the major areas of consideration as a set of sequential, necessary steps influencing User Impacts and System Effectiveness. Read my whitepaper, “Websites: A guiding framework for focusing website evaluations,” to learn more!

[Figure: Tisdal’s website evaluation framework, showing seven sequential focus areas leading to User Impacts and System Effectiveness.]

Lessons Learned:

  • Some of the evaluations I reviewed focused on appeal (content, visuals, or forms of engagement), which is certainly an important aspect of website evaluation. Yet, when connecting the focus areas, I realized that without testing usability as well as appeal, it is not possible to draw strong conclusions about how audience impact is or is not accomplished.
  • Evaluating the system effectiveness of a website is essential in multiplatform projects. Awareness and access play important roles in whether or not users of other parts of an informal education system (e.g. an exhibition, program, or film) even get to the website, or, in turn, if website viewers see a film or attend an exhibition.
  • In my own work, I’ve found that this website framework helps project teams and website designers to clarify what they really need to know.

Rad Resources:

  • The U.S. Department of Health and Human Services offers an amazing set of resources to get you started in usability testing for websites. This site has been updated since I did my research and is now even better!
  • The BISE database and informalscience.org provide access to a wide range of evaluation reports. When I need to look at how colleagues approached evaluation designs, they are my first stops!

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! I’m Beverly Serrell, museum exhibition consultant, evaluator, and developer with Serrell & Associates in Chicago, Illinois. As a practitioner, I am very interested in finding helpful information to improve my practice in the planning, development, and assessment of exhibits. When the Building Informal Science Education (BISE) project invited me to look at their database and investigate a question of my choice, I was most curious about recommendations in summative evaluation reports. How did the advice (e.g., recommendations or suggestions for improvement) compare to mine? Were there trends that could be shared and applied?

I started my report by looking at 50 summative evaluation studies in the BISE database that were coded as including “recommendations.” Further sorting brought the list down to 38 reports, covering a diverse selection of science disciplines (e.g., botany, zoology, astronomy, biology, ecology, geology, and health sciences).

Lesson Learned: Orientation was often the single biggest challenge to get right in exhibitions. Using a bottom-up method of review, I found that the most common issue was the need for better orientation within an exhibition. Recommendations for improving orientation stemmed from problems related to the various physical and psychological needs of museum visitors. Two other suggestions were closely tied to orientation: more clarity in conceptual communication and better delineation of exhibit boundaries. These recommendations and more are discussed, with examples, in my full report, “A Review of Recommendations in Exhibition Summative Evaluation Reports.”

Hot Tip: Criticism is about the work, and the work can always be improved. Whether to include a section of recommendations in an exhibition’s summative evaluation is somewhat controversial. Some evaluators think that interpreting the data is the client’s job, not the evaluator’s, and that making recommendations for improvements can cast a negative light on the institution and hurt its reputation with funders. It is important for evaluators to make sure at the outset of a project that the client is eager to hear the thoughts of an experienced evaluator.

My advice for making recommendations in summative evaluation reports is to go ahead and make them. Without couching them in meek tones, be specific and give the context and evidence for why the recommendation is being made. Evaluation is recognized today as a valuable part of the process; it’s no longer us (evaluators) against them (designers, curators, etc.).

My favorite example of an exhibition report with numerous indicators of success and a balanced offering of practical suggestions for improvements is Sue Allen’s 2007 summative evaluation of “Secrets of Circles” at the San Jose Children’s Museum.

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Amy Grack Nelson, Evaluation & Research Manager at the Science Museum of Minnesota. I’m part of a really cool National Science Foundation-funded project called Building Informal Science Education, or as we like to refer to it – BISE. The BISE project is a collaboration between the University of Pittsburgh, the Science Museum of Minnesota, and the Visitor Studies Association. This week we’ll share what we learned from the project and what project resources are freely available for evaluators to use.

Within the field of evaluation, there are a limited number of places where evaluators can share their reports. One such resource is informalscience.org. Informalscience.org provides evaluators access to a rich collection of reports they can use to inform their practice and learn about a wide variety of designs, methods, and measures used in evaluating informal education projects. The BISE project team spent five years diving deep into 520 evaluation reports that were uploaded to informalscience.org through May 2013 in order to begin to understand what the field could learn from such a rich resource.

Rad Resources:

  • On the BISE project website, you’ll find lots of rad resources we developed. We have our BISE Coding Framework, which was created to code the reports in the BISE project database; coding categories and related codes were created to align with key features of evaluation reports and the coding needs of the BISE authors. You’ll find our BISE NVivo Database and a related Excel file in which we’ve coded all 520 reports against the BISE Coding Framework. We have a tutorial on how to use the BISE NVivo Database and a worksheet to help you think about how you might use the resource for your own practice (see the sketch after this list for one way you might start exploring the coded data). You can also download a zip file of all of the reports to have them easily at your fingertips.
  • This project wouldn’t be possible without the amazing resource informalscience.org. If you haven’t checked out this site before, you should! And if you conduct evaluations of informal learning experiences, consider sharing your report there.
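
If you prefer a scripting environment to NVivo, one low-effort way to start exploring a coding export like the Excel file mentioned above is to load it with pandas and filter for the codes you care about. This is only a sketch under assumed file and column names; consult the BISE tutorial and worksheet for the file’s real structure.

```python
# Minimal sketch: filtering an exported coding spreadsheet for reports of interest.
# The file name and column names below are hypothetical placeholders -- check the
# BISE tutorial and worksheet for the actual structure of the Excel file.
import pandas as pd

bise = pd.read_excel("bise_coded_reports.xlsx")  # requires openpyxl

# Example: summative evaluations of websites that collected follow-up data.
subset = bise[
    (bise["evaluand_type"] == "website")
    & (bise["evaluation_type"] == "summative")
    & (bise["includes_followup"] == "yes")
]
print(subset["report_title"].head(10))
```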

Lessons Learned:

  • So what did we learn through the BISE project? That you can learn A LOT from others’ evaluation reports. In the coming week you’ll hear from four authors who used the BISE database to answer a question they had about evaluation in the informal learning field.
  • What lessons can you learn from our collection of evaluation reports? Explore the BISE Database for yourself and post comments on how you might use our resources.

The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Ivonne Chand O’Neal. I am Co-Chair of the American Evaluation Association’s Arts, Culture, and Audiences Topical Interest Group (TIG) and Chief Research Officer at Creativity Testing Services (CTS), a research consulting firm specializing in the creation and validation of creativity assessments and in applications of creativity testing in corporate, educational, and artistic environments. In this role, one example of my work is conducting evaluations of national performing arts centers throughout the U.S., examining such themes as board development, the impact of artistic programming on the American public, the development of exceptional talent, and the impact of the arts on students in PreK–12 environments. Prior to my work with CTS, I evaluated creativity as Director of Research and Evaluation at the John F. Kennedy Center for the Performing Arts, as Creativity Consultant with the Disney Channel, as Director of Research at the David Geffen UCLA School of Medicine, and as Curator of the Museum of Creativity.

Lessons Learned: In a recent example of applying metrics that assess creativity to inform artistic programming, my colleagues and I worked with artists in the Alvin Ailey American Dance Theater to understand the trajectory of artistic development and to determine how to shape artistic programming for early elementary and middle school students at the Kennedy Center. We asked artists about such things as their interests and hobbies as children, the age at which they knew they had exceptional talent and skill, and the age at which their teacher/mentor/instructor put them forward in recognition of that talent and skill. Comparing the artists to an age-matched control group of performing arts center interns, we were surprised to find that at the critical age of 9 or 10, the artists dropped the majority of hobbies and interests common to elementary school-aged children and focused solely on dance, while the control group continued to pursue interests in sports, music, dance, and science and math clubs. These types of findings are critical to arts programmers and educators alike as they seek to use their resources to provide the most cognitively and developmentally appropriate arts programming for elementary school students, as well as master classes and instruction for those young students with exceptional skill and ability.

Using creativity testing in program evaluation is a focus that has recently emerged as a way to boost innovation and productivity in both non-profit and for-profit organizations. Stakeholders have been eager to add this component to existing evaluations as a way to foster a new approach to process- and product-oriented work.

Hot Tip: Be bold and clear in offering new rigorous methods to assess impact in organizations with which you work. Stakeholders are often interested in finding a new approach to address uninspired or ineffective programming and look to the evaluation community for cutting edge options to address these concerns.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Rekha S. Rajan and I am an associate professor at Concordia University Chicago and program leader for the Master’s degree in grant writing, management, and evaluation. I am also the author of the books Integrating the Performing Arts in Grades K-5 and Grant Writing: Practical Strategies for Scholars and Professionals, and of the forthcoming titles In the Spotlight: Children’s Experiences in Musical Theater and Musical Theater in Schools.

Lessons Learned: The value of the arts has consistently been debated, discussed, and challenged both within schools and in our communities. As an arts educator, I have been involved in many of these discussions at the state and national levels. As an evaluator of arts-based programs and partnerships, and with a background in teacher education, I have had the opportunity to see “both sides of the coin” – to observe how learning takes place in schools, and to find ways of documenting the process of arts engagement.

Even for those of us who know how important the arts are to learning and development, the question often arises: how do we document learning in the arts? The field of evaluation offers a resolution to this challenge, providing strategies for exploring artistic experiences across a wide range of contexts, disciplines, and programs.

In a recent evaluation that I completed for the Chicago Humanities Festival, I was asked to document student engagement with live multimedia performance. The Stages Engagement Pilot Program (SEPP) was developed as an extension of the First Time for a Lifetime initiative through the Chicago Humanities Festival, with the goal of examining student learning and appreciation for live theater. Importantly, students experienced live performance, leaving their classrooms to be audience members.

Many evaluators and researchers might look at another arts evaluation and say, “we know the arts are important, so what?” However, every arts program is unique, often bringing only one discipline (music, theater, dance, visual arts) into classrooms. The value is found in the types of activities that engage students, the artistic discipline, and the level of active participation that extends after the program concludes.

A central component of the SEPP program was that students were engaged in a pre- and post-performance activity that was designed with strong collaboration between the teachers and teaching artists. The opportunity to prepare in advance was beneficial for the teachers, artists, and students, enabling everyone involved to clarify expectations and follow through with activities after the performance.

Hot Tip: Although funders often place a heavy emphasis on quantitative reporting, much of what we know about the learning that takes place through the arts is evident in the rich narratives and observations of qualitative data. Any evaluation of an arts program should strive for a mixed-methods approach, providing the statistical data that funders need coupled with examples of student work, teachers’ perceptions, and the teaching artists’ experiences.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Jessica Sperling, and I work in research and evaluation for education, youth development, arts/media/culture, and civic engagement programs. I am currently a researcher with the City University of New York (CUNY), where I consult on evaluation for educational programs and for StoryCorps, a storytelling and narrative-sharing program. Before joining CUNY, I developed StoryCorps’ evaluation program as an internal staff member.

Developing an organization’s research and evaluation program can be challenging for myriad reasons: non-intuitive outcomes and “hard-to-measure” desired impact, the existence of many distinct sub-programs, dynamic organizational priorities, resource limitations, and more. The fact is, however, many entities fitting these characteristics must nonetheless proceed and progress in evaluation. I thus outline select lessons in initiating and implementing an evaluation program at such organizations, drawing from my work with StoryCorps and other early-stage organizational evaluation programs.

Lessons Learned:

Start with the big picture. Begin evaluation planning with a theory of change and a macro-level evaluation framework focused around organizational goals. This should be obvious to evaluators, but you may need to make its value clear to program stakeholders, particularly if they prefer that you dive straight into data collection and results. In addition to permitting focused evaluation, this can also contribute to overall organizational reflection and planning.

Utilize existing research to inform projects and draw connections. Literature review is integral, and definitely a step not to be skipped! Previous research can inform your anticipated outcomes, situate your program within a larger body of work, and demonstrate the causal links between measured/observed outcomes and the organization’s broader desired impacts – a link you may not be able to empirically demonstrate through your own work.

Highlight evaluation for organizational learning. Overtly frame evaluation as an opportunity for strategic learning, rather than as a potentially punitive assessment. Highlight the fact that even seemingly negative results have positive outcomes, in terms of permitting informed programmatic change; most programs naturally change over time, and evaluation results, including formative evaluation, help the program do so in an intentional way. This perspective can promote stakeholder buy-in and develop a culture of evaluation.

An unusual or outside-the-box program doesn’t preclude rigor in research methods. In some cases, having relatively difficult-to-measure or atypical program goals may lead to a presumption (intentional or otherwise) that the methods involved in such an evaluation will be less rigorous. This, however, is not a given conclusion. Once short-term outcomes are defined – and they should always be defined, even if doing so takes some creativity or outside-the-box thinking – the approach to measurement should incorporate intentional, informed, and methodologically appropriate evaluation design.

Hot Tip: Spend time and energy building positive relationships with internal programs and staff, and with potential external collaborators. Both, in their own ways, can help foster success in evaluation implementation and use.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

