AEA365 | A Tip-a-Day by and for Evaluators

Category: Arts, Culture, and Audiences

My name is Ivonne Chand O’Neal, Co-Chair of the American Evaluation Association’s Arts, Culture, and Audiences Topical Interest Group (TIG) and Chief Research Officer at Creativity Testing Services (CTS), a research consulting firm specializing in the creation and validation of creativity assessments and in applications of creativity testing in corporate, educational, and artistic environments. In this role, one example of my work is conducting evaluations of national performing arts centers throughout the U.S., examining such themes as board development, the impact of artistic programming on the American public, the development of exceptional talent, and the impact of the arts on students in PreK–12 environments. Prior to my work with CTS, I evaluated creativity as Director of Research and Evaluation at the John F. Kennedy Center for the Performing Arts, as Creativity Consultant with the Disney Channel, as Director of Research at the David Geffen School of Medicine at UCLA, and as a Curator of the Museum of Creativity.

Lessons Learned: In a recent example of applying creativity metrics to inform artistic programming, my colleagues and I worked with artists in the Alvin Ailey American Dance Theater to understand the trajectory of artistic development and determine how to shape artistic programming for early elementary and middle school students at the Kennedy Center. We asked artists about such things as their childhood interests and hobbies, the age at which they knew they had exceptional talent and skill, and the age at which a teacher, mentor, or instructor first put them forward in recognition of that talent. Comparing the artists to an age-matched control group of performing arts center interns, we were surprised to find that at the critical age of 9 or 10, the artists dropped the majority of hobbies and interests common to elementary school-aged children and focused solely on dance, while the control group continued to pursue interests in sports, music, dance, and science and math clubs. These types of findings are critical to arts programmers and educators alike as they seek to use their resources to provide the most cognitively and developmentally appropriate arts programming for elementary school students, as well as master classes and instruction for young students with exceptional skill and ability.

Using creativity testing in program evaluation is a focus that has recently emerged as a way to boost innovation and productivity in both non-profit and for-profit organizations. Stakeholders have been eager to add this component to existing evaluations as a way to foster a new approach to process- and product-oriented work.

Hot Tip: Be bold and clear in offering new, rigorous methods to assess impact in the organizations with which you work. Stakeholders are often interested in finding a new approach to address uninspired or ineffective programming and look to the evaluation community for cutting-edge options to address these concerns.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Rekha S. Rajan and I am an associate professor at Concordia University Chicago and program leader for the Master’s degree in grant writing, management, and evaluation. I am also the author of the books Integrating the Performing Arts in Grades K-5 and Grant Writing: Practical Strategies for Scholars and Professionals, as well as the forthcoming titles In the Spotlight: Children’s Experiences in Musical Theater and Musical Theater in Schools.

Lessons Learned: The value of the arts has consistently been debated, discussed, and challenged both within schools and in our communities. As an arts educator, I have been involved in many of these discussions at the state and national levels. As an evaluator of arts-based programs and partnerships, and with a background in teacher education, I have had the opportunity to see “both sides of the coin” – to observe how learning takes place in schools, and to find ways of documenting the process of arts engagement.

Even for those of us who know how important the arts are to learning and development, the question often arises: how do we document learning in the arts? The field of evaluation offers a resolution to this conflict, providing strategies for exploring artistic experiences across a wide range of contexts, disciplines, and programs.

In a recent evaluation that I completed for the Chicago Humanities Festival, I was asked to document student engagement with live multimedia performance. The Stages Engagement Pilot Program (SEPP) was developed as an extension of the First Time for a Lifetime initiative through the Chicago Humanities Festival, with the goal of examining student learning and appreciation for live theater. Importantly, students experienced live performance, leaving their classrooms to be audience members.

Many evaluators and researchers might look at another arts evaluation and say – “we know the arts are important, so what?” However, every arts program is unique, often only bringing one discipline (music, theater, dance, visual arts) into classrooms. The value is found in the types of activities that engage students, the artistic discipline, and the level of active participation that extends after the program concludes.

A central component of the SEPP program was that students were engaged in a pre- and post-performance activity that was designed with strong collaboration between the teachers and teaching artists. The opportunity to prepare in advance was beneficial for the teachers, artists, and students, enabling everyone involved to clarify expectations and follow through with activities after the performance.

Hot Tip: Although funders often place a heavy emphasis on quantitative reporting, much of what we know about the learning that takes place through the arts is evident in the rich narratives and observations of qualitative data. Any evaluation of an arts program should strive for a mixed-methods approach, providing the statistical data that funders need coupled with examples of student work, teachers’ perceptions, and the teaching artists’ experiences.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Jessica Sperling, and I work in research and evaluation for education, youth development, arts/media/culture, and civic engagement programs. I am currently a researcher with the City University of New York (CUNY), where I consult on evaluation for educational programs and for StoryCorps, a storytelling/narrative-sharing program. Before joining CUNY, I developed StoryCorps’ evaluation program as a member of its internal staff.

Developing an organization’s research and evaluation program can be challenging for myriad reasons: non-intuitive outcomes and “hard-to-measure” desired impact, the existence of many distinct sub-programs, dynamic organizational priorities, resource limitations, and more. The fact is, however, that many entities fitting these characteristics must nonetheless proceed and progress in evaluation. I thus outline select lessons in initiating and implementing an evaluation program at such organizations, drawing from my work with StoryCorps and other early-stage organizational evaluation programs.

Lessons Learned:

Start with the big picture. Begin evaluation planning with a theory of change and a macro-level evaluation framework focused on organizational goals. This should be obvious to evaluators, but you may need to make its value clear to program stakeholders, particularly if they would prefer that you dive straight into data collection and results. In addition to permitting focused evaluation, this can contribute to overall organizational reflection and planning.

Utilize existing research to inform projects and draw connections. Literature review is integral, and definitely a step not to be skipped! Previous research can inform your anticipated outcomes, situate your program within a larger body of work, and demonstrate the causal link between measured or observed outcomes and the organization’s broader desired impacts – a link you may not be able to demonstrate empirically through your own work.

Highlight evaluation for organizational learning. Overtly frame evaluation as an opportunity for strategic learning, rather than as a potentially punitive assessment. Highlight the fact that even seemingly negative results have positive outcomes, in terms of permitting informed programmatic change; most programs naturally change over time, and evaluation results, including formative evaluation, help the program do so in an intentional way. This perspective can promote stakeholder buy-in and develop a culture of evaluation.

An unusual or outside-the-box program doesn’t preclude rigor in research methods. In some cases, having relatively difficult-to-measure or atypical program goals may lead to a presumption (intentional or otherwise) that the methods involved in such an evaluation must be less rigorous. This, however, is not a given conclusion. Once short-term outcomes are defined – and they should always be defined, even if doing so takes some creativity or outside-the-box thinking – the approach to measurement should incorporate intentional, informed, and methodologically appropriate evaluation design.

Hot Tip: Spend time and energy building positive relationships with internal programs and staff, and with potential external collaborators. Both, in their own ways, can help foster success in evaluation implementation and use.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Steven Holochwost, and I am a Senior Research Scientist at WolfBrown, a consultancy and research firm, and a Visiting Assistant Professor in the School of Education at Johns Hopkins University. In those roles I conduct evaluations of educational programs for children, and in particular, children at risk.

Lessons Learned: One of the areas in which I work is early childhood education. Over the last two decades, there has been growing interest among policy-makers and the public in this area. While this growth has been driven by a number of factors, it is attributable in part to studies incorporating what might broadly be termed neurophysiological measures into evaluations of early education programs. It is one thing to demonstrate that students in high-quality early education programs have better test scores in kindergarten than their peers; it is quite another to show that these differences have correlates in the function of children’s brains and bodies.

Evaluators may feel that neurophysiological measures are beyond their reach, in part due to cost. Where some measures are concerned, this is true: using neuroimaging techniques to observe the structure and function of children’s brains, for example, is tremendously expensive. However, recent advances in technology have made other, non-invasive neurophysiological measures relatively affordable. For example, it is now possible to examine children’s levels of stress by measuring the hormones in their saliva, or to track their engagement in a task by recording their heartbeats. The insight these techniques afford the evaluator can help address not only the question of whether a program is achieving its desired effects, but why.
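
To make the heartbeat example concrete, here is a minimal sketch that computes RMSSD (the root mean square of successive differences between interbeat intervals), a standard heart-rate-variability index often used as a non-invasive proxy for vagal tone and, by extension, task engagement. This is an illustration only: the function name and the sample intervals are hypothetical and not drawn from the author’s own protocol.

```python
def rmssd(interbeat_intervals_ms):
    """Root mean square of successive differences (RMSSD) between
    consecutive interbeat intervals, given in milliseconds. Higher
    values are commonly read as greater parasympathetic (vagal) influence."""
    diffs = [b - a for a, b in zip(interbeat_intervals_ms, interbeat_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Hypothetical interbeat intervals (ms), e.g., exported from a chest-strap heart-rate monitor
example_ibis = [812, 805, 830, 790, 815, 800, 825]
print(round(rmssd(example_ibis), 1))  # one summary value per child per task
```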

Hot Tip: Of course, neurophysiological measurement requires specialized expertise and training to be used effectively and responsibly. My suggestion: pool your resources. Even if no one in your organization has this expertise, people in other organizations do, including researchers working in colleges and universities. They may be willing to help in exchange for an opportunity to apply their expertise to the ‘real world.’

Resources: The company Salimetrics offers training and analytical services for people interested in collecting a variety of neurophysiological measures. See the training page on their website at https://www.salimetrics.com/training-resources

The website of two wonderful potential collaborators, Drs. Cathi Propper and Roger Mills-Koonce, can be found here. Dr. Propper is a nationally known expert on vagal tone, a non-invasive measure of attention and engagement, while Dr. Mills-Koonce is an authority on numerous physiological measures available in saliva, including alpha-amylase and cortisol.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi. I am Don Glass, an independent consultant who is a visual artist, learning designer, and developmental evaluator.

Hot Tips: One of the things that I love about attending the AEA annual conference is getting the opportunity to better understand how my work can relate to and be informed by recent debates and developments in the field. For example, Michael Quinn Patton suggested at the 21st Empowerment Evaluation anniversary session that EE should more actively consider a Complexity Perspective and Systems Thinking. This got me reflecting on how people and programs operate in complex systems.

It also provided me with some conceptual language for my own session for the Arts, Culture, and Audiences TIG to talk about Root Cause Analysis (RCA).

Lessons Learned: I used Root Cause Analysis as part of the developmental evaluation work that I am doing for an arts museum education program that was integrating visual art and STEM. A recent external evaluation had shown limited impact on student learning. I was contracted to use a more collaborative design and evaluation approach to inform and support the redesign of the education program.

During the exploratory phase, I interviewed staff, observed teaching, and analyzed curricular materials. Based on this initial qualitative data and the findings from the external program evaluation, I conducted an RCA to map the various problems of the program and its contexts.

[Figure: Root Cause Analysis fishbone diagram showing how limited student learning may be caused by weak cross-curricular connections, overly complex activities, and no assessment or feedback]

The RCA was visually displayed as a fishbone diagram to help us select and focus on a manageable set of improvement areas (e.g., weak curricular connections, overly complex activities, and no assessment feedback), as well as see where these areas fit into and possibly influence the rest of the system (e.g., teacher engagement). This visual display of interconnected problems and conditions helped us to “see the system” and begin to understand not only why it was producing lackluster outcomes, but also how we might strategically develop and test new approaches.

Rad Resources: Michael Quinn Patton’s chapter The Developmental Evaluation Mindset: Eight Guiding Principles in the new book Developmental Evaluation Exemplars: Principles in Practice.

The chapter See the System That Produces the Current Outcomes in Learning to Improve by the folks at the Carnegie Foundation for the Advancement of Teaching who are exploring Improvement Science and Networked Improvement Communities in education.

Tools for viewing systems and processes in the encyclopedic reference The Improvement Guide: A Practical Approach to Enhancing Organizational Performance.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I am Annabel Jackson, Co-chair of the Arts, Culture, and Audiences TIG. My Co-chair, Ivonne Chand O’Neal, and I are delighted to host a week of aea365. Together, we have curated this week-long series to highlight examples of evaluation methods used to explore arts and culture, arts education, arts participation, and informal learning. Featured evaluation methods will include the use of neuro-linguistic programming (NLP) to capture non-verbal and tacit knowledge, root cause analysis, neurophysiological measurement, storytelling/narrative-sharing, and the use of creativity measures. We look forward to hearing from you via the comments feature of aea365 to let us know how these methods may influence the work in your field of evaluation. Thank you for joining us and Happy Holidays!

I am an evaluator based in the UK who also works in America, as well as in Africa and Asia. Seventy percent of my work is in the arts. My clients include icons such as the British Museum, Royal Opera House, Glyndebourne, Sadler’s Wells, National Portrait Gallery, Barbican, Tate, ICA, Hayward, Old Vic, Film London, the Cleveland Orchestra, and many others across the art forms.

Lessons Learned: If evaluation is about learning as much as accountability, then where should we look for learning?

Artists and practitioners in the arts often develop exquisite sense-based skills. We should not be surprised that musicians invariably develop finely tuned auditory skills; visual artists invariably develop intricate visualization skills; and dancers invariably embody deep understanding of timing and kinesthetic knowing. Artists excel at their use of metaphor and lateral problem-solving. Arts organizations have something to tell us about risk-taking, and combining perfectionism with innovation.

I have used NLP, in particular the experiential array and the list of submodalities, as frameworks for my observation tools to evaluate the quality of artist-delivered educational workshops, and also when interviewing on the subject of artistic quality.

Resources: Gordon, David, and Dawes, Graham (2005). Expanding Your World: Modeling the Structure of Experience. Desert Rain.

The benefits of using NLP are:

1. A structure to expand our boundaries in conceptualizing learning.

2. Prompts to expand our questions beyond verbal and conscious knowing.

3. A guide for structuring observation questionnaires.

4. Support, as we develop our cultural competence as evaluators, in being sensitive to and respectful of non-verbal contextualities and resources.

When it comes to sense-based learning, artists and the arts have something to teach us all.

Hot Tip: When evaluating individual and organizational learning, look beyond verbal and conscious articulations. Explore non-verbal skills and the resources that lie in organizational beliefs, metaphors, and values.

The American Evaluation Association is celebrating Arts, Culture, and Audiences (ACA) TIG Week. The contributions all week come from ACA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I am Rupu Gupta, Analyst at New Knowledge Organization Ltd. and Co-Chair of AEA’s Environmental Program Evaluation Topical Interest Group. My evaluation work focuses on learning about the environment and conservation in informal settings. As we celebrate Earth Day, I would like to share some reflections on evaluating these experiences.

Lessons Learned: Informal learning settings are critical places for learning about the environment and about actions to protect it. Informal learning settings offer opportunities for “free-choice” learning, where the learners choose and control what they learn. They are typically institutions such as zoos, botanic gardens, aquariums, and museums, distinct from formal educational settings like schools. With hundreds of millions of visits to these institutions annually, they are prime settings for engaging the public in thinking about the environment. Conservation education is often a key aspect of these institutions’ programming, where visitors can learn about different forms of nature (e.g., animals, natural habitats), the threats they face (e.g., climate change), and actions to address them (e.g., reducing energy use). Educational experiences here are often referred to as informal science learning because of their connection with understanding natural systems.

Learning about the environment in informal learning settings can happen through a variety of experiences. Informal learning is socially constructed through a complex process that includes oneself, close others (friends, family), and more distant others (institution staff). Specific experiences, like animal encounters, hands-on interactions with flora in botanic gardens, or media-based elements (e.g., touch screens), enable visitors to engage with information about nature and the environment. Docents play an important role in helping visitors ‘interpret’ the messages embedded in these experiences and exhibits. Evaluators assessing the impact of the different experiences in informal settings need to be mindful of the multiple pathways through which visitors engage with environmental information.

Informal learning manifests broadly. Learning experiences in informal settings encompass outcomes beyond the learning traditionally associated with school-based education. In the process of making meaning of these varied experiences, learning is tied to multiple aspects of the human experience. Outcomes can be cognitive (e.g., gaining knowledge about climate change impacts), attitudinal (e.g., appreciating native landscapes), emotional (e.g., fostering empathy towards animals), or behavioral (e.g., signing a petition for an environmental cause). A mix of qualitative and quantitative methods is best suited to capture these complex learning experiences. By considering the range of learning possibilities, evaluators can design and conduct effective evaluations to understand how people engage with the multi-faceted topic of the environment.

Rad Resources: The following are great to get acquainted with evaluation in informal learning settings:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello from Laureen Trainer, Principal of Trainer Evaluation in Denver, and Joy Kubarek, Vice President of Learning for Shedd Aquarium in Chicago. We both came from museum education backgrounds and now find ourselves in the world of evaluation, where we work to improve museum practice, advance our industry, and demonstrate value. We are also passionate about making sure that those who do not consider themselves evaluators but who are tasked with evaluating in museums – i.e., museum educators – are brought into the discussion and supported by the evaluation community.

The environment of accountability and outcomes-based work came to museums a little more slowly than to other industries, but it is here now, and museums feel the pressure to demonstrate impact to their communities, learners, and funders. This demands that more attention be paid to evaluating programs, a task that generally falls to museum educators. However, museum educators rarely have “evaluator” in their job title; in their job descriptions it lands under “other duties as assigned.” All too often, this means that the people in charge of evaluation in museums are challenged by a lack of time, resources, and/or experience.

We hoped to address this situation by offering an entry point for museum educators who were grappling with what it meant to evaluate and where to start. Our guest-edited issue of the Journal of Museum Education (JME), Empowering Museum Educators to Evaluate, provides a range of case studies highlighting how others have faced the challenges of conducting program evaluation in museums and gives museum educators practical tools and techniques to maximize their efforts. The case studies emphasize how to approach evaluation within an environment of limited time and resources, from building staff capacity, to developing standardized evaluation methods, to communicating results.

Hot Tip: Just because your project falls through doesn’t mean it can’t live on in another form. Originally, we were asked to contribute evaluation case studies to a larger book project, which never came to fruition. Rather than letting our work, and the work of other contributing authors, sit on a hard drive, we approached the JME about publishing our case studies as an issue.

Hot Tip: The JME had never published a case-study-only issue before, but we believed this was the best approach for our audience. So we wrote a proposal outlining what we hoped to achieve and why we believed case studies were the appropriate method for delivering information to museum educators who were de facto evaluators, and the editorial board agreed.

Rad Resource: Check out this issue of the JME.

Rad Resource: Join us and learn more about this topic during the free Ed-Com Virtual Book Club on April 16th.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings to the aea365 community.  We are Kelly Washburn and Julie Carpineto from The Institute for Community Health in Cambridge, MA.  We are the local evaluators of an urban high school-based teen pregnancy prevention program in Greater Boston. We would like to share an example of a creative evaluation project developed with this program.

Over 90 high school students participated in a three-part collage evaluation project. This project was developed in collaboration with our program partners, who wanted to find new and creative ways to involve youth in their program evaluation.

Lesson Learned: Drawing from the principles of Photovoice and other arts-based evaluation techniques, we aimed to engage participants in a discussion of a major theme addressed in the program: healthy relationships. We collaborated with the program coordinator to develop the evaluation question: “What does a healthy relationship look like to you?” This question was chosen for a couple of reasons, the first being the timing of the evaluation in relation to the program sessions. Students had recently completed a series of classroom sessions focused on healthy and unhealthy relationships, and we were interested in understanding how those sessions shaped their vision of healthy relationships. Furthermore, data from focus groups conducted in previous years revealed that the relationship sessions were the ones that stuck out to students as having had the most impact on them personally. By asking this question, we were able to gain a more in-depth understanding of students’ opinions about this topic.

Students were asked to individually respond to this question by pulling images from magazines. They were then led in a large group discussion about why they chose their images.  Looking at their chosen images, students were asked to reflect on their thoughts, feelings, perceptions and experiences of relationships.

Students were then asked to work in small groups to attach their images to form a larger collective collage poster.

These posters are visual representations of the students’ responses to this question.

[Images: students’ collage posters]

The final phase of this project will include individual interviews with participating students to gather additional information about the process and their experience as program participants.

Lessons Learned:  

  • Magazine selection should be a part of initial planning.
  • Choose a diverse range of magazines. Make sure your selection of images is representative of your population!

Hot Tip:

  • This project can be done with a small budget, limited time and with almost any participant population.
  • This project is a fun and creative way to engage young people in evaluation.

Cool Trick: If available, take notes from the large-group discussion on a large whiteboard. This serves as another visual representation and can help students develop their collages.


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Susan Kistler and I am on a crusade to expand our reporting horizons. Earlier this month, we looked at little chocolate reports. Today, let’s consider adding videos to your evaluation reporting toolbox.

[Image: cover of the book “How to Shoot Video That Doesn’t Suck”]

Get Involved: But first, a little incentive for you to share your best alternative reporting ideas – and possibly get a reward for doing it. In the comments on this blog, or via Twitter using the hashtag #altreporting, share either (a) your best unique evaluation reporting idea, or (b) a link to a great alternative evaluation report, and in either case note why you love it. I’ll randomly draw one winner from among the commenters/tweeters and send that person a copy of “How to Shoot Video That Doesn’t Suck,” a book that can help anyone create video that isn’t embarrassing. Contribute as often as you like, but you will be entered only once in the random drawing on May 1.

Back to our programming. If you are reading this via a medium that does not allow you to view the embedded videos, such as most email clients, please click through to the blog now by clicking on the title of the post.

Rad Resource – Unique Reporting Videos: Kate Tinworth, via a post on her always thought-provoking ExposeYourMuseum blog, recently shared three wonderful short video reports made by her audience insights team when she was working at the Denver Museum of Nature and Science. Each uses everyday objects to help visualize evaluation findings in an engaging way.

This video is my favorite of the three. It introduces the evaluators, reports demographics via a stacked bar chart built from jellybeans, and is at once professional and accessible.

Cool Trick: Kate’s team met museum volunteers and staff at the door with small bags of jellybeans that included a cryptic link to the report in order to get people to view the video.

Rad Resource – Unique Reporting Videos: This video from a team in Melbourne, Australia, shares findings from an evaluation of a primary school kitchen gardening program. It introduces the key stakeholders and deepens our understanding of the program without listing its components.

Rad Resource – Unique Reporting Videos: I wrote before on aea365 about getting this mock reporting video made for $5. I can still envision it embedded on an animal shelter’s website, noting how the shelter is using its evaluation findings. My favorite part is that it talks about evaluation use – how things are changing because of the evaluation at a small business.

Rad Resource: Visit the Alternative Reporting – Videos Pinterest Page I’m curating for TheSmarterOne.com for more reporting video examples and commentary.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

