AEA365 | A Tip-a-Day by and for Evaluators

Hi, my name is Jayne Corso, and I am the Community Manager for AEA.

Photos used to be the most valuable asset in a Facebook post, and they are still the most common type of post on Facebook. However, overuse has saturated Facebook feeds with photos, which means there is lots of competition for your clicks, likes, and shares. Because of this trend, videos are emerging as the most effective way to reach your followers on Facebook. Here are a few tips for posting effective videos on Facebook.

Hot Tip: Keep it short

Facebook videos should be short and to the point. The attention span of your followers is usually less than a minute. You should also identify your topic early in the video to capture their interest.

Hot Tip: Uploading to Facebook

Facebook often favors videos that are posted natively to its platform. A post with a video uploaded directly to Facebook will outperform any photo, link, or even an external reference to a video. Uploading directly to Facebook gives your followers the best opportunity to see your post.

Hot Tip: YouTube  

If you decide to house your videos on YouTube, here are a few tips to keep in mind.

  • Include a description with your video
  • Create a short title that sums up the topic of your video
  • Use keywords that relate to the topic of your video
  • Add a date and location to your video for reference

I hope these simple video tips are helpful! Please share your tips or questions in the comments.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Anne Gienapp and Sarah Stachowiak from ORS Impact, a consulting firm that helps organizations use data and evaluation to strengthen their impact, especially in hard-to-measure systems change efforts.

Ten years ago, when the field of advocacy and policy change was first coalescing, a number of excellent field-building publications helped make the case for the value of theory of change, identification of interim outcomes, and the application of new tools and methods to fit the dynamic and adaptive space of advocacy efforts.

As the field has grown, so has the number of resources and frameworks that evaluators can use to deepen their evaluative practice in this space.  If you are like us, you probably have a “good intention” reading pile somewhere, where you have taken note of some of these as they were initially disseminated.  To round out the APC TIG week, we’ve listed three of our favorite newer resources that expand upon earlier work that helped define the field of advocacy evaluation.

Rad Resources

  • Beyond The Win: Pathways for Policy Implementation. While early advocacy evaluation primarily focused on unique campaign wins, there has been increasing acknowledgement that understanding more than legislative wins would strengthen advocacy and policy change theories of change and evaluation designs.  The Atlas Project supported this publication to help identify ways to understand key aspects of policy implementation.
  • Assessing and Evaluating Change in Advocacy Fields. Early on, there was agreement that advocacy capacity could be a legitimate and important advocacy outcome.  Jewlya Lynn of Spark Institute expands upon that notion with an evaluation framework for funders who recognize that a long-term strategy for meaningful and sustained policy change can include building the collective capacity and alignment of a field of individuals and organizations toward a shared broad vision.
  • Measuring Political Will: Lessons from Modifying the Policymaker Ratings Method. While Julia Coffman and Ehren Reed’s original Unique Methods in Advocacy Evaluation first shared the idea of Policymaker Ratings, there hasn’t been more public writing about it since.  This piece shares lessons learned about putting the method into practice in various circumstances, including some things to do—and things to avoid—if you want to implement it.

This is certainly not an exhaustive list; for more rad resources, be sure to check out the Center for Evaluation Innovation, the Point K resource page, and the Atlas Project website.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Carlisle Levine, President and CEO of BLE Solutions, LLC. We offer evaluation, applied research, and technology services to help organizations increase their effectiveness and contribute to better outcomes. I specialize in global advocacy, peacebuilding, and strategic evaluation.

A tremendous challenge in advocacy evaluation is identifying links between advocacy activities and changes in people’s lives, given the many factors that are involved and the time it takes for change to come about. The Most Significant Change approach can help respond to this challenge.

Rad Resource: The Most Significant Change (MSC) approach, an inductive, participatory outcome monitoring and evaluation approach, was developed by Rick Davies and then widely publicized in a guide co-authored with Jess Dart. It uses storytelling to gather evidence of intended and unintended, as well as positive and negative change. The stories are then reviewed and analyzed by a core team to identify the most significant change from their point of view. Importantly, MSC is not a standalone method. Rather, it can point to outcomes that require further validation using more deductive methods.

The approach involves 10 steps, according to the MSC Guide:

[Graphic: the 10 steps of the Most Significant Change approach]

Lessons Learned

  • In evaluating advocacy efforts, I first use methods that help me identify the contribution that advocacy efforts have made to policy changes. I then use MSC to explore early evidence of how those policy changes are affecting people’s lives.
  • In my design, I do not define domains of change, but wait to see what domains emerge from the stories themselves.
  • By triangulating a storyteller’s story with information provided by people familiar with the storyteller’s life, I increase the story’s credibility.
  • With my clients, I use the selection process to help them understand the variety of changes in people’s lives resulting, at least in part, from their targeted policy change. I also conduct a meta-analysis that shows them trends in those changes. With this information in hand, they can reinforce or adjust their policy goals and advocacy efforts in order to contribute to the types of change they most desire.

Hot Tip: To build trust with storytellers, I partner with story collectors who speak their language and are familiar with their context. The more storytellers believe a story collector can relate to their reality and will not judge them for it, the more open storytellers will be.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Oscar Espinosa from Community Science. We recently evaluated the effectiveness of professional development programs in various sectors that seek to diversify their leadership or workforce to be more responsive to communities of color.

Hot Tips

  • Specify what program effectiveness means–to all stakeholders! A program’s intended objectives are oftentimes skewed to the perspective of the funder. As an evaluator, you need to consider the various program stakeholders and determine what effectiveness looks like for each of them. To that end, sessions to develop program logic models should be held with the funder and separately with other program stakeholders. Vetting and reconciling the models is an essential step to establish a good foundation before moving on to an evaluation design. Allocate enough time for this process, as reaching consensus can be a laborious task.
  • Capture participants’ accomplishments but don’t downplay challenges. Despite pressures from funders, who understandably want to highlight positive impacts, as an evaluator you have to identify unintended program consequences and areas for improvement. Data collection needs to focus on challenges participants experienced, including perceptions that activities were not tailored to people of color or their cultural or linguistic needs. Be prepared to have uncomfortable discussions about structural racism or equity issues. Doing this can lead to solid recommendations for program improvement.
  • Numbers and stories are BOTH essential. We were interested in what brought participants to the program; their expectations as compared to their actual experience; and the influence the program had on them. We found that a combination of forced-response survey items and open-ended, semi-structured interviews, administered before and after participants completed the program, was an effective way to get a full picture.

Lesson Learned: To effectively evaluate professional development programs, one needs to take into account both the funding organizations’ policies and culture and the needs and backgrounds of people of color.  The evaluator’s art is the ability to extract the voice of program participants from the noise produced by program requirements and the institutional context. Ultimately, a program’s effectiveness should be judged on the extent to which it motivates people of color to continue to take on new challenges and advance in their profession.

Rad Resources

  • Handbook on Leadership Development Evaluation is a comprehensive resource filled with examples, tools, and the most innovative approaches to evaluate leadership development in a variety of settings.
  • Donald L. Kirkpatrick’s Evaluating Training Programs focuses on evaluation approaches to measuring reaction, learning, behavior, and results.
  • Special Issue: Building a New Generation of Culturally Responsive Evaluators through AEA’s Graduate Education Diversity Internship Program.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, we are Kat Athanasiades and Johanna Morariu from Innovation Network, an evaluation firm that works with philanthropic and nonprofit organizations, especially those engaged in advocacy.

Advocacy evaluation has been a difficult field in which to generate shareable lessons. Many organizations and campaigns are concerned that sharing their “secrets” (information usually divulged in evaluation reports, such as methods, approaches, and incremental wins) will give the “enemy” valuable information that might undermine future phases of their work. Given this dearth, we saw an opportunity to support more general learning. We did so by looking at the past ten years of The Atlantic Philanthropies’ immigration reform advocacy grantmaking in the US, using the campaign as a case study to highlight important decision points that a broader audience of nonprofits, funders, and evaluators could apply to their own work, regardless of the issue.

Hot Tip: Get more mileage out of your evaluations! With good planning, one evaluation may result in a paper for the board, a blog post for the evaluation community, and a visualization to send out over Twitter. It’s not always feasible to do this, given budget and confidentiality constraints, but when you can, do it! You will add to the advocacy evaluation field and contribute to improved practice among advocates.

Our evaluation project resulted in a fairly traditional report documenting the history of US federal immigration reform. The report is most likely to be used by close-in project stakeholders (Atlantic and immigration reform advocates), but to expand the relevance of its strategic lessons to a broader audience, we pulled Atlantic’s key decisions out of the report and elaborated on the implications, pros, and cons of each decision. We also developed discussion questions that evaluators can use as a facilitation guide with partners who are considering advocacy work.

Rad Resource: We’ve also created a Funder Discussion Guide to accompany the traditional report. As an example, one decision that any advocacy funder may make is whether to emphasize grassroots or grasstops funding strategies. The decision partly rests on whether the issue of interest has become politicized and what opportunities current political realities afford. Using the questions in the Guide, an evaluator can help walk an advocacy partner through a theory of change process, thinking through the context, assumptions, and needs that underlie their work.

[Image: cover of the Funder Discussion Guide]

Lesson Learned: Share lessons! We started this post by explaining that there’s a dearth of information in the advocacy evaluation field. We invite you all to share how you have made use of evaluation projects to expand learning and use of your findings.


The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Rhonda Schlangen (US), Julie Tumbo (Tanzania), and Ben Awinda (Tanzania), evaluation consultants specializing in advocacy and development.  In Tanzania and other countries, civil society advocates struggle with developing skills and finding support for their efforts. We’re using the dissemination phase of an evaluation of three groundbreaking Tanzanian advocacy campaigns to support advocate capacity-building. This encouraged us to rethink the usual presentation-and-distribution-of-reports approach and conceptualize dissemination as a platform on which to build learning.

Lesson Learned: Take a campaign approach to dissemination. For this project, we designed the Mwanaharakati (“Activist”) Campaign to both share the evaluation results and involve civil society actors, particularly those in Tanzania, in conversations about the case studies and use of the information to advance their own advocacy work.

Hot Tips:

  • Start designing the dissemination campaign plan by first asking advocates how they and their constituencies would like to receive the information. We discovered that Facebook is out, WhatsApp is in, and everyone loves cartoons.
  • Keep the conversation going! Ben poses biweekly questions related to issues highlighted in the case studies on the campaign’s social media accounts in both Kiswahili and English.

Rad Resource: Graphic novels!  We wanted to reach young people, individuals with limited literacy abilities, and people with limited patience to read long documents, so we worked with a talented cartoonist to develop comic versions of the case studies. It was important to work with a local artist who understood the political context and the cultural nuances of images and colors. He accurately and compellingly conveyed the advocacy issues and tactics.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Welcome to the Advocacy and Policy Change (APC) TIG week on AEA365!  I’m Jared Raynor, Director of Evaluation at TCC Group and co-chair of the TIG.  Our TIG is celebrating its 10-year anniversary at this fall’s AEA conference.  This week’s blog posts share some of the great insights gained regarding evaluation’s role in advocacy work around the world.

In preparation for the TIG week, I asked for reflections from some of the people who were with me when the TIG formed.  Tom Kelly offered a striking insight: “We’d always said we are not inventing any new tools of evaluation but were looking for ways that evaluation can be applied in complex, rapidly changing policy advocacy environments—although look at the new tools that have come along.”  To start the APC week, I wanted to reflect on a few of the amazing developments in our field.

Rad Resource: Over the course of the TIG’s development we’ve been asked on occasion how to share resources within the AEA community.  On each occasion, we have opted to promote existing aggregators of information.  Innovation Network’s Point K Learning Center has consistently gathered resources from the field and The Center for Evaluation Innovation (CEI) has supported the development of new material. Both make the information freely available.

Rad Resource: One of the early pieces of writing on advocacy evaluation, The Challenge of Assessing Policy and Advocacy Activities, remains a great starting place.  The authors identify seven key challenges faced by foundations in advocacy evaluation, including complexity, the role of external forces, timeframe, shifting strategies and milestones, and attribution.  More recently, the Overseas Development Institute did a comprehensive review of Monitoring and Evaluation of Policy Influence and Advocacy that looked at trends, approaches, frameworks, and methods for evaluating advocacy.  And, coming out later this year is the first book on advocacy evaluation, by Annette Gardner!

Lessons Learned: In late 2015, the Aspen Planning and Evaluation Program and CEI convened a small group of advocacy evaluators to review the state of the field. I want to share three things that struck me from that conversation.  First, advocacy evaluators need to become more savvy at eliciting theories of change alongside theories of action.  We are fairly adept at the latter and frequently let the former slide as too abstract.  Second, we should continue to push ourselves to incorporate counterfactual thinking into evaluations.  Third, we should constantly consider the political implications of our work—how it is positioned, whose voice is prioritized, and what bias we bring to the advocacy work.

We have come a long way and I look forward to where we as a field go next!

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings loyal AEA365 readers: I’m Sheila B Robinson, our Lead Curator and sometimes Saturday contributor with an invitation to compose a very special AEA365 article.

Who is your favorite evaluator? Who have you learned from? Who do you follow, collaborate with or just plain admire for their unique and significant contributions to our field?

As you may remember, to coincide with Memorial Day in the US, AEA365 featured two weeks of blog posts remembering our evaluation pioneers who have passed. Each post introduced the honoree and described the individual’s pioneering and enduring contributions.

Get Involved: To coincide with Labor Day in the US (September 5, 2016), AEA365 will feature two weeks of blog posts honoring our living evaluation pioneers. Wouldn’t you like the opportunity to publicly acknowledge someone whose evaluation work has had an important impact? Because there are so many potential honorees, we are letting YOU identify them!

Hot Tip: Are you interested in contributing a post to this special series? It’s easy! There are only two things you’ll need to do:

  1. Email me at aea365@eval.org by Friday, July 22 to identify your honoree and commit to composing the blog post (do this quickly – we will feature only one post per honoree!).
  2. Email me your blog post by Friday, August 5.

Hot Tip: Posts should be under 450 words and use the following outline and headings:

Introduction:

Briefly introduce yourself (see any AEA365 blog post for the general format and tone of introductions) and the person your post will be honoring.

Why I chose to honor this evaluator:

Contributions to our field:

Resources:

List (and link to!) any related resources: e.g., the honoree’s publications, website, LinkedIn, Twitter handle, etc.

That’s it!! Remember, we can only accept 12 entries, and will honor exactly 12 evaluation pioneers, so choose your honoree and let me know via email ASAP!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! My name is Dana Keener Mast. I am a senior manager at ICF International in our Atlanta office. ICF is currently working with the Healthcare Georgia Foundation to conduct a multi-year, multi-site evaluation of their childhood obesity prevention program. As part of this project, ICF provides ongoing evaluation technical assistance to four Georgia community grantees working diligently to improve policies and environments that make it easier for people to make healthier choices. My team at ICF for this project includes Carole Harris, Cathy Lesesne, Thearis Osuji, Stacey Willocks, Toni DeWeese, and Shelby Cash.

Lesson Learned: Measuring change over time in environments and policies is challenging! First, many factors other than the “intervention” can influence environmental and policy outcomes—many of which are beyond the control of any one person or organization. Second, numerous steps typically need to occur to achieve a particular change in a policy or environment, and those steps may not unfold in a neat, linear sequence.

Hot Tip: When evaluating environmental and policy outcomes, use a process that can document interim steps, progress, setbacks, and achievements that occur over time. Early in the project, we worked with each grantee to define the milestones they needed to achieve in order to reach their desired environmental and/or policy outcomes. Once every quarter, our team reviews and documents their progress on those milestones. This allows us to capture and report significant progress achieved over time, even if the ultimate outcomes don’t turn out exactly as intended. This process also helps the grantees identify challenges and struggles they need to overcome (or work around) to keep the process moving towards their goals.

I look forward to seeing you here in Atlanta this October at AEA!

While you are in Atlanta! Speaking of transformative and healthy environments, be sure to take some time to walk, run, or bike on the Atlanta Beltline while you are here. The Atlanta Beltline is touted as the most comprehensive transportation and economic development effort ever undertaken in the City of Atlanta.  https://www.youtube.com/watch?v=nwL2E0i_kIM

We’re looking forward to October and the Evaluation 2016 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.


Hi! We are Krista Collins, Director of Strategy & Innovation at Boys & Girls Clubs of America (BGCA), and Mike Armstrong, Vice President of Club Operations and Evaluation at Boys & Girls Clubs of Metro Atlanta (BGCMA). Together we seek to understand how our professional development courses and youth programs work in tandem to support the 58,000 staff members, in local communities and across the nation, who create opportunities for approximately 4 million youth each year to achieve great futures through our priority focus on Academic Success, Good Character and Citizenship, and Healthy Lifestyles.

Since 2011, BGCA has conducted the annual National Youth Outcomes Initiative (NYOI) to measure how effectively the Club experience is being implemented and its impact on our members. Built on research-informed indicators of youth achievement that align with our priority outcomes, and benchmarked against other leading national youth surveys, NYOI data is used to drive continuous quality improvement efforts and communicate our impact to key stakeholders across the youth development field.

Rad Resource: Looking for comparison data to understand the impact of youth development programs? Download our 2015 National Outcomes Report: Measuring the Impact of Boys & Girls Clubs. A few highlights from our report:

  • 74% of members aged 12-17 who attend the Club regularly say they earn mostly A’s and B’s, compared to 67% of youth nationally.
  • By 12th Grade, Club members’ rate of monthly volunteering is more than double that of the national average for same-grade peers.
  • Teens who stay connected to the Club as they get older seem better able to resist high-risk behaviors than teens nationally at the same ages.

Hot Tip: Sharing Club-level results and training on data utilization promotes survey participation.

In four years the number of NYOI participants has grown from 2,800 Club members to 165,000 – that is an increase of almost 6000%! Much of this growth can be attributed to BGCA’s efforts to demonstrate the value of data to local Clubs. BGCA prepares reports for each participating Club organization, and provides local trainings and consultations to ensure that the results are interpreted correctly and used to drive improvement.

Hot Tip: Data utilization requires learning that is strategic and intentional.

To fully realize the value that formal measurement and evaluation bring, local Clubs have employed continuous quality improvement systems that integrate knowledge generation and decision-making at all levels of their organizations. Decision-making that affects everything from resource allocation at the corporate level to programmatic foci and staff assignments at the Club-site level only occurs if a formal and iterative process of reflection and dialogue is practiced.

We’re looking forward to October and the Evaluation 2016 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.
