AEA365 | A Tip-a-Day by and for Evaluators

Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the second of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies as well as broader initiatives involving multiple organizations.

This post presents the approach to the development of result chains and their integration within a Theory of Change (TOC) from a program perspective.

Step 2 – Developing the Program Theory of Intervention (PTI)

Program interventions are best modeled using chains of results, with a program delivery (activity – output) sequence followed by an outcome sequence linking outputs to the program’s intended result (final outcome). Most models use only two levels of outcomes, although some authors advocate using as many as five. However, three levels of outcomes would seem to be optimal, as this allows chains of results to be properly linked to broader TOCs, with the link made through factors (immediate outcomes) that influence behaviors (intermediate outcomes) in target populations in order to resolve the specific societal issue (final outcome) that gave rise to the program (see Figure 2a).

 

In chains of results, outputs are the products (and services, delivered through a push-pull approach) that the program delivers to target populations. They mark the transition between the sequence controlled by the program (i.e., the program’s control zone) and the sequence controlled by recipients (i.e., the program’s influence zone).

Logic models developed using this approach help clarify how the program intervention is assumed to achieve its intended results (i.e., the nested program theory of intervention) under the conditions defined in the broader TOC (see Figure 2b).

Developed this way, logic models resolve a number of issues:

  • The models provide a clear depiction of the chains of results and of the underlying working assumptions or hypotheses (i.e., salient causal links) of program interventions, as well as of their contribution to a common final result that is specific to the program;
  • The models provide the basis to identify comprehensive sets of indicators supporting ongoing performance measurement (i.e. monitoring) and periodic evaluations, from which a subset can be selected for reporting purposes;
  • Indicators can also cover external factors/risks that have (or may have) an ongoing influence on program results and that should be considered (i.e., included as control variables) in analyses to obtain more reliable assessments of program effectiveness.

However, developing a logic model that is a valid representation of program theories of intervention is easier said than done. The next AEA365 post will offer some suggestions for achieving that goal. Further, since logic models focus heavily on program outcomes, they provide very little information on delivery processes in support of management oversight and control. Subsequent posts will discuss how program delivery can be meaningfully addressed and properly integrated into program theories of intervention.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

AEA365 Curator note: Today begins a special theme week with an extended (7 day) series on one topic by one contributing author. 

Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the first of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies and broader initiatives involving multiple organizations.

Step 1 of 7 – Developing the Theory of Change (TOC)

Effectively addressing an issue normally requires first understanding what you are dealing with. Models are generally used in evaluation to help clarify how programs are meant to work and achieve intended results. However, much confusion exists between alternative approaches to modelling, each based on different ways of representing programs and the multiple underlying assumptions on which their interventions are based.

Top-down models, such as the one presented in Figure 1a, usually provide a narrow management perspective relying on inductive logic in order to select the evidence (based on existing knowledge and/or beliefs) that is necessary to support ex ante the strategic and operational planning of program interventions. Assumptions are then entirely about whether the program created necessary and/or sufficient conditions (as discussed in the TOC literature) for achieving intended results. In this context, the role of ex post evaluation is too often limited to focusing on program delivery and vindicating management’s contention that observed results depend to some (usually unknown) extent on existing program interventions.

As a research function, evaluation should also support (re)allocation decisions being made by senior government officials regarding the actual funding of public programs. However, this stronger evaluation role would involve reliably assessing individual program contributions to observed results in a given context, and require properly measuring real/actual program impacts while taking external factors into account.

The first difficulty in achieving this task is recognizing that Randomized Controlled Trials (RCTs) are rarely able to completely eliminate the influence of all external factors, and that the statistical ‘black box’ approach they use prevents reliably transposing (i.e., forecasting by extrapolating) observed results to situations with varying circumstances. Generalization is then limited to a narrow set of conditions formulated as broad assumptions about the context in which the program operates. Providing a more extensive base to reliably measure program effectiveness would entail, as a first step:

  1. developing more exhaustive Theories of Change (TOC) including all factors that created the need for program interventions and/or that likely have an influence on the issue or situation being addressed by the program; and,
  2. determining which factors/risks within the TOC are meant to be explicitly ‘managed’ by the program, with all others becoming external to the program intervention.

Figure 1b shows what a program logic model would normally look like at the end of this first step.

The next AEA365 post will articulate the approach to the development of the more detailed Program Theory of Intervention (PTI) that is embedded within the broader TOC.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Season’s Greetings and Happy Holidays, loyal AEA365 readers! I’m Sheila B Robinson, Lead Curator and sometimes Saturday contributor, with a few tips and tools for blogging in 2018. First off, content is king with blogging. Always. There’s no other rule as important as composing well-written, well-conceived content that is relevant and relatable to your audience.

Hot Tips: Here are a few up-to-date tips from 10 New Rules for Blogging Towards 2018, where author Yvette McKenzie advises:

1.) Long form is gaining traction.

…“longer reads” or long-form content has been gaining traction for several years now. According to Kissmetrics, “Long-form content gets you more online visibility (social shares, links), more proof of your authority and industry expertise, and more material for altruistic community building and engagement.” It might not suit every type of post, but long-form content should be included as part of your broader blogging and content strategy.

2.) Consider a vlog or podcast.

…not everyone engages with blocks of text as a preferred medium. Many people prefer on-the-go content, including visuals like infographics, audio-only media such as podcasts, or easily consumable videos. Generally speaking, a solid mix of these elements might gain you the best traction, but knowing your audience and how they best engage should be what guides your strategy.

3.) Your audience always comes first.

…knowing your audience/s will always be crucial to your success. Blogs can be a great way to start a conversation, engage with an audience and to state your authority and expertise on a subject. Consider your audience first and try to “solve their problems” by providing the answers they are seeking. Putting your audience first will always be the cornerstone to successful blogging, so make audience data tracking something you incorporate often into your content strategy.

Next up, author Jasmine Demeester, in Blogging Trends 2018-2019: Latest Blogging Trends, agrees with using long-form posts and video, and also offers this:

Cool Trick: Images, Graphics, Illustrations – Creativeness still Ruling Blogging Trends 2018

Since readers now have a wide range of options, a blog would need much more beauty in 2018. By this, we mean that bloggers would have to spiffy up their platforms with beautiful illustrations, images, and anything that could immediately pull in a visitor…Flat designs like these are also more easily downloadable and integrated with any kind of content you have. Plus, they are immediately viewable by any first-time visitor. 

Don’t forget to check out AEA’s page of bloggers and tweeters!

Wishing all of you a happy, healthy 2018!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Justin Sullivan, Libby Smith, and Kate Bentley from the Applied Research Center at the University of Wisconsin-Stout. When we attended Evaluation 2017, this year’s AEA annual conference, we knew we wanted to use the opportunity to get more active on Twitter. I (Libby) had avidly followed the conference hashtags in past years, but hadn’t jumped on the tweeting bandwagon. I knew that there was a growing community of evaluators on Twitter, and we all wanted to be more connected to our peers and our field. With the help of our designer and social media manager, Kate Bentley, we devised a plan and dove in during the conference. We tweeted and followed others throughout the week. When we got back, we did what evaluators do…we pulled the data, analyzed it, and reported out! Here is our infographic and some hot tips:


Hot Tips:

  • You can scrape data from Twitter using R to analyze trends. This approach allows you to customize your search to focus on hashtags (#Eval17) or specific Twitter users. The resulting data set will include tweets, user names, and like and retweet data. You can also pull data to create a snapshot of what’s happening now or track trends over time.
  • Here is a step-by-step guide on how to connect R to Twitter to pull data. This guide is designed for first-time R users. Learning how to code in R can be daunting; it comes with a steep learning curve. This guide includes a graphical user interface and code that you can simply copy and paste into R to get things going quickly. After working your way through this exercise, you will have a basic R skillset you can use to try other things (for a sense of what such a pull can look like, see the sketch after this list).
  • When creating an infographic, it’s best to start by choosing a color palette of 4-6 colors. Choose colors that are complementary (think opposite sides of the color wheel) and suitable to your project. If you are working with an organization, use the palette they use for their branding.
  • The Noun Project has arguably the best icons on the web. You can search from over a million icons from thousands of authors. Licenses are available under Creative Commons, and there are both free and paid versions. These are high-quality icons that will make your project stand out. Your icons should match the data you are presenting in content and context. Download a few icons and start thinking about how you plan to lay out your content.
  • Use PowerPoint to start making infographics. It’s a simple interface with useful tools to move things around. Be sure to choose a catchy title, infuse a bit of variation in the size and scale of your icons, and try not to have too many repeating graph selections.
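
To give a flavor of what pulling conference tweets can look like, here is a minimal R sketch. To be clear, this is an illustration rather than the code from the guide linked above (which may use a different package); it relies on the rtweet and dplyr packages, assumes you have authorized a Twitter API token, and the column names can vary by package version.

# Minimal, hypothetical sketch of pulling and summarizing #Eval17 tweets.
# Assumes the rtweet and dplyr packages are installed and a Twitter API
# token has been authorized; column names can differ between versions.
library(rtweet)
library(dplyr)

# Pull recent tweets containing the conference hashtag
eval_tweets <- search_tweets("#Eval17", n = 1000, include_rts = TRUE)

# Who tweeted most often?
top_tweeters <- eval_tweets %>%
  count(screen_name, sort = TRUE)

# Which tweets got the most engagement (likes + retweets)?
top_tweets <- eval_tweets %>%
  arrange(desc(favorite_count + retweet_count)) %>%
  select(screen_name, created_at, text, favorite_count, retweet_count) %>%
  head(10)

# Tweet volume by day, for a quick trend snapshot
tweets_per_day <- eval_tweets %>%
  mutate(day = as.Date(created_at)) %>%
  count(day)

Summaries like these (tweets per user, engagement per tweet, volume per day) are the kind of raw material an infographic can be built from.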

Thanks for reading and we look forward to seeing you at #Eval18!! You can find us on Twitter @arcevaluation and online at ARCevaluation.com.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Kelly Robertson and Lori Wingate, and we work at The Evaluation Center at Western Michigan University and EvaluATE, the National Science Foundation-funded evaluation resource center for Advanced Technological Education (ATE).

Rad Resource:

We’re excited to announce our new rad resource, the “Checklist of Program Evaluation Report Content.” We created this checklist to address a need for practical guidance about what should go in a traditional evaluation report—the most common means of communicating evaluation results. The checklist is strictly focused on the content of long-form technical evaluation reports (hence, the name). We see the checklist as complementary to the exciting work being done by others to promote the use of evaluation through alternative ways of organizing, formatting, and presenting data in evaluation reports. If you want guidance on how to make your great content look good, check out the new Evaluation Report Guidance by the Ewing Marion Kauffman Foundation and Evergreen Data.

How is our checklist on reporting different from others you may have come across?

  • It not only lists key elements of evaluation reports, but it also defines these elements and explains why they are relevant to an evaluation report.
  • Its focus is not on judging the quality of a report. Rather, our checklist is intended to support practitioners in making informed decisions about what should be included in an evaluation report.
  • It’s not tailored to a specific type of program or evaluand and is presented as a flexible guide rather than rigid specifications.

We hope multiple audiences find the checklist useful. For example, new evaluators may use it to guide them through the report writing process. More experienced evaluators may reference it to verify they did not overlook important content. Evaluators and their clients could use it to frame conversations about what should be included in a report.

Lesson Learned:

It takes a village to raise a great checklist. We received feedback from five evaluation experts, 13 of our peers at Western Michigan University, and 23 practitioners (all experts in their own right!). Their review and field testing were invaluable, and we are so grateful to everyone who provided input—and they’re all credited in the checklist.

Like checklists? See the WMU Evaluation Center’s Evaluation Checklists Project for more.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Color Scripting by Wendy L. Tackett

I’m Wendy Tackett, the president of iEval, sometimes faculty member at Western Michigan University, and lifelong Disney fan!

Lessons Learned: You never know where you’re going to run into inspiration for your evaluation work, so keep your eyes, ears, and mind open. In 2015, I went to the D23 Expo in Anaheim, California. I went purely for myself, since I love Disney everything, and I never dreamed I would learn something that could be applicable to my evaluation practice.

In a session with the Pixar team, I learned about a technique created by Ralph Eggleston called color scripting. Color scripting is a type of storyboarding, but Ralph would change the main colors of each panel to reflect the emotion the animated film was supposed to portray at that time. It helped the Pixar team understand what was going on in the film emotionally, and it also made it easier to create a musical score to enhance those emotions.

A few days later, I was taking notes on the engagement and enthusiasm of a large audience. I created some metrics on the spot, including the number of people on their mobile devices, the number of people leaving the event, murmuring, applause, etc. Then inspiration hit me, and I used the color scripting idea to create a timeline of the event, highlighting who was presenting at different times and coloring the data. The client felt it was an extremely useful overview of how the audience related to the event, and the discussion that ensued really helped them figure out how to change the event for the next time.

Since then, my adaptation of color scripting has evolved, and my team has used it on different projects including professional development training, farmers’ markets, nutrition lessons, etc. Recently, we asked K-6th grade students what they learned at the end of each nutrition lesson, then analyzed the data by lesson topic, grade level, and topic order. These graphs resulted in thoughtful conversations with the nutrition educators about what students think, the impact of specific lessons, and the progression of lessons. The color scripting graphs visually indicated the percentage of students expressing changes in knowledge (blues, with darker blues indicating more substantial knowledge change) or behavior (greens, with darker greens indicating more substantial behavior change).
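
If you want to experiment with a similar graph yourself, here is a minimal sketch in R using ggplot2. The lesson names, percentages, and shades below are made up purely for illustration (they are not our actual data or exact palette); the idea is simply to map knowledge changes onto blues and behavior changes onto greens, with darker shades for more substantial change.

library(ggplot2)

# Hypothetical example data (illustrative only)
example <- data.frame(
  lesson   = rep(c("Lesson 1", "Lesson 2", "Lesson 3"), each = 4),
  category = rep(c("Knowledge (minor)", "Knowledge (substantial)",
                   "Behavior (minor)", "Behavior (substantial)"), times = 3),
  pct      = c(30, 20, 25, 10, 25, 30, 20, 15, 20, 35, 15, 25)
)

# Stack the percentages for each lesson; darker blues/greens mark
# more substantial knowledge/behavior change
ggplot(example, aes(x = lesson, y = pct, fill = category)) +
  geom_col(position = "stack") +
  scale_fill_manual(values = c(
    "Knowledge (minor)"       = "#bdd7e7",
    "Knowledge (substantial)" = "#2171b5",
    "Behavior (minor)"        = "#bae4b3",
    "Behavior (substantial)"  = "#238b45"
  )) +
  labs(x = NULL, y = "Percent of students", fill = NULL)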

Hot Tip: When you learn (or create!) a new technique, try applying it in different contexts to 1) practice using it, 2) identify where it’s most meaningful in analyzing data, and 3) determine various ways clients will be able to use it.

Rad Resource: If you missed my presentation at AEA 2017 on Color Scripting and would like to download it, you can grab it and other presentations at iEval’s website. The presentation includes detailed examples of how you can use Color Scripting. You can also grab the step-by-step directions on how to do Color Scripting!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Isidro Maya Jariego

I’m Isidro Maya Jariego, Associate Professor, Social Psychology Department of the Universidad de Sevilla (Spain). I’m participating in a project to promote the adoption of open educational resources (OER) and open educational practices (OEP) for improving the quality of education of universities in Egypt, Jordan, Morocco and Palestine. OpenMed is an international cooperation project co-funded by the Erasmus+ Capacity Building in Higher Education programme of the European Union.

Throughout project implementation, we observed that higher education institutions in the Middle East and North Africa (MENA) region face problems of massification and occasionally serve large or remote rural areas that are difficult to access. Massive Open Online Courses (MOOCs) and digital media help address these kinds of difficulties; at the same time, they offer opportunities for educational innovation.

This allowed us to observe how the project was adapted and incorporated into four different national contexts.

Lessons Learned:

The degree of internationalization of a university is a good indicator of its readiness to adopt OER and OEP. Universities that are bicultural, use a second language in teaching, have bilateral agreements with universities outside the country, have a culturally diverse teaching staff, or send and receive students in academic exchanges tend to be more receptive to the incorporation of open educational resources.

During implementation of the OpenMed project, we realized that participating universities and teachers tended to be more international in character than other local universities and teachers. Internationalization is thus an indirect indicator of readiness to adopt OER. It also seems to be a self-reinforcing process: international experiences predispose institutions to incorporate OEP, and the incorporation of OEP contributes to the university’s internationalization.

https://www.researchgate.net/publication/320024153_Localising_Open_Educational_Resources_and_Massive_Open_Online_Courses

Hot Tip: Focus on organizational dynamics and local relevance. In southern Mediterranean countries, there is usually greater deference to the teacher’s authority, and the cohesion and harmony of the group carry greater weight than individual interests, in comparison with Europe and North America. However, beyond these cultural peculiarities, we have learned that organizational factors are key. Institutional constraints in each university (e.g., textbook use policies and incentives) are determinants of the likelihood of content reuse. On the other hand, when reusing content it is also opportune to incorporate locally relevant examples connected to local needs.

Hot Tip: Prevent exclusion of more local universities. Local universities that are less internationally connected run the risk of being excluded from the processes of educational innovation and the incorporation of open education practices. These universities tend to be somewhat disconnected from the elite higher education institutions in their country. They form a high-risk group in terms of access to quality education, one that requires specific actions.

Rad Resources:

The OpenMed project has produced useful resources for anyone planning to implement or evaluate a program in the MENA region.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

My name is Kylie Hutchinson and I’m an independent evaluator with Community Solutions Planning & Evaluation.  Every year I adapt a Christmas carol to share with my colleagues and clients.

Season’s greetings, and enjoy!

Do You Read What I Read?

(sung to the tune of “Do You Hear What I Hear?”)

Said evaluator to the manager,

Do you hear what I hear?

Said in board rooms here and everywhere,

Do you hear what I hear?

Deci-sions made, with-out evidence,

It just doesn’t make any sense,

It just doesn’t make any sense.

 

Said the manager to evaluator,

Do you see what I see?

Sitting in my office everywhere,

Do you see what I see?

Stacks, and stacks, of boring long reports,

Why can’t you give me something that’s short?

Why can’t you give me something that’s short?

 

Said evaluator to the manager,

Listen to what I say,

I can give you just a two-pager,

Listen to what I say,

With lay-ering, it’s really up to you,

You can read as much as you choose,

You can read as much as you choose.

 

Said the manager to evaluator,

Do you know what I need?

Put your key findings in rever-se order,

Do you know what I need?

Recommenda-tions first, then put all the rest,

That would leave me much less stressed,

That would leave me much less stressed.

 

Said evaluator to the manager,

I will do as you say,

To promote use of eval everywhere,

I will do as you say,

Your time, your time, is valuable and short,

I won’t give you a lengthy report,

I won’t give you a lengthy report.

 

Rad Resource:  For tips on more effective reporting and alternatives to the traditional report, check out my new book, A Short Primer on Innovative Evaluation Reporting.

 

Rad Resource:  Liven up your office Christmas party with additional Christmas carols for evaluators and nonprofits.

 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Prentice Zinn.  I work at GMA Foundations, a philanthropic services organization in Boston.

I went to a funder/grantee dialogue hosted by Tech Networks of Boston and Essential Partners that discussed the tensions between nonprofits and funders about data and evaluation.

Lessons Learned:

Funders and their grantees are not having an honest conversation about evaluation.

A few people accepted this dynamic as one of the existential absurdities of the nonprofit sector.

Others shared stories about pushing back when the expectations of foundations about measurement were unrealistic or unfair.

Everyone talked about the over-emphasis on metrics and accountability, the capacity limits of nonprofits, and the lack of funding for evaluation.

Others began to imagine what the relationship would be like if we emphasized learning more than accountability.

As we ended the conversation, someone asked my favorite question of the day:

“Are funders aware of their prejudices and power?”   

Here is what I learned about why funders may resist more honest conversations with nonprofits about evaluation and data:

Business Conformity. When foundations feel pressure to be more “business-like,” they will expect nonprofit organizations to conform to the traditional business models of strategy developed in the late 20th century. Modern management theory treats organizational strategy as if it were the outcome of a rational, predictable, and analytical process, when the real world is messy and complex.

Accountability and Risk Management. When foundations feel pressure to be accountable to the public, their boards, and their peers, they may exert more control over their grantees to maximize positive outcomes.  Exercising fiduciary responsibility pressures funders to minimize risk by estimating probabilities of success and failure.  They will put pressure on grantees to provide conforming narratives based on logic models, theories of change, outcome measurements, and performance monitoring.

Outcomes Anxiety. Funders increase their demands for detailed data and metrics that indicate progress when they get frustrated at the uneven quality of outcome information they get from nonprofits.

Data Fetishism. Funders may seek data without regard for its validity, reliability, or usefulness because society promotes unrealistic expectations of the explanatory power of data. When data dominates the perception of reality and what we are seeing, it may crowd out other ways of understanding what is going on.

Confirmation Bias and Overgeneralization. When foundations lack external pressures or methods to examine their own assumptions about evaluation, they may overgeneralize about the best ways to monitor and evaluate change and end up collecting evidence that confirms their own ways of thinking.

Careerism and Self-Interest. When the staff of foundations seek to advance their professional power, privilege, and prestige, they may favor the dominant models of organizational theory and reproduce them as a means of gaining symbolic capital in the profession.

Rad Resource: Widespread Empathy: 5 Steps to Achieving Greater Impact in Philanthropy. Grantmakers for Effective Organizations.  2011.  Tips to help funders develop an empathy mindset.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Peace! My name is Dr. Monique Liston and I am a GEDI alum. I enjoyed every minute of my internship year and consider my cohort close friends in the professional and academic world. GEDI exposed me to evaluation as a profession. While I felt like I lacked evaluator skills because of the limitations of my graduate program, the GEDI program connected me to resources to help me increase my own capacity. The mentorship provided by the GEDI program leadership helped me to define myself as a professional, focused on racial justice and liberation, within the field of evaluation. Since I graduated from the program, I have applied the things that I have learned to continue my personal and professional development. Here are two HOT TIPS I have for new evaluators / new GEDI that I gained from my experience in the program.
Hot Tip 1: Follow up with anyone and everyone. I know that many people who know me would not believe that I was shy, but I am. The GEDI program put me in close contact with the heavy hitters in the evaluation field. While many members of my cohort had strong small-talk game, I often felt like I was missing out because my anxiety around meeting people kept me quiet in many social situations with people whose work I had admired. I opted, however, to make sure that I emailed after being in those spaces. A short note – “I saw you at X place. I appreciated that you said Y. I am working on Z.” – went a long way. I was able to develop relationships in a way that was comfortable and affirming for me. In addition, many people do not follow up, so following up in general helps you to stand out in a crowd! I also made new friends in the field from across the country.
Hot Tip 2: Read. Read. Read. There is no shortage of evaluation literature, but the more you read, the more opportunities you have to connect your experiences to the reflections of others. When I was in the program, I was overwhelmed because I felt that others who came from schools with intense evaluation programs were constantly inundating the conversation with theorists and frameworks that I had not been exposed to. Now that I am well-versed in evaluation literature, I cannot get enough of it! The GEDI program is an excellent opportunity to find the literature that interests you and even connect to the key authors in that area!
Rad Resources:
For my work, bridging racial justice and culturally responsive evaluation was key. Here are two readings that helped:
McDermott, C. M., & O’Connor, G. C. (2002). Managing radical innovation: An overview of emergent strategy issues. Journal of Product Innovation Management, 19(6), 424-43.

·
