AEA365 | A Tip-a-Day by and for Evaluators

TAG | logic models

Greetings, we are Kristina Jamal and Jacqueline Singh. In addition to being NPFTIG members, we serve on the PDTIG leadership team. Kristina is founder of Open Hearts Helping Hands (OH3), a nonprofit that collaborates with student-focused organizations and community members. Jacqueline is an evaluation/program design advisor and founder of Qualitative Advantage, LLC. We started working together to help OH3 move from being a young nonprofit “flying by the seat of its pants” to becoming a viable organization that competes for funds to bring about common outcomes between formal and informal secondary education organizations.

Because foundations and grantors look for promising programs that can get results, we wanted to move beyond logic model linearity and show, in a complementary and easy-to-understand way, how a nonprofit program is strategic and intentional. From a nonprofit’s perspective, this AEA365 article addresses the utility of conceptual frameworks and models for front-end evaluation activities, measurement, and strategic planning.

Lesson Learned: The demand for evidence for improvement, decision-making, and accountability continues to intensify. Funders expect recipients to partner with other organizations and provide evidence of program outcomes. Young nonprofits can be overwhelmed at the thought of where to begin. Indeed, navigating disciplinary fields, paradigms of inquiry, and complex environments that commingle evaluation with research can be daunting. Conceptual frameworks can reveal program alignment with other operating mechanisms that logic models alone may miss—and help bridge the relationship evaluation has with strategic planning, measurement, program management, and accountability. They are often used within the context of evaluability assessment (EA) and prospective evaluation synthesis (PES), as exemplified within these links. Similarly, nonprofits can use conceptual frameworks to clarify their purpose and questions and to build evaluation capacity.

Program designs are merely abstractions unless conceptualizations are made explicit and understood by stakeholders. Creating conceptual frameworks is developmental and experiential. The process involves document analysis, reading literature, asking questions, describing and defining relationships, and capturing or proposing plausible links between components or emerging factors—depending on what is to be evaluated. Conceptual frameworks such as the OH3 Conceptual Framework take “context” into account and help nonprofits expand their view of what logic models capture.

Hot Tip: Do not undervalue or overlook conceptual frameworks. They come in a variety of forms, serve different purposes, and help figure out what is going on. Conceptual frameworks provide an aerial view and are useful for connecting multiple areas of disciplinary work (e.g., research, theory, policy, technology, etc.). They help guide the selection of useful data collection tools and evaluation strategies.

Rad Resources: Resources we have found useful for understanding how to create conceptual frameworks, thinking through overlapping aspects of program design and measurement, and focusing future evaluations are: 1) James Jaccard & Jacob Jacoby’s Theory Construction and Model-Building Skills; 2) Joseph Maxwell’s Qualitative Research Design: An Interactive Approach; 3) Joseph Wholey’s Evaluability Assessment (EA) approach in the Handbook of Practical Program Evaluation; and 4) Matthew Miles & Michael Huberman’s Qualitative Data Analysis: An Expanded Sourcebook.

 

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Kirk Knestis, CEO of Hezel Associates and a huge logic model fan, back on aea365 to share what I think are useful tweaks to a common logic modeling approach. I use these “Conditional Logic Models” to avoid traps that are common when evaluators work with clients to illustrate the theory of action of a program or innovation being studied.

Rad Resource – The W.K. Kellogg Foundation’s Logic Model Development Guide is an excellent introduction to logic models. It’s very useful for getting started or for ensuring that members of a team are on the same page regarding logic models. The graphic on the first page of Chapter 1 is also a perfect illustration on which to base the description of a Lesson Learned and some Hot Tips that inform the Conditional Logic Model approach.

Lesson Learned – Variations abound, but the Kellogg-style model exemplifies key attributes of the general logic model many evaluators use—a few categorical headings framing a set of left-to-right, if-then propositions, the sum of which elaborate some understanding of “how the program works,” as such:

Inputs > Activities > Outputs > Outcomes > Impact

While the multiple levels of “intended results” (Outputs to Impact, above) provide some flexibility and accommodate limited context and complexity, program designers or managers often get bogged down in the semantic differences among heading definitions. Alternate labels may help, but even then clients and evaluators are either constrained by the number of columns or have to work out even more terms for headings.

Hot Tip – Free yourself from labels! Rather than fussing over these terms, leave them out completely. Instead, define each element—still in its left-to-right structure—as a present-tense statement of a condition. For example, the Input “Running shoes” might become “Running shoes purchased.” The Activity “Run 3x per week” becomes “Exercise increases.” The Outcome “Weight will decrease” becomes “Weight decreases.” This mostly requires using passive language for Activities, but also necessitates thinking of what results look like once achieved, rather than describing them as expectations. These changes in semantic structure eliminate confusion about terms, and head off issues related to tense. The lack of constraining headings also accommodates the complexity and context often left out of typical logic models (e.g., our US Department of Labor projects, illustrations of which require 12+ columns).

Hot Tip – Translate the logic model into evaluation data needs by considering measures of quantity and quality for every element of the model, irrespective of where it falls in the chain of logic. Address the extent to which, and the quality with which, each condition is realized. One interesting note: in this approach, Outputs become part and parcel of those measures, rather than pieces of the causal puzzle, but that’s an additional post.
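The two tips above can be sketched in code. The following is a hypothetical illustration (the class, field names, and example measures are mine, not from the post) of a Conditional Logic Model as a left-to-right chain of present-tense condition statements, each paired with a quantity measure and a quality measure:

```python
# Hypothetical sketch of a "Conditional Logic Model": an ordered chain of
# present-tense condition statements, each paired with quantity and quality
# measures. Names and example data are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Condition:
    statement: str         # present-tense condition, e.g. "Running shoes purchased"
    quantity_measure: str  # the extent to which the condition is realized
    quality_measure: str   # the quality with which it is realized


@dataclass
class ConditionalLogicModel:
    conditions: list = field(default_factory=list)  # left-to-right chain

    def add(self, statement, quantity, quality):
        self.conditions.append(Condition(statement, quantity, quality))
        return self  # allow chaining in reading order

    def data_needs(self):
        """Translate every element into evaluation data needs,
        irrespective of where it falls in the chain of logic."""
        return [(c.statement, c.quantity_measure, c.quality_measure)
                for c in self.conditions]


model = (ConditionalLogicModel()
         .add("Running shoes purchased", "number of pairs bought", "suitability for running")
         .add("Exercise increases", "sessions per week", "intensity and duration")
         .add("Weight decreases", "pounds lost", "sustained over time"))

for statement, qty, qual in model.data_needs():
    print(f"{statement}: measure {qty}; assess {qual}")
```

Because no column headings constrain the chain, adding more conditions (the 12+ columns mentioned above) is just more calls to `add`, and every element, wherever it sits, yields the same pair of measures.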



Ann Gillard on Using Logic Models

Howdy, I am Ann Gillard, Ph.D., Director of Research and Evaluation at The Hole in the Wall Gang Camp. Hole in the Wall is dedicated to providing “a different kind of healing” to seriously ill children and their families throughout the Northeast, free of charge. Last year I led a logic model creation process with our Hospital Outreach Program. After the process, the staff asked, “Great! But now what?”

Lessons Learned: Among all of the wonderful guides that teach how to create logic models, the theories behind them, and how to involve stakeholders in their creation, I have found only rare examples of what to do with a logic model after it’s completed. Below are some ideas most relevant to nonprofit organizations, but other types of institutions can adapt these to their contexts.

Hot Tips:

  1. Circle back to your stakeholders with the final logic model in a short meeting. Explain what changes you made, point out where their ideas are reflected, and get one final OK.
    1. In this meeting, discuss who has the responsibility for sharing the logic model with new staff, new collaborators, etc., and how they will share it.
  2. Help out these people by writing a one-page “talking points” memo so they will remember the important points about the logic model, such as logic model definitions, what the logic model shows, why we created this, who uses it, and other ideas relevant to your organization.
  3. Send the logic model (and talking points) to your fund development team for use in proposal writing and reporting.
  4. Send the logic model (and talking points) to your communications team to put onto your website so people will find it when looking for program descriptions.
  5. At your next board and staff meetings, present and explain the logic model.
  6. As an evaluator, show staff how you use the specific wording and ideas from the logic model to develop evaluation questions, surveys, interviews, etc.
  7. Take your logic model file to a printing place and blow it up to poster size, laminate it, and hang it in the office.
  9. Laminate regular-sized copies of the logic model for current and new employees as part of their supervisory conversations.
  9. Set a date on your calendar in the future to meet with staff to review the logic model again.

Rad Resources:

University of Wisconsin-Extension has great suggestions on how to evaluate your logic model. Check out the sections of this document titled “How good is my logic model?” and “Using logic models in evaluation.”

Thank you. Please add your own ideas to the comments below!


I’m Tara Gregory, Research and Evaluation Coordinator for Wichita State University’s Center for Community Support and Research (CCSR). CCSR works with non-profit, community and faith-based organizations across Kansas and was originally supported through the Community Psychology graduate program at Wichita State University. Many of our staff members, myself included, are graduates of this program, so we’ve maintained a strong community psychology orientation in our principles and practices. Given the principle of meeting people where they are, we often use forms of storytelling to help organizations develop logic models.

We use the following techniques to facilitate creative discussion while still attending to the elements in a traditional logic model. These processes encourage participation by multiple staff, administrators and stakeholders and can use the organization’s vision or impact statement as the “happily ever after.”

Hot Tip: Script writing: We ask participants to think of their program and its outcomes in terms of a movie trilogy. In small groups, they create scripts for each part of the trilogy, then report out on the significant scenes (much like they would if they were describing a movie they’d just seen). These scenes inform the elements of their logic model, which we typically help them complete later, and could be focused on the individual or other contexts (e.g., community). We specifically ask them to think of Part 1 as the story of what people experience while involved in the program; Part 2 picks up at a later date (the specific timeframe depends on the program) and reflects the progression of outcomes; and Part 3 represents the transition to “happily ever after.”

The specific questions we ask participants to address in their scripts are:

  • Who are the characters, and what are the settings or contexts?
  • What do they experience/what happens to them?
  • What actions do they take as a result?

Hot Tip: Pictorial timeline: Using a similar process to script writing, we ask participants to envision one of their clients, then to draw the activities and resulting behaviors or conditions that occur at various points along a timeline. This approach offers a visual path toward “happily ever after.”

Lessons Learned:

  • Participants are less likely to get bogged down in concerns about the “right” way to fill out a logic model and are better able to identify outcomes, including those that are unintended or less positive, than with traditional methods.
  • Whereas completing the typical logic model matrix can be intimidating for some, these processes tend to be energizing and fun.
  • These techniques work particularly well with organizations that are innovative and are open to playfulness and experimentation.

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years.


I’m Gretchen Jordan. I’ve been doing logic modeling and writing and teaching about how to develop them for more than 20 years. I find the process stimulating and fun, and have noticed that the more I do logic models, and the more I learn about the subject matter, the easier it is and the better the model. However, most people do not do multiple logic models in one subject area.

Lessons Learned:

1. Generic logic models can be a huge help in evaluation. Evaluation frameworks with a well-explained generic logic model and accompanying indicators, built on deep subject matter expertise, can be a valuable guide for evaluators or program staff. With such a guide, they can work through a logic model and evaluation plan tailored to their specific program. These generic guides save resources and improve the quality of evaluation studies. If used for a group of related programs, the common framework for collecting and analyzing data sets up the possibility of synthesizing findings across those programs. This can point to features of an intervention that matter most and that are not otherwise visible.

Example. The Research, Technology and Development (RTD) Topical Interest Group of the AEA has written a paper “Evaluating Outcomes of Publicly Funded Research, Technology and Development (RTD) Programs: Recommendations for Improving Current Practice.” Central to the paper is a generic logic model and table of indicators that could guide evaluation planning for many different types of RTD programs.

2. A generic logic model reflects knowledge of the big picture. Logic modeling is a management and evaluation tool to develop a succinct picture of a program’s goals and the strategies for achieving these within a broader context. It requires a real understanding of the program and its context. This knowledge can come partially from documents (including assessments of similar programs) but ultimately the best information comes from program managers and staff sharing different perspectives and perceptions.

Example. The generic logic model in the RTD TIG paper builds on existing theories, evaluation studies, and other generic logic models. The diagram shows two major areas of RTD, research and application of research, to reflect the reality that these are often done by different organizations and evolve over a considerable length of time. The interactions between the two streams occur within four main areas: the RTD community, government/policy entities, industry, and public groups. At the top left of this model is the essential step of program design and implementation. At the bottom, related programs and influences are called out, in addition to three levels of other external influences (micro, meso/sector, and macro).

A Generic Logic Model for Research, Technology and Deployment Programs
Source: “Evaluating Outcomes of Publicly Funded Research, Technology and Development (RTD) Programs: Recommendations for Improving Current Practice.” 2015

The American Evaluation Association is celebrating Logic Model Week. The contributions all this week to aea365 come from evaluators who have used logic models in their practice.

 

I’m Ian David Moss, and I use logic models and theories of change to help people make more strategic decisions. Recently, some smart voices in evaluation and philanthropy have argued that logic models are outdated, as implementation in a complex world too often makes a mockery of those neat and tidy diagrams that supposedly make sense of everything.

Call me stubborn, but I’m not ready to give up on logic models. After nearly a decade of working with them, I remain convinced of their value as tools for program design, strategic clarification, defining a measurement regimen, and yes, evaluation. The (cool) trick is to make sure that logic models don’t suck. Here are some ways I’ve found to improve the odds:

Lesson Learned: Combine a logic model with a theory of change

Logic models and theories of change (here’s a primer on the difference between the two) developed from entirely separate schools of thought. Like Debra Smith and Galen Ellis, however, I’ve found that each of these tools is strengthened by the presence of the other. These days, to ensure a tight integration between the logic model and theory of change, I develop both in a single PowerPoint document. In it, the theory of change (activities, outcomes, and impacts) stands alone on the first slide, then on subsequent slides it appears grayed out with elements of the logic model (inputs, values, environmental factors, target population, assumptions, and measures) superimposed on top.

Fractured Atlas theory of change and logic model detail

Lesson Learned: Embrace the flywheel

A common knock against logic models is that they are too linear. I agree – but that doesn’t mean we have to give up on them! A common situation I run into is when a program is intended to facilitate a virtuous cycle that has self-reinforcing impacts. I depict these dynamics with a “flywheel” to denote the iterative nature of the intended effects.

Detail: ArtsWave theory of change

Lesson Learned: Different audiences need different things

One client I worked with recently, the Santa Cruz Museum of Art and History (MAH), found its logic model invaluable for developing a suite of performance indicators to track on an ongoing basis, but worried that its presentation didn’t reflect the museum’s fun, accessible brand. Solution: commission a graphic artist to make an illustrated version of the theory of change. Voilà – boredom be gone!

Santa Cruz MAH theory of change: “artist version”

Hot Tip: It doesn’t have to end here

There’s plenty of room to innovate beyond what I’ve described above. Wouldn’t it be awesome to have an interactive version that could zoom in or out to the appropriate level of detail? Or a way to reflect levels of confidence in the connections between different elements? Here’s my hypothesis: there’s nothing wrong with logic models that can’t be solved by better design.


I’m Kylie Hutchinson, independent evaluation consultant and trainer with Community Solutions Planning & Evaluation. I also tweet regularly at @EvaluationMaven.

Systems thinking and evaluation is a hot topic these days, and as someone who spends a fair bit of time in evaluation capacity building, it has me thinking a lot about logic models. Some of you might recall a post I did for AEA365 back in 2014, “Are Logic Models Passé?”, where I mused about the utility of static logic models in highly dynamic and complex programs.

Since then I’ve been on the hunt for examples of more “fuzzy” logic models but have only been able to find one example. This leads me to wonder whether what programs really need is not something “fuzzy,” but rather something that is both structured and flexible at the same time. Sort of like Lego.

Imagine that a program is a bridge, designed to get people from one side of a canyon to the other. The program logic model is the bridge’s design, and the better the design, the greater the chances of receiving funding to build it. The bridge construction initially proceeds according to plan; however, as time goes on, things come up and the bridge contractor wants to make some changes. What do you do? Stick with the original design but risk not reaching the other side? If your bridge is made of steel or concrete, you’re stuck moving forward. But if you build it with Lego, it’s easier to swap pieces in and out without having to demolish the whole bridge. Eventually you’ll get to the other side, but maybe the bridge looks a bit different than you originally intended.

I know that some funders and government departments aren’t comfortable with the idea of “fuzzy” and I can appreciate that. Perhaps a Lego bridge is something more in line with their needs.

Rad Resource: Here are two Pinterest pages with resources on both logic models and systems evaluation.

Rad Resource: For a quick overview of systems thinking and evaluation, check out this five minute video.



My name is Michele Tarsilla (@MiEval_TuEval) and I am a transformative evaluator with a focus on capacity development in international and cross-cultural settings. Having worked in 30 countries, I have become aware of the detached (and somewhat cynical) attitude that grantee organizations have towards their funders’ requirement for developing and using logic models (see the table below). As a result, the development of logic models has often been integrated uncritically into organizational practices, merely as a simple “password for funding.”

Source: www.keystoneaccountability.org

In response to such mechanistic use of logic models among many organizations working in international development, my effort has been to strike a balance between:

  • the need for accountability to my main client (e.g., the international organization asking me to work with local grantees and staff to develop a logframe and a theory of change); and
  • the ethical/professional (rather than contractual) obligation to be accountable to those very same local grantees and staff whose planning, monitoring and evaluation capacity development I am expected to contribute to.

Lessons Learned:

In an effort to promote a genuine understanding of how a logic model can indeed become an organizational asset (and, by doing so, to enhance ownership of both the final product and the process leading to its development), I have often asked my clients two things. First, to challenge some of the long-term goals recommended by the funders and often inserted by default in the logic model template distributed to them. I particularly encourage them to translate those often ambiguous goals into lower-level objectives aligned with their specific vision. A small organization in Kinshasa that supported the professional development of young artists, for instance, did not see the relevance of including the Millennium Development Goal on poverty reduction (which the funder had assigned to them) as the ultimate rationale for their program in their logical framework. As a result, they replaced that goal with a different one (“Increased support by the National Ministry of Culture for youth Culture and Development creations in the Kinshasa province”).

Second, I invite local organizations and staff to combine the monitoring of activities and processes that funders are particularly interested in (e.g., for accountability and comparability across project sites) with the monitoring of one or two additional programmatic aspects, even if these are ignored by the funders’ guidelines. Furthermore, I push them toward ever more creative visualizations of their respective programs’ inputs and results (“frameworkers” will favor linear representations of program processes, whereas “circlers” will be more keen to embrace a systemic and adaptive perspective on their program dynamics).

Rad Resource: For an interesting review of different logic model development processes, see Reina Neufeldt’s 2011 handbook on “Frameworkers” and “Circlers.”


We are Debra Smith and Galen Ellis, two evaluators who discovered through AEA that we share a common method of using logic models to facilitate systems thinking with our clients. Many people think logic models are a complicated exercise with little value. Some are downright cynical, saying they tend to represent “a tenuous chain of unproven assumptions used to justify the pre-determined program model” (Public Health Director).

We both use a two-phase logic model development process: first, we help our clients develop a balcony-view “theory of change” by identifying the global goal or vision and mapping key resources, strategies and outcomes. Clarity in Phase I makes Phase II—identifying outputs and short-, mid- and long-term outcomes and measures—more manageable and meaningful.

Debra: I first used this approach while working with a museum education department to develop an evaluation system for their programs. We mapped the overall theory of the department, tracking resources and activities leading to their long-term vision, which they described as “the community loving the museum.” Staff were then able to develop logic models for their individual programs, and then a system that streamlined the data they collected within and across programs.

Galen: I have facilitated logic model processes for the development of agency-wide evaluation systems with several organizations in this two-step process. The theory of change process helps the client articulate how their activities and the outcomes they expect fit with their agency’s values and mission. Then I work with each individual program/project within the organization to develop its own logic models that link to the agency’s broader theory of change. This shifts the culture of the organization towards being outcomes-based, and helps connect the distinct programs via common outcomes that reflect the agency’s values and mission.

Lessons Learned:

  • Logic models can help prevent mission drift. The agency-level logic model will capture outcomes that are aligned with the mission. Programs within the organization can then align with those outcomes and share evaluation measures, leveraging the broader organizational goals to guide their own success.
  • Using the logic model process to develop an agency-wide evaluation system elevates the value of evaluation within the organization.

Hot Tips:

  • Showing how a logic model tells a story can help clients understand the role and value of a logic model. Galen uses the metaphor of crossing a river. Video Clip
  • Even in developmental projects, it can be helpful to map the theory of change, then refine it based on what is learned.



I’m Tom Chapel. My “day job” is Chief Evaluation Officer at the CDC where I help our programs/ partners with evaluation and strategic planning. I took on both roles because large organizations do strategic planning and evaluation in different silos, even though both silos start with “who are we?” “what are we trying to accomplish?” and “what does success look like?”

In response, we’ve crafted an approach to strategic planning that employs logic models, but in a different way than for evaluation. The key steps:

  1. Compose a simple logic model of activities and outcomes (or what some might call a “theory of change”). I want stakeholders to understand the “what” of their program (activities) and the “so what” (the sequence of outcomes/impacts). Usually, we add arrows to reflect the underlying logic/theory.
  2. Choose/affirm an “accountable outcome.” It’s great to include “reduced morbidity and mortality” in the model as a reminder of what we’re about. But be sure to explain that these are areas for “contribution,” not outcomes attributable solely to the program’s efforts.
  3. Have the “output talk.” The model shows which activities drive which outcomes. Outputs are the chance to define how the activity MUST be implemented for those outcomes to occur. This discussion sets up the creation of process measures for the evaluator later on, but at this point provides clarity for planners and implementers on the levels of intensity/quality/quantity needed.
  4. Help them identify “killer assumptions.” There are dozens of inputs and moderating factors (context) over which a program has little or no control. Look for ones so serious that, if that input or moderator is not dealt with, the program really can’t achieve its intended outcomes. Depressing as this exercise can be, it spurs creative thinking: how might we work around or refine our activities to accommodate it?
  5. Tie it all together with a (short) list of key strategic issues. Hit the high points (mission, vision, SWOT) and move on to goals and objectives. This technique avoids the painful wordsmithing that often comes with traditional strategic planning.

Lessons Learned:

  • Use existing resources. The organization may have a mission and vision, an existing strategic plan, a business plan, or a set of performance measures. Extract the starter model from these resources so they see the logic model as a visual depiction of how they already think about their program and not something completely new.
  • Do the process in digestible bites and WITH the program. You want people to follow the storyline and that happens more often if they are part of the model construction.
  • If in return for minimal word-smithing we inflict endless arrow-smithing, fatigue will soon set in. Declare victory when the group is 85% in agreement with the picture.

Rad Resource: Knowlton and Phillips: The Logic Model Guidebook (2nd edition)


