AEA365 | A Tip-a-Day by and for Evaluators


Kirk Knestis (Hezel Associates CEO) here again with more thoughts about ways logic models can inform an evaluator’s work and potentially benefit the program managers and innovation developers with whom we work. One area where I think our contributions may often be underutilized is that of sustainability of the evaluand programs or innovations we study.

Lesson Learned – I confess that I’ve been guilty of neglecting “inputs” as they are typically illustrated in logic models. I’ve generally focused on defined activities and desired outcomes as priorities when making decisions about evaluation designs, data requirements, and analysis plans. Inputs were, in my view, simply part of the context within which an evaluand existed. Recent work by two of my colleagues, Andrew Hayman and Sarah Singer, has changed my perspective on this, focusing my attention on inputs as a source of valuable insights into how a program might be sustained. Sustainability is often a concern for our clients, so this understanding can translate into additional value we offer as part of our evaluation services.

Rad Resource – Sarah and Andrew presented a Roundtable at Evaluation 2016, facilitating discussion about sustainability, how we can evaluate it, and how evaluation might help programs become sustainable. For this session (slides are available in the AEA Public Library), they defined “sustainability” as “continuing program activities or vital components of program activities to achieve intended outcomes without relying on future grant funding.” Their session identified obstacles to evaluating sustainability but, more importantly, explored strategies to help a program become more sustainable, including by backward-mapping through its theory of action.

Hot Tip – As an example, consider this slice of a conventional-style logic model:

Input (Resource) > Activity (Program Component 1) > Outcome A > Outcome B

If the evaluation finds Program Component 1 (an activity) to be mission-critical to immediate Outcome A, which is in turn required to achieve distal Outcome B, then that activity should arguably be sustained. If that activity requires a particular resource as an input, then sustainability requires (a) sustaining that input to support the crucial program component, (b) replacing that input with another providing similar support for the activity, or (c) modifying the activity so it can be delivered without (or with less of) the input. Regardless, planning for sustainability requires attention to inputs, and evaluators can help program managers or innovation developers plan ahead to structure the evaluation and data collection to that end.
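To make that backward-mapping concrete, here is a minimal sketch in Python — my own illustration, not a tool from the session — that stores the slice above as an ordered chain and walks it right to left from the distal outcome. The element names simply echo the example:

```python
# A minimal sketch of backward-mapping through a theory of action.
# The chain mirrors the slice above; the structure is hypothetical.
chain = [
    ("Resource", "input"),
    ("Program Component 1", "activity"),
    ("Outcome A", "outcome"),
    ("Outcome B", "outcome"),
]

def backward_map(chain, target):
    """Return every upstream element the target depends on,
    nearest dependency first."""
    names = [name for name, _ in chain]
    return list(reversed(chain[: names.index(target)]))

for name, kind in backward_map(chain, "Outcome B"):
    action = ("sustain, replace, or design around it"
              if kind == "input" else "sustain it")
    print(f"{name} ({kind}): {action}")
```

Reading the output from the nearest dependency back to the input makes the three sustainability options above explicit for each upstream element.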

Rad Resource – Other hints are shared in the notes from a similar conversation we facilitated at the 2015 conference of the Eastern Evaluation Research Society (EERS), examining distinctions between “evaluating sustainability” and “evaluating FOR sustainability.” Presented in terms of Challenges and Solutions, these ideas provide concrete ways evaluators might start to leverage logic models to meet another need for clients, developed from some of our recent projects.


Kirk Knestis at Hezel Associates back with the promised “additional post” about logic models, this time challenging the orthodoxy of including “outputs” in such representations. Whatever the style of model used to illustrate the theory behind a program or other innovation, it’s been my experience that including “outputs” can create confusion, often decreasing the utility of a model once evaluation planning discussions turn to defining measures of its elements (activities and outcomes). A common worst-case example is a program manager who struggles to define the output of a given activity as anything beyond “it’s done.” I’m now at the point where I simply omit outputs as standard practice when facilitating logic model development, and ignore them if they are included in a model I inherit. If you encounter similar difficulties, I propose you do the same.

Lesson Learned – The W.K. Kellogg Foundation’s foundational guide, often referenced on this subject, explains that outputs “are usually described in terms of the size and/or scope of the services and products delivered or produced by the program.” As “service delivery/implementation targets,” outputs are the completion of activities or the “stuff” those efforts produce. It’s generally understood that outputs can be measures of QUANTITIES of delivery (e.g., number of clients served, hours of programming completed, units of support provided). Less obvious, perhaps, is the idea that we should also examine the QUALITIES of those activities. Even more neglected, however, is an understanding that the stuff produced can usefully be viewed as a source of measures of the qualities of the activities that generated it. In short, outputs are more data sources than parts of an evaluand’s theory of action.

Hot Tip – Instead of including outputs as a separate column in tabular or pathway-style models, hold off considering them until planning turns to defining how quantities and qualities of delivery will be measured for “implementation” evaluation purposes. Distinguishing those measures from the outcome measures that assess the “impact” of activities, this approach layers a “process-product” orientation onto implementation evaluation, looking at both the quantities and the qualities with which activities of interest are completed. It simplifies thinking by avoiding entanglement in seemingly redundant measures of activities and their outputs, and it can encourage deeper consideration of implementation quality, which is harder to measure and so easier to ignore. It also takes outputs out of the picture of theoretical relationships among variables; an important issue for evaluations testing or building theory.

Hot Tip – Work with program/innovation designers to determine attributes of quality for BOTH activities (processes) and the stuff in which they result (products). Develop and use rubrics or checklists to assess both, ideally baked into the work itself in authentic ways (e.g., internal quality-assurance checks or formative feedback loops).
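As a rough illustration of such a rubric, the sketch below pairs process and product criteria and averages hypothetical 1–4 ratings. The criteria, scale, and ratings are invented, not drawn from any particular program:

```python
# A minimal sketch of a paired process-product rubric; criteria,
# ratings, and the 1-4 scale are invented for illustration.
rubric = {
    "process: workshop delivery": [
        "facilitator followed the session plan",
        "participants were actively engaged",
    ],
    "product: participant workbooks": [
        "content is complete and accurate",
        "materials align with stated objectives",
    ],
}

# Hypothetical 1-4 ratings from a routine internal QA check,
# one rating per criterion above:
ratings = {
    "process: workshop delivery": [4, 3],
    "product: participant workbooks": [3, 3],
}

for aspect, criteria in rubric.items():
    scores = ratings[aspect]
    avg = sum(scores) / len(scores)
    print(f"{aspect}: {avg:.1f} across {len(criteria)} criteria")
```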

Hot Tip – Another useful trick is to consider “timeliness” as a third aspect of implementation, along with quantity and quality. Compare timelines of “delivery as planned” and “delivery as implemented,” measuring time slippage between the ideal and the real and documenting the causes of such slippage.
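A minimal sketch of that ideal-versus-real comparison, with invented activity names and dates:

```python
from datetime import date

# Hypothetical "delivery as planned" vs. "delivery as implemented"
# dates; activity names and dates are invented for illustration.
planned = {"Workshop 1": date(2016, 3, 1), "Workshop 2": date(2016, 6, 1)}
actual = {"Workshop 1": date(2016, 3, 15), "Workshop 2": date(2016, 8, 1)}

for activity, target in planned.items():
    slippage = (actual[activity] - target).days
    print(f"{activity}: delivered {slippage} days after the planned date")
```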


Kirk Knestis, CEO of Hezel Associates and huge logic model fan, back on aea365 to share what I think are useful tweaks to a common logic modeling approach. I use these “Conditional Logic Models” to avoid traps common when evaluators work with clients to illustrate the theory of action of a program or innovation being studied.

Rad Resource – The W.K. Kellogg Foundation’s Logic Model Development Guide is an excellent introduction to logic models. It’s very useful for getting started or for ensuring that members of a team are on the same page regarding logic models. The graphic on the first page of Chapter 1 is also a perfect illustration on which to base the description of a Lesson Learned and some Hot Tips that inform the Conditional Logic Model approach.

Lesson Learned – Variations abound, but the Kellogg-style model exemplifies key attributes of the general logic model many evaluators use—a few categorical headings framing a set of left-to-right, if-then propositions, the sum of which elaborates some understanding of “how the program works,” as such:

Inputs > Activities > Outputs > Outcomes > Impact

While the multiple levels of “intended results” (Outputs to Impact, above) provide some flexibility and accommodate limited context and complexity, program designers or managers often get bogged down in the semantic differences among heading definitions. Alternate labels may help, but even then clients and evaluators are either constrained by the number of columns or have to work out still more terms for headings.

Hot Tip – Free yourself from labels! Rather than fussing over these terms, leave them out completely. Instead, define each element—still in its left-to-right structure—as a present-tense statement of a condition. For example, the Input “Running shoes” might become “Running shoes purchased.” The Activity “Run 3x per week” becomes “Exercise increases.” The Outcome “Weight will decrease” becomes “Weight decreases.” This mostly requires using passive language for Activities, but also necessitates thinking of what results look like once achieved, rather than describing them as expectations. These changes in semantic structure eliminate confusion about terms, and head off issues related to tense. The lack of constraining headings also accommodates the complexity and context often left out of typical logic models (e.g., our US Department of Labor projects, illustrations of which require 12+ columns).

Hot Tip – Translate the logic model into evaluation data needs by considering measures of quantity and quality for every element of the model, irrespective of where it falls in the chain of logic. Address the extent to which, and the quality with which, each condition is realized. One interesting note: in this approach, Outputs become part and parcel of those measures, rather than pieces of the causal puzzle, but that’s an additional post.
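As a sketch of this translation step, the snippet below lists the running-shoes conditions from the previous Hot Tip and, for each, prints the implied if-then links and the quantity and quality questions an evaluation would need to answer. The question wording is my own shorthand, not prescribed language:

```python
# A minimal sketch of a Conditional Logic Model: an ordered list of
# present-tense conditions, echoing the running-shoes example above.
conditions = [
    "Running shoes purchased",
    "Exercise increases",
    "Weight decreases",
]

# The left-to-right, if-then structure is implied by order:
for cause, effect in zip(conditions, conditions[1:]):
    print(f"if '{cause}' then '{effect}'")

# Every element, wherever it falls, gets quantity and quality measures:
for condition in conditions:
    print(condition)
    print("  quantity: to what extent is this condition realized?")
    print("  quality:  how well is it realized?")
```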


Hi, I’m John Burrett, of Haiku Analytics Inc., Ottawa. One serious problem with logic models is that they usually leave out external influences and feedback effects, even when these may be important, because including them would make the model “too complex”. It is good to simplify, but ignoring important influences on program success when planning an evaluation may lead evaluators to fail to collect important data and to misinterpret results.

Trying to embrace complexity by drawing a web of boxes and arrows is not helpful: it’s too complex to use and explain and will drive your audience away. Such a web will probably come only from the mind of the evaluator or program manager, and so can easily miss important external influences and other complexities.

Hot Tip: I recently stumbled onto an alternative approach during a mapping of factors of cause and effect related to a complex policy problem. Data were obtained from an expert panel, which developed a matrix linking a number of factors, with an estimate of the strength and direction of the relationship between each pair. Mapping this with network analysis software helped the panel visualize what they had created.

It followed that this form of data could generate outcome chains and logic models. Here’s a simple example: a program supporting trades training by providing grants to students and developing state-of-the-art teaching materials in collaboration with trade schools drives the immediate outcomes of…

  • Students gaining the ability to take training and
  • Currency and quality of the training being improved, in order to achieve
  • The ultimate outcome of increased employment.

Exogenous effects influencing these results include cost of living, demand for skills and technical changes affecting the training’s currency. The size of the nodes indicates betweenness centrality, identifying those factors that connect many influences, thus propagating certain effects. The width of the edges indicates the hypothesized strength of influence. Possible unintended effects and a feedback loop are also shown.

[Figure: network-style logic model for the trades-training example]
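For readers who want to try this, here is a hypothetical reconstruction of the approach using Python and the networkx library. The factor names echo the trades-training example, but the strength estimates are invented — they are not Burrett’s panel data:

```python
import networkx as nx

# Hypothetical reconstruction of an expert-panel cause-effect matrix
# as a weighted, directed graph; names and strengths are invented.
links = [
    ("Student grants", "Students able to take training", 0.8),
    ("Teaching materials", "Training currency and quality", 0.7),
    ("Students able to take training", "Increased employment", 0.6),
    ("Training currency and quality", "Increased employment", 0.5),
    ("Cost of living", "Students able to take training", 0.4),
    ("Technical change", "Training currency and quality", 0.3),
    ("Demand for skills", "Increased employment", 0.6),
]

G = nx.DiGraph()
for source, target, strength in links:
    G.add_edge(source, target, strength=strength)  # strength -> edge width

# Node size in the map: betweenness centrality flags the factors that
# sit on many influence paths and so propagate effects across the system.
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```

In a real application the edge list would come straight from the panel’s matrix, and the centrality scores would drive node sizes in whatever network-drawing tool you use.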

Lesson Learned: A key advantage of this approach is that it creates a logic model using expert knowledge, rather than simply an evaluator’s or manager’s understanding of a program. It could also draw on other sources of information, such as findings from the literature and program stakeholders’ experiences. Importantly, you could do this without imposing any prior idea of the logic model on those providing the cause-effect data, other than including the program/outputs/activities and specifying the immediate, intermediate, and ultimate intended outcomes.

A second major advantage is that the logic model carries network metrics generated from the data, so how the program and influences are expected to be related can be analyzed. For instance, factors thought to have an important role in propagating effects across the system would show high betweenness or eigenvector centralities.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members.

Hi! I’m Mary Arnold, a professor and 4-H youth development specialist at Oregon State University, where I spend the majority of my time on program evaluation, especially capacity-building efforts. This is my second time preparing a blog post for the EEE-TIG, and the invitation came at a great time, because I have been thinking pretty obsessively these days about how we can do a better job of building Extension program planning and evaluation capacity. One of the conundrums and persistent late-night ponderings that continues to rattle around my mind is how we can do a better job articulating what is supposed to take place in programs. If we are clear on what is supposed to happen in a program, then we should also be able to predict certain outcomes and understand exactly how those outcomes come to be. This notion of prediction is what underscores a program’s theory.

Because of the emphasis on program planning that swept Extension in the early 2000s, most Extension educators are familiar with logic modeling. The good news is that many educators understand the concepts of inputs, outputs, and outcomes as a result, so the groundwork is in place to think more deliberately about a program’s theory. But at the same time, there is scant evidence that logic modeling has resulted in better program planning practices, or led to the achievement of stated outcomes in Extension programs. And there is even less evidence that logic models are developed based on theory.

Lesson Learned: Theory may be implied in logic models, but too often it is understated, assumed, or just hoped for. Program theory is what connects the components of a logic model and makes it run!

Hot Tip! Did you know that there are two important parts to program theory? The first is the program’s theory of change, which is the way in which the desired change comes about. The second is the program’s theory of action, which refers specifically to what actions need to happen, at what level of success, for the program to reach its intended outcomes.

Rad Resource! My favorite resource for understanding and developing a program theory of change and action is Purposeful program theory: Effective use of theories of change and logic models (Funnell & Rogers, 2011). This book has loads of great information and practical help on bringing logic models to life with program theory.

Rad Resource! If you are looking for specific theories that are useful for Extension programs, The University of Maryland Extension has a terrific short guide entitled Extension Education Theoretical Framework that outlines how several well-developed theories can be useful for Extension programming.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE AEA Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members.

My name is Ann Price and I am the President of Community Evaluation Solutions, Inc. (CES), a consulting firm based just outside Atlanta, Georgia. I am a community psychologist and infuse environmental approaches into my work developing and evaluating community prevention programs. Much of my work involves community coalitions.

Hot Tip: Appreciate how long it takes for community coalitions to mature. Often, community members want to jump in and get right to work. However, the first thing community coalitions need to do is develop structures and processes that will help ensure their long-term success. It may be helpful to work with your coalition to develop a logic model that details the steps the coalition needs to take in order to be successful. Here is one example from our work with the Drug-free coalition of Hall County, based on Fran Butterfoss and Michelle Kegler’s Community Coalition Action Theory (2002). Having this logic model helped coalition members focus on establishing a good foundation and recognize the importance of planning and evaluation.

[Figure: Hall County coalition logic model]

Rad Resource: Fran Butterfoss’s book, Coalitions and Partnerships in Community Health (2007), is a great reference book for coalition leaders, researchers and evaluators. It includes surveys that coalition leaders can use to assess the health of their coalition.

Rad Resource: Fran Butterfoss has a new book, Ignite! Getting Your Community Fired Up for Change, an excellent and accessible resource for coalition leaders and members, filled with tips to inspire coalitions to action.

Hot Tip: Community Anti-Drug Coalitions of America (CADCA) is another good resource for both coalitions and evaluators. They host the National Leadership Forum each December in Washington, D.C., and the Mid-Year Training Institute held at various locations around the country. Both meetings include one-to-one coaching for coalition leaders and a separate track for youth, the National Youth Leadership Initiative.

Lesson Learned: “Evaluation as intervention” is a concept I have been pondering lately. When you find your coalition is stuck in a “meet and talk” rut, think about redesigning the evaluation to focus on the environmental change strategies the coalition has implemented and the community reach of each strategy. Work on documenting the link between their chosen strategies and community outcomes. Then, use evaluation data to provide more timely feedback to the coalition. This would be a great opportunity to involve coalition members in discussions about where they are, where they would like to be and how, working together, they can get there.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members.

Hello, I’m Shirah Hecht, Ph.D., Program Evaluator with Academic Technology Services within the Harvard University Info Tech area. Here is a simple “trick” for beginning to develop a research design.

I call this “system mapping.”  You may connect it to stakeholder analysis or concept mapping, since it blends the two in a way – but it goes a bit further than either, for research purposes.  It comes from a simple suggestion given to me by my graduate school mentor who taught qualitative field methods at Northwestern University, Howard S. Becker.  He credited the sociologist Everett C. Hughes for this method.

Essentially, the technique is to identify a central event or person, then radiate out from there to consider all the constituencies or positions or groups that connect to that central event or person.  This is a way of jump-starting your thinking about what the relevant data sources might be and to identify questions about your central topic.

For example, in education, the central event might be the classroom; the radiating circles might identify students, teachers, parents, and administrators, among others.  Alternatively, the central circle might hold the student as a central person; the radiating circles then might include the parents, teachers, other students, guidance counselors, testing agencies, etc.

After identifying these outer circles, you can pose relevant questions such as:

  • What is the perspective of each constituency on the central event or person?  What matters to them?  What is their investment in this process or person?
  • At what points do they interact with the central event, for the purposes of my research questions?
  • What “data” might they hold, whether in terms of process or perspective, to define or address my research questions?
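Here is a minimal sketch of the mapping as data, using the classroom example above; the structure is my own convenience for jump-starting the question list, not a prescribed format:

```python
# A minimal sketch of a system map: a central event plus radiating
# constituencies, crossed with the three planning questions above.
central = "the classroom"
constituencies = ["students", "teachers", "parents", "administrators"]
questions = [
    "What is their perspective on, and investment in, {c}?",
    "At what points do they interact with {c}?",
    "What data might they hold about {c}?",
]

for group in constituencies:
    print(group.title())
    for q in questions:
        print("  - " + q.format(c=central))
```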

This process also fits in nicely with developing a logic model with the program provider as part of designing an evaluation project. Even if you are not logic model-bound, it can frame a good conversation and understanding of the research planning process and the final decisions about data collection.

Here is a simple version of this map, generalized from a program for which I developed an evaluation plan.  The green highlights indicate the data collection sources: a focus group with program volunteers and a survey of clients.

[Figure: generalized system map, with data collection sources highlighted in green]

Lessons Learned: In research planning, move from the perspective of the constituency to specific research questions for the project.

Hot Tip: Combine this mapping process with the Tearless Logic Model process, to jumpstart the conversation about research plans with program staff. 

Rad Resources: Everett C. Hughes offers the sociological eye on any and all processes we might want to research: The Sociological Eye: Selected Papers (Social Science Classics Series).

The Tearless Logic Model


Kylie Hutchinson on Are Logic Models Passé?

My name is Kylie Hutchinson. I am an independent evaluation consultant and trainer with Community Solutions Planning & Evaluation. In addition to my usual evaluation contracts, I deliver regular webinars on evaluation topics, tweet weekly at @EvaluationMaven, and co-host the monthly evaluation podcast, Adventures in Evaluation, along with my colleague @JamesWCoyle.

Logic models (logical frameworks, logframes, whatever you prefer to call them) have been a staple of the evaluation field since 1990.  However, the more I embrace complexity and systems thinking, the more I wonder about their utility for evaluating programs.  Life is messy, and we all know that programs rarely unfold over time as they were originally intended.  Logic models are static, simple, and predictable, while real life is dynamic, complicated, and somewhat unpredictable.  So what’s a conscientious evaluator to do?

Hot Tip: Don’t throw the baby out with the bathwater.

While I’m not the logic model zealot I was ten years ago, I’m not exactly ready to abandon them either.  I think logic models can still be useful depending on the particular context of the evaluation.  I can think of many previous evaluations I’ve conducted where the process of collaboratively developing a logic model with program stakeholders was critical to building a shared understanding of the program and support for the evaluation.  I think the problems begin when we become so tied to it (“But it’s what we sent the funder!”) that we’re afraid to let the program adapt or evolve as necessary over time, or we develop blinders to all the other factors in the system that may influence the program’s outcomes.

Why on earth can’t we change a logic model mid-stream? Those of you who are familiar with Developmental Evaluation recognize this tune very well.  But consider this.  Even Michael Quinn Patton himself has said that Developmental Evaluation is not appropriate for every evaluation context.  For certain evaluations, program fidelity is critical.  So I think it’s a balancing act.  I also predict that we’ll begin to see the logic model evolve into something slightly different over time that better reflects the complex world that programs operate in.

Rad Resource:  The Logic Model Rubric. If you’re new to logic models, I’ve developed a simple rubric for writing logic models that can help you get started.

Rad Resource:  The Little Logic Model Webinar.  I also regularly offer a short and affordable webinar that introduces new evaluators and program staff to logic models in terms of their development, use, and most importantly, their limitations.


Hi there! I’m Ann Martin, a postdoctoral fellow and internal evaluator with NASA Innovations in Climate Education, which funds climate education projects as part of NASA’s Minority University Research and Education Program (MUREP). I’m also part of a cross-agency collaboration involving sister programs at the National Science Foundation (NSF) and the National Oceanic and Atmospheric Administration (NOAA).

This collaboration represents more than 100 projects that have received funding to conduct climate education projects in formal and informal environments; each project funds its own evaluator and determines its own evaluation plan. As part of that tri-agency effort, I’ve helped to facilitate a community of these evaluators. Throughout this week, the AEA365 blog will feature posts from members of our community, and what we’ve learned about evaluation of climate education.

This tri-agency evaluation group is entirely grassroots, depends on the efforts of its members, and functions with extremely limited resources. To kick off Climate Ed Eval Week, I’ll be sharing some thoughts on how to help a community like this work.

Lesson Learned: In April 2012, a large group of almost 40 tri-agency evaluators and funded project leaders got together to work on a common evaluation vision for climate education. The result was a draft logic model describing our portfolio of diverse projects. We found that the process of drafting the model, and negotiating which terms and concepts belonged, was as useful as the product itself. Each project has its own goals, and we worked together to resolve and align those into a representation of what the three agencies are working towards. This also started a long-term conversation, and helped us to identify challenges and opportunities. We’ve also found that evaluators are hungry for a place to share and find evaluation resources, instruments, and reports relevant to their sphere of interest – a place that won’t go away when funding does. We’re seeking solutions to this!

[Image clipped from https://nice.larc.nasa.gov/tri_pi/]

Cool Trick: While meeting in person got our grassroots evaluation group off to a roaring start, it’s tough to get together. Instead, we take advantage of opportunities to hold lunches or meetings at conferences like AEA, AERA, and AGU (going on right now!). This also helps us bring new evaluators and their perspectives into the fold.

Hot Tip: Online collaboration tools help us keep the community going. Our group uses Google Drive to share documents, and we’ve also looked into Sign Up Genius. This handy service allows participants to sign up for tasks (instead of time slots, like Doodle does).

Get Involved: If your evaluation work relates to climate education, and you would like to learn more, contact me at ann.m.martin@nasa.gov. Also, consider joining the STEM Education & Training TIG!

The American Evaluation Association is celebrating Climate Education Evaluators week. The contributions all this week to aea365 come from members who work in a Tri-Agency Climate Education Evaluators group.

I’m Sheila B. Robinson, aea365’s Lead Curator and sometimes Saturday contributor.

You’ve probably read about AEA’s LinkedIn page for fabulous free discussion about anything evaluation. Today, I want to highlight one particular discussion that has sparked a good deal of participation from a diverse group of evaluators.

Terminology has always been a sticky point for evaluators, as those from different sectors (e.g., health, education, non-profits, government) have developed their own preferences and, in many cases, their own definitions of terms.

This* discussion, started by a Project Manager, received 51 responses – not the longest discussion – but nonetheless a rich and detailed investigation into these two terms: logic model and theory of change. (*You will need a LinkedIn account to access the discussion.)

Evaluators weighed in on this from a variety of perspectives. Positions identified in their profiles included:

  • Research Analyst
  • Prevention Specialist
  • Independent Consultant
  • Strategy and Planning Advisor
  • Community-based Impact, Assessment and Evaluation Consultant
  • Impact, Monitoring, Evaluation and Research Specialist
  • Senior Public Engagement Associate
  • Policy Analyst
  • And several owners and presidents of research, consulting, or evaluation firms.

The discussion featured individuals who offered their own definitions of the two terms, after which several became engaged in a discussion of how these tools are used or should be used in practice.

Lesson Learned: Most commenters consider logic models and theories of change as related but distinct. Several indicate that theories of change are indeed embedded in logic models.

Here is how some commenters describe logic models:

  • help identify inputs, activities and outcomes
  • trace a flow of inputs through program activities to some sort of output or even on to outcomes, and are usually intended as handy guides for program implementers
  • visual model of how a program works
  • represent the basic resource and accountability bargain between the ‘funder’ and the ‘funded’

Here is how some commenters describe theories of change:

  • show how and why outcomes/activities cause change
  • an attempt to make explicit the “whys” behind relationships or expected outcomes
  • explicit or implicit theory of how change occurs
  • how one designs a program as it breaks out how and why the change pathway will happen
  • work behind the scenes, and can be drawn from to assemble logic models

Rad Resources: Several commenters offer resources for exploring these concepts:

I recommend these blog posts on the topic:

and this one on the topic of evaluation terminology:

And finally, I must recommend Kylie Hutchinson’s tools for untangling evaluation terminology:
