AEA365 | A Tip-a-Day by and for Evaluators


Hello, my name is Michel Laurendeau, and I am a consultant wishing to share over 40 years of experience in policy development, performance measurement and evaluation of public programs. This is the sixth of seven (7) consecutive AEA365 posts discussing a stepwise approach to integrating performance measurement and evaluation strategies in order to more effectively support results-based management (RBM). In all post discussions, ‘program’ is meant to comprise government policies as well as broader initiatives involving multiple organizations.

This post discusses how to identify comprehensive sets of indicators supporting ongoing performance measurement (i.e. monitoring) and periodic evaluations, from which subsets of indicators can be selected for reporting purposes.

Step 6 – Defining Performance Indicators

When a logic model is developed using the structured approach presented in previous posts, and gets validated by management (and stakeholders), it can be deemed to be an adequate description of the Program Theory of Intervention (PTI). The task of identifying performance indicators then requires determining a comprehensive set of indicators that includes some reliable measure(s) of performance for each output, outcome and external factor covered by the logic model. In some cases, as discussed in the previous post, the set of indicators may extend to cover management issues as well.

Most performance measurement strategies (PMS) and scorecards also require the identification, for each output and outcome, of success criteria such as performance targets and/or standards, which are usually based on some form of benchmarking. This is consistent with a program design mode (i.e. top-down approach to logic models) based on inductive logic, where each result is assumed to be a necessary and/or sufficient condition (as discussed in the TOC literature) for achieving the next level of results. This is, however, very limiting, as it reduces the discussion of program improvement and/or success to the exclusive examination of performance in program delivery (as proposed in Deliverology).

Additional useful information that may be required includes the following:

  • Data type (quantitative or qualitative);
  • Data source (source of information for data collection);
  • Frequency of data collection (e.g. ongoing, tied with specific events, or at fixed intervals);
  • Data owner (organization responsible for data collection);
  • Methodology (any additional information about measurement techniques, transformative calculations, baselines and variable definitions that must be taken into consideration in selecting analytical techniques);
  • Scales (and thresholds) used for assessing and visually presenting performance;
  • Follow-up or corrective actions that should be undertaken based on performance assessments.
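One way to keep this additional information consistent across indicators is to capture it in a structured record. The sketch below is a minimal, hypothetical illustration (the field values and the example program are invented, not drawn from any real PMS):

```python
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    QUANTITATIVE = "quantitative"
    QUALITATIVE = "qualitative"

@dataclass
class Indicator:
    """One performance indicator tied to a logic-model element."""
    element: str           # the output, outcome, or external factor it measures
    name: str
    data_type: DataType
    data_source: str       # source of information for data collection
    frequency: str         # e.g. "ongoing", "event-driven", "quarterly"
    data_owner: str        # organization responsible for data collection
    methodology: str = ""  # measurement techniques, baselines, variable definitions
    scale: tuple = ()      # thresholds used for assessing/presenting performance
    follow_up: str = ""    # corrective actions triggered by assessments

# Illustrative record for a hypothetical employment program
ind = Indicator(
    element="Immediate outcome: clients gain job-search skills",
    name="Share of clients completing skills workshop",
    data_type=DataType.QUANTITATIVE,
    data_source="Case-management system",
    frequency="quarterly",
    data_owner="Regional service centres",
    scale=(0.5, 0.75, 0.9),  # red / yellow / green thresholds
)
```

A comprehensive set of such records, one per output, outcome and external factor, gives both the monitoring function and future evaluations a shared, documented data dictionary.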

Many organizations further require that performance measurement be designed to address evaluation needs and adequately support the periodic evaluation of the relevance, efficiency and effectiveness of program interventions. However, evaluation and performance measurement strategies are most often designed separately, with evaluation strategies usually being finalized only just before the actual conduct of evaluation studies. Evaluations are then constrained by the data collected and made available through performance measurement. For evaluation and performance measurement strategies to be coordinated and properly integrated, they would need to be developed concurrently, at an early stage of program implementation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

This is the fifth of seven consecutive AEA365 posts in this series on integrating performance measurement and evaluation strategies to support results-based management (RBM).

This post discusses how the relation of performance measurement to results-based management should be articulated and incorporated into logic models.

Step 5 – Including the Management Cycle

Some logic models try to include management as a program activity leading to corporate results (e.g., ‘financial/operational sustainability’ and ‘protection of organization’) that are presented as program outcomes. Indeed, good management can help improve program delivery and thus contribute to program performance. However, that contribution is indirect and normally achieved through the ongoing oversight and control of program delivery (and the occasional revision of program design) with requisite adjustments to operational or strategic plans being informed by the ongoing measurement (or monitoring) and the periodic assessments of program performance (see Figure 5a).

Results-based management (RBM) then depends on the identification of relevant indicators and the availability of valid and reliable data to correctly inform players/stakeholders and adequately support management reporting and decision-making processes. The quality and use of performance measurement systems for governance is actually one of many elements of Management Accountability Frameworks (MAF) in the Canadian Federal Government, with other elements covering expectations regarding stewardship, policy and program development, risk management, citizen-focused service, accountability and people management. However, MAFs are developed and assessed through a process that is entirely separate from the one used for Performance Measurement Frameworks (PMF), which are based on delivery process models and/or logic models.

Indeed, the management cycle is relatively independent from actual program operations, with management standing in a relation of authority above program staff to provide oversight and control at each step of the delivery process (see Figure 4b in yesterday’s AEA365 post).

Trying to build the management cycle as a chain of results (or as part of one) in a logic model is therefore inappropriate, as it creates unnecessary confusion between management and program performance issues. Presenting the results of good management as program outcomes also blurs the distinction between efficiency (i.e., the internal capacity to deliver) and effectiveness (i.e., program impacts on target populations). Figure 5b below shows how to properly situate the management cycle in a logic model: essentially as an authoritative or facilitative process without direct causal links to specific program results.

This does not mean that management issues should be excluded from PMFs. Indicators of management performance should also be identified for monitoring purposes whenever management itself flags internal factors or risks that do (or may) influence program delivery.

The next AEA365 post will discuss ways of addressing indicators and actual measures of performance.


· · ·

This is the third of seven consecutive AEA365 posts in this series on integrating performance measurement and evaluation strategies to support results-based management (RBM).

This post articulates how logic models should be structured when program designs include multiple strategic elements (or program activities) supporting a common objective.

Step 3 – Addressing Conditionality

Program interventions rarely rely on a single product or service to achieve intended results. In fact, program strategies are most often designed using multiple interventions from one or more players. In these situations, there normally exists some conditionality between separate program activities as they support and interact with each other. Addressed this way, the notion of conditions (also used in the TOC literature) allows structuring logic models by properly sequencing the multiple program interventions (i.e. converging results chains) deemed to contribute to a common final result that is specific to the program.

To an outside observer being exposed to multiple interventions, program activities may appear to be delivered in a sequential manner (from left to right) based on some observable results (e.g., outputs or immediate outcomes) until some final outcome is achieved (see Figure 3a). This would be the case of a person arriving at a hospital emergency or an employment centre and being subjected to a series of treatments or services.

However, from a program perspective, all activities are actually implemented in parallel with different clients and/or players. In the examples of the hospital emergency and the employment centre, it is the clients who are moving from left to right across activities as they are exposed to various program services. In programs that reach clients only indirectly (e.g., environmental programs or economic policies), it is rather the projects or client files that are shifting across activities while being processed and/or subjected to various program interventions.

Conditionality then allows taking into account the relationships between the strategic elements (or activities) of program interventions without the need to clutter the logic model with an exhaustive mapping and display of all possible interactions and feedback processes. Implicitly, all program activities are (or may be) influenced to some extent by previous activities situated at the left of the diagram (see Figure 3b). Thus, when conditionality exists and is properly taken into consideration, the positioning of program activities in the logic model becomes important for the description and understanding of the program theory of intervention (PTI).

The next AEA365 post will delve further into program implementation and discuss how best to integrate delivery processes into logic models in order to effectively support management oversight and control.


· · ·

This is the second of seven consecutive AEA365 posts in this series on integrating performance measurement and evaluation strategies to support results-based management (RBM).

This post presents the approach to the development of result chains and their integration within a Theory of Change (TOC) from a program perspective.

Step 2 – Developing the Program Theory of Intervention (PTI)

Program interventions are best modeled using chains of results with a program delivery (activity – output) sequence followed by an outcome sequence linking outputs to the program’s intended result (final outcome). Most models use only two levels of outcomes, although some authors advocate using as many as five. However, three levels of outcomes would seem optimal, as this allows properly linking chains of results to broader TOCs, with the link being made through factors (immediate outcomes) that influence behaviors (intermediate outcomes) in target populations in order to resolve the specific societal issue (final outcome) that gave rise to the program (see Figure 2a).


In chains of results, outputs are the products delivered by the program (as well as services, through a push-pull approach) that reach target populations, marking the transition between the sequence controlled by the program (i.e. program control zone) and the sequence controlled by recipients (i.e., influence zone of the program).

Logic models developed using this approach help clarify how the program intervention is assumed to achieve its intended results (i.e., the nested program theory of intervention) under the conditions defined in the broader TOC (see Figure 2b).

Developed this way, logic models do resolve a number of issues:

  • The models provide a clear depiction of the chains of results and of the underlying working assumptions or hypotheses (i.e. salient causal links) of the program interventions and of their contribution to a common final result that is specific to the program;
  • The models provide the basis to identify comprehensive sets of indicators supporting ongoing performance measurement (i.e. monitoring) and periodic evaluations, from which a subset can be selected for reporting purposes;
  • Indicators can also cover external factor/risks that have (or may have) an ongoing influence on program results and that should be considered (i.e. included as control variables) in analyses to obtain more reliable assessments of program effectiveness.

However, developing a logic model that is a valid representation of program theories of intervention is easier said than done. The next AEA365 post will offer some suggestions for achieving that goal. Further, since logic models focus heavily on program outcomes, they provide very little information on delivery processes in support of management oversight and control. Subsequent posts will discuss how program delivery can be meaningfully addressed and properly integrated in program theories of intervention.


· · · · ·

AEA365 Curator note: Today begins a special theme week with an extended (7 day) series on one topic by one contributing author. 

This is the first of seven consecutive AEA365 posts in this series on integrating performance measurement and evaluation strategies to support results-based management (RBM).

Step 1 of 7 – Developing the Theory of Change (TOC)

Effectively addressing an issue normally requires first understanding what you are dealing with. Models are generally used in evaluation to help clarify how programs are meant to work and achieve intended results. However, much confusion exists between alternative approaches to modelling, each based on different ways of representing programs and the multiple underlying assumptions on which their interventions are based.

Top-down models, such as the one presented in Figure 1a, usually provide a narrow management perspective relying on inductive logic in order to select the evidence (based on existing knowledge and/or beliefs) that is necessary to support ex ante the strategic and operational planning of program interventions. Assumptions are then entirely about whether the program created necessary and/or sufficient conditions (as discussed in the TOC literature) for achieving intended results. In this context, the role of ex post evaluation is too often limited to focusing on program delivery and vindicating management’s contention that observed results depend to some (usually unknown) extent on existing program interventions.

As a research function, evaluation should also support (re)allocation decisions being made by senior government officials regarding the actual funding of public programs. However, this stronger evaluation role would involve reliably assessing individual program contributions to observed results in a given context, and would require properly measuring real/actual program impacts while taking external factors into account.

The first difficulty in achieving this task is recognizing that Randomized Control Trials (RCT) are rarely able to completely eliminate the influence of all external factors, and that the statistical ‘black box’ approach they use prevents reliably transposing (i.e., forecasting by extrapolating) observed results to situations with varying circumstances. Generalization is then limited to a narrow set of conditions formulated as broad assumptions about the context in which the program operates. Providing a more extensive base to reliably measure program effectiveness would entail, in a first step:

  1. developing more exhaustive Theories of Change (TOC) including all factors that created the need for program interventions and/or that likely have an influence on the issue or situation being addressed by the program; and,
  2. determining which factors/risks within the TOC are meant to be explicitly ‘managed’ by the program, with all others becoming external to the program intervention.

Figure 1b shows what a program logic model would normally look like at the end of this first step.

The next AEA365 post will articulate the approach to the development of the more detailed Program Theory of Intervention (PTI) that is embedded within the broader TOC.


· · ·

Hi! I am Susy Hawes from the University of Southern Maine’s Data Innovation Project (DIP). Part of the DIP’s work is providing free technical assistance to local nonprofits around a data issue. Even though the majority of nonprofits do not believe our time together should be spent on a logic model (because we have been told they are scary, boring, or useless), we find that the answer to almost any data issue lies somewhere in the living, evolving document that is a logic model. Rather than dive right into yet another logic model template, we frame the conversation around a set of questions (that happen to line up quite nicely with the template we use). We get curious about what they are working toward in their day-to-day work, challenge how different activities and programs line up with their mission, and ask them to describe how they envision their organization contributing to a larger, population-level change. Then, we tell them they just developed their logic model!

Lesson Learned: Tap into the freedom of plausible attribution. Often, organizations we speak with think they alone are responsible for creating those population-level changes. We tell them that if their program model is grounded in a body of literature and evidence showing proven success, then it is highly plausible they will meet those long-term outcomes, with the help of a community of initiatives working toward the same goals. At this point, there is quite often an audible sigh of relief and big smiles. Emphasizing that organizations on their own are not solely responsible for the long-term outcome or population-level result has a freeing effect. Too often, small organizations with limited evaluation capacity are asked to prove how they are contributing to these population-level changes. Providing them with language around plausible attribution they can then use with their boards and funders is extremely helpful.

Hot Tip: Get creative! We developed a “logic model on pie” to explain the concept of a logic model in a simple, accessible and non-intimidating way. Creativity and humor can go a long way. Give it a try!


Kirk Knestis (Hezel Associates CEO) here again with more thoughts about ways logic models can inform an evaluator’s work and potentially benefit the program managers and innovation developers with whom we work. One area where I think our contributions may often be underutilized is that of sustainability of the evaluand programs or innovations we study.

Lesson Learned – I confess that I’ve been guilty of neglecting “inputs” as they are typically illustrated in logic models. I’ve generally focused on defined activities and desired outcomes as priorities when making decisions about evaluation designs, data requirements, and analysis plans. Inputs were, in my view, simply part of the context within which an evaluand existed to be studied. Recent work of two of my colleagues, Andrew Hayman and Sarah Singer, has changed my perspective on this, particularly focusing my attention on inputs as the source of valuable insights into how a program might be sustained. Sustainability is often a concern for our clients, so this understanding can translate into additional value we offer as part of our evaluation services.

Rad Resource – Sarah and Andrew presented a Roundtable at Evaluation 2016, facilitating discussion about sustainability, how we can evaluate it, and how evaluation might help programs become sustainable. For this session (slides are available in the AEA Public Library), they defined “sustainability” as “continuing program activities or vital components of program activities to achieve intended outcomes without relying on future grant funding.” Their session identified obstacles to evaluating sustainability but more importantly, explored strategies to help a program be more sustainable, including by backward-mapping through its theory of action.

Hot Tip – As an example, consider this slice of a conventional style logic model:

Input (Resource) > Activity (Program Component 1) > Outcome A > Outcome B

If the evaluation finds Program Component 1 (an activity) to be mission-critical to immediate Outcome A, which is in turn required to achieve distal Outcome B, then that activity should arguably be sustained. If that activity requires a particular resource as an input, then sustainability requires (a) sustaining that input to support the crucial program component, (b) replacing that input with another providing similar support for the activity, or (c) modifying the activity so it can be delivered without (or with less of) the input. Regardless, planning for sustainability requires attention to inputs, and evaluators can help program managers or innovation developers plan ahead to structure the evaluation and data collection to that end.
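The backward-mapping idea in that slice can be sketched as a simple dependency walk. Everything here is illustrative: the link names come from the generic slice above, not from any real evaluand.

```python
# Hypothetical theory-of-action links: each element -> what it depends on.
depends_on = {
    "Outcome B": ["Outcome A"],
    "Outcome A": ["Program Component 1"],
    "Program Component 1": ["Resource"],
    "Resource": [],
}

def backward_map(element, links):
    """Collect everything a given element ultimately depends on,
    walking right-to-left through the theory of action."""
    required = []
    for parent in links.get(element, []):
        required.append(parent)
        required.extend(backward_map(parent, links))
    return required

# Sustaining distal Outcome B surfaces the input ("Resource") that must be
# sustained, replaced, or designed out of the activity.
print(backward_map("Outcome B", depends_on))
```

The point of the sketch is simply that planning for sustainability means tracing mission-critical outcomes all the way back to the inputs that make them possible.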

Rad Resource – Other hints are shared in the notes from a similar conversation we facilitated at the 2015 conference of the Eastern Evaluation Research Society (EERS), examining distinctions between “evaluating sustainability” and “evaluating FOR sustainability.” Presented in terms of Challenges and Solutions, these ideas provide concrete ways evaluators might start to leverage logic models to meet another need for clients, developed from some of our recent projects.


·

Kirk Knestis at Hezel Associates back with the promised “additional post” about logic models, this time challenging the orthodoxy of including “outputs” in such representations. Irrespective of the style of model used to illustrate the theory behind a program or other innovation, it’s been my experience that inclusion of “outputs” can create confusion, often decreasing the utility of a model as evaluation planning discussions turn to defining measures of its elements (activities and outcomes).  A common, worst-case example of this is when a program manager struggles to define the output of a given activity as anything beyond “it’s done.” I’m to the point where I simply omit outputs as standard practice if facilitating logic model development, and ignore them if they are included in a model I inherit. I propose that if you encounter similar difficulties, you might do the same.

Lesson Learned – The W.K. Kellogg Foundation explained in the foundational documentation often referenced on this subject that outputs “are usually described in terms of the size and/or scope of the services and products delivered or produced by the program.” As “service delivery/implementation targets,” outputs are the completion of activities or the “stuff” those efforts produce. It’s generally understood that outputs can be measures of QUANTITIES of delivery (e.g., number of clients served, hours of programming completed, units of support provided). Less obvious, perhaps, is the idea that we should examine the QUALITIES of those activities. Even more neglected, however, is an understanding that the stuff produced can be usefully viewed as a source of measures of the qualities of the activities that generated it. In short, outputs are more data sources than parts of an evaluand’s theory of action.

Hot Tip – Instead of including outputs as a separate column in tabular or pathway-style models, hold off considering them until planning gets to defining how quantities and qualities of delivery will be measured for “implementation” evaluation purposes. Making the distinction here between that and measures of outcomes assessing “impact” of activities, this approach layers a “process-product” orientation on implementation evaluation, looking at both the quantities and the qualities with which activities of interest are completed. This simplifies thinking by avoiding entanglements in seemingly redundant measures among activities and their outputs, and can encourage deeper consideration of implementation quality, which is harder to measure and so easier to ignore. It also takes outputs out of the theoretical-relationships-among-variables picture, an important issue for evaluations testing or building theory.

Hot Tip – Work with program/innovation designers to determine attributes of quality for BOTH activities (processes) and the stuff in which they result (products). Develop and use rubrics or checklists to assess both, ideally baked into the work itself in authentic ways (e.g., internal quality-assurance checks or formative feedback loops).

Hot Tip – Another useful trick is to consider “timeliness” as a third aspect of implementation, along with quantity and quality. Compare timelines of “delivery as planned” and “delivery as implemented” measuring time slippage between the ideal and the real, and documenting the causes of such slippage.
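The timeliness comparison can be as lightweight as a milestone-by-milestone date diff. The milestones and dates below are invented purely for illustration:

```python
from datetime import date

# Hypothetical milestone dates: delivery as planned vs. as implemented.
planned = {
    "recruitment done": date(2024, 3, 1),
    "training delivered": date(2024, 6, 1),
}
actual = {
    "recruitment done": date(2024, 3, 20),
    "training delivered": date(2024, 7, 15),
}

# Days of slippage per milestone (positive = late against plan).
slippage = {m: (actual[m] - planned[m]).days for m in planned}
print(slippage)
```

The numbers themselves matter less than the prompt they create: for each milestone that slipped, document why, since the causes of slippage are often the most useful implementation findings.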


·

Kirk Knestis, CEO of Hezel Associates and huge logic model fan, back on aea365 to share what I think are useful tweaks to a common logic modeling approach. I use these “Conditional Logic Models” to avoid traps common when evaluators work with clients to illustrate the theory of action of a program or innovation being studied.

Rad Resource – The W.K. Kellogg Foundation’s Logic Model Development Guide is an excellent introduction to logic models. It’s very useful to getting started or to ensuring that members of a team are on the same page regarding logic models. The graphic on the first page of Chapter 1 is also a perfect illustration on which to base description of a Lesson Learned and some Hot Tips that inform the Conditional Logic Model approach.

Lesson Learned – Variations abound, but the Kellogg-style model exemplifies key attributes of the general logic model many evaluators use—a few categorical headings framing a set of left-to-right, if-then propositions, the sum of which elaborate some understanding of “how the program works,” as such:

Inputs > Activities > Outputs > Outcomes > Impact

While the multiple levels of “intended results” (Outputs to Impact, above) provide some flexibility and accommodate limited context and complexity, program designers or managers often get bogged down in the semantic differences among heading definitions. Alternate labels may help but even then, clients and evaluators are either constrained by the number of columns, or have to work out even more terms for headings.

Hot Tip – Free yourself from labels! Rather than fussing over these terms, leave them out completely. Instead, define each element—still in its left-to-right structure—as a present-tense statement of a condition. For example, the Input “Running shoes” might become “Running shoes purchased.” The Activity “Run 3x per week” becomes “Exercise increases.” The Outcome “Weight will decrease” becomes “Weight decreases.” This mostly requires using passive language for Activities, but also necessitates thinking of what results look like once achieved, rather than describing them as expectations. These changes in semantic structure eliminate confusion about terms, and head off issues related to tense. The lack of constraining headings also accommodates the complexity and context often left out of typical logic models (e.g., our US Department of Labor projects, illustrations of which require 12+ columns).

Hot Tip – Translate the logic model into evaluation data needs by considering measures of quantity and quality for every element of the model, irrespective of where it falls in the chain of logic. Address the extent to which, and the quality with which, each condition is realized. One interesting note, in this approach Outputs become part and parcel to those measures, rather than pieces of the causal puzzle, but that’s an additional post.
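To make the two tips concrete, a Conditional Logic Model can be thought of as an ordered list of condition statements, each paired with candidate quantity and quality measures. This sketch uses the running-shoes example from above; the measures attached to each condition are hypothetical:

```python
# A label-free model: an ordered, left-to-right chain of present-tense
# condition statements, each with illustrative quantity/quality measures.
model = [
    {"condition": "Running shoes purchased",
     "quantity": "number of pairs acquired",
     "quality": "fit/suitability rating"},
    {"condition": "Exercise increases",
     "quantity": "runs per week",
     "quality": "adherence to planned intensity"},
    {"condition": "Weight decreases",
     "quantity": "kilograms lost",
     "quality": "loss sustained over six months"},
]

# The if-then reading of the chain, with no column headings to argue over:
chain = " -> ".join(step["condition"] for step in model)
print(chain)
```

Because every element carries its own quantity and quality measures regardless of position, the "is this an output or an outcome?" debate never has to happen.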

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Hi, I’m John Burrett, of Haiku Analytics Inc., Ottawa. One serious problem with logic models is that they usually leave out external influences and feedback effects, even when they may be important, because including them makes the model “too complex”. It is good to simplify, but ignoring important influences on program success when planning an evaluation may lead evaluators to fail to collect important data and to misinterpret results.

Trying to embrace complexity by drawing a web of boxes and arrows is not helpful: it is too complex to use and explain, and will drive your audience away. Moreover, such a diagram will probably come only from the mind of the evaluator or program manager, and so can easily miss important external influences and other complexities.

Hot Tip: I recently stumbled onto an alternative approach during a mapping of factors of cause and effect related to a complex policy problem. Data were obtained from an expert panel, which developed a matrix linking a number of factors with estimates of the strength and direction of the relationship between each pair. Mapping this with network analysis software helped the panel visualize what they had created.
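The matrix-to-map step can be sketched in a few lines. Here is a minimal, hypothetical illustration of turning an expert panel’s cause-effect ratings into a directed, weighted edge list ready for network-analysis software; the factor names and ratings are assumptions for illustration, not the actual panel’s data.

```python
# Sketch: converting an expert panel's cause-effect matrix into a
# directed, weighted edge list. Factor names and ratings are illustrative.

factors = ["grants", "training_quality", "skills", "employment"]

# matrix[i][j] = panel's rating of the influence of factor i on factor j,
# on a -3..+3 scale (sign = direction, magnitude = strength; 0 = no link).
matrix = [
    [0, 0, 2, 0],   # grants -> skills
    [0, 0, 3, 0],   # training_quality -> skills
    [0, 0, 0, 2],   # skills -> employment
    [0, 1, 0, 0],   # employment -> training_quality (a feedback loop)
]

# Keep only the nonzero cells as (source, target, strength) edges.
edges = [
    (factors[i], factors[j], rating)
    for i, row in enumerate(matrix)
    for j, rating in enumerate(row)
    if rating != 0
]

for src, dst, w in edges:
    print(f"{src} -> {dst} (strength {w:+d})")
```

An edge list in this shape can be loaded directly into most network-analysis tools for layout and visualization.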

It followed that this form of data could generate outcomes chains and logic models. Here’s a simple example: a program supporting trades training by providing grants to students and developing state of the art teaching materials in collaboration with trade schools drives the immediate outcomes of…

  • Students gaining the ability to take training and
  • Currency and quality of the training being improved, in order to achieve
  • The ultimate outcome of increased employment.

Exogenous effects influencing these results include cost of living, demand for skills and technical changes affecting the training’s currency. The size of the nodes indicates betweenness centrality, identifying those factors that connect many influences, thus propagating certain effects. The width of the edges indicates the hypothesized strength of influence. Possible unintended effects and a feedback loop are also shown.

[Figure: network map of the trades-training example, with node size showing betweenness centrality and edge width showing hypothesized strength of influence – John Burrett]

Lesson Learned: A key advantage of this approach is that it creates a logic model using expert knowledge, rather than simply an evaluator’s or manager’s understanding of a program. This could also include other sources of information, like findings from the literature and program stakeholders’ experiences. Importantly, you could do this without imposing any prior idea of the logic model on those providing the cause-effect data, other than including the program/outputs/activities and specifying the immediate/intermediate and ultimate intended outcomes.

A second major advantage is that the logic model comes with network metrics generated from the data, so the expected relationships among the program and its influences can be analyzed. For instance, factors thought to play an important role in propagating effects across the system would show high betweenness or eigenvector centralities.

The American Evaluation Association is celebrating Social Network Analysis Week with our colleagues in the Social Network Analysis Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members.
